Merge branch 'master' into jira/SOLR-14383
diff --git a/.github/PULL_REQUEST_TEMPLATE.md b/.github/PULL_REQUEST_TEMPLATE.md
index 9f2f786..b08a1d8 100644
--- a/.github/PULL_REQUEST_TEMPLATE.md
+++ b/.github/PULL_REQUEST_TEMPLATE.md
@@ -38,6 +38,6 @@
 - [ ] I have created a Jira issue and added the issue ID to my pull request title.
 - [ ] I have given Solr maintainers [access](https://help.github.com/en/articles/allowing-changes-to-a-pull-request-branch-created-from-a-fork) to contribute to my PR branch. (optional but recommended)
 - [ ] I have developed this patch against the `master` branch.
-- [ ] I have run `ant precommit` and the appropriate test suite.
+- [ ] I have run `./gradlew check`.
 - [ ] I have added tests for my changes.
 - [ ] I have added documentation for the [Ref Guide](https://github.com/apache/lucene-solr/tree/master/solr/solr-ref-guide) (for Solr changes only).
diff --git a/.github/workflows/ant.yml b/.github/workflows/ant.yml
deleted file mode 100644
index de505da..0000000
--- a/.github/workflows/ant.yml
+++ /dev/null
@@ -1,22 +0,0 @@
-name: Ant precommit (w/ Java11)
-
-on: 
-  pull_request:
-    branches:
-    - '*'
-jobs:
-  test:
-    name: ant precommit w/ Java 11
- 
-    runs-on: ubuntu-latest
-    
-    steps:
-    - uses: actions/checkout@v1
-    - name: Set up JDK 11
-      uses: actions/setup-java@v1
-      with:
-        java-version: 11
-    - name: Ivy bootstrap
-      run: ant ivy-bootstrap
-    - name: Precommit
-      run: ant precommit
diff --git a/README.md b/README.md
index 5fd3581..e045c7c 100644
--- a/README.md
+++ b/README.md
@@ -20,12 +20,12 @@
 Apache Lucene is a high-performance, full featured text search engine library
 written in Java.
 
-Apache Solr is an enterprise search platform written using Apache Lucene.
+Apache Solr is an enterprise search platform written in Java that uses Apache Lucene.
 Major features include full-text search, index replication and sharding, and
 result faceting and highlighting.
 
 
-[![Build Status](https://builds.apache.org/view/L/view/Lucene/job/Lucene-Artifacts-master/badge/icon?subject=Lucene)](https://builds.apache.org/view/L/view/Lucene/job/Lucene-Artifacts-master/) [![Build Status](https://builds.apache.org/view/L/view/Lucene/job/Solr-Artifacts-master/badge/icon?subject=Solr)](https://builds.apache.org/view/L/view/Lucene/job/Solr-Artifacts-master/)
+[![Build Status](https://ci-builds.apache.org/job/Lucene/job/Lucene-Artifacts-master/badge/icon?subject=Lucene)](https://ci-builds.apache.org/job/Lucene/job/Lucene-Artifacts-master/) [![Build Status](https://ci-builds.apache.org/job/Lucene/job/Solr-Artifacts-master/badge/icon?subject=Solr)](https://ci-builds.apache.org/job/Lucene/job/Solr-Artifacts-master/)
 
 
 ## Online Documentation
@@ -40,48 +40,30 @@
 
 (You do not need to do this if you downloaded a pre-built package)
 
-### Building with Ant
-
-Lucene and Solr are built using [Apache Ant](http://ant.apache.org/).  To build
-Lucene and Solr, run:
-
-`ant compile`
-
-If you see an error about Ivy missing while invoking Ant (e.g., `.ant/lib does
-not exist`), run `ant ivy-bootstrap` and retry.
-
-Sometimes you may face issues with Ivy (e.g., an incompletely downloaded artifact).
-Cleaning up the Ivy cache and retrying is a workaround for most of such issues: 
-
-`rm -rf ~/.ivy2/cache`
-
-The Solr server can then be packaged and prepared for startup by running the
-following command from the `solr/` directory:
-
-`ant server`
 
 ### Building with Gradle
 
-There is ongoing work (see [LUCENE-9077](https://issues.apache.org/jira/browse/LUCENE-9077))
-to switch the legacy ant-based build system to [gradle](https://gradle.org/).
-Please give it a try!
-
-At the moment of writing, the gradle build requires precisely Java 11 
-(it may or may not work with newer Java versions).
+As of 9.0, Lucene/Solr uses [Gradle](https://gradle.org/) as the build
+system. Ant build support has been removed.
 
 To build Lucene and Solr, run (`./` can be omitted on Windows):
 
 `./gradlew assemble`
 
-The command above also packages a full distribution of Solr server; the 
+
+The command above packages a full distribution of Solr server; the 
 package can be located at:
 
 `solr/packaging/build/solr-*`
 
 Note that the gradle build does not create or copy binaries throughout the
-source repository (like ant build does) so you need to switch to the
-packaging output folder above; the rest of the instructions below remain
-identical.  
+source repository, so you need to switch to the packaging output folder above;
+the rest of the instructions below remain identical. The packaging directory
+is rewritten on each build.
+
+For development, especially when you have created test indexes etc., use
+the `./gradlew dev` task, which copies binaries to `./solr/packaging/build/dev`
+but _only_ overwrites the binaries, preserving your test setup.
 
 ## Running Solr
 
@@ -104,16 +86,6 @@
 exhaustive treatment of options, run `bin/solr start -h` from the `solr/`
 directory.
 
-## Development/IDEs
-
-Ant can be used to generate project files compatible with most common IDEs.
-Run the ant command corresponding to your IDE of choice before attempting to
-import Lucene/Solr.
-
-- *Eclipse* - `ant eclipse` (See [this](https://cwiki.apache.org/confluence/display/solr/HowToConfigureEclipse) for details)
-- *IntelliJ* - `ant idea` (See [this](https://cwiki.apache.org/confluence/display/lucene/HowtoConfigureIntelliJ) for details)
-- *Netbeans* - `ant netbeans` (See [this](https://cwiki.apache.org/confluence/display/lucene/HowtoConfigureNetbeans) for details)
-
 ### Gradle build and IDE support
 
 - *IntelliJ* - IntelliJ idea can import the project out of the box. 
@@ -121,23 +93,17 @@
 - *Eclipse*  - Not tested.
 - *Netbeans* - Not tested.
 
-## Running Tests
-
-The standard test suite can be run with the command:
-
-`ant test`
-
-Like Solr itself, the test-running can be customized or tailored in a number or
-ways.  For an exhaustive discussion of the options available, run:
-
-`ant test-help`
 
 ### Gradle build and tests
 
-Run the following command to display an extensive help for running
-tests with gradle:
+`./gradlew assemble` will build a runnable Solr as noted above.
 
-`./gradlew helpTests`
+`./gradlew check` will assemble Lucene/Solr and run all validation
+  tasks and unit tests.
+
+`./gradlew help` will print a list of help commands for high-level tasks. One
+  of these is `helpAnt`, which shows the gradle tasks corresponding to ant
+  targets you may be familiar with.
 
 ## Contributing
 
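A minimal sketch of the post-switch workflow described in the README hunks above (task names and paths are taken from the README text; the `bin/solr start` invocation assumes the standard packaged layout):

```sh
# Build a full Solr distribution (`./` can be omitted on Windows):
./gradlew assemble

# The build does not copy binaries into the source tree; run Solr from
# the packaging output instead:
cd solr/packaging/build/solr-*
bin/solr start

# During development, refresh only the binaries so test indexes survive:
./gradlew dev        # output lands in ./solr/packaging/build/dev

# Validation tasks and unit tests (same command as the PR checklist):
./gradlew check

# Map familiar ant targets to their gradle equivalents:
./gradlew helpAnt
```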
diff --git a/build.gradle b/build.gradle
index a38ca1f..6d6ee2d 100644
--- a/build.gradle
+++ b/build.gradle
@@ -21,15 +21,29 @@
 plugins {
   id "base"
   id "com.palantir.consistent-versions" version "1.14.0"
-  id 'de.thetaphi.forbiddenapis' version '3.0.1' apply false
   id "org.owasp.dependencycheck" version "5.3.0"
+  id 'de.thetaphi.forbiddenapis' version '3.0.1' apply false
   id "de.undercouch.download" version "4.0.2" apply false
 }
 
-// Project version.
-version = "9.0.0-SNAPSHOT"
+apply from: file('gradle/defaults.gradle')
 
 // General metadata.
+
+// Calculate project version:
+version = {
+  // Release manager: update base version here after release:
+  String baseVersion = '9.0.0'
+
+  // On a release explicitly set release version in one go:
+  //  -Dversion.release=x.y.z
+  
+  // Jenkins can set just a suffix, overriding SNAPSHOT, e.g. using the build id:
+  //  -Dversion.suffix=jenkins123
+  
+  String versionSuffix = propertyOrDefault('version.suffix', 'SNAPSHOT')
+  return propertyOrDefault('version.release', "${baseVersion}-${versionSuffix}")
+}()
 description = 'Grandparent project for Apache Lucene Core and Apache Solr'
 
 // Propagate version and derived properties across projects.
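As a sketch, the version block above behaves like this under the overrides its comments describe (property names come from the snippet; the resulting version strings follow from `"${baseVersion}-${versionSuffix}"`):

```sh
# Default: base version plus the SNAPSHOT suffix -> 9.0.0-SNAPSHOT
./gradlew assemble

# Jenkins: override only the suffix, e.g. with a build id -> 9.0.0-jenkins123
./gradlew assemble -Dversion.suffix=jenkins123

# Release manager: set the full release version in one go -> 9.0.0
./gradlew assemble -Dversion.release=9.0.0
```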
@@ -63,6 +77,8 @@
   buildTime = DateTimeFormatter.ofPattern("HH:mm:ss").format(tstamp)
   buildYear = DateTimeFormatter.ofPattern("yyyy").format(tstamp)
 
+  minJavaVersion = JavaVersion.VERSION_11
+
   // Declare script dependency versions outside of palantir's
   // version unification control. These are not our main dependencies.
   scriptDepVersions = [
@@ -73,6 +89,13 @@
       "jgit": "5.3.0.201903130848-r",
       "flexmark": "0.61.24",
   ]
+  
+  // Allow defining external tool locations using system props.
+  externalTool = { name ->
+    def resolved = propertyOrDefault("${name}.exe", name as String)
+    logger.info("External tool '${name}' resolved to: ${resolved}")
+    return resolved
+  }
 }
 
 // Include smaller chunks configuring dedicated build areas.
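A hypothetical use of the `externalTool` helper introduced above: any build script that resolves a tool through it can be pointed at an explicit binary via a `<name>.exe` system property; `python3` is an illustrative tool name, not one this hunk defines:

```sh
# Without the property, externalTool("python3") falls back to the bare
# name "python3" and relies on the PATH:
./gradlew check -Dpython3.exe=/usr/local/bin/python3
```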
@@ -89,7 +112,6 @@
 
 // Set up defaults and configure aspects for certain modules or functionality
 // (java, tests)
-apply from: file('gradle/defaults.gradle')
 apply from: file('gradle/defaults-java.gradle')
 apply from: file('gradle/testing/defaults-tests.gradle')
 apply from: file('gradle/testing/randomization.gradle')
@@ -102,6 +124,7 @@
 
 // IDE support, settings and specials.
 apply from: file('gradle/ide/intellij-idea.gradle')
+apply from: file('gradle/ide/eclipse.gradle')
 
 // Validation tasks
 apply from: file('gradle/validation/precommit.gradle')
@@ -132,6 +155,7 @@
 apply from: file('gradle/testing/slowest-tests-at-end.gradle')
 apply from: file('gradle/testing/failed-tests-at-end.gradle')
 apply from: file('gradle/testing/profiling.gradle')
+apply from: file('gradle/testing/beasting.gradle')
 apply from: file('gradle/help.gradle')
 
 // Ant-compatibility layer. ALL OF THESE SHOULD BE GONE at some point. They are
@@ -139,12 +163,9 @@
 // of potential problems with the build conventions, dependencies, etc.
 apply from: file('gradle/ant-compat/force-versions.gradle')
 apply from: file('gradle/ant-compat/misc.gradle')
-apply from: file('gradle/ant-compat/resolve.gradle')
 apply from: file('gradle/ant-compat/post-jar.gradle')
 apply from: file('gradle/ant-compat/test-classes-cross-deps.gradle')
 apply from: file('gradle/ant-compat/artifact-naming.gradle')
-apply from: file('gradle/ant-compat/solr-forbidden-apis.gradle')
-apply from: file('gradle/ant-compat/forbidden-api-rules-in-sync.gradle')
 
 apply from: file('gradle/documentation/documentation.gradle')
 apply from: file('gradle/documentation/changes-to-html.gradle')
@@ -152,3 +173,8 @@
 apply from: file('gradle/documentation/render-javadoc.gradle')
 
 apply from: file('gradle/hacks/findbugs.gradle')
+apply from: file('gradle/hacks/gradle.gradle')
+apply from: file('gradle/hacks/hashmapAssertions.gradle')
+
+apply from: file('gradle/solr/packaging.gradle')
+apply from: file('gradle/solr/solr-forbidden-apis.gradle')
diff --git a/build.xml b/build.xml
deleted file mode 100755
index a1e8ccb..0000000
--- a/build.xml
+++ /dev/null
@@ -1,697 +0,0 @@
-<?xml version="1.0"?>
-
-<!--
-    Licensed to the Apache Software Foundation (ASF) under one or more
-    contributor license agreements.  See the NOTICE file distributed with
-    this work for additional information regarding copyright ownership.
-    The ASF licenses this file to You under the Apache License, Version 2.0
-    the "License"); you may not use this file except in compliance with
-    the License.  You may obtain a copy of the License at
- 
-        http://www.apache.org/licenses/LICENSE-2.0
- 
-    Unless required by applicable law or agreed to in writing, software
-    distributed under the License is distributed on an "AS IS" BASIS,
-    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-    See the License for the specific language governing permissions and
-    limitations under the License.
- -->
-
-<project name="lucene-solr" default="-projecthelp" basedir=".">
-  <import file="lucene/common-build.xml"/>
-
-  <property name="jgit-version" value="5.3.0.201903130848-r"/>
-  
-  <property name="tests.heap-dump-dir" location="heapdumps"/>
-  
-  <property name="maven-build-dir" value="maven-build"/>
-  <property name="maven-version" value="3.5.0"/>
-  <property name="maven.dependencies.filters.file" location="lucene/build/maven.dependencies.filters.properties"/>
-
-  <property name="smokeTestRelease.dir" location="lucene/build/smokeTestRelease/dist"/>
-  <property name="smokeTestRelease.tmp" location="lucene/build/smokeTestRelease/tmp"/>
-  <property name="smokeTestRelease.testArgs" value=""/>
-
-  <target name="-projecthelp">
-    <java fork="false" classname="org.apache.tools.ant.Main" taskname="-">
-      <arg value="-projecthelp"/>
-      <arg value="-f"/>
-      <arg value="${ant.file}"/>
-    </java>
-  </target>
-
-  <target name="test-help" description="Test runner help">
-    <subant buildpath="lucene" target="test-help" inheritall="false" failonerror="true"/>
-  </target>
-
-  <target name="precommit" description="Run basic checks before committing"
-          depends="check-working-copy,validate,documentation-lint"/>
-
-  <target name="test" description="Test both Lucene and Solr" depends="resolve-groovy">
-    <mkdir dir="lucene/build" />
-    <tempfile property="tests.totals.tmpfile"
-          destdir="lucene/build"
-          prefix=".test-totals-"
-          suffix=".tmp"
-          deleteonexit="true"
-          createfile="true" />
-
-    <subant target="test" inheritall="false" failonerror="true">
-      <fileset dir="lucene" includes="build.xml" />
-      <fileset dir="solr" includes="build.xml" />
-      <propertyset>
-        <propertyset refid="uptodate.and.compiled.properties"/>
-        <propertyref name="tests.totals.tmpfile" />
-      </propertyset>      
-    </subant>
-
-    <property name="tests.totals.toplevel" value="true" />
-    <antcall target="-check-totals" />
-  </target>
-
-  <target name="jacoco" depends="resolve-groovy" description="Generates JaCoCo code coverage reports">
-    <subant target="jacoco" inheritall="false" failonerror="true">
-      <fileset dir="lucene" includes="build.xml" />
-      <fileset dir="solr" includes="build.xml" />
-      <propertyset refid="uptodate.and.compiled.properties"/>
-    </subant>
-  </target>
-
-  <target name="pitest" depends="resolve-groovy" description="Run PITest on both Lucene and Solr">
-    <subant target="pitest" inheritall="false" failonerror="true">
-      <fileset dir="lucene" includes="build.xml" />
-      <fileset dir="solr" includes="build.xml" />
-      <propertyset refid="uptodate.and.compiled.properties"/>
-    </subant>
-  </target>
-
-  <target name="beast">
-    <fail message="The Beast only works inside of individual modules"/>
-  </target>
-
-  <target name="documentation" depends="resolve-markdown" description="Generate Lucene and Solr Documentation">
-    <subant target="documentation" inheritall="false" failonerror="true">
-      <fileset dir="lucene" includes="build.xml" />
-      <fileset dir="solr" includes="build.xml" />
-      <propertyset refid="uptodate.and.compiled.properties"/>
-    </subant>
-  </target>
-
-  <target name="documentation-lint" depends="resolve-markdown,-ecj-javadoc-lint-unsupported,-ecj-resolve" description="Validates the generated documentation (HTML errors, broken links,...)">
-    <subant target="documentation-lint" inheritall="false" failonerror="true">
-      <fileset dir="lucene" includes="build.xml" />
-      <fileset dir="solr" includes="build.xml" />
-      <propertyset refid="uptodate.and.compiled.properties"/>
-    </subant>
-  </target>
-
-  <target name="check-forbidden-apis" depends="-install-forbidden-apis" description="Check forbidden API calls in compiled class files.">
-    <subant target="check-forbidden-apis" inheritall="false" failonerror="true">
-      <fileset dir="lucene" includes="build.xml" />
-      <fileset dir="solr" includes="build.xml" />
-      <propertyset refid="uptodate.and.compiled.properties"/>
-    </subant>
-  </target>
-
-  <target name="validate" description="Validate dependencies, licenses, etc." depends="validate-source-patterns,resolve-groovy,rat-sources-typedef,-install-forbidden-apis">
-    <subant target="validate" inheritall="false" failonerror="true">
-      <fileset dir="lucene" includes="build.xml" />
-      <fileset dir="solr" includes="build.xml" />
-      <propertyset refid="uptodate.and.compiled.properties"/>
-    </subant>
-    <subant buildpath="lucene" target="check-lib-versions" inheritall="false" failonerror="true">
-      <propertyset refid="uptodate.and.compiled.properties"/>
-    </subant>
-  </target>
-  
-  <target name="validate-source-patterns" description="Validate source code (invalid code patterns,...)" unless="disable.source-patterns" depends="resolve-groovy,rat-sources-typedef">
-    <groovy taskname="source-patterns" classpathref="rat.classpath" src="${common.dir}/tools/src/groovy/check-source-patterns.groovy"/>
-  </target>
-  
-  <target name="rat-sources" description="Runs rat across all sources and tests" depends="common.rat-sources">
-    <subant target="rat-sources" inheritall="false" failonerror="true">
-      <fileset dir="lucene" includes="build.xml" />
-      <fileset dir="solr" includes="build.xml" />
-      <propertyset refid="uptodate.and.compiled.properties"/>
-    </subant>
-  </target>
-
-  <target name="resolve" description="Resolves all dependencies">
-    <subant target="resolve" inheritall="false" failonerror="true">
-      <fileset dir="lucene" includes="build.xml" />
-      <fileset dir="solr" includes="build.xml" />
-      <propertyset refid="uptodate.and.compiled.properties"/>
-    </subant>
-  </target>
-
-  <!-- lucene/test-framework and solr/test-framework are excluded from compilation -->
-  <target name="compile" description="Compile Lucene and Solr">
-    <subant target="compile" inheritall="false" failonerror="true">
-      <fileset dir="lucene" includes="build.xml" />
-      <fileset dir="solr" includes="build.xml" />
-      <propertyset refid="uptodate.and.compiled.properties"/>
-    </subant>
-  </target>
-
-  <target name="compile-core" description="Compile Lucene Core">
-    <subant target="compile-core" inheritall="false" failonerror="true">
-      <fileset dir="lucene" includes="build.xml" />
-      <propertyset refid="uptodate.and.compiled.properties"/>
-    </subant>
-  </target>
-
-  <target name="compile-test" description="Compile Lucene and Solr tests and test-frameworks">
-    <subant target="compile-test" inheritAll="false" failonerror="true">
-      <fileset dir="lucene" includes="build.xml" />
-      <fileset dir="solr" includes="build.xml" />
-      <propertyset refid="uptodate.and.compiled.properties"/>
-    </subant>
-  </target>
-
-  <target name="jar" description="Build Lucene and Solr Jar files">
-    <subant target="jar" inheritall="false" failonerror="true">
-      <fileset dir="lucene" includes="build.xml"/>
-      <fileset dir="solr" includes="build.xml"/>
-      <propertyset refid="uptodate.and.compiled.properties"/>
-    </subant>
-  </target>
-
-  <target name="jar-src" description="Build Lucene and Solr Source Jar files">
-    <subant target="jar-src" inheritall="false" failonerror="true">
-      <fileset dir="lucene" includes="build.xml"/>
-      <fileset dir="solr" includes="build.xml"/>
-      <propertyset refid="uptodate.and.compiled.properties"/>
-    </subant>
-  </target>
-
-  <target name="get-maven-poms" depends="resolve" 
-          description="Copy Maven POMs from dev-tools/maven/ to maven-build/">
-    <ant dir="lucene" target="-get-maven-poms" inheritall="false"/>
-  </target>
-
-  <target name="clean-maven-build"
-          description="Clean up Maven POMs in working copy">
-    <delete failonerror="true" dir="${maven-build-dir}/"/>
-  </target>
-
-  <target name="generate-maven-artifacts" depends="resolve,resolve-groovy,resolve-markdown,install-maven-tasks"
-          description="Generate Maven Artifacts for Lucene and Solr">
-    <property name="maven.dist.dir"  location="dist/maven" />
-    <mkdir dir="${maven.dist.dir}" />
-    <ant dir="lucene" inheritall="false">
-      <target name="-unpack-lucene-tgz"/>
-      <target name="-filter-pom-templates"/>
-      <propertyset refid="uptodate.and.compiled.properties"/>
-    </ant>
-    <ant dir="solr" target="-unpack-solr-tgz" inheritall="false">
-      <propertyset refid="uptodate.and.compiled.properties"/>
-    </ant>
-    <subant target="-dist-maven" inheritall="false" failonerror="true">
-      <property name="maven.dist.dir"  location="${maven.dist.dir}" />
-      <fileset dir="lucene" includes="build.xml" />
-      <fileset dir="solr" includes="build.xml" />
-      <propertyset refid="uptodate.and.compiled.properties"/>
-    </subant>
-  </target>
-
-  <target name="-install-maven-artifacts" depends="resolve,resolve-groovy,resolve-markdown,install-maven-tasks">
-    <ant dir="lucene" inheritall="false">
-      <target name="-unpack-lucene-tgz"/>
-      <target name="-filter-pom-templates"/>
-      <propertyset refid="uptodate.and.compiled.properties"/>
-    </ant>
-    <ant dir="solr" target="-unpack-solr-tgz" inheritall="false">
-      <propertyset refid="uptodate.and.compiled.properties"/>
-    </ant>
-    <subant target="-install-to-maven-local-repo" inheritall="false" failonerror="true">
-      <fileset dir="lucene" includes="build.xml" />
-      <fileset dir="solr" includes="build.xml" />
-      <propertyset refid="uptodate.and.compiled.properties"/>
-    </subant>
-  </target>
-
-  <target name="validate-maven-dependencies" depends="-install-maven-artifacts"
-          description="Validates maven dependencies, licenses, etc.">
-    <subant target="-validate-maven-dependencies" inheritall="false" failonerror="true">
-      <fileset dir="lucene" includes="build.xml"/>
-      <fileset dir="solr" includes="build.xml"/>
-      <propertyset refid="uptodate.and.compiled.properties"/>
-    </subant>
-  </target>
-  
-  <target name="run-maven-build" depends="get-maven-poms,install-maven-tasks,resolve-groovy" description="Runs the Maven build using automatically generated POMs">
-    <groovy src="${common.dir}/tools/src/groovy/run-maven-build.groovy"/>
-  </target>
-  
-  <target name="remove-maven-artifacts" description="Removes all Lucene/Solr Maven artifacts from the local repository">
-    <echo message="Removing all Lucene/Solr Maven artifacts from '${user.home}/.m2/repository'..."/>
-    <delete includeemptydirs="true">
-      <fileset dir="${user.home}/.m2/repository" erroronmissingdir="false">
-        <include name="org/apache/lucene/**"/>
-        <include name="org/apache/solr/**"/>
-      </fileset>
-    </delete>
-  </target>
-
-  <target name="netbeans" depends="resolve" description="Setup Netbeans configuration">
-    <pathconvert property="netbeans.fileset.sourcefolders" pathsep="|" dirsep="/">
-      <dirset dir="${basedir}/lucene" includes="**/src/java, **/src/examples, **/src/test, **/src/resources" 
-              excludes="tools/**, build/**" />
-      <dirset dir="${basedir}/solr" includes="**/src/java, **/src/examples, **/src/test, **/src/resources" 
-              excludes="build/**" />
-      <map from="${basedir}/" to=""/>
-    </pathconvert>
-    <!-- TODO: find a better way to exclude duplicate JAR files & fix the servlet-api mess! -->
-    <pathconvert property="netbeans.path.libs" pathsep=":" dirsep="/">
-      <fileset dir="${basedir}/lucene" includes="**/lib/*.jar" 
-               excludes="**/*servlet-api*.jar, tools/**, build/**"/>
-      <fileset dir="${basedir}/solr" includes="**/test-lib/*.jar,**/lib/*.jar" 
-               excludes="core/test-lib/*servlet-api*.jar, contrib/analysis-extras/**, test-framework/lib/junit*, test-framework/lib/ant*, test-framework/lib/randomizedtesting*, build/**, dist/**, package/**, server/solr-webapp/**" />
-      <map from="${basedir}/" to=""/>
-    </pathconvert>
-    <mkdir dir="nbproject"/>
-    <copy todir="nbproject" overwrite="true">
-      <fileset dir="dev-tools/netbeans/nbproject"/>
-    </copy>
-    <xslt in="${ant.file}" out="nbproject/project.xml" style="dev-tools/netbeans/nb-project.xsl" force="true">
-      <outputproperty name="indent" value="yes"/>
-      <param name="netbeans.fileset.sourcefolders" expression="${netbeans.fileset.sourcefolders}"/>
-      <param name="netbeans.path.libs" expression="${netbeans.path.libs}"/>
-      <param name="netbeans.source-level" expression="1.7"/>
-    </xslt>
-  </target>
-
-  <target name="clean-netbeans" description="Removes all Netbeans configuration files">
-    <delete dir="nbproject" failonerror="true"/>
-    <delete dir="nb-build" failonerror="true"/>
-  </target>
-
-  <target name="eclipse" depends="resolve" description="Setup Eclipse configuration">
-    <basename file="${basedir}" property="eclipseprojectname"/>
-    <copy file="dev-tools/eclipse/dot.project" tofile=".project" overwrite="false" encoding="UTF-8">
-      <filterset>
-        <filter token="ECLIPSEPROJECTNAME" value="${eclipseprojectname}"/>
-      </filterset>
-    </copy>
-    <copy overwrite="false" todir="eclipse-build" encoding="UTF-8">
-      <fileset dir="dev-tools/eclipse" includes="*.launch"/>
-      <filterset>
-        <filter token="ECLIPSEPROJECTNAME" value="${eclipseprojectname}"/>
-      </filterset>
-    </copy>
-    <mkdir dir=".settings"/>
-    <copy todir=".settings/" overwrite="true">
-      <fileset dir="dev-tools/eclipse/dot.settings" includes="*.prefs" />
-    </copy>
-
-    <pathconvert property="eclipse.fileset.sourcefolders" pathsep="|" dirsep="/">
-      <dirset dir="${basedir}/lucene" includes="**/src/java, **/src/resources, **/src/test, **/src/test-files, **/src/examples" excludes="tools/**, build/**" />
-      <dirset dir="${basedir}/solr" includes="**/src/java, **/src/resources, **/src/test, **/src/test-files, **/src/examples" excludes="build/**" />
-      <map from="${basedir}/" to=""/>
-    </pathconvert>
-    <!-- TODO: find a better way to exclude duplicate JAR files & fix the servlet-api mess! -->
-    <pathconvert property="eclipse.fileset.libs" pathsep="|" dirsep="/">
-      <fileset dir="${basedir}/lucene" includes="**/lib/*.jar" excludes="**/*servlet-api*.jar, tools/**, build/**"/>
-      <fileset dir="${basedir}/solr" includes="**/test-lib/*.jar,**/lib/*.jar" excludes="core/test-lib/*servlet-api*.jar, contrib/analysis-extras/**, test-framework/lib/junit*, test-framework/lib/ant*, test-framework/lib/randomizedtesting*, build/**, dist/**, package/**" />
-      <map from="${basedir}/" to=""/>
-    </pathconvert>
-    <pathconvert property="eclipse.fileset.webfolders" pathsep="|" dirsep="/">
-      <dirset dir="${basedir}/solr/server/contexts" excludes="**/*" />
-      <dirset dir="${basedir}/solr/server/etc" excludes="**/*" />
-      <dirset dir="${basedir}/solr/server/modules" excludes="**/*" />
-      <dirset dir="${basedir}/solr/server/solr" excludes="**/*" />
-      <dirset dir="${basedir}/solr/webapp/web" excludes="**/*" />
-      <map from="${basedir}/" to=""/>
-    </pathconvert>
-    <xslt in="${ant.file}" out=".classpath" style="dev-tools/eclipse/dot.classpath.xsl" force="true">
-      <outputproperty name="indent" value="yes"/>
-      <param name="eclipse.fileset.libs" expression="${eclipse.fileset.libs}"/>
-      <param name="eclipse.fileset.sourcefolders" expression="${eclipse.fileset.sourcefolders}"/>
-      <param name="eclipse.fileset.webfolders" expression="${eclipse.fileset.webfolders}"/>
-    </xslt>
-
-    <echo>
-      SUCCESS: You must right-click your project and choose Refresh.
-               Your project must use a Java 11 JRE.
-    </echo>
-  </target>
-
-  <target name="clean-eclipse" description="Removes all Eclipse configuration files">
-    <delete dir=".settings" failonerror="true"/>
-    <delete failonerror="true">
-      <fileset dir="." includes=".classpath,.project"/>
-    </delete>
-    <delete dir="eclipse-build" failonerror="true"/>
-  </target>
-
-  <target name="idea" depends="resolve" description="Setup IntelliJ IDEA configuration">
-    <condition property="idea.jdk.is.set">
-      <isset property="idea.jdk"/>
-    </condition>
-    <!-- Define ${idea.jdk} if it's not yet defined - otherwise literal "${idea.jdk}" is substituted -->
-    <property name="idea.jdk" value=""/>
-    <!-- delete those files first, so they are regenerated by the filtering below
-      (add more files with dynamic properties like versions here): -->
-    <delete dir=".idea" includes="misc.xml workspace.xml"/>
-    <!-- Copy files with filtering: -->
-    <copy todir="." overwrite="false" encoding="UTF-8">
-      <fileset dir="dev-tools/idea"/>
-      <filterset begintoken="subst.=&quot;" endtoken="&quot;">
-        <filter token="idea.jdk" value="${idea.jdk}"/>
-      </filterset>
-      <filterset>
-        <filter token="version" value="${version}"/>
-        <filter token="version.base" value="${version.base}"/>
-      </filterset>
-    </copy>
-    <antcall target="-post-idea-instructions"/>
-  </target>
-  
-  <target name="-post-idea-instructions" unless="idea.jdk.is.set">
-    <echo>
-To complete IntelliJ IDEA setup, you must manually configure
-File | Project Structure | Project | Project SDK.
-      
-You won't have to do this in the future if you define property
-$${idea.jdk}, e.g. in ~/lucene.build.properties, ~/build.properties
-or lucene/build.properties, with a value consisting of the
-following two XML attributes/values (adjust values according to
-JDKs you have defined locally - see 
-File | Project Structure | Platform Settings | SDKs):
-
-    idea.jdk = project-jdk-name="11" project-jdk-type="JavaSDK"
-    </echo>
-  </target>
-
-  <target name="clean-idea"
-          description="Removes all IntelliJ IDEA configuration files">
-    <delete dir=".idea" failonerror="true"/>
-    <delete failonerror="true">
-      <fileset dir="." includes="*.iml,*.ipr,*.iws"/>
-      <fileset dir="solr" includes="**/*.iml"/>
-      <fileset dir="lucene" includes="**/*.iml"/>
-    </delete>
-    <delete dir="idea-build" failonerror="true"/>
-  </target>
-
-  <target name="clean" description="Clean Lucene and Solr build dirs">
-    <delete dir="dist" />
-    <delete dir="${tests.heap-dump-dir}" />
-    <subant target="clean" inheritall="false" failonerror="true">
-      <fileset dir="lucene" includes="build.xml" />
-      <fileset dir="solr" includes="build.xml" />
-      <propertyset refid="uptodate.and.compiled.properties"/>
-    </subant>
-  </target>
-
-  <target name="clean-jars" description="Remove all JAR files from lib folders in the checkout">
-    <delete failonerror="true">
-      <fileset dir=".">
-        <include name="**/*.jar"/>
-        <exclude name="*/build/**"/>
-        <exclude name="*/dist/**"/>
-        <exclude name="*/package/**"/>
-        <exclude name="*/example/exampledocs/**"/>
-        <exclude name="gradle/**" />
-      </fileset>
-    </delete>
-  </target>
-
-  <target name="jar-checksums" description="Recompute SHA1 checksums for all JAR files.">
-    <subant target="jar-checksums" inheritall="false" failonerror="true">
-      <fileset dir="lucene" includes="build.xml" />
-      <fileset dir="solr" includes="build.xml" />
-      <propertyset refid="uptodate.and.compiled.properties"/>
-    </subant>
-  </target>
-
-  <target name="-nightly-smoke-java12params" if="smokeTestRelease.java12">
-    <!-- convert path to UNIX style, so windows backslashes don't hurt escaping: -->
-    <pathconvert targetos="unix" property="-smokeTestRelease.java12params">
-      <regexpmapper from="^(.*)$" to="--test-java12 '\1'"/>
-      <path location="${smokeTestRelease.java12}"/>
-    </pathconvert>
-  </target>
-  
-  <target name="nightly-smoke" description="Builds an unsigned release and smoke tests it  (pass '-DsmokeTestRelease.java12=/path/to/jdk-12' to additionally test with Java 12 or later)"
-    depends="clean,resolve-groovy,resolve-markdown,install-maven-tasks,-nightly-smoke-java12params">
-    <fail message="To run nightly smoke, the JDK must be exactly Java 11, was: ${java.specification.version}">
-      <condition>
-        <not><equals arg1="${java.specification.version}" arg2="11"/></not>
-      </condition>
-    </fail>
-    <property name="-smokeTestRelease.java12params" value=""/><!-- (if not yet defined) -->
-    <exec executable="${python3.exe}" failonerror="true" taskname="python3">
-      <arg value="-V"/>
-    </exec>
-    <subant target="prepare-release-no-sign" inheritall="false" failonerror="true">
-      <fileset dir="lucene" includes="build.xml" />
-      <fileset dir="solr" includes="build.xml" />
-      <!-- pass ${version.base} here to emulate a real release, without appendix like "-SNAPSHOT": -->
-      <property name="version" value="${version.base}" />
-      <propertyset refid="uptodate.and.compiled.properties"/>
-   </subant>
-    <mkdir dir="${smokeTestRelease.dir}"/>
-    <copy todir="${smokeTestRelease.dir}/lucene">
-      <fileset dir="lucene/dist"/>
-    </copy>
-    <copy todir="${smokeTestRelease.dir}/solr">
-      <fileset dir="solr/package"/>
-    </copy>
-    <local name="url"/>
-    <makeurl file="${smokeTestRelease.dir}" validate="false" property="url"/>
-    <exec executable="${python3.exe}" failonerror="true" taskname="smoker">
-      <arg value="-u"/>
-      <!-- Tell Python not to write any bytecode cache into the filesystem: -->
-      <arg value="-B"/>
-      <arg file="dev-tools/scripts/smokeTestRelease.py"/>
-      <arg line="${-smokeTestRelease.java12params}"/>
-      <arg value="--revision"/>
-      <arg value="skip"/>
-      <!-- pass ${version.base} here to emulate a real release, without appendix like "-SNAPSHOT": -->
-      <arg value="--version"/>
-      <arg value="${version.base}"/>
-      <arg value="--tmp-dir"/>
-      <arg file="${smokeTestRelease.tmp}"/>
-      <arg value="--not-signed"/>
-      <arg value="${url}"/>
-      <arg value="${smokeTestRelease.testArgs}"/>
-    </exec>
-    <delete dir="${smokeTestRelease.dir}"/>
-    <delete dir="${smokeTestRelease.tmp}"/>
-  </target>
-  
-  <macrodef xmlns:ivy="antlib:org.apache.ivy.ant" name="wc-checker">
-    <attribute name="failonmodifications"/><!-- fail if modifications were found, otherwise it only fails on unversioned files -->
-    <sequential>
-      <local name="wc.unversioned.files"/>
-      <local name="wc.modified.files"/>
-      <ivy:cachepath xmlns:ivy="antlib:org.apache.ivy.ant" transitive="true" resolveId="jgit" pathid="jgit.classpath">
-        <ivy:dependency org="org.eclipse.jgit" name="org.eclipse.jgit" rev="${jgit-version}" conf="default" />
-        <ivy:dependency org="org.slf4j" name="slf4j-nop" rev="1.7.2" conf="default" />
-      </ivy:cachepath>
-      <groovy taskname="wc-checker" classpathref="jgit.classpath" src="${common.dir}/tools/src/groovy/check-working-copy.groovy"/>
-      <fail if="wc.unversioned.files"
-        message="Source checkout is dirty (unversioned/missing files) after running tests!!! Offending files:${line.separator}${wc.unversioned.files}"/>
-      <fail message="Source checkout is modified!!! Offending files:${line.separator}${wc.modified.files}">
-        <condition>
-          <and>
-             <istrue value="@{failonmodifications}"/>
-             <isset property="wc.modified.files"/>
-          </and>
-        </condition>
-      </fail>
-    </sequential>
-  </macrodef>
-  
-  <target name="check-working-copy" description="Checks working copy for unversioned changes" depends="resolve-groovy">
-    <wc-checker failonmodifications="${is.jenkins.build}"/>
-  </target>
-
-  <target name="run-clover" description="Runs all tests to measure coverage and generates report (pass &quot;ANT_OPTS=-Xmx2G&quot; as environment)" depends="clean">
-    <antcall inheritAll="false">
-      <param name="run.clover" value="true"/>
-      <!-- must be 1, as clover does not like parallel test runs: -->
-      <param name="tests.jvms.override" value="1"/>
-      <!-- Also override some other props to be fast: -->
-      <param name="tests.multiplier" value="1"/>
-      <param name="tests.nightly" value="false"/>
-      <param name="tests.weekly" value="false"/>
-      <param name="tests.badapples" value="true"/>
-      <!-- The idea behind Clover is to determine test coverage, so be immune to failing tests: -->
-      <param name="tests.haltonfailure" value="false"/>
-      
-      <target name="clover"/>
-      <target name="test"/>
-      <target name="-generate-clover-reports"/>
-      
-      <propertyset refid="uptodate.and.compiled.properties"/>
-    </antcall>
-  </target>
-
-  <!--
-   Run after Junit tests.
-   
-   This target is in a separate file, as it needs to include common-build.xml,
-   but must run from top-level!
-   -->
-  <target name="-generate-clover-reports" depends="clover">
-    <fail unless="run.clover">Clover not enabled!</fail>
-    <mkdir dir="${clover.report.dir}"/>
-    <fileset dir="." id="clover.test.result.files">
-      <include name="*/build/**/test/TEST-*.xml"/>
-    </fileset>
-    <fileset dir="." id="clover.test.src.files">
-      <include name="**/src/test/**/*.java"/>
-      <!-- test framework source files are all test code: -->
-      <include name="*/test-framework/src/**/*.java"/>
-    </fileset>
-    <clover-report projectName="Apache Lucene/Solr">
-      <current outfile="${clover.report.dir}" title="Apache Lucene/Solr ${version}" numThreads="0">
-        <format type="html" filter="assert"/>
-        <testresults refid="clover.test.result.files"/>
-        <testsources refid="clover.test.src.files"/>
-      </current>
-      <current outfile="${clover.report.dir}/clover.xml" title="Apache Lucene/Solr ${version}">
-        <format type="xml" filter="assert"/>
-        <testresults refid="clover.test.result.files"/>
-        <testsources refid="clover.test.src.files"/>
-      </current>
-    </clover-report>
-    <echo>You can find the merged Lucene/Solr Clover report in '${clover.report.dir}'.</echo>
-  </target>
-
-  <target name="test-with-heapdumps" depends="resolve-groovy,-test-with-heapdumps-enabled,-test-with-heapdumps-disabled" description="Runs tests with heap dumps on OOM enabled (if VM supports this)"/>
-  
-  <condition property="vm.supports.heapdumps">
-    <or>
-      <contains string="${java.vm.name}" substring="hotspot" casesensitive="false"/>
-      <contains string="${java.vm.name}" substring="openjdk" casesensitive="false"/>
-    </or>
-  </condition>
-
-  <target name="-test-with-heapdumps-enabled" if="vm.supports.heapdumps">
-    <echo level="info" message="${java.vm.name}: Enabling heap dumps on OutOfMemoryError to dir '${tests.heap-dump-dir}'."/>
-    <mkdir dir="${tests.heap-dump-dir}"/>
-    <delete includeEmptyDirs="true">
-      <fileset dir="${tests.heap-dump-dir}"  includes="**/*"/>
-    </delete>
-    <antcall inheritAll="false" target="test">
-      <param name="tests.heapdump.args" value="-XX:+HeapDumpOnOutOfMemoryError &quot;-XX:HeapDumpPath=${tests.heap-dump-dir}&quot;"/>
-      <propertyset refid="uptodate.and.compiled.properties"/>
-    </antcall>
-    <pathconvert property="heapdumps.list" setonempty="false" pathsep="${line.separator}">
-      <fileset dir="${tests.heap-dump-dir}"/>
-      <map from="${tests.heap-dump-dir}${file.separator}" to="* "/>
-    </pathconvert>
-    <fail if="heapdumps.list" message="Some of the tests produced a heap dump, but did not fail. Maybe a suppressed OutOfMemoryError? Dumps created:${line.separator}${heapdumps.list}"/>
-    <delete dir="${tests.heap-dump-dir}"/>
-  </target>
-
-  <target name="-test-with-heapdumps-disabled" unless="vm.supports.heapdumps">
-    <echo level="warning" message="WARN: The used JVM (${java.vm.name}) does not support HPROF heap dumps on OutOfMemoryError."/>
-    <antcall target="test">
-      <propertyset refid="uptodate.and.compiled.properties"/>
-    </antcall>
-  </target>
-
-  <target name="regenerate" description="Runs all code regenerators">
-    <subant target="regenerate" inheritall="false" failonerror="true">
-      <fileset dir="lucene" includes="build.xml" />
-      <!-- todo:
-      <fileset dir="solr" includes="build.xml" />-->
-      <propertyset refid="uptodate.and.compiled.properties"/>
-    </subant>
-  </target>
-
-  <!-- should only be called by jenkins, not precommit! -->
-  <target name="-check-after-regeneration" depends="resolve-groovy">
-    <wc-checker failonmodifications="true"/>
-  </target>
-
-  <!-- TODO: remove me when jenkins works -->
-  <target name="regenerateAndCheck" depends="regenerate,-check-after-regeneration"/>
-  
-  <target name="-append-all-modules-dependencies-properties">
-    <delete file="lucene/build/module.dependencies.properties"/>
-    <subant target="-append-module-dependencies-properties" inheritall="false" failonerror="true">
-      <fileset dir="lucene" includes="build.xml" />
-      <fileset dir="solr" includes="build.xml" />
-      <propertyset refid="uptodate.and.compiled.properties"/>
-    </subant>
-  </target>
-  
-  <!-- Jenkins tasks -->
-  <target name="-jenkins-base" depends="-print-java-info,clean,test-with-heapdumps,validate,documentation-lint,jar-checksums,check-working-copy"/>
-  
-  <target name="-print-java-info">
-    <echo level="info" taskname="java-info">java version &quot;${java.version}&quot;
-${java.runtime.name} (${java.runtime.version}, ${java.vendor})
-${java.vm.name} (${java.vm.version}, ${java.vm.vendor})
-Test args: [${args}]</echo>
-  </target>
-  
-  <target name="jenkins-hourly">
-    <antcall>
-      <param name="is.jenkins.build" value="true"/>
-      <param name="tests.haltonfailure" value="false"/>
-      <param name="tests.badapples" value="false"/>
-      <target name="-jenkins-base"/>
-    </antcall>
-  </target>
-  
-  <target name="jenkins-badapples">
-    <antcall>
-      <param name="is.jenkins.build" value="true"/>
-      <param name="tests.haltonfailure" value="false"/>
-      <param name="tests.badapples" value="true"/>
-      <target name="-jenkins-base"/>
-    </antcall>
-  </target>
-  
-  <target name="jenkins-nightly">
-    <antcall>
-      <param name="is.jenkins.build" value="true"/>
-      <param name="tests.haltonfailure" value="false"/>
-      <param name="tests.nightly" value="true"/>
-      <param name="tests.badapples" value="false"/>
-      <target name="-jenkins-base"/>
-    </antcall>
-  </target>
-
-  <target name="jenkins-nightly-badapples">
-    <antcall>
-      <param name="is.jenkins.build" value="true"/>
-      <param name="tests.haltonfailure" value="false"/>
-      <param name="tests.nightly" value="true"/>
-      <param name="tests.badapples" value="true"/>
-      <target name="-jenkins-base"/>
-    </antcall>
-  </target>
-
-  <target name="jenkins-maven-nightly" depends="-print-java-info,clean,clean-maven-build,resolve-groovy,resolve-markdown,install-maven-tasks">
-    <!-- step 1: build, install, validate and publish ANT-generated maven artifacts: -->
-    <antcall>
-      <param name="is.jenkins.build" value="true"/>
-      <target name="remove-maven-artifacts"/>
-      <target name="validate-maven-dependencies"/>
-      <target name="generate-maven-artifacts"/>
-    </antcall>
-    <!-- step 2: run the maven build to check that the pom templates also work to drive "mvn": -->
-    <antcall>
-      <param name="is.jenkins.build" value="true"/>
-      <target name="remove-maven-artifacts"/>
-      <target name="run-maven-build"/>
-    </antcall>
-  </target>
-  
-  <target name="jenkins-clover" depends="-print-java-info">
-    <antcall>
-      <param name="is.jenkins.build" value="true"/>
-      <target name="run-clover"/>
-    </antcall>
-  </target>
-
-  <!-- useless targets (override common-build.xml): -->
-  <target name="generate-test-reports"/>
-</project>
diff --git a/dev-tools/README.txt b/dev-tools/README.txt
index b0b7e4a..9d64fcf 100644
--- a/dev-tools/README.txt
+++ b/dev-tools/README.txt
@@ -6,11 +6,7 @@
 Description of dev-tools/ contents:
 
 ./size-estimator-lucene-solr.xls -- Spreadsheet for estimating memory and disk usage in Lucene/Solr
+./missing-doclet -- JavaDoc validation doclet subproject
 ./doap/       -- Lucene and Solr project descriptors in DOAP RDF format.
-./eclipse/    -- Used to generate project descriptors for the Eclipse IDE.
-./git/        -- Git documentation and resources.
-./idea/       -- Used to generate project descriptors for IntelliJ's IDEA IDE.
-./maven/      -- Mavenizes the Lucene/Solr packages
-./netbeans/   -- Used to generate project descriptors for the Netbeans IDE.
 ./scripts/    -- Odds and ends for building releases, etc.
 ./test-patch/ -- Scripts for automatically validating patches
diff --git a/dev-tools/doap/lucene.rdf b/dev-tools/doap/lucene.rdf
index ebfb1d3..320e0dd 100644
--- a/dev-tools/doap/lucene.rdf
+++ b/dev-tools/doap/lucene.rdf
@@ -67,6 +67,20 @@
     </maintainer>
 
     <!-- NOTE: please insert releases in numeric order, NOT chronologically. -->
+    <release>
+       <Version>
+         <name>lucene-8.6.2</name>
+         <created>2020-09-01</created>
+         <revision>8.6.2</revision>
+       </Version>
+   </release>
+   <release>
+       <Version>
+         <name>lucene-8.6.1</name>
+         <created>2020-08-13</created>
+         <revision>8.6.1</revision>
+       </Version>
+   </release>
    <release>
        <Version>
          <name>lucene-8.6.0</name>
diff --git a/dev-tools/doap/solr.rdf b/dev-tools/doap/solr.rdf
index b201865..d63f677 100644
--- a/dev-tools/doap/solr.rdf
+++ b/dev-tools/doap/solr.rdf
@@ -69,6 +69,20 @@
     <!-- NOTE: please insert releases in numeric order, NOT chronologically. -->
     <release>
          <Version>
+           <name>solr-8.6.2</name>
+           <created>2020-09-01</created>
+           <revision>8.6.2</revision>
+         </Version>
+    </release>
+    <release>
+         <Version>
+           <name>solr-8.6.1</name>
+           <created>2020-08-13</created>
+           <revision>8.6.1</revision>
+         </Version>
+    </release>
+    <release>
+         <Version>
            <name>solr-8.6.0</name>
            <created>2020-07-15</created>
            <revision>8.6.0</revision>
diff --git a/dev-tools/idea/.idea/ant.xml b/dev-tools/idea/.idea/ant.xml
deleted file mode 100644
index c270538..0000000
--- a/dev-tools/idea/.idea/ant.xml
+++ /dev/null
@@ -1,57 +0,0 @@
-<?xml version="1.0" encoding="UTF-8"?>
-<project version="4">
-  <component name="AntConfiguration">
-    <buildFile url="file://$PROJECT_DIR$/build.xml" />
-
-    <buildFile url="file://$PROJECT_DIR$/lucene/build.xml" />
-
-    <buildFile url="file://$PROJECT_DIR$/lucene/core/build.xml" />
-
-    <buildFile url="file://$PROJECT_DIR$/lucene/analysis/common/build.xml" />
-    <buildFile url="file://$PROJECT_DIR$/lucene/analysis/icu/build.xml" />
-    <buildFile url="file://$PROJECT_DIR$/lucene/analysis/kuromoji/build.xml" />
-    <buildFile url="file://$PROJECT_DIR$/lucene/analysis/morfologik/build.xml" />
-    <buildFile url="file://$PROJECT_DIR$/lucene/analysis/opennlp/build.xml" />
-    <buildFile url="file://$PROJECT_DIR$/lucene/analysis/phonetic/build.xml" />
-    <buildFile url="file://$PROJECT_DIR$/lucene/analysis/smartcn/build.xml" />
-    <buildFile url="file://$PROJECT_DIR$/lucene/analysis/stempel/build.xml" />
-    <buildFile url="file://$PROJECT_DIR$/lucene/benchmark/build.xml" />
-    <buildFile url="file://$PROJECT_DIR$/lucene/classification/build.xml" />
-    <buildFile url="file://$PROJECT_DIR$/lucene/codecs/build.xml" />
-    <buildFile url="file://$PROJECT_DIR$/lucene/demo/build.xml" />
-    <buildFile url="file://$PROJECT_DIR$/lucene/expressions/build.xml" />
-    <buildFile url="file://$PROJECT_DIR$/lucene/facet/build.xml" />
-    <buildFile url="file://$PROJECT_DIR$/lucene/grouping/build.xml" />
-    <buildFile url="file://$PROJECT_DIR$/lucene/highlighter/build.xml" />
-    <buildFile url="file://$PROJECT_DIR$/lucene/join/build.xml" />
-    <buildFile url="file://$PROJECT_DIR$/lucene/luke/build.xml" />
-    <buildFile url="file://$PROJECT_DIR$/lucene/luwak/build.xml" />
-    <buildFile url="file://$PROJECT_DIR$/lucene/memory/build.xml" />
-    <buildFile url="file://$PROJECT_DIR$/lucene/misc/build.xml" />
-    <buildFile url="file://$PROJECT_DIR$/lucene/queries/build.xml" />
-    <buildFile url="file://$PROJECT_DIR$/lucene/queryparser/build.xml" />
-    <buildFile url="file://$PROJECT_DIR$/lucene/replicator/build.xml" />
-    <buildFile url="file://$PROJECT_DIR$/lucene/sandbox/build.xml" />
-    <buildFile url="file://$PROJECT_DIR$/lucene/spatial/build.xml" />
-    <buildFile url="file://$PROJECT_DIR$/lucene/spatial-extras/build.xml" />
-    <buildFile url="file://$PROJECT_DIR$/lucene/suggest/build.xml" />
-    <buildFile url="file://$PROJECT_DIR$/lucene/test-framework/build.xml" />
-    <buildFile url="file://$PROJECT_DIR$/lucene/tools/build.xml" />
-
-    <buildFile url="file://$PROJECT_DIR$/solr/build.xml" />
-
-    <buildFile url="file://$PROJECT_DIR$/solr/core/build.xml" />
-
-    <buildFile url="file://$PROJECT_DIR$/solr/contrib/analysis-extras/build.xml" />
-    <buildFile url="file://$PROJECT_DIR$/solr/contrib/clustering/build.xml" />
-    <buildFile url="file://$PROJECT_DIR$/solr/contrib/dataimporthandler-extras/build.xml" />
-    <buildFile url="file://$PROJECT_DIR$/solr/contrib/dataimporthandler/build.xml" />
-    <buildFile url="file://$PROJECT_DIR$/solr/contrib/extraction/build.xml" />
-    <buildFile url="file://$PROJECT_DIR$/solr/contrib/langid/build.xml" />
-    <buildFile url="file://$PROJECT_DIR$/solr/contrib/prometheus-exporter/build.xml" />
-    <buildFile url="file://$PROJECT_DIR$/solr/contrib/velocity/build.xml" />
-    <buildFile url="file://$PROJECT_DIR$/solr/solrj/build.xml" />
-    <buildFile url="file://$PROJECT_DIR$/solr/test-framework/build.xml" />
-    <buildFile url="file://$PROJECT_DIR$/solr/webapp/build.xml" />
-  </component>
-</project>
diff --git a/dev-tools/idea/.idea/codeStyleSettings.xml b/dev-tools/idea/.idea/codeStyleSettings.xml
deleted file mode 100644
index 976fbcd..0000000
--- a/dev-tools/idea/.idea/codeStyleSettings.xml
+++ /dev/null
@@ -1,102 +0,0 @@
-<?xml version="1.0" encoding="UTF-8"?>
-<project version="4">
-  <component name="ProjectCodeStyleSettingsManager">
-    <option name="PER_PROJECT_SETTINGS">
-      <value>
-        <option name="USE_SAME_INDENTS" value="true" />
-        <option name="IGNORE_SAME_INDENTS_FOR_LANGUAGES" value="true" />
-        <option name="OTHER_INDENT_OPTIONS">
-          <value>
-            <option name="INDENT_SIZE" value="2" />
-            <option name="CONTINUATION_INDENT_SIZE" value="4" />
-            <option name="TAB_SIZE" value="2" />
-            <option name="USE_TAB_CHARACTER" value="false" />
-            <option name="SMART_TABS" value="false" />
-            <option name="LABEL_INDENT_SIZE" value="0" />
-            <option name="LABEL_INDENT_ABSOLUTE" value="false" />
-            <option name="USE_RELATIVE_INDENTS" value="false" />
-          </value>
-        </option>
-        <option name="CLASS_COUNT_TO_USE_IMPORT_ON_DEMAND" value="20" />
-        <option name="NAMES_COUNT_TO_USE_IMPORT_ON_DEMAND" value="20" />
-        <option name="PACKAGES_TO_USE_IMPORT_ON_DEMAND">
-          <value />
-        </option>
-        <option name="IMPORT_LAYOUT_TABLE">
-          <value>
-            <package name="javax" withSubpackages="true" static="false" />
-            <package name="java" withSubpackages="true" static="false" />
-            <emptyLine />
-            <package name="" withSubpackages="true" static="false" />
-            <emptyLine />
-            <package name="" withSubpackages="true" static="true" />
-          </value>
-        </option>
-        <XML>
-          <option name="XML_LEGACY_SETTINGS_IMPORTED" value="true" />
-        </XML>
-        <codeStyleSettings language="CSS">
-          <indentOptions>
-            <option name="INDENT_SIZE" value="2" />
-            <option name="CONTINUATION_INDENT_SIZE" value="4" />
-            <option name="TAB_SIZE" value="2" />
-          </indentOptions>
-        </codeStyleSettings>
-        <codeStyleSettings language="Groovy">
-          <indentOptions>
-            <option name="INDENT_SIZE" value="2" />
-            <option name="CONTINUATION_INDENT_SIZE" value="4" />
-            <option name="TAB_SIZE" value="2" />
-          </indentOptions>
-        </codeStyleSettings>
-        <codeStyleSettings language="HTML">
-          <indentOptions>
-            <option name="INDENT_SIZE" value="2" />
-            <option name="CONTINUATION_INDENT_SIZE" value="4" />
-            <option name="TAB_SIZE" value="2" />
-          </indentOptions>
-        </codeStyleSettings>
-        <codeStyleSettings language="JAVA">
-          <indentOptions>
-            <option name="INDENT_SIZE" value="2" />
-            <option name="CONTINUATION_INDENT_SIZE" value="4" />
-            <option name="TAB_SIZE" value="2" />
-          </indentOptions>
-        </codeStyleSettings>
-        <codeStyleSettings language="JSON">
-          <indentOptions>
-            <option name="CONTINUATION_INDENT_SIZE" value="4" />
-            <option name="TAB_SIZE" value="2" />
-          </indentOptions>
-        </codeStyleSettings>
-        <codeStyleSettings language="JavaScript">
-          <indentOptions>
-            <option name="INDENT_SIZE" value="2" />
-            <option name="TAB_SIZE" value="2" />
-          </indentOptions>
-        </codeStyleSettings>
-        <codeStyleSettings language="Python">
-          <indentOptions>
-            <option name="INDENT_SIZE" value="2" />
-            <option name="CONTINUATION_INDENT_SIZE" value="4" />
-            <option name="TAB_SIZE" value="2" />
-          </indentOptions>
-        </codeStyleSettings>
-        <codeStyleSettings language="TypeScript">
-          <indentOptions>
-            <option name="INDENT_SIZE" value="2" />
-            <option name="TAB_SIZE" value="2" />
-          </indentOptions>
-        </codeStyleSettings>
-        <codeStyleSettings language="XML">
-          <indentOptions>
-            <option name="INDENT_SIZE" value="2" />
-            <option name="CONTINUATION_INDENT_SIZE" value="4" />
-            <option name="TAB_SIZE" value="2" />
-          </indentOptions>
-        </codeStyleSettings>
-      </value>
-    </option>
-    <option name="USE_PER_PROJECT_SETTINGS" value="true" />
-  </component>
-</project>
\ No newline at end of file
diff --git a/dev-tools/idea/.idea/compiler.xml b/dev-tools/idea/.idea/compiler.xml
deleted file mode 100644
index 006fea1..0000000
--- a/dev-tools/idea/.idea/compiler.xml
+++ /dev/null
@@ -1,13 +0,0 @@
-<?xml version="1.0" encoding="UTF-8"?>
-<project version="4">
-  <component name="CompilerConfiguration">
-    <wildcardResourcePatterns>
-      <entry name="!*.(cpp|java|jflex|jflex-macro|jj|js|pl|py)"/>
-      <entry name="test-files:*"/>
-    </wildcardResourcePatterns>
-  </component>
-  <component name="JavacSettings">
-    <option name="ADDITIONAL_OPTIONS_STRING" value="-encoding utf-8" />
-  </component>
-</project>
-
diff --git a/dev-tools/idea/.idea/copyright/Apache_Software_Foundation.xml b/dev-tools/idea/.idea/copyright/Apache_Software_Foundation.xml
deleted file mode 100644
index b745847..0000000
--- a/dev-tools/idea/.idea/copyright/Apache_Software_Foundation.xml
+++ /dev/null
@@ -1,9 +0,0 @@
-<component name="CopyrightManager">
-  <copyright>
-    <option name="notice" value="Licensed to the Apache Software Foundation (ASF) under one or more&#10;contributor license agreements.  See the NOTICE file distributed with&#10;this work for additional information regarding copyright ownership.&#10;The ASF licenses this file to You under the Apache License, Version 2.0&#10;(the &quot;License&quot;); you may not use this file except in compliance with&#10;the License.  You may obtain a copy of the License at&#10;&#10;    http://www.apache.org/licenses/LICENSE-2.0&#10;&#10;Unless required by applicable law or agreed to in writing, software&#10;distributed under the License is distributed on an &quot;AS IS&quot; BASIS,&#10;WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.&#10;See the License for the specific language governing permissions and&#10;limitations under the License." />
-    <option name="keyword" value="http://www.apache.org/licenses/LICENSE-2.0" />
-    <option name="allowReplaceKeyword" value="Copyright 20" />
-    <option name="myName" value="Apache Software Foundation" />
-    <option name="myLocal" value="true" />
-  </copyright>
-</component>
\ No newline at end of file
diff --git a/dev-tools/idea/.idea/copyright/profiles_settings.xml b/dev-tools/idea/.idea/copyright/profiles_settings.xml
deleted file mode 100644
index a09f629..0000000
--- a/dev-tools/idea/.idea/copyright/profiles_settings.xml
+++ /dev/null
@@ -1,26 +0,0 @@
-<component name="CopyrightManager">
-  <settings default="Apache Software Foundation">
-    <module2copyright>
-      <element module="All" copyright="Apache Software Foundation" />
-    </module2copyright>
-    <LanguageOptions name="HTML">
-      <option name="fileTypeOverride" value="3" />
-      <option name="prefixLines" value="false" />
-    </LanguageOptions>
-    <LanguageOptions name="JAVA">
-      <option name="fileTypeOverride" value="3" />
-    </LanguageOptions>
-    <LanguageOptions name="JSP">
-      <option name="fileTypeOverride" value="3" />
-      <option name="prefixLines" value="false" />
-    </LanguageOptions>
-    <LanguageOptions name="JSPX">
-      <option name="fileTypeOverride" value="3" />
-      <option name="prefixLines" value="false" />
-    </LanguageOptions>
-    <LanguageOptions name="XML">
-      <option name="fileTypeOverride" value="3" />
-      <option name="prefixLines" value="false" />
-    </LanguageOptions>
-  </settings>
-</component>
\ No newline at end of file
diff --git a/dev-tools/idea/.idea/libraries/Derby.xml b/dev-tools/idea/.idea/libraries/Derby.xml
deleted file mode 100644
index a23a28e..0000000
--- a/dev-tools/idea/.idea/libraries/Derby.xml
+++ /dev/null
@@ -1,9 +0,0 @@
-<component name="libraryTable">
-  <library name="Derby">
-    <CLASSES>
-      <root url="jar://$PROJECT_DIR$/solr/example/example-DIH/solr/db/lib/derby-10.9.1.0.jar!/" />
-    </CLASSES>
-    <JAVADOC />
-    <SOURCES />
-  </library>
-</component>
\ No newline at end of file
diff --git a/dev-tools/idea/.idea/libraries/HSQLDB.xml b/dev-tools/idea/.idea/libraries/HSQLDB.xml
deleted file mode 100644
index 39efcbf..0000000
--- a/dev-tools/idea/.idea/libraries/HSQLDB.xml
+++ /dev/null
@@ -1,9 +0,0 @@
-<component name="libraryTable">
-  <library name="HSQLDB">
-    <CLASSES>
-      <root url="jar://$PROJECT_DIR$/solr/example/example-DIH/solr/db/lib/hsqldb-2.4.0.jar!/" />
-    </CLASSES>
-    <JAVADOC />
-    <SOURCES />
-  </library>
-</component>
\ No newline at end of file
diff --git a/dev-tools/idea/.idea/libraries/ICU_library.xml b/dev-tools/idea/.idea/libraries/ICU_library.xml
deleted file mode 100644
index 477a039..0000000
--- a/dev-tools/idea/.idea/libraries/ICU_library.xml
+++ /dev/null
@@ -1,10 +0,0 @@
-<component name="libraryTable">
-  <library name="ICU library">
-    <CLASSES>
-      <root url="file://$PROJECT_DIR$/lucene/analysis/icu/lib" />
-    </CLASSES>
-    <JAVADOC />
-    <SOURCES />
-    <jarDirectory url="file://$PROJECT_DIR$/lucene/analysis/icu/lib" recursive="false" />
-  </library>
-</component>
\ No newline at end of file
diff --git a/dev-tools/idea/.idea/libraries/Ivy.xml b/dev-tools/idea/.idea/libraries/Ivy.xml
deleted file mode 100644
index 798dc0f..0000000
--- a/dev-tools/idea/.idea/libraries/Ivy.xml
+++ /dev/null
@@ -1,9 +0,0 @@
-<component name="libraryTable">
-  <library name="Ivy">
-    <CLASSES>
-      <root url="jar://$PROJECT_DIR$/lucene/tools/lib/ivy-2.3.0.jar!/" />
-    </CLASSES>
-    <JAVADOC />
-    <SOURCES />
-  </library>
-</component>
diff --git a/dev-tools/idea/.idea/libraries/JUnit.xml b/dev-tools/idea/.idea/libraries/JUnit.xml
deleted file mode 100644
index bb159b0..0000000
--- a/dev-tools/idea/.idea/libraries/JUnit.xml
+++ /dev/null
@@ -1,10 +0,0 @@
-<component name="libraryTable">
-  <library name="JUnit">
-    <CLASSES>
-      <root url="file://$PROJECT_DIR$/lucene/test-framework/lib" />
-    </CLASSES>
-    <JAVADOC />
-    <SOURCES />
-    <jarDirectory url="file://$PROJECT_DIR$/lucene/test-framework/lib" recursive="false" />
-  </library>
-</component>
\ No newline at end of file
diff --git a/dev-tools/idea/.idea/libraries/Morfologik_library.xml b/dev-tools/idea/.idea/libraries/Morfologik_library.xml
deleted file mode 100644
index 72c1c39..0000000
--- a/dev-tools/idea/.idea/libraries/Morfologik_library.xml
+++ /dev/null
@@ -1,10 +0,0 @@
-<component name="libraryTable">
-  <library name="Morfologik library">
-    <CLASSES>
-      <root url="file://$PROJECT_DIR$/lucene/analysis/morfologik/lib" />
-    </CLASSES>
-    <JAVADOC />
-    <SOURCES />
-    <jarDirectory url="file://$PROJECT_DIR$/lucene/analysis/morfologik/lib" recursive="false" />
-  </library>
-</component>
\ No newline at end of file
diff --git a/dev-tools/idea/.idea/libraries/Solr_DIH_core_library.xml b/dev-tools/idea/.idea/libraries/Solr_DIH_core_library.xml
deleted file mode 100644
index d363b92..0000000
--- a/dev-tools/idea/.idea/libraries/Solr_DIH_core_library.xml
+++ /dev/null
@@ -1,10 +0,0 @@
-<component name="libraryTable">
-  <library name="Solr DIH core library">
-    <CLASSES>
-      <root url="file://$PROJECT_DIR$/solr/contrib/dataimporthandler/lib" />
-    </CLASSES>
-    <JAVADOC />
-    <SOURCES />
-    <jarDirectory url="file://$PROJECT_DIR$/solr/contrib/dataimporthandler/lib" recursive="false" />
-  </library>
-</component>
diff --git a/dev-tools/idea/.idea/libraries/Solr_DIH_extras_library.xml b/dev-tools/idea/.idea/libraries/Solr_DIH_extras_library.xml
deleted file mode 100644
index 1bfc63b..0000000
--- a/dev-tools/idea/.idea/libraries/Solr_DIH_extras_library.xml
+++ /dev/null
@@ -1,10 +0,0 @@
-<component name="libraryTable">
-  <library name="Solr DIH extras library">
-    <CLASSES>
-      <root url="file://$PROJECT_DIR$/solr/contrib/dataimporthandler-extras/lib" />
-    </CLASSES>
-    <JAVADOC />
-    <SOURCES />
-    <jarDirectory url="file://$PROJECT_DIR$/solr/contrib/dataimporthandler-extras/lib" recursive="false" />
-  </library>
-</component>
\ No newline at end of file
diff --git a/dev-tools/idea/.idea/libraries/Solr_DIH_test_library.xml b/dev-tools/idea/.idea/libraries/Solr_DIH_test_library.xml
deleted file mode 100644
index 304589c..0000000
--- a/dev-tools/idea/.idea/libraries/Solr_DIH_test_library.xml
+++ /dev/null
@@ -1,10 +0,0 @@
-<component name="libraryTable">
-  <library name="Solr DIH test library">
-    <CLASSES>
-      <root url="file://$PROJECT_DIR$/solr/contrib/dataimporthandler/test-lib" />
-    </CLASSES>
-    <JAVADOC />
-    <SOURCES />
-    <jarDirectory url="file://$PROJECT_DIR$/solr/contrib/dataimporthandler/test-lib" recursive="false" />
-  </library>
-</component>
\ No newline at end of file
diff --git a/dev-tools/idea/.idea/libraries/Solr_core_library.xml b/dev-tools/idea/.idea/libraries/Solr_core_library.xml
deleted file mode 100644
index bf6c10a..0000000
--- a/dev-tools/idea/.idea/libraries/Solr_core_library.xml
+++ /dev/null
@@ -1,10 +0,0 @@
-<component name="libraryTable">
-  <library name="Solr core library">
-    <CLASSES>
-      <root url="file://$PROJECT_DIR$/solr/core/lib" />
-    </CLASSES>
-    <JAVADOC />
-    <SOURCES />
-    <jarDirectory url="file://$PROJECT_DIR$/solr/core/lib" recursive="false" />
-  </library>
-</component>
\ No newline at end of file
diff --git a/dev-tools/idea/.idea/libraries/Solr_core_test_library.xml b/dev-tools/idea/.idea/libraries/Solr_core_test_library.xml
deleted file mode 100644
index 60937e9..0000000
--- a/dev-tools/idea/.idea/libraries/Solr_core_test_library.xml
+++ /dev/null
@@ -1,10 +0,0 @@
-<component name="libraryTable">
-  <library name="Solr core test library">
-    <CLASSES>
-      <root url="file://$PROJECT_DIR$/solr/core/test-lib" />
-    </CLASSES>
-    <JAVADOC />
-    <SOURCES />
-    <jarDirectory url="file://$PROJECT_DIR$/solr/core/test-lib" recursive="false" />
-  </library>
-</component>
\ No newline at end of file
diff --git a/dev-tools/idea/.idea/libraries/Solr_example_library.xml b/dev-tools/idea/.idea/libraries/Solr_example_library.xml
deleted file mode 100644
index 4003bf7..0000000
--- a/dev-tools/idea/.idea/libraries/Solr_example_library.xml
+++ /dev/null
@@ -1,10 +0,0 @@
-<component name="libraryTable">
-  <library name="Solr example library">
-    <CLASSES>
-      <root url="file://$PROJECT_DIR$/solr/server/lib" />
-    </CLASSES>
-    <JAVADOC />
-    <SOURCES />
-    <jarDirectory url="file://$PROJECT_DIR$/solr/server/lib" recursive="true" />
-  </library>
-</component>
\ No newline at end of file
diff --git a/dev-tools/idea/.idea/libraries/Solr_extraction_library.xml b/dev-tools/idea/.idea/libraries/Solr_extraction_library.xml
deleted file mode 100644
index 30959d6..0000000
--- a/dev-tools/idea/.idea/libraries/Solr_extraction_library.xml
+++ /dev/null
@@ -1,10 +0,0 @@
-<component name="libraryTable">
-  <library name="Solr extraction library">
-    <CLASSES>
-      <root url="file://$PROJECT_DIR$/solr/contrib/extraction/lib" />
-    </CLASSES>
-    <JAVADOC />
-    <SOURCES />
-    <jarDirectory url="file://$PROJECT_DIR$/solr/contrib/extraction/lib" recursive="false" />
-  </library>
-</component>
\ No newline at end of file
diff --git a/dev-tools/idea/.idea/libraries/Solr_jaeger_tracer_configurator_library.xml b/dev-tools/idea/.idea/libraries/Solr_jaeger_tracer_configurator_library.xml
deleted file mode 100644
index f365d75..0000000
--- a/dev-tools/idea/.idea/libraries/Solr_jaeger_tracer_configurator_library.xml
+++ /dev/null
@@ -1,10 +0,0 @@
-<component name="libraryTable">
-  <library name="Solr jaeger tracer configurator library">
-    <CLASSES>
-      <root url="file://$PROJECT_DIR$/solr/contrib/jaegertracer-configurator/lib" />
-    </CLASSES>
-    <JAVADOC />
-    <SOURCES />
-    <jarDirectory url="file://$PROJECT_DIR$/solr/contrib/jaegertracer-configurator/lib" recursive="false" />
-  </library>
-</component>
\ No newline at end of file
diff --git a/dev-tools/idea/.idea/libraries/Solr_prometheus_exporter_library.xml b/dev-tools/idea/.idea/libraries/Solr_prometheus_exporter_library.xml
deleted file mode 100644
index 0fd8670..0000000
--- a/dev-tools/idea/.idea/libraries/Solr_prometheus_exporter_library.xml
+++ /dev/null
@@ -1,10 +0,0 @@
-<component name="libraryTable">
-  <library name="Solr prometheus exporter library">
-    <CLASSES>
-      <root url="file://$PROJECT_DIR$/solr/contrib/prometheus-exporter/lib" />
-    </CLASSES>
-    <JAVADOC />
-    <SOURCES />
-    <jarDirectory url="file://$PROJECT_DIR$/solr/contrib/prometheus-exporter/lib" recursive="false" />
-  </library>
-</component>
\ No newline at end of file
diff --git a/dev-tools/idea/.idea/libraries/Solr_test_framework_library.xml b/dev-tools/idea/.idea/libraries/Solr_test_framework_library.xml
deleted file mode 100644
index bcd4c44..0000000
--- a/dev-tools/idea/.idea/libraries/Solr_test_framework_library.xml
+++ /dev/null
@@ -1,10 +0,0 @@
-<component name="libraryTable">
-  <library name="Solr test framework library">
-    <CLASSES>
-      <root url="file://$PROJECT_DIR$/solr/test-framework/lib" />
-    </CLASSES>
-    <JAVADOC />
-    <SOURCES />
-    <jarDirectory url="file://$PROJECT_DIR$/solr/test-framework/lib" recursive="false" />
-  </library>
-</component>
\ No newline at end of file
diff --git a/dev-tools/idea/.idea/libraries/Solr_velocity_library.xml b/dev-tools/idea/.idea/libraries/Solr_velocity_library.xml
deleted file mode 100644
index 2d51cfe..0000000
--- a/dev-tools/idea/.idea/libraries/Solr_velocity_library.xml
+++ /dev/null
@@ -1,10 +0,0 @@
-<component name="libraryTable">
-  <library name="Solr velocity library">
-    <CLASSES>
-      <root url="file://$PROJECT_DIR$/solr/contrib/velocity/lib" />
-    </CLASSES>
-    <JAVADOC />
-    <SOURCES />
-    <jarDirectory url="file://$PROJECT_DIR$/solr/contrib/velocity/lib" recursive="false" />
-  </library>
-</component>
\ No newline at end of file
diff --git a/dev-tools/idea/.idea/libraries/Solrj_library.xml b/dev-tools/idea/.idea/libraries/Solrj_library.xml
deleted file mode 100644
index c067eae..0000000
--- a/dev-tools/idea/.idea/libraries/Solrj_library.xml
+++ /dev/null
@@ -1,10 +0,0 @@
-<component name="libraryTable">
-  <library name="Solrj library">
-    <CLASSES>
-      <root url="file://$PROJECT_DIR$/solr/solrj/lib" />
-    </CLASSES>
-    <JAVADOC />
-    <SOURCES />
-    <jarDirectory url="file://$PROJECT_DIR$/solr/solrj/lib" recursive="true" />
-  </library>
-</component>
\ No newline at end of file
diff --git a/dev-tools/idea/.idea/misc.xml b/dev-tools/idea/.idea/misc.xml
deleted file mode 100755
index 8414ae3..0000000
--- a/dev-tools/idea/.idea/misc.xml
+++ /dev/null
@@ -1,5 +0,0 @@
-<?xml version="1.0" encoding="UTF-8"?>
-<project version="4">
-  <component name="ProjectRootManager" version="2" languageLevel="JDK_11" subst.="idea.jdk" />
-</project>
-
diff --git a/dev-tools/idea/.idea/modules.xml b/dev-tools/idea/.idea/modules.xml
deleted file mode 100644
index e87ff94..0000000
--- a/dev-tools/idea/.idea/modules.xml
+++ /dev/null
@@ -1,67 +0,0 @@
-<?xml version="1.0" encoding="UTF-8"?>
-<project version="4">
-  <component name="ProjectModuleManager">
-    <modules>
-      <module filepath="$PROJECT_DIR$/parent.iml" />
-
-      <module group="Lucene/Core" filepath="$PROJECT_DIR$/lucene/core/src/lucene-core.iml" />
-      <module group="Lucene/Core" filepath="$PROJECT_DIR$/lucene/core/src/test/lucene-core-tests.iml" />
-      <module group="Lucene/Core" filepath="$PROJECT_DIR$/lucene/backward-codecs/backward-codecs.iml" />
-      <module group="Lucene/Core" filepath="$PROJECT_DIR$/lucene/codecs/src/codecs.iml" />
-      <module group="Lucene/Core" filepath="$PROJECT_DIR$/lucene/codecs/src/test/codecs-tests.iml" />
-      <module group="Lucene/Core" filepath="$PROJECT_DIR$/lucene/test-framework/lucene-test-framework.iml" />
-
-      <module group="Lucene/Analysis" filepath="$PROJECT_DIR$/lucene/analysis/common/analysis-common.iml" />
-      <module group="Lucene/Analysis" filepath="$PROJECT_DIR$/lucene/analysis/icu/icu.iml" />
-      <module group="Lucene/Analysis" filepath="$PROJECT_DIR$/lucene/analysis/nori/nori.iml" />
-      <module group="Lucene/Analysis" filepath="$PROJECT_DIR$/lucene/analysis/kuromoji/kuromoji.iml" />
-      <module group="Lucene/Analysis" filepath="$PROJECT_DIR$/lucene/analysis/morfologik/morfologik.iml" />
-      <module group="Lucene/Analysis" filepath="$PROJECT_DIR$/lucene/analysis/opennlp/opennlp.iml" />
-      <module group="Lucene/Analysis" filepath="$PROJECT_DIR$/lucene/analysis/phonetic/phonetic.iml" />
-      <module group="Lucene/Analysis" filepath="$PROJECT_DIR$/lucene/analysis/smartcn/smartcn.iml" />
-      <module group="Lucene/Analysis" filepath="$PROJECT_DIR$/lucene/analysis/stempel/stempel.iml" />
-
-      <module group="Lucene/Other" filepath="$PROJECT_DIR$/lucene/benchmark/src/benchmark.iml" />
-      <module group="Lucene/Other" filepath="$PROJECT_DIR$/lucene/benchmark/conf/benchmark-conf.iml" />
-      <module group="Lucene/Other" filepath="$PROJECT_DIR$/lucene/classification/classification.iml" />
-      <module group="Lucene/Other" filepath="$PROJECT_DIR$/lucene/demo/demo.iml" />
-      <module group="Lucene/Other" filepath="$PROJECT_DIR$/lucene/expressions/expressions.iml" />
-      <module group="Lucene/Other" filepath="$PROJECT_DIR$/lucene/facet/facet.iml" />
-      <module group="Lucene/Other" filepath="$PROJECT_DIR$/lucene/grouping/grouping.iml" />
-      <module group="Lucene/Other" filepath="$PROJECT_DIR$/lucene/highlighter/highlighter.iml" />
-      <module group="Lucene/Other" filepath="$PROJECT_DIR$/lucene/join/join.iml" />
-      <module group="Lucene/Other" filepath="$PROJECT_DIR$/lucene/luke/luke.iml" />
-      <module group="Lucene/Other" filepath="$PROJECT_DIR$/lucene/monitor/monitor.iml" />
-      <module group="Lucene/Other" filepath="$PROJECT_DIR$/lucene/memory/memory.iml" />
-      <module group="Lucene/Other" filepath="$PROJECT_DIR$/lucene/misc/misc.iml" />
-      <module group="Lucene/Other" filepath="$PROJECT_DIR$/lucene/queries/queries.iml" />
-      <module group="Lucene/Other" filepath="$PROJECT_DIR$/lucene/queryparser/queryparser.iml" />
-      <module group="Lucene/Other" filepath="$PROJECT_DIR$/lucene/replicator/replicator.iml" />
-      <module group="Lucene/Other" filepath="$PROJECT_DIR$/lucene/sandbox/sandbox.iml" />
-      <module group="Lucene/Other" filepath="$PROJECT_DIR$/lucene/spatial-extras/spatial-extras.iml" />
-      <module group="Lucene/Other" filepath="$PROJECT_DIR$/lucene/spatial3d/spatial3d.iml" />
-      <module group="Lucene/Other" filepath="$PROJECT_DIR$/lucene/suggest/suggest.iml" />
-      <module group="Lucene/Other" filepath="$PROJECT_DIR$/lucene/tools/tools.iml" />
-
-      <module group="Solr" filepath="$PROJECT_DIR$/solr/core/src/java/solr-core.iml" />
-      <module group="Solr" filepath="$PROJECT_DIR$/solr/core/src/solr-core-tests.iml" />
-      <module group="Solr" filepath="$PROJECT_DIR$/solr/server/server.iml" />
-      <module group="Solr" filepath="$PROJECT_DIR$/solr/solrj/src/java/solrj.iml" />
-      <module group="Solr" filepath="$PROJECT_DIR$/solr/solrj/src/solrj-tests.iml" />
-      <module group="Solr" filepath="$PROJECT_DIR$/solr/test-framework/solr-test-framework.iml" />
-
-      <module group="Solr/Contrib" filepath="$PROJECT_DIR$/solr/contrib/analysis-extras/analysis-extras.iml" />
-      <module group="Solr/Contrib" filepath="$PROJECT_DIR$/solr/contrib/analytics/analytics.iml" />
-      <module group="Solr/Contrib" filepath="$PROJECT_DIR$/solr/contrib/clustering/clustering.iml" />
-      <module group="Solr/Contrib" filepath="$PROJECT_DIR$/solr/contrib/dataimporthandler-extras/dataimporthandler-extras.iml" />
-      <module group="Solr/Contrib" filepath="$PROJECT_DIR$/solr/contrib/dataimporthandler/dataimporthandler.iml" />
-      <module group="Solr/Contrib" filepath="$PROJECT_DIR$/solr/contrib/extraction/extraction.iml" />
-      <module group="Solr/Contrib" filepath="$PROJECT_DIR$/solr/contrib/langid/langid.iml" />
-      <module group="Solr/Contrib" filepath="$PROJECT_DIR$/solr/contrib/ltr/ltr.iml" />
-      <module group="Solr/Contrib" filepath="$PROJECT_DIR$/solr/contrib/prometheus-exporter/prometheus-exporter.iml" />
-      <module group="Solr/Contrib" filepath="$PROJECT_DIR$/solr/contrib/jaegertracer-configurator/jaegertracer-configurator.iml" />
-      <module group="Solr/Contrib" filepath="$PROJECT_DIR$/solr/contrib/velocity/velocity.iml" />
-    </modules>
-  </component>
-</project>
-
diff --git a/dev-tools/idea/.idea/projectCodeStyle.xml b/dev-tools/idea/.idea/projectCodeStyle.xml
deleted file mode 100644
index 31093bd..0000000
--- a/dev-tools/idea/.idea/projectCodeStyle.xml
+++ /dev/null
@@ -1,69 +0,0 @@
-<?xml version="1.0" encoding="UTF-8"?>
-<project version="4">
-  <component name="CodeStyleSettingsManager">
-    <option name="PER_PROJECT_SETTINGS">
-      <value>
-        <option name="USE_SAME_INDENTS" value="true" />
-        <option name="OTHER_INDENT_OPTIONS">
-          <value>
-            <option name="INDENT_SIZE" value="2" />
-            <option name="CONTINUATION_INDENT_SIZE" value="4" />
-            <option name="TAB_SIZE" value="2" />
-            <option name="USE_TAB_CHARACTER" value="false" />
-            <option name="SMART_TABS" value="false" />
-            <option name="LABEL_INDENT_SIZE" value="0" />
-            <option name="LABEL_INDENT_ABSOLUTE" value="false" />
-            <option name="USE_RELATIVE_INDENTS" value="false" />
-          </value>
-        </option>
-        <option name="CLASS_COUNT_TO_USE_IMPORT_ON_DEMAND" value="20" />
-        <option name="NAMES_COUNT_TO_USE_IMPORT_ON_DEMAND" value="20" />
-        <option name="PACKAGES_TO_USE_IMPORT_ON_DEMAND">
-          <value />
-        </option>
-        <option name="IMPORT_LAYOUT_TABLE">
-          <value>
-            <package name="javax" withSubpackages="true" static="false" />
-            <package name="java" withSubpackages="true" static="false" />
-            <emptyLine />
-            <package name="" withSubpackages="true" static="false" />
-            <emptyLine />
-            <package name="" withSubpackages="true" static="true" />
-          </value>
-        </option>
-        <ADDITIONAL_INDENT_OPTIONS fileType="groovy">
-          <option name="INDENT_SIZE" value="2" />
-          <option name="CONTINUATION_INDENT_SIZE" value="4" />
-          <option name="TAB_SIZE" value="2" />
-          <option name="USE_TAB_CHARACTER" value="false" />
-          <option name="SMART_TABS" value="false" />
-          <option name="LABEL_INDENT_SIZE" value="0" />
-          <option name="LABEL_INDENT_ABSOLUTE" value="false" />
-          <option name="USE_RELATIVE_INDENTS" value="false" />
-        </ADDITIONAL_INDENT_OPTIONS>
-        <ADDITIONAL_INDENT_OPTIONS fileType="java">
-          <option name="INDENT_SIZE" value="2" />
-          <option name="CONTINUATION_INDENT_SIZE" value="4" />
-          <option name="TAB_SIZE" value="2" />
-          <option name="USE_TAB_CHARACTER" value="false" />
-          <option name="SMART_TABS" value="false" />
-          <option name="LABEL_INDENT_SIZE" value="0" />
-          <option name="LABEL_INDENT_ABSOLUTE" value="false" />
-          <option name="USE_RELATIVE_INDENTS" value="false" />
-        </ADDITIONAL_INDENT_OPTIONS>
-        <ADDITIONAL_INDENT_OPTIONS fileType="xml">
-          <option name="INDENT_SIZE" value="2" />
-          <option name="CONTINUATION_INDENT_SIZE" value="4" />
-          <option name="TAB_SIZE" value="2" />
-          <option name="USE_TAB_CHARACTER" value="false" />
-          <option name="SMART_TABS" value="false" />
-          <option name="LABEL_INDENT_SIZE" value="0" />
-          <option name="LABEL_INDENT_ABSOLUTE" value="false" />
-          <option name="USE_RELATIVE_INDENTS" value="false" />
-        </ADDITIONAL_INDENT_OPTIONS>
-      </value>
-    </option>
-    <option name="USE_PER_PROJECT_SETTINGS" value="true" />
-  </component>
-</project>
-
diff --git a/dev-tools/idea/.idea/vcs.xml b/dev-tools/idea/.idea/vcs.xml
deleted file mode 100644
index a7314d1..0000000
--- a/dev-tools/idea/.idea/vcs.xml
+++ /dev/null
@@ -1,14 +0,0 @@
-<?xml version="1.0" encoding="UTF-8"?>
-<project version="4">
-  <component name="IssueNavigationConfiguration">
-    <option name="links">
-      <list>
-        <IssueNavigationLink>
-          <option name="issueRegexp" value="[A-Z]+\-\d+" />
-          <option name="linkRegexp" value="https://issues.apache.org/jira/browse/$0" />
-        </IssueNavigationLink>
-      </list>
-    </option>
-  </component>
-</project>
-
diff --git a/dev-tools/idea/.idea/workspace.xml b/dev-tools/idea/.idea/workspace.xml
deleted file mode 100644
index 8503297..0000000
--- a/dev-tools/idea/.idea/workspace.xml
+++ /dev/null
@@ -1,388 +0,0 @@
-<?xml version="1.0" encoding="UTF-8"?>
-<project version="4">
-  <component name="RunManager" selected="JUnit.Lucene core">
-    <configuration default="true" type="JUnit" factoryName="JUnit">
-      <option name="VM_PARAMETERS" value="-ea -Djava.security.egd=file:/dev/./urandom" />
-    </configuration>
-    <configuration default="false" name="Lucene core" type="JUnit" factoryName="JUnit">
-      <module name="lucene-core-tests" />
-      <option name="TEST_OBJECT" value="pattern" />
-      <option name="WORKING_DIRECTORY" value="file://$PROJECT_DIR$/idea-build/lucene/core" />
-      <option name="VM_PARAMETERS" value="-Xmx256m -ea -DtempDir=temp" />
-      <option name="TEST_SEARCH_SCOPE"><value defaultName="singleModule" /></option>
-      <patterns><pattern testClass=".*\.Test[^.]*|.*\.[^.]*Test" /></patterns>
-    </configuration>
-    <configuration default="false" name="Module analyzers-common" type="JUnit" factoryName="JUnit">
-      <module name="analysis-common" />
-      <option name="TEST_OBJECT" value="pattern" />
-      <option name="WORKING_DIRECTORY" value="file://$PROJECT_DIR$/idea-build/lucene/analysis/common" />
-      <option name="VM_PARAMETERS" value="-Xmx256m -ea  -DtempDir=temp" />
-      <option name="TEST_SEARCH_SCOPE"><value defaultName="singleModule" /></option>
-      <patterns><pattern testClass=".*\.Test[^.]*|.*\.[^.]*Test" /></patterns>
-    </configuration>
-    <configuration default="false" name="Module analyzers-icu" type="JUnit" factoryName="JUnit">
-      <module name="icu" />
-      <option name="TEST_OBJECT" value="pattern" />
-      <option name="WORKING_DIRECTORY" value="file://$PROJECT_DIR$/idea-build/lucene/analysis/icu" />
-      <option name="VM_PARAMETERS" value="-ea -DtempDir=temp" />
-      <option name="TEST_SEARCH_SCOPE"><value defaultName="singleModule" /></option>
-      <patterns><pattern testClass=".*\.Test[^.]*|.*\.[^.]*Test" /></patterns>
-    </configuration>
-    <configuration default="false" name="Module analyzers-kuromoji" type="JUnit" factoryName="JUnit">
-      <module name="kuromoji" />
-      <option name="TEST_OBJECT" value="pattern" />
-      <option name="WORKING_DIRECTORY" value="file://$PROJECT_DIR$/idea-build/lucene/analysis/kuromoji" />
-      <option name="VM_PARAMETERS" value="-ea -DtempDir=temp" />
-      <option name="TEST_SEARCH_SCOPE"><value defaultName="singleModule" /></option>
-      <patterns><pattern testClass=".*\.Test[^.]*|.*\.[^.]*Test" /></patterns>
-    </configuration>
-    <configuration default="false" name="Module analyzers-morfologik" type="JUnit" factoryName="JUnit">
-      <module name="morfologik" />
-      <option name="TEST_OBJECT" value="pattern" />
-      <option name="WORKING_DIRECTORY" value="file://$PROJECT_DIR$/idea-build/lucene/analysis/morfologik" />
-      <option name="VM_PARAMETERS" value="-ea -DtempDir=temp" />
-      <option name="TEST_SEARCH_SCOPE"><value defaultName="singleModule" /></option>
-      <patterns><pattern testClass=".*\.Test[^.]*|.*\.[^.]*Test" /></patterns>
-    </configuration>
-    <configuration default="false" name="Module analyzers-opennlp" type="JUnit" factoryName="JUnit">
-      <module name="opennlp" />
-      <option name="TEST_OBJECT" value="pattern" />
-      <option name="WORKING_DIRECTORY" value="file://$PROJECT_DIR$/idea-build/lucene/analysis/opennlp" />
-      <option name="VM_PARAMETERS" value="-ea -DtempDir=temp" />
-      <option name="TEST_SEARCH_SCOPE"><value defaultName="singleModule" /></option>
-      <patterns><pattern testClass=".*\.Test[^.]*|.*\.[^.]*Test" /></patterns>
-    </configuration>
-    <configuration default="false" name="Module analyzers-phonetic" type="JUnit" factoryName="JUnit">
-      <module name="phonetic" />
-      <option name="TEST_OBJECT" value="pattern" />
-      <option name="WORKING_DIRECTORY" value="file://$PROJECT_DIR$/idea-build/lucene/analysis/phonetic" />
-      <option name="VM_PARAMETERS" value="-ea -DtempDir=temp" />
-      <option name="TEST_SEARCH_SCOPE"><value defaultName="singleModule" /></option>
-      <patterns><pattern testClass=".*\.Test[^.]*|.*\.[^.]*Test" /></patterns>
-    </configuration>
-    <configuration default="false" name="Module analyzers-smartcn" type="JUnit" factoryName="JUnit">
-      <module name="smartcn" />
-      <option name="TEST_OBJECT" value="pattern" />
-      <option name="WORKING_DIRECTORY" value="file://$PROJECT_DIR$/idea-build/lucene/analysis/smartcn" />
-      <option name="VM_PARAMETERS" value="-ea -DtempDir=temp" />
-      <option name="TEST_SEARCH_SCOPE"><value defaultName="singleModule" /></option>
-      <patterns><pattern testClass=".*\.Test[^.]*|.*\.[^.]*Test" /></patterns>
-    </configuration>
-    <configuration default="false" name="Module analyzers-stempel" type="JUnit" factoryName="JUnit">
-      <module name="stempel" />
-      <option name="TEST_OBJECT" value="pattern" />
-      <option name="WORKING_DIRECTORY" value="file://$PROJECT_DIR$/idea-build/lucene/analysis/stempel" />
-      <option name="VM_PARAMETERS" value="-ea -DtempDir=temp" />
-      <option name="TEST_SEARCH_SCOPE"><value defaultName="singleModule" /></option>
-      <patterns><pattern testClass=".*\.Test[^.]*|.*\.[^.]*Test" /></patterns>
-    </configuration>
-    <configuration default="false" name="Module backward-codecs" type="JUnit" factoryName="JUnit">
-      <module name="backward-codecs" />
-      <option name="TEST_OBJECT" value="pattern" />
-      <option name="WORKING_DIRECTORY" value="file://$PROJECT_DIR$/idea-build/lucene/backward-codecs" />
-      <option name="VM_PARAMETERS" value="-ea -DtempDir=temp" />
-      <option name="TEST_SEARCH_SCOPE"><value defaultName="singleModule" /></option>
-      <patterns><pattern testClass=".*\.Test[^.]*|.*\.[^.]*Test" /></patterns>
-    </configuration>
-    <configuration default="false" name="Module benchmark" type="JUnit" factoryName="JUnit">
-      <module name="benchmark" />
-      <option name="TEST_OBJECT" value="pattern" />
-      <option name="WORKING_DIRECTORY" value="file://$PROJECT_DIR$/idea-build/lucene/benchmark" />
-      <option name="VM_PARAMETERS" value="-ea -DtempDir=temp" />
-      <option name="TEST_SEARCH_SCOPE"><value defaultName="singleModule" /></option>
-      <patterns><pattern testClass=".*\.Test[^.]*|.*\.[^.]*Test" /></patterns>
-    </configuration>
-    <configuration default="false" name="Module classification" type="JUnit" factoryName="JUnit">
-      <module name="classification" />
-      <option name="TEST_OBJECT" value="pattern" />
-      <option name="WORKING_DIRECTORY" value="file://$PROJECT_DIR$/idea-build/lucene/classification" />
-      <option name="VM_PARAMETERS" value="-ea -DtempDir=temp" />
-      <option name="TEST_SEARCH_SCOPE"><value defaultName="singleModule" /></option>
-      <patterns><pattern testClass=".*\.Test[^.]*|.*\.[^.]*Test" /></patterns>
-    </configuration>
-    <configuration default="false" name="Module codecs" type="JUnit" factoryName="JUnit">
-      <module name="codecs-tests" />
-      <option name="TEST_OBJECT" value="pattern" />
-      <option name="WORKING_DIRECTORY" value="file://$PROJECT_DIR$/idea-build/lucene/codecs" />
-      <option name="VM_PARAMETERS" value="-ea -DtempDir=temp" />
-      <option name="TEST_SEARCH_SCOPE"><value defaultName="singleModule" /></option>
-      <patterns><pattern testClass=".*\.Test[^.]*|.*\.[^.]*Test" /></patterns>
-    </configuration>
-    <configuration default="false" name="Module expressions" type="JUnit" factoryName="JUnit">
-      <module name="expressions" />
-      <option name="TEST_OBJECT" value="pattern" />
-      <option name="WORKING_DIRECTORY" value="file://$PROJECT_DIR$/idea-build/lucene/expressions" />
-      <option name="VM_PARAMETERS" value="-ea -DtempDir=temp" />
-      <option name="TEST_SEARCH_SCOPE"><value defaultName="singleModule" /></option>
-      <patterns><pattern testClass=".*\.Test[^.]*|.*\.[^.]*Test" /></patterns>
-    </configuration>
-    <configuration default="false" name="Module facet" type="JUnit" factoryName="JUnit">
-      <module name="facet" />
-      <option name="TEST_OBJECT" value="pattern" />
-      <option name="WORKING_DIRECTORY" value="file://$PROJECT_DIR$/idea-build/lucene/facet" />
-      <option name="VM_PARAMETERS" value="-ea -DtempDir=temp" />
-      <option name="TEST_SEARCH_SCOPE"><value defaultName="singleModule" /></option>
-      <patterns><pattern testClass=".*\.Test[^.]*|.*\.[^.]*Test" /></patterns>
-    </configuration>
-    <configuration default="false" name="Module grouping" type="JUnit" factoryName="JUnit">
-      <module name="grouping" />
-      <option name="TEST_OBJECT" value="pattern" />
-      <option name="WORKING_DIRECTORY" value="file://$PROJECT_DIR$/idea-build/lucene/grouping" />
-      <option name="VM_PARAMETERS" value="-ea -DtempDir=temp" />
-      <option name="TEST_SEARCH_SCOPE"><value defaultName="singleModule" /></option>
-      <patterns><pattern testClass=".*\.Test[^.]*|.*\.[^.]*Test" /></patterns>
-    </configuration>
-    <configuration default="false" name="Module highlighter" type="JUnit" factoryName="JUnit">
-      <module name="highlighter" />
-      <option name="TEST_OBJECT" value="pattern" />
-      <option name="WORKING_DIRECTORY" value="file://$PROJECT_DIR$/idea-build/lucene/highlighter" />
-      <option name="VM_PARAMETERS" value="-ea -DtempDir=temp" />
-      <option name="TEST_SEARCH_SCOPE"><value defaultName="singleModule" /></option>
-      <patterns><pattern testClass=".*\.Test[^.]*|.*\.[^.]*Test" /></patterns>
-    </configuration>
-    <configuration default="false" name="Module join" type="JUnit" factoryName="JUnit">
-      <module name="join" />
-      <option name="TEST_OBJECT" value="pattern" />
-      <option name="WORKING_DIRECTORY" value="file://$PROJECT_DIR$/idea-build/lucene/join" />
-      <option name="VM_PARAMETERS" value="-ea -DtempDir=temp" />
-      <option name="TEST_SEARCH_SCOPE"><value defaultName="singleModule" /></option>
-      <patterns><pattern testClass=".*\.Test[^.]*|.*\.[^.]*Test" /></patterns>
-    </configuration>
-    <configuration default="false" name="Module luke" type="JUnit" factoryName="JUnit">
-      <module name="luke" />
-      <option name="TEST_OBJECT" value="pattern" />
-      <option name="WORKING_DIRECTORY" value="file://$PROJECT_DIR$/idea-build/lucene/luke" />
-      <option name="VM_PARAMETERS" value="-ea -DtempDir=temp" />
-      <option name="TEST_SEARCH_SCOPE"><value defaultName="singleModule" /></option>
-      <patterns><pattern testClass=".*\.Test[^.]*|.*\.[^.]*Test" /></patterns>
-    </configuration>
-    <configuration default="false" name="Module memory" type="JUnit" factoryName="JUnit">
-      <module name="memory" />
-      <option name="TEST_OBJECT" value="pattern" />
-      <option name="WORKING_DIRECTORY" value="file://$PROJECT_DIR$/idea-build/lucene/memory" />
-      <option name="VM_PARAMETERS" value="-ea -DtempDir=temp" />
-      <option name="TEST_SEARCH_SCOPE"><value defaultName="singleModule" /></option>
-      <patterns><pattern testClass=".*\.Test[^.]*|.*\.[^.]*Test" /></patterns>
-    </configuration>
-    <configuration default="false" name="Module misc" type="JUnit" factoryName="JUnit">
-      <module name="misc" />
-      <option name="TEST_OBJECT" value="pattern" />
-      <option name="WORKING_DIRECTORY" value="file://$PROJECT_DIR$/idea-build/lucene/misc" />
-      <option name="VM_PARAMETERS" value="-ea -DtempDir=temp" />
-      <option name="TEST_SEARCH_SCOPE"><value defaultName="singleModule" /></option>
-      <patterns><pattern testClass=".*\.Test[^.]*|.*\.[^.]*Test" /></patterns>
-    </configuration>
-    <configuration default="false" name="Module monitor" type="JUnit" factoryName="JUnit">
-      <module name="monitor" />
-      <option name="TEST_OBJECT" value="pattern" />
-      <option name="WORKING_DIRECTORY" value="file://$PROJECT_DIR$/idea-build/lucene/monitor" />
-      <option name="VM_PARAMETERS" value="-ea -DtempDir=temp" />
-      <option name="TEST_SEARCH_SCOPE"><value defaultName="singleModule" /></option>
-      <patterns><pattern testClass=".*\.Test[^.]*|.*\.[^.]*Test" /></patterns>
-    </configuration>
-    <configuration default="false" name="Module queries" type="JUnit" factoryName="JUnit">
-      <module name="queries" />
-      <option name="TEST_OBJECT" value="pattern" />
-      <option name="WORKING_DIRECTORY" value="file://$PROJECT_DIR$/idea-build/lucene/queries" />
-      <option name="VM_PARAMETERS" value="-ea -DtempDir=temp" />
-      <option name="TEST_SEARCH_SCOPE"><value defaultName="singleModule" /></option>
-      <patterns><pattern testClass=".*\.Test[^.]*|.*\.[^.]*Test" /></patterns>
-    </configuration>
-    <configuration default="false" name="Module queryparser" type="JUnit" factoryName="JUnit">
-      <module name="queryparser" />
-      <option name="TEST_OBJECT" value="pattern" />
-      <option name="WORKING_DIRECTORY" value="file://$PROJECT_DIR$/idea-build/lucene/queries" />
-      <option name="VM_PARAMETERS" value="-ea -DtempDir=temp" />
-      <option name="TEST_SEARCH_SCOPE"><value defaultName="singleModule" /></option>
-      <patterns><pattern testClass=".*\.Test[^.]*|.*\.[^.]*Test" /></patterns>
-    </configuration>
-    <configuration default="false" name="Module replicator" type="JUnit" factoryName="JUnit">
-      <module name="replicator" />
-      <option name="TEST_OBJECT" value="pattern" />
-      <option name="WORKING_DIRECTORY" value="file://$PROJECT_DIR$/idea-build/lucene/replicator" />
-      <option name="VM_PARAMETERS" value="-ea -DtempDir=temp" />
-      <option name="TEST_SEARCH_SCOPE"><value defaultName="singleModule" /></option>
-      <patterns><pattern testClass=".*\.Test[^.]*|.*\.[^.]*Test" /></patterns>
-    </configuration>
-    <configuration default="false" name="Module sandbox" type="JUnit" factoryName="JUnit">
-      <module name="sandbox" />
-      <option name="TEST_OBJECT" value="pattern" />
-      <option name="WORKING_DIRECTORY" value="file://$PROJECT_DIR$/idea-build/lucene/sandbox" />
-      <option name="VM_PARAMETERS" value="-ea -DtempDir=temp" />
-      <option name="TEST_SEARCH_SCOPE"><value defaultName="singleModule" /></option>
-      <patterns><pattern testClass=".*\.Test[^.]*|.*\.[^.]*Test" /></patterns>
-    </configuration>
-    <configuration default="false" name="Module spatial-extras" type="JUnit" factoryName="JUnit">
-      <module name="spatial-extras" />
-      <option name="TEST_OBJECT" value="pattern" />
-      <option name="WORKING_DIRECTORY" value="file://$PROJECT_DIR$/idea-build/lucene/spatial-extras" />
-      <option name="VM_PARAMETERS" value="-ea -DtempDir=temp" />
-      <option name="TEST_SEARCH_SCOPE"><value defaultName="singleModule" /></option>
-      <patterns><pattern testClass=".*\.Test[^.]*|.*\.[^.]*Test" /></patterns>
-    </configuration>
-    <configuration default="false" name="Module spatial3d" type="JUnit" factoryName="JUnit">
-      <module name="spatial3d" />
-      <option name="TEST_OBJECT" value="pattern" />
-      <option name="WORKING_DIRECTORY" value="file://$PROJECT_DIR$/idea-build/lucene/spatial3d" />
-      <option name="VM_PARAMETERS" value="-ea -DtempDir=temp" />
-      <option name="TEST_SEARCH_SCOPE"><value defaultName="singleModule" /></option>
-      <patterns><pattern testClass=".*\.Test[^.]*|.*\.[^.]*Test" /></patterns>
-    </configuration>
-    <configuration default="false" name="Module suggest" type="JUnit" factoryName="JUnit">
-      <module name="suggest" />
-      <option name="TEST_OBJECT" value="pattern" />
-      <option name="WORKING_DIRECTORY" value="file://$PROJECT_DIR$/idea-build/lucene/suggest" />
-      <option name="VM_PARAMETERS" value="-ea -DtempDir=temp" />
-      <option name="TEST_SEARCH_SCOPE"><value defaultName="singleModule" /></option>
-      <patterns><pattern testClass=".*\.Test[^.]*|.*\.[^.]*Test" /></patterns>
-    </configuration>
-    <configuration default="false" name="solrcloud" type="Application" factoryName="Application" singleton="true">
-      <option name="MAIN_CLASS_NAME" value="org.eclipse.jetty.start.Main" />
-      <option name="VM_PARAMETERS" value="-DzkRun -Dhost=127.0.0.1 -Duser.timezone=UTC -Djetty.home=$PROJECT_DIR$/solr/server -Dsolr.solr.home=$PROJECT_DIR$/solr/server/solr -Dsolr.install.dir=$PROJECT_DIR$/solr -Dsolr.log=$PROJECT_DIR$/solr/server/logs/solr.log" />
-      <option name="PROGRAM_PARAMETERS" value="--module=http" />
-      <option name="WORKING_DIRECTORY" value="file://$PROJECT_DIR$/solr/server" />
-      <option name="PARENT_ENVS" value="true" />
-      <module name="server" />
-    </configuration>
-    <configuration default="false" name="Solr core" type="JUnit" factoryName="JUnit">
-      <module name="solr-core-tests" />
-      <option name="TEST_OBJECT" value="pattern" />
-      <option name="WORKING_DIRECTORY" value="file://$PROJECT_DIR$/idea-build/solr/solr-core" />
-      <option name="VM_PARAMETERS" value="-Xmx512m -XX:MaxPermSize=256m -ea -Dtests.luceneMatchVersion=@version.base@ -DtempDir=temp -Djetty.testMode=1 -Djetty.insecurerandom=1 -Dsolr.directoryFactory=org.apache.solr.core.MockDirectoryFactory" />
-      <option name="TEST_SEARCH_SCOPE"><value defaultName="singleModule" /></option>
-      <patterns><pattern testClass=".*\.Test[^.]*|.*\.[^.]*Test" /></patterns>
-    </configuration>
-    <configuration default="false" name="Solrj" type="JUnit" factoryName="JUnit">
-      <module name="solrj-tests" />
-      <option name="TEST_OBJECT" value="pattern" />
-      <option name="WORKING_DIRECTORY" value="file://$PROJECT_DIR$/idea-build/solr/solr-solrj" />
-      <option name="VM_PARAMETERS" value="-ea -Dtests.luceneMatchVersion=@version.base@ -DtempDir=temp -Djetty.testMode=1 -Djetty.insecurerandom=1 -Dsolr.directoryFactory=org.apache.solr.core.MockDirectoryFactory" />
-      <option name="TEST_SEARCH_SCOPE"><value defaultName="singleModule" /></option>
-      <patterns><pattern testClass=".*\.Test[^.]*|.*\.[^.]*Test" /></patterns>
-    </configuration>
-    <configuration default="false" name="Solr analysis-extras contrib" type="JUnit" factoryName="JUnit">
-      <module name="analysis-extras" />
-      <option name="TEST_OBJECT" value="pattern" />
-      <option name="WORKING_DIRECTORY" value="file://$PROJECT_DIR$/idea-build/solr/contrib/solr-analysis-extras" />
-      <option name="VM_PARAMETERS" value="-ea -Dtests.luceneMatchVersion=@version.base@ -DtempDir=temp -Djetty.testMode=1 -Djetty.insecurerandom=1 -Dsolr.directoryFactory=org.apache.solr.core.MockDirectoryFactory" />
-      <option name="TEST_SEARCH_SCOPE"><value defaultName="singleModule" /></option>
-      <patterns><pattern testClass=".*\.Test[^.]*|.*\.[^.]*Test" /></patterns>
-    </configuration>
-    <configuration default="false" name="Solr analytics contrib" type="JUnit" factoryName="JUnit">
-      <module name="analytics" />
-      <option name="TEST_OBJECT" value="pattern" />
-      <option name="WORKING_DIRECTORY" value="file://$PROJECT_DIR$/idea-build/solr/contrib/solr-analytics" />
-      <option name="VM_PARAMETERS" value="-ea" />
-      <option name="TEST_SEARCH_SCOPE"><value defaultName="singleModule" /></option>
-      <patterns><pattern testClass=".*\.Test[^.]*|.*\.[^.]*Test" /></patterns>
-    </configuration>
-    <configuration default="false" name="Solr clustering contrib" type="JUnit" factoryName="JUnit">
-      <module name="clustering" />
-      <option name="TEST_OBJECT" value="pattern" />
-      <option name="WORKING_DIRECTORY" value="file://$PROJECT_DIR$/idea-build/solr/contrib/solr-clustering" />
-      <option name="VM_PARAMETERS" value="-ea -Dtests.luceneMatchVersion=@version.base@ -DtempDir=temp -Djetty.testMode=1 -Djetty.insecurerandom=1 -Dsolr.directoryFactory=org.apache.solr.core.MockDirectoryFactory" />
-      <option name="TEST_SEARCH_SCOPE"><value defaultName="singleModule" /></option>
-      <patterns><pattern testClass=".*\.Test[^.]*|.*\.[^.]*Test" /></patterns>
-    </configuration>
-    <configuration default="false" name="Solr dataimporthandler contrib" type="JUnit" factoryName="JUnit">
-      <module name="dataimporthandler" />
-      <option name="TEST_OBJECT" value="pattern" />
-      <option name="WORKING_DIRECTORY" value="file://$PROJECT_DIR$/idea-build/solr/contrib/solr-dataimporthandler" />
-      <option name="VM_PARAMETERS" value="-ea -DtempDir=temp -Djetty.testMode=1 -Djetty.insecurerandom=1 -Dsolr.directoryFactory=org.apache.solr.core.MockDirectoryFactory" />
-      <option name="TEST_SEARCH_SCOPE"><value defaultName="singleModule" /></option>
-      <patterns><pattern testClass=".*\.Test[^.]*|.*\.[^.]*Test" /></patterns>
-    </configuration>
-    <configuration default="false" name="Solr dataimporthandler-extras contrib" type="JUnit" factoryName="JUnit">
-      <module name="dataimporthandler-extras" />
-      <option name="TEST_OBJECT" value="pattern" />
-      <option name="WORKING_DIRECTORY" value="file://$PROJECT_DIR$/idea-build/solr/contrib/solr-dataimporthandler-extras" />
-      <option name="VM_PARAMETERS" value="-ea -DtempDir=temp -Djetty.testMode=1 -Djetty.insecurerandom=1 -Dsolr.directoryFactory=org.apache.solr.core.MockDirectoryFactory" />
-      <option name="TEST_SEARCH_SCOPE"><value defaultName="singleModule" /></option>
-      <patterns><pattern testClass=".*\.Test[^.]*|.*\.[^.]*Test" /></patterns>
-    </configuration>
-    <configuration default="false" name="Solr extraction contrib" type="JUnit" factoryName="JUnit">
-      <module name="extraction" />
-      <option name="TEST_OBJECT" value="pattern" />
-      <option name="WORKING_DIRECTORY" value="file://$PROJECT_DIR$/idea-build/solr/contrib/solr-cell" />
-      <option name="VM_PARAMETERS" value="-ea -DtempDir=temp -Djetty.testMode=1 -Djetty.insecurerandom=1 -Dsolr.directoryFactory=org.apache.solr.core.MockDirectoryFactory" />
-      <option name="TEST_SEARCH_SCOPE"><value defaultName="singleModule" /></option>
-      <patterns><pattern testClass=".*\.Test[^.]*|.*\.[^.]*Test" /></patterns>
-    </configuration>
-    <configuration default="false" name="Solr langid contrib" type="JUnit" factoryName="JUnit">
-      <module name="langid" />
-      <option name="TEST_OBJECT" value="pattern" />
-      <option name="WORKING_DIRECTORY" value="file://$PROJECT_DIR$/idea-build/solr/contrib/solr-langid" />
-      <option name="VM_PARAMETERS" value="-ea -DtempDir=temp -Djetty.testMode=1 -Djetty.insecurerandom=1 -Dsolr.directoryFactory=org.apache.solr.core.MockDirectoryFactory" />
-      <option name="TEST_SEARCH_SCOPE"><value defaultName="singleModule" /></option>
-      <patterns><pattern testClass=".*\.Test[^.]*|.*\.[^.]*Test" /></patterns>
-    </configuration>
-    <configuration default="false" name="Solr ltr contrib" type="JUnit" factoryName="JUnit">
-      <module name="ltr" />
-      <option name="TEST_OBJECT" value="pattern" />
-      <option name="WORKING_DIRECTORY" value="file://$PROJECT_DIR$/idea-build/solr/contrib/ltr" />
-      <option name="VM_PARAMETERS" value="-ea" />
-      <option name="TEST_SEARCH_SCOPE"><value defaultName="singleModule" /></option>
-      <patterns><pattern testClass=".*\.Test[^.]*|.*\.[^.]*Test" /></patterns>
-    </configuration>
-    <configuration default="false" name="Solr prometheus-exporter contrib" type="JUnit" factoryName="JUnit">
-      <module name="prometheus-exporter" />
-      <option name="TEST_OBJECT" value="pattern" />
-      <option name="WORKING_DIRECTORY" value="file://$PROJECT_DIR$/idea-build/solr/contrib/prometheus-exporter" />
-      <option name="VM_PARAMETERS" value="-ea" />
-      <option name="TEST_SEARCH_SCOPE"><value defaultName="singleModule" /></option>
-      <patterns><pattern testClass=".*\.Test[^.]*|.*\.[^.]*Test" /></patterns>
-    </configuration>
-    <configuration default="false" name="Solr velocity contrib" type="JUnit" factoryName="JUnit">
-      <module name="velocity" />
-      <option name="TEST_OBJECT" value="pattern" />
-      <option name="WORKING_DIRECTORY" value="file://$PROJECT_DIR$/idea-build/solr/contrib/solr-velocity" />
-      <option name="VM_PARAMETERS" value="-ea" />
-      <option name="TEST_SEARCH_SCOPE"><value defaultName="singleModule" /></option>
-      <patterns><pattern testClass=".*\.Test[^.]*|.*\.[^.]*Test" /></patterns>
-    </configuration>
-
-    <list size="42">
-      <item index="0" class="java.lang.String" itemvalue="JUnit.Lucene core" />
-      <item index="1" class="java.lang.String" itemvalue="JUnit.Module analyzers-common" />
-      <item index="2" class="java.lang.String" itemvalue="JUnit.Module analyzers-icu" />
-      <item index="3" class="java.lang.String" itemvalue="JUnit.Module analyzers-kuromoji" />
-      <item index="4" class="java.lang.String" itemvalue="JUnit.Module analyzers-morfologik" />
-      <item index="5" class="java.lang.String" itemvalue="JUnit.Module analyzers-opennlp" />
-      <item index="6" class="java.lang.String" itemvalue="JUnit.Module analyzers-phonetic" />
-      <item index="7" class="java.lang.String" itemvalue="JUnit.Module analyzers-smartcn" />
-      <item index="8" class="java.lang.String" itemvalue="JUnit.Module analyzers-stempel" />
-      <item index="10" class="java.lang.String" itemvalue="JUnit.Module backward-codecs" />
-      <item index="11" class="java.lang.String" itemvalue="JUnit.Module benchmark" />
-      <item index="12" class="java.lang.String" itemvalue="JUnit.Module classification" />
-      <item index="13" class="java.lang.String" itemvalue="JUnit.Module codecs" />
-      <item index="14" class="java.lang.String" itemvalue="JUnit.Module expressions" />
-      <item index="15" class="java.lang.String" itemvalue="JUnit.Module facet" />
-      <item index="16" class="java.lang.String" itemvalue="JUnit.Module grouping" />
-      <item index="17" class="java.lang.String" itemvalue="JUnit.Module highlighter" />
-      <item index="18" class="java.lang.String" itemvalue="JUnit.Module join" />
-      <item index="19" class="java.lang.String" itemvalue="JUnit.Module memory" />
-      <item index="20" class="java.lang.String" itemvalue="JUnit.Module misc" />
-      <item index="21" class="java.lang.String" itemvalue="JUnit.Module queries" />
-      <item index="22" class="java.lang.String" itemvalue="JUnit.Module queryparser" />
-      <item index="23" class="java.lang.String" itemvalue="JUnit.Module replicator" />
-      <item index="24" class="java.lang.String" itemvalue="JUnit.Module sandbox" />
-      <item index="25" class="java.lang.String" itemvalue="JUnit.Module spatial" />
-      <item index="26" class="java.lang.String" itemvalue="JUnit.Module spatial-extras" />
-      <item index="27" class="java.lang.String" itemvalue="JUnit.Module spatial3d" />
-      <item index="28" class="java.lang.String" itemvalue="JUnit.Module suggest" />
-      <item index="29" class="java.lang.String" itemvalue="Application.solrcloud" />
-      <item index="30" class="java.lang.String" itemvalue="JUnit.Solr core" />
-      <item index="31" class="java.lang.String" itemvalue="JUnit.Solrj" />
-      <item index="32" class="java.lang.String" itemvalue="JUnit.Solr analysis-extras contrib" />
-      <item index="33" class="java.lang.String" itemvalue="JUnit.Solr analytics contrib" />
-      <item index="34" class="java.lang.String" itemvalue="JUnit.Solr clustering contrib" />
-      <item index="35" class="java.lang.String" itemvalue="JUnit.Solr dataimporthandler contrib" />
-      <item index="36" class="java.lang.String" itemvalue="JUnit.Solr dataimporthandler-extras contrib" />
-      <item index="37" class="java.lang.String" itemvalue="JUnit.Solr extraction contrib" />
-      <item index="38" class="java.lang.String" itemvalue="JUnit.Solr langid contrib" />
-      <item index="39" class="java.lang.String" itemvalue="JUnit.Solr ltr contrib" />
-      <item index="40" class="java.lang.String" itemvalue="JUnit.Solr prometheus-exporter contrib" />
-      <item index="42" class="java.lang.String" itemvalue="JUnit.Solr velocity contrib" />
-    </list>
-  </component>
-</project>
diff --git a/dev-tools/idea/dev-tools/scripts/scripts.iml b/dev-tools/idea/dev-tools/scripts/scripts.iml
deleted file mode 100644
index 88ad541..0000000
--- a/dev-tools/idea/dev-tools/scripts/scripts.iml
+++ /dev/null
@@ -1,9 +0,0 @@
-<?xml version="1.0" encoding="UTF-8"?>
-<module type="PYTHON_MODULE" version="4">
-  <component name="NewModuleRootManager" inherit-compiler-output="true">
-    <exclude-output />
-    <content url="file://$MODULE_DIR$" />
-    <orderEntry type="jdk" jdkName="Python 3.7" jdkType="Python SDK" />
-    <orderEntry type="sourceFolder" forTests="false" />
-  </component>
-</module>
\ No newline at end of file
diff --git a/dev-tools/idea/lucene/analysis/common/analysis-common.iml b/dev-tools/idea/lucene/analysis/common/analysis-common.iml
deleted file mode 100644
index a98ab53..0000000
--- a/dev-tools/idea/lucene/analysis/common/analysis-common.iml
+++ /dev/null
@@ -1,19 +0,0 @@
-<?xml version="1.0" encoding="UTF-8"?>
-<module type="JAVA_MODULE" version="4">
-  <component name="NewModuleRootManager" inherit-compiler-output="false">
-    <output url="file://$MODULE_DIR$/../../../idea-build/lucene/analysis/common/classes/java" />
-    <output-test url="file://$MODULE_DIR$/../../../idea-build/lucene/analysis/common/classes/test" />
-    <exclude-output />
-    <content url="file://$MODULE_DIR$">
-      <sourceFolder url="file://$MODULE_DIR$/src/java" isTestSource="false" />
-      <sourceFolder url="file://$MODULE_DIR$/src/resources" type="java-resource" />
-      <sourceFolder url="file://$MODULE_DIR$/src/tools/java" isTestSource="false" />
-      <sourceFolder url="file://$MODULE_DIR$/src/test" isTestSource="true" />
-    </content>
-    <orderEntry type="inheritedJdk" />
-    <orderEntry type="sourceFolder" forTests="false" />
-    <orderEntry type="library" scope="TEST" name="JUnit" level="project" />
-    <orderEntry type="module" scope="TEST" module-name="lucene-test-framework" />
-    <orderEntry type="module" module-name="lucene-core" />
-  </component>
-</module>
diff --git a/dev-tools/idea/lucene/analysis/icu/icu.iml b/dev-tools/idea/lucene/analysis/icu/icu.iml
deleted file mode 100644
index d9235b6..0000000
--- a/dev-tools/idea/lucene/analysis/icu/icu.iml
+++ /dev/null
@@ -1,31 +0,0 @@
-<?xml version="1.0" encoding="UTF-8"?>
-<module type="JAVA_MODULE" version="4">
-  <component name="NewModuleRootManager" inherit-compiler-output="false">
-    <output url="file://$MODULE_DIR$/../../../idea-build/lucene/analysis/icu/classes/java" />
-    <output-test url="file://$MODULE_DIR$/../../../idea-build/lucene/analysis/icu/classes/test" />
-    <exclude-output />
-    <content url="file://$MODULE_DIR$">
-      <sourceFolder url="file://$MODULE_DIR$/src/java" isTestSource="false" />
-      <sourceFolder url="file://$MODULE_DIR$/src/tools/java" isTestSource="false" />
-      <sourceFolder url="file://$MODULE_DIR$/src/test" isTestSource="true" />
-      <sourceFolder url="file://$MODULE_DIR$/src/resources" type="java-resource" />
-      <sourceFolder url="file://$MODULE_DIR$/src/test-files" type="java-test-resource" />
-    </content>
-    <orderEntry type="inheritedJdk" />
-    <orderEntry type="sourceFolder" forTests="false" />
-    <orderEntry type="module-library">
-      <library>
-        <CLASSES>
-          <root url="file://$MODULE_DIR$/lib" />
-        </CLASSES>
-        <JAVADOC />
-        <SOURCES />
-        <jarDirectory url="file://$MODULE_DIR$/lib" recursive="false" />
-      </library>
-    </orderEntry>
-    <orderEntry type="library" scope="TEST" name="JUnit" level="project" />
-    <orderEntry type="module" scope="TEST" module-name="lucene-test-framework" />
-    <orderEntry type="module" module-name="analysis-common" />
-    <orderEntry type="module" module-name="lucene-core" />
-  </component>
-</module>
diff --git a/dev-tools/idea/lucene/analysis/kuromoji/kuromoji.iml b/dev-tools/idea/lucene/analysis/kuromoji/kuromoji.iml
deleted file mode 100644
index 67489d8..0000000
--- a/dev-tools/idea/lucene/analysis/kuromoji/kuromoji.iml
+++ /dev/null
@@ -1,22 +0,0 @@
-<?xml version="1.0" encoding="UTF-8"?>
-<module type="JAVA_MODULE" version="4">
-  <component name="NewModuleRootManager" inherit-compiler-output="false">
-    <output url="file://$MODULE_DIR$/../../../idea-build/lucene/analysis/kuromoji/classes/java" />
-    <output-test url="file://$MODULE_DIR$/../../../idea-build/lucene/analysis/kuromoji/classes/test" />
-    <exclude-output />
-    <content url="file://$MODULE_DIR$">
-      <sourceFolder url="file://$MODULE_DIR$/src/java" isTestSource="false" />
-      <sourceFolder url="file://$MODULE_DIR$/src/resources" type="java-resource" />
-      <sourceFolder url="file://$MODULE_DIR$/src/test" isTestSource="true" />
-      <sourceFolder url="file://$MODULE_DIR$/src/tools/java" isTestSource="false" />
-      <sourceFolder url="file://$MODULE_DIR$/src/tools/test" isTestSource="true" />
-    </content>
-    <orderEntry type="inheritedJdk" />
-    <orderEntry type="sourceFolder" forTests="false" />
-    <orderEntry type="library" name="ICU library" level="project" />
-    <orderEntry type="library" scope="TEST" name="JUnit" level="project" />
-    <orderEntry type="module" scope="TEST" module-name="lucene-test-framework" />
-    <orderEntry type="module" module-name="analysis-common" />
-    <orderEntry type="module" module-name="lucene-core" />
-  </component>
-</module>
diff --git a/dev-tools/idea/lucene/analysis/morfologik/morfologik.iml b/dev-tools/idea/lucene/analysis/morfologik/morfologik.iml
deleted file mode 100644
index 2896c8f..0000000
--- a/dev-tools/idea/lucene/analysis/morfologik/morfologik.iml
+++ /dev/null
@@ -1,29 +0,0 @@
-<?xml version="1.0" encoding="UTF-8"?>
-<module type="JAVA_MODULE" version="4">
-  <component name="NewModuleRootManager" inherit-compiler-output="false">
-    <output url="file://$MODULE_DIR$/../../../idea-build/lucene/analysis/morfologik/classes/java" />
-    <output-test url="file://$MODULE_DIR$/../../../idea-build/lucene/analysis/morfologik/classes/test" />
-    <exclude-output />
-    <content url="file://$MODULE_DIR$">
-      <sourceFolder url="file://$MODULE_DIR$/src/java" isTestSource="false" />
-      <sourceFolder url="file://$MODULE_DIR$/src/resources" type="java-resource" />
-      <sourceFolder url="file://$MODULE_DIR$/src/test" isTestSource="true" />
-    </content>
-    <orderEntry type="inheritedJdk" />
-    <orderEntry type="sourceFolder" forTests="false" />
-    <orderEntry type="module-library">
-      <library>
-        <CLASSES>
-          <root url="file://$MODULE_DIR$/lib" />
-        </CLASSES>
-        <JAVADOC />
-        <SOURCES />
-        <jarDirectory url="file://$MODULE_DIR$/lib" recursive="false" />
-      </library>
-    </orderEntry>
-    <orderEntry type="library" scope="TEST" name="JUnit" level="project" />
-    <orderEntry type="module" scope="TEST" module-name="lucene-test-framework" />
-    <orderEntry type="module" module-name="analysis-common" />
-    <orderEntry type="module" module-name="lucene-core" />
-  </component>
-</module>
diff --git a/dev-tools/idea/lucene/analysis/nori/nori.iml b/dev-tools/idea/lucene/analysis/nori/nori.iml
deleted file mode 100644
index aa2d18e..0000000
--- a/dev-tools/idea/lucene/analysis/nori/nori.iml
+++ /dev/null
@@ -1,22 +0,0 @@
-<?xml version="1.0" encoding="UTF-8"?>
-<module type="JAVA_MODULE" version="4">
-  <component name="NewModuleRootManager" inherit-compiler-output="false">
-    <output url="file://$MODULE_DIR$/../../../idea-build/lucene/analysis/godori/classes/java" />
-    <output-test url="file://$MODULE_DIR$/../../../idea-build/lucene/analysis/godori/classes/test" />
-    <exclude-output />
-    <content url="file://$MODULE_DIR$">
-      <sourceFolder url="file://$MODULE_DIR$/src/java" isTestSource="false" />
-      <sourceFolder url="file://$MODULE_DIR$/src/resources" type="java-resource" />
-      <sourceFolder url="file://$MODULE_DIR$/src/test" isTestSource="true" />
-      <sourceFolder url="file://$MODULE_DIR$/src/tools/java" isTestSource="false" />
-      <sourceFolder url="file://$MODULE_DIR$/src/tools/test" isTestSource="true" />
-    </content>
-    <orderEntry type="inheritedJdk" />
-    <orderEntry type="sourceFolder" forTests="false" />
-    <orderEntry type="library" name="ICU library" level="project" />
-    <orderEntry type="library" scope="TEST" name="JUnit" level="project" />
-    <orderEntry type="module" scope="TEST" module-name="lucene-test-framework" />
-    <orderEntry type="module" module-name="analysis-common" />
-    <orderEntry type="module" module-name="lucene-core" />
-  </component>
-</module>
diff --git a/dev-tools/idea/lucene/analysis/opennlp/opennlp.iml b/dev-tools/idea/lucene/analysis/opennlp/opennlp.iml
deleted file mode 100644
index 7725065..0000000
--- a/dev-tools/idea/lucene/analysis/opennlp/opennlp.iml
+++ /dev/null
@@ -1,30 +0,0 @@
-<?xml version="1.0" encoding="UTF-8"?>
-<module type="JAVA_MODULE" version="4">
-  <component name="NewModuleRootManager" inherit-compiler-output="false">
-    <output url="file://$MODULE_DIR$/../../../idea-build/lucene/analysis/opennlp/classes/java" />
-    <output-test url="file://$MODULE_DIR$/../../../idea-build/lucene/analysis/opennlp/classes/test" />
-    <exclude-output />
-    <content url="file://$MODULE_DIR$">
-      <sourceFolder url="file://$MODULE_DIR$/src/java" isTestSource="false" />
-      <sourceFolder url="file://$MODULE_DIR$/src/test" isTestSource="true" />
-      <sourceFolder url="file://$MODULE_DIR$/src/resources" type="java-resource" />
-      <sourceFolder url="file://$MODULE_DIR$/src/test-files" type="java-test-resource" />
-    </content>
-    <orderEntry type="inheritedJdk" />
-    <orderEntry type="sourceFolder" forTests="false" />
-    <orderEntry type="module-library">
-      <library>
-        <CLASSES>
-          <root url="file://$MODULE_DIR$/lib" />
-        </CLASSES>
-        <JAVADOC />
-        <SOURCES />
-        <jarDirectory url="file://$MODULE_DIR$/lib" recursive="false" />
-      </library>
-    </orderEntry>
-    <orderEntry type="library" scope="TEST" name="JUnit" level="project" />
-    <orderEntry type="module" scope="TEST" module-name="lucene-test-framework" />
-    <orderEntry type="module" module-name="analysis-common" />
-    <orderEntry type="module" module-name="lucene-core" />
-  </component>
-</module>
diff --git a/dev-tools/idea/lucene/analysis/phonetic/phonetic.iml b/dev-tools/idea/lucene/analysis/phonetic/phonetic.iml
deleted file mode 100644
index 7133534..0000000
--- a/dev-tools/idea/lucene/analysis/phonetic/phonetic.iml
+++ /dev/null
@@ -1,29 +0,0 @@
-<?xml version="1.0" encoding="UTF-8"?>
-<module type="JAVA_MODULE" version="4">
-  <component name="NewModuleRootManager" inherit-compiler-output="false">
-    <output url="file://$MODULE_DIR$/../../../idea-build/lucene/analysis/phonetic/classes/java" />
-    <output-test url="file://$MODULE_DIR$/../../../idea-build/lucene/analysis/phonetic/classes/test" />
-    <exclude-output />
-    <content url="file://$MODULE_DIR$">
-      <sourceFolder url="file://$MODULE_DIR$/src/java" isTestSource="false" />
-      <sourceFolder url="file://$MODULE_DIR$/src/resources" type="java-resource" />
-      <sourceFolder url="file://$MODULE_DIR$/src/test" isTestSource="true" />
-    </content>
-    <orderEntry type="inheritedJdk" />
-    <orderEntry type="sourceFolder" forTests="false" />
-    <orderEntry type="module-library">
-      <library>
-        <CLASSES>
-          <root url="file://$MODULE_DIR$/lib" />
-        </CLASSES>
-        <JAVADOC />
-        <SOURCES />
-        <jarDirectory url="file://$MODULE_DIR$/lib" recursive="false" />
-      </library>
-    </orderEntry>
-    <orderEntry type="library" scope="TEST" name="JUnit" level="project" />
-    <orderEntry type="module" scope="TEST" module-name="lucene-test-framework" />
-    <orderEntry type="module" module-name="analysis-common" />
-    <orderEntry type="module" module-name="lucene-core" />
-  </component>
-</module>
diff --git a/dev-tools/idea/lucene/analysis/smartcn/smartcn.iml b/dev-tools/idea/lucene/analysis/smartcn/smartcn.iml
deleted file mode 100644
index bc63d96..0000000
--- a/dev-tools/idea/lucene/analysis/smartcn/smartcn.iml
+++ /dev/null
@@ -1,19 +0,0 @@
-<?xml version="1.0" encoding="UTF-8"?>
-<module type="JAVA_MODULE" version="4">
-  <component name="NewModuleRootManager" inherit-compiler-output="false">
-    <output url="file://$MODULE_DIR$/../../../idea-build/lucene/analysis/smartcn/classes/java" />
-    <output-test url="file://$MODULE_DIR$/../../../idea-build/lucene/analysis/smartcn/classes/test" />
-    <exclude-output />
-    <content url="file://$MODULE_DIR$">
-      <sourceFolder url="file://$MODULE_DIR$/src/java" isTestSource="false" />
-      <sourceFolder url="file://$MODULE_DIR$/src/test" isTestSource="true" />
-      <sourceFolder url="file://$MODULE_DIR$/src/resources" type="java-resource" />
-    </content>
-    <orderEntry type="inheritedJdk" />
-    <orderEntry type="sourceFolder" forTests="false" />
-    <orderEntry type="library" scope="TEST" name="JUnit" level="project" />
-    <orderEntry type="module" scope="TEST" module-name="lucene-test-framework" />
-    <orderEntry type="module" module-name="analysis-common" />
-    <orderEntry type="module" module-name="lucene-core" />
-  </component>
-</module>
diff --git a/dev-tools/idea/lucene/analysis/stempel/stempel.iml b/dev-tools/idea/lucene/analysis/stempel/stempel.iml
deleted file mode 100644
index ca0176e..0000000
--- a/dev-tools/idea/lucene/analysis/stempel/stempel.iml
+++ /dev/null
@@ -1,19 +0,0 @@
-<?xml version="1.0" encoding="UTF-8"?>
-<module type="JAVA_MODULE" version="4">
-  <component name="NewModuleRootManager" inherit-compiler-output="false">
-    <output url="file://$MODULE_DIR$/../../../idea-build/lucene/analysis/stempel/classes/java" />
-    <output-test url="file://$MODULE_DIR$/../../../idea-build/lucene/analysis/stempel/classes/test" />
-    <exclude-output />
-    <content url="file://$MODULE_DIR$">
-      <sourceFolder url="file://$MODULE_DIR$/src/test" isTestSource="true" />
-      <sourceFolder url="file://$MODULE_DIR$/src/java" isTestSource="false" />
-      <sourceFolder url="file://$MODULE_DIR$/src/resources" type="java-resource" />
-    </content>
-    <orderEntry type="inheritedJdk" />
-    <orderEntry type="sourceFolder" forTests="false" />
-    <orderEntry type="library" scope="TEST" name="JUnit" level="project" />
-    <orderEntry type="module" scope="TEST" module-name="lucene-test-framework" />
-    <orderEntry type="module" module-name="analysis-common" />
-    <orderEntry type="module" module-name="lucene-core" />
-  </component>
-</module>
diff --git a/dev-tools/idea/lucene/backward-codecs/backward-codecs.iml b/dev-tools/idea/lucene/backward-codecs/backward-codecs.iml
deleted file mode 100644
index 044f079..0000000
--- a/dev-tools/idea/lucene/backward-codecs/backward-codecs.iml
+++ /dev/null
@@ -1,18 +0,0 @@
-<?xml version="1.0" encoding="UTF-8"?>
-<module type="JAVA_MODULE" version="4">
-  <component name="NewModuleRootManager" inherit-compiler-output="false">
-    <output url="file://$MODULE_DIR$/../../idea-build/lucene/backward-codecs/classes/java" />
-    <output-test url="file://$MODULE_DIR$/../../idea-build/lucene/backward-codecs/classes/test" />
-    <exclude-output />
-    <content url="file://$MODULE_DIR$">
-      <sourceFolder url="file://$MODULE_DIR$/src/java" isTestSource="false" />
-      <sourceFolder url="file://$MODULE_DIR$/src/resources" type="java-resource" />
-      <sourceFolder url="file://$MODULE_DIR$/src/test" isTestSource="true" />
-    </content>
-    <orderEntry type="inheritedJdk" />
-    <orderEntry type="sourceFolder" forTests="false" />
-    <orderEntry type="module" module-name="lucene-core" />
-    <orderEntry type="library" scope="TEST" name="JUnit" level="project" />
-    <orderEntry type="module" scope="TEST" module-name="lucene-test-framework" />
-  </component>
-</module>
diff --git a/dev-tools/idea/lucene/benchmark/conf/benchmark-conf.iml b/dev-tools/idea/lucene/benchmark/conf/benchmark-conf.iml
deleted file mode 100644
index 4d3b7f1..0000000
--- a/dev-tools/idea/lucene/benchmark/conf/benchmark-conf.iml
+++ /dev/null
@@ -1,13 +0,0 @@
-<?xml version="1.0" encoding="UTF-8"?>
-<module type="JAVA_MODULE" version="4">
-  <component name="NewModuleRootManager" inherit-compiler-output="false">
-    <output url="file://$MODULE_DIR$/../../../idea-build/lucene/benchmark/classes/java/conf" />
-    <output-test url="file://$MODULE_DIR$/../../../idea-build/lucene/benchmark/classes/test/conf" />
-    <exclude-output />
-    <content url="file://$MODULE_DIR$">
-      <sourceFolder url="file://$MODULE_DIR$" isTestSource="true" />
-    </content>
-    <orderEntry type="inheritedJdk" />
-    <orderEntry type="sourceFolder" forTests="false" />
-  </component>
-</module>
diff --git a/dev-tools/idea/lucene/benchmark/src/benchmark.iml b/dev-tools/idea/lucene/benchmark/src/benchmark.iml
deleted file mode 100644
index 509d5ec..0000000
--- a/dev-tools/idea/lucene/benchmark/src/benchmark.iml
+++ /dev/null
@@ -1,39 +0,0 @@
-<?xml version="1.0" encoding="UTF-8"?>
-<module type="JAVA_MODULE" version="4">
-  <component name="NewModuleRootManager" inherit-compiler-output="false">
-    <output url="file://$MODULE_DIR$/../../../idea-build/lucene/benchmark/classes/java" />
-    <output-test url="file://$MODULE_DIR$/../../../idea-build/lucene/benchmark/classes/test" />
-    <exclude-output />
-    <content url="file://$MODULE_DIR$">
-      <sourceFolder url="file://$MODULE_DIR$/java" isTestSource="false" />
-      <sourceFolder url="file://$MODULE_DIR$/test" isTestSource="true" />
-    </content>
-    <orderEntry type="inheritedJdk" />
-    <orderEntry type="sourceFolder" forTests="false" />
-    <orderEntry type="module-library">
-      <library>
-        <CLASSES>
-          <root url="file://$MODULE_DIR$/../lib" />
-        </CLASSES>
-        <JAVADOC />
-        <SOURCES />
-        <jarDirectory url="file://$MODULE_DIR$/../lib" recursive="false" />
-      </library>
-    </orderEntry>
-    <orderEntry type="library" name="ICU library" level="project" />
-    <orderEntry type="library" scope="TEST" name="JUnit" level="project" />
-    <orderEntry type="module" scope="TEST" module-name="lucene-test-framework" />
-    <orderEntry type="module" scope="TEST" module-name="benchmark-conf" />
-    <orderEntry type="module" module-name="spatial-extras" />
-    <orderEntry type="module" module-name="facet" />
-    <orderEntry type="module" module-name="highlighter" />
-    <orderEntry type="module" module-name="icu" />
-    <orderEntry type="module" module-name="misc" />
-    <orderEntry type="module" module-name="memory" />
-    <orderEntry type="module" module-name="analysis-common" />
-    <orderEntry type="module" module-name="lucene-core" />
-    <orderEntry type="module" module-name="queryparser" />
-    <orderEntry type="module" module-name="queries" />
-    <orderEntry type="module" module-name="join" />
-  </component>
-</module>
diff --git a/dev-tools/idea/lucene/classification/classification.iml b/dev-tools/idea/lucene/classification/classification.iml
deleted file mode 100644
index 25810ed..0000000
--- a/dev-tools/idea/lucene/classification/classification.iml
+++ /dev/null
@@ -1,23 +0,0 @@
-<?xml version="1.0" encoding="UTF-8"?>
-<module type="JAVA_MODULE" version="4">
-  <component name="NewModuleRootManager" inherit-compiler-output="false">
-    <output url="file://$MODULE_DIR$/../../idea-build/lucene/classification/classes/java" />
-    <output-test url="file://$MODULE_DIR$/../../idea-build/lucene/classification/classes/test" />
-    <exclude-output />
-    <content url="file://$MODULE_DIR$">
-      <sourceFolder url="file://$MODULE_DIR$/src/java" isTestSource="false" />
-      <sourceFolder url="file://$MODULE_DIR$/src/test" isTestSource="true" />
-      <excludeFolder url="file://$MODULE_DIR$/temp" />
-      <excludeFolder url="file://$MODULE_DIR$/work" />
-    </content>
-    <orderEntry type="inheritedJdk" />
-    <orderEntry type="sourceFolder" forTests="false" />
-    <orderEntry type="library" scope="TEST" name="JUnit" level="project" />
-    <orderEntry type="module" scope="TEST" module-name="lucene-test-framework" />
-    <orderEntry type="module" module-name="lucene-core" />
-    <orderEntry type="module" module-name="queries" />
-    <orderEntry type="module" module-name="analysis-common" />
-    <orderEntry type="module" module-name="grouping" />
-    <orderEntry type="module" module-name="misc" />
-  </component>
-</module>
diff --git a/dev-tools/idea/lucene/codecs/src/codecs.iml b/dev-tools/idea/lucene/codecs/src/codecs.iml
deleted file mode 100644
index 76da54d..0000000
--- a/dev-tools/idea/lucene/codecs/src/codecs.iml
+++ /dev/null
@@ -1,14 +0,0 @@
-<?xml version="1.0" encoding="UTF-8"?>
-<module type="JAVA_MODULE" version="4">
-  <component name="NewModuleRootManager" inherit-compiler-output="false">
-    <output url="file://$MODULE_DIR$/../../../idea-build/lucene/codecs/classes/java" />
-    <exclude-output />
-    <content url="file://$MODULE_DIR$">
-      <sourceFolder url="file://$MODULE_DIR$/java" isTestSource="false" />
-      <sourceFolder url="file://$MODULE_DIR$/resources" type="java-resource" />
-    </content>
-    <orderEntry type="inheritedJdk" />
-    <orderEntry type="sourceFolder" forTests="false" />
-    <orderEntry type="module" module-name="lucene-core" />
-  </component>
-</module>
diff --git a/dev-tools/idea/lucene/codecs/src/test/codecs-tests.iml b/dev-tools/idea/lucene/codecs/src/test/codecs-tests.iml
deleted file mode 100644
index 0b30e1b..0000000
--- a/dev-tools/idea/lucene/codecs/src/test/codecs-tests.iml
+++ /dev/null
@@ -1,17 +0,0 @@
-<?xml version="1.0" encoding="UTF-8"?>
-<module type="JAVA_MODULE" version="4">
-  <component name="NewModuleRootManager" inherit-compiler-output="false">
-    <output-test url="file://$MODULE_DIR$/../../../../idea-build/lucene/codecs/classes/test" />
-    <exclude-output />
-    <content url="file://$MODULE_DIR$">
-      <sourceFolder url="file://$MODULE_DIR$" isTestSource="true" />
-    </content>
-    <orderEntry type="inheritedJdk" />
-    <orderEntry type="sourceFolder" forTests="false" />
-    <orderEntry type="library" scope="TEST" name="JUnit" level="project" />
-    <orderEntry type="module" scope="TEST" module-name="lucene-test-framework" />
-    <orderEntry type="module" scope="TEST" module-name="codecs" />
-    <orderEntry type="module" scope="TEST" module-name="lucene-core" />
-  </component>
-</module>
-
diff --git a/dev-tools/idea/lucene/core/src/lucene-core.iml b/dev-tools/idea/lucene/core/src/lucene-core.iml
deleted file mode 100644
index f57e2c3..0000000
--- a/dev-tools/idea/lucene/core/src/lucene-core.iml
+++ /dev/null
@@ -1,14 +0,0 @@
-<?xml version="1.0" encoding="UTF-8"?>
-<module type="JAVA_MODULE" version="4">
-  <component name="NewModuleRootManager" inherit-compiler-output="false">
-    <output url="file://$MODULE_DIR$/../../../idea-build/lucene/core/classes/java" />
-    <exclude-output />
-    <content url="file://$MODULE_DIR$">
-      <sourceFolder url="file://$MODULE_DIR$/java" isTestSource="false" />
-      <sourceFolder url="file://$MODULE_DIR$/resources" type="java-resource" />
-    </content>
-    <orderEntry type="inheritedJdk" />
-    <orderEntry type="sourceFolder" forTests="false" />
-  </component>
-</module>
-
diff --git a/dev-tools/idea/lucene/core/src/test/lucene-core-tests.iml b/dev-tools/idea/lucene/core/src/test/lucene-core-tests.iml
deleted file mode 100644
index d375b2c..0000000
--- a/dev-tools/idea/lucene/core/src/test/lucene-core-tests.iml
+++ /dev/null
@@ -1,17 +0,0 @@
-<?xml version="1.0" encoding="UTF-8"?>
-<module type="JAVA_MODULE" version="4">
-  <component name="NewModuleRootManager" inherit-compiler-output="false">
-    <output-test url="file://$MODULE_DIR$/../../../../idea-build/lucene/core/classes/test" />
-    <exclude-output />
-    <content url="file://$MODULE_DIR$">
-      <sourceFolder url="file://$MODULE_DIR$" isTestSource="true" />
-    </content>
-    <orderEntry type="inheritedJdk" />
-    <orderEntry type="module" scope="TEST" module-name="lucene-test-framework" />
-    <orderEntry type="sourceFolder" forTests="true" />
-    <orderEntry type="library" scope="TEST" name="JUnit" level="project" />
-    <orderEntry type="module" scope="TEST" module-name="codecs" />
-    <orderEntry type="module" scope="TEST" module-name="lucene-core" />
-  </component>
-</module>
-
diff --git a/dev-tools/idea/lucene/demo/demo.iml b/dev-tools/idea/lucene/demo/demo.iml
deleted file mode 100644
index 80997f4..0000000
--- a/dev-tools/idea/lucene/demo/demo.iml
+++ /dev/null
@@ -1,32 +0,0 @@
-<?xml version="1.0" encoding="UTF-8"?>
-<module type="JAVA_MODULE" version="4">
-  <component name="NewModuleRootManager" inherit-compiler-output="false">
-    <output url="file://$MODULE_DIR$/../../idea-build/lucene/demo/classes/java" />
-    <output-test url="file://$MODULE_DIR$/../../idea-build/lucene/demo/classes/test" />
-    <exclude-output />
-    <content url="file://$MODULE_DIR$">
-      <sourceFolder url="file://$MODULE_DIR$/src/java" isTestSource="false" />
-      <sourceFolder url="file://$MODULE_DIR$/src/resources" type="java-resource" />
-      <sourceFolder url="file://$MODULE_DIR$/src/test" isTestSource="true" />
-    </content>
-    <orderEntry type="inheritedJdk" />
-    <orderEntry type="sourceFolder" forTests="false" />
-    <orderEntry type="library" scope="TEST" name="JUnit" level="project" />
-    <orderEntry type="module" scope="TEST" module-name="lucene-test-framework" />
-    <orderEntry type="module-library">
-      <library>
-        <CLASSES>
-          <root url="jar://$MODULE_DIR$/lib/servlet-api-2.4.jar!/" />
-        </CLASSES>
-        <JAVADOC />
-        <SOURCES />
-      </library>
-    </orderEntry>
-    <orderEntry type="module" module-name="analysis-common" />
-    <orderEntry type="module" module-name="lucene-core" />
-    <orderEntry type="module" module-name="facet" />
-    <orderEntry type="module" module-name="queryparser" />
-    <orderEntry type="module" module-name="queries" />
-    <orderEntry type="module" module-name="expressions" />
-  </component>
-</module>
diff --git a/dev-tools/idea/lucene/expressions/expressions.iml b/dev-tools/idea/lucene/expressions/expressions.iml
deleted file mode 100644
index 805db2d..0000000
--- a/dev-tools/idea/lucene/expressions/expressions.iml
+++ /dev/null
@@ -1,30 +0,0 @@
-<?xml version="1.0" encoding="UTF-8"?>
-<module type="JAVA_MODULE" version="4">
-  <component name="NewModuleRootManager" inherit-compiler-output="false">
-    <output url="file://$MODULE_DIR$/../../idea-build/lucene/expressions/classes/java" />
-    <output-test url="file://$MODULE_DIR$/../../idea-build/lucene/expressions/classes/test" />
-    <exclude-output />
-    <content url="file://$MODULE_DIR$">
-      <sourceFolder url="file://$MODULE_DIR$/src/java" isTestSource="false" />
-      <sourceFolder url="file://$MODULE_DIR$/src/resources" type="java-resource" />
-      <sourceFolder url="file://$MODULE_DIR$/src/test" isTestSource="true" />
-    </content>
-    <orderEntry type="inheritedJdk" />
-    <orderEntry type="sourceFolder" forTests="false" />
-    <orderEntry type="module-library">
-      <library>
-        <CLASSES>
-          <root url="file://$MODULE_DIR$/lib" />
-        </CLASSES>
-        <JAVADOC />
-        <SOURCES />
-        <jarDirectory url="file://$MODULE_DIR$/lib" recursive="false" />
-      </library>
-    </orderEntry>
-    <orderEntry type="library" scope="TEST" name="JUnit" level="project" />
-    <orderEntry type="module" scope="TEST" module-name="lucene-test-framework" />
-    <orderEntry type="module" module-name="lucene-core" />
-    <orderEntry type="module" module-name="queries" />
-  </component>
-</module>
-
diff --git a/dev-tools/idea/lucene/facet/facet.iml b/dev-tools/idea/lucene/facet/facet.iml
deleted file mode 100644
index 43a8c79..0000000
--- a/dev-tools/idea/lucene/facet/facet.iml
+++ /dev/null
@@ -1,31 +0,0 @@
-<?xml version="1.0" encoding="UTF-8"?>
-<module type="JAVA_MODULE" version="4">
-  <component name="NewModuleRootManager" inherit-compiler-output="false">
-    <output url="file://$MODULE_DIR$/../../idea-build/lucene/facet/classes/java" />
-    <output-test url="file://$MODULE_DIR$/../../idea-build/lucene/facet/classes/test" />
-    <exclude-output />
-    <content url="file://$MODULE_DIR$">
-      <sourceFolder url="file://$MODULE_DIR$/src/java" isTestSource="false" />
-      <sourceFolder url="file://$MODULE_DIR$/src/resources" type="java-resource" />
-      <sourceFolder url="file://$MODULE_DIR$/src/test" isTestSource="true" />
-      <excludeFolder url="file://$MODULE_DIR$/work" />
-    </content>
-    <orderEntry type="inheritedJdk" />
-    <orderEntry type="sourceFolder" forTests="false" />
-    <orderEntry type="module-library" exported="">
-      <library>
-        <CLASSES>
-          <root url="file://$MODULE_DIR$/lib" />
-        </CLASSES>
-        <JAVADOC />
-        <SOURCES />
-        <jarDirectory url="file://$MODULE_DIR$/lib" recursive="false" />
-      </library>
-    </orderEntry>
-    <orderEntry type="library" scope="TEST" name="JUnit" level="project" />
-    <orderEntry type="module" scope="TEST" module-name="lucene-test-framework" />
-    <orderEntry type="module" scope="TEST" module-name="queries" />
-    <orderEntry type="module" module-name="analysis-common" />
-    <orderEntry type="module" module-name="lucene-core" />
-  </component>
-</module>
diff --git a/dev-tools/idea/lucene/grouping/grouping.iml b/dev-tools/idea/lucene/grouping/grouping.iml
deleted file mode 100644
index 956c263..0000000
--- a/dev-tools/idea/lucene/grouping/grouping.iml
+++ /dev/null
@@ -1,18 +0,0 @@
-<?xml version="1.0" encoding="UTF-8"?>
-<module type="JAVA_MODULE" version="4">
-  <component name="NewModuleRootManager" inherit-compiler-output="false">
-    <output url="file://$MODULE_DIR$/../../idea-build/lucene/grouping/classes/java" />
-    <output-test url="file://$MODULE_DIR$/../../idea-build/lucene/grouping/classes/test" />
-    <exclude-output />
-    <content url="file://$MODULE_DIR$">
-      <sourceFolder url="file://$MODULE_DIR$/src/java" isTestSource="false" />
-      <sourceFolder url="file://$MODULE_DIR$/src/test" isTestSource="true" />
-    </content>
-    <orderEntry type="inheritedJdk" />
-    <orderEntry type="sourceFolder" forTests="false" />
-    <orderEntry type="library" scope="TEST" name="JUnit" level="project" />
-    <orderEntry type="module" scope="TEST" module-name="lucene-test-framework" />
-    <orderEntry type="module" module-name="lucene-core" />
-    <orderEntry type="module" module-name="queries" />
-  </component>
-</module>
diff --git a/dev-tools/idea/lucene/highlighter/highlighter.iml b/dev-tools/idea/lucene/highlighter/highlighter.iml
deleted file mode 100644
index 9e54ec8..0000000
--- a/dev-tools/idea/lucene/highlighter/highlighter.iml
+++ /dev/null
@@ -1,23 +0,0 @@
-<?xml version="1.0" encoding="UTF-8"?>
-<module type="JAVA_MODULE" version="4">
-  <component name="NewModuleRootManager" inherit-compiler-output="false">
-    <output url="file://$MODULE_DIR$/../../idea-build/lucene/highlighter/classes/java" />
-    <output-test url="file://$MODULE_DIR$/../../idea-build/lucene/highlighter/classes/test" />
-    <exclude-output />
-    <content url="file://$MODULE_DIR$">
-      <sourceFolder url="file://$MODULE_DIR$/src/java" isTestSource="false" />
-      <sourceFolder url="file://$MODULE_DIR$/src/test" isTestSource="true" />
-    </content>
-    <orderEntry type="inheritedJdk" />
-    <orderEntry type="sourceFolder" forTests="false" />
-    <orderEntry type="library" scope="TEST" name="JUnit" level="project" />
-    <orderEntry type="module" scope="TEST" module-name="lucene-test-framework" />
-    <orderEntry type="module" scope="TEST" module-name="codecs" />
-    <orderEntry type="module" module-name="memory" />
-    <orderEntry type="module" module-name="misc" />
-    <orderEntry type="module" module-name="queries" />
-    <orderEntry type="module" module-name="lucene-core" />
-    <orderEntry type="module" module-name="join" />
-    <orderEntry type="module" module-name="analysis-common" />
-  </component>
-</module>
diff --git a/dev-tools/idea/lucene/join/join.iml b/dev-tools/idea/lucene/join/join.iml
deleted file mode 100644
index 1f9e80b..0000000
--- a/dev-tools/idea/lucene/join/join.iml
+++ /dev/null
@@ -1,19 +0,0 @@
-<?xml version="1.0" encoding="UTF-8"?>
-<module type="JAVA_MODULE" version="4">
-  <component name="NewModuleRootManager" inherit-compiler-output="false">
-    <output url="file://$MODULE_DIR$/../../idea-build/lucene/join/classes/java" />
-    <output-test url="file://$MODULE_DIR$/../../idea-build/lucene/join/classes/test" />
-    <exclude-output />
-    <content url="file://$MODULE_DIR$">
-      <sourceFolder url="file://$MODULE_DIR$/src/java" isTestSource="false" />
-      <sourceFolder url="file://$MODULE_DIR$/src/test" isTestSource="true" />
-      <excludeFolder url="file://$MODULE_DIR$/work" />
-    </content>
-    <orderEntry type="inheritedJdk" />
-    <orderEntry type="sourceFolder" forTests="false" />
-    <orderEntry type="library" scope="TEST" name="JUnit" level="project" />
-    <orderEntry type="module" scope="TEST" module-name="lucene-test-framework" />
-    <orderEntry type="module" module-name="grouping" />
-    <orderEntry type="module" module-name="lucene-core" />
-  </component>
-</module>
diff --git a/dev-tools/idea/lucene/luke/luke.iml b/dev-tools/idea/lucene/luke/luke.iml
deleted file mode 100644
index 9bd08ef..0000000
--- a/dev-tools/idea/lucene/luke/luke.iml
+++ /dev/null
@@ -1,33 +0,0 @@
-<?xml version="1.0" encoding="UTF-8"?>
-<module type="JAVA_MODULE" version="4">
-  <component name="NewModuleRootManager" inherit-compiler-output="false">
-    <output url="file://$MODULE_DIR$/../../idea-build/lucene/luke/classes/java" />
-    <output-test url="file://$MODULE_DIR$/../../idea-build/lucene/luke/classes/test" />
-    <exclude-output />
-    <content url="file://$MODULE_DIR$">
-      <sourceFolder url="file://$MODULE_DIR$/src/java" isTestSource="false" />
-      <sourceFolder url="file://$MODULE_DIR$/src/resources" isTestSource="false" />
-      <sourceFolder url="file://$MODULE_DIR$/src/test" isTestSource="true" />
-      <excludeFolder url="file://$MODULE_DIR$/work" />
-    </content>
-    <orderEntry type="inheritedJdk" />
-    <orderEntry type="sourceFolder" forTests="false" />
-    <orderEntry type="module-library">
-      <library>
-        <CLASSES>
-          <root url="file://$MODULE_DIR$/lib" />
-        </CLASSES>
-        <JAVADOC />
-        <SOURCES />
-        <jarDirectory url="file://$MODULE_DIR$/lib" recursive="false" />
-      </library>
-    </orderEntry>
-    <orderEntry type="library" scope="TEST" name="JUnit" level="project" />
-    <orderEntry type="module" scope="TEST" module-name="lucene-test-framework" />
-    <orderEntry type="module" module-name="lucene-core" />
-    <orderEntry type="module" module-name="analysis-common" />
-    <orderEntry type="module" module-name="misc" />
-    <orderEntry type="module" module-name="queries" />
-    <orderEntry type="module" module-name="queryparser" />
-  </component>
-</module>
diff --git a/dev-tools/idea/lucene/memory/memory.iml b/dev-tools/idea/lucene/memory/memory.iml
deleted file mode 100644
index 84019e0..0000000
--- a/dev-tools/idea/lucene/memory/memory.iml
+++ /dev/null
@@ -1,19 +0,0 @@
-<?xml version="1.0" encoding="UTF-8"?>
-<module type="JAVA_MODULE" version="4">
-  <component name="NewModuleRootManager" inherit-compiler-output="false">
-    <output url="file://$MODULE_DIR$/../../idea-build/lucene/memory/classes/java" />
-    <output-test url="file://$MODULE_DIR$/../../idea-build/lucene/memory/classes/test" />
-    <exclude-output />
-    <content url="file://$MODULE_DIR$">
-      <sourceFolder url="file://$MODULE_DIR$/src/java" isTestSource="false" />
-      <sourceFolder url="file://$MODULE_DIR$/src/test" isTestSource="true" />
-    </content>
-    <orderEntry type="inheritedJdk" />
-    <orderEntry type="sourceFolder" forTests="false" />
-    <orderEntry type="library" scope="TEST" name="JUnit" level="project" />
-    <orderEntry type="module" scope="TEST" module-name="lucene-test-framework" />
-    <orderEntry type="module" module-name="misc" />
-    <orderEntry type="module" module-name="lucene-core" />
-    <orderEntry type="module" module-name="queryparser" />
-  </component>
-</module>
diff --git a/dev-tools/idea/lucene/misc/misc.iml b/dev-tools/idea/lucene/misc/misc.iml
deleted file mode 100644
index 5fb33de..0000000
--- a/dev-tools/idea/lucene/misc/misc.iml
+++ /dev/null
@@ -1,17 +0,0 @@
-<?xml version="1.0" encoding="UTF-8"?>
-<module type="JAVA_MODULE" version="4">
-  <component name="NewModuleRootManager" inherit-compiler-output="false">
-    <output url="file://$MODULE_DIR$/../../idea-build/lucene/misc/classes/java" />
-    <output-test url="file://$MODULE_DIR$/../../idea-build/lucene/misc/classes/test" />
-    <exclude-output />
-    <content url="file://$MODULE_DIR$">
-      <sourceFolder url="file://$MODULE_DIR$/src/test" isTestSource="true" />
-      <sourceFolder url="file://$MODULE_DIR$/src/java" isTestSource="false" />
-    </content>
-    <orderEntry type="inheritedJdk" />
-    <orderEntry type="sourceFolder" forTests="false" />
-    <orderEntry type="library" scope="TEST" name="JUnit" level="project" />
-    <orderEntry type="module" scope="TEST" module-name="lucene-test-framework" />
-    <orderEntry type="module" module-name="lucene-core" />
-  </component>
-</module>
diff --git a/dev-tools/idea/lucene/monitor/monitor.iml b/dev-tools/idea/lucene/monitor/monitor.iml
deleted file mode 100644
index 5c63df4..0000000
--- a/dev-tools/idea/lucene/monitor/monitor.iml
+++ /dev/null
@@ -1,32 +0,0 @@
-<?xml version="1.0" encoding="UTF-8"?>
-<module type="JAVA_MODULE" version="4">
-  <component name="NewModuleRootManager" inherit-compiler-output="false">
-    <output url="file://$MODULE_DIR$/../../idea-build/lucene/monitor/classes/java" />
-    <output-test url="file://$MODULE_DIR$/../../idea-build/lucene/monitor/classes/test" />
-    <exclude-output />
-    <content url="file://$MODULE_DIR$">
-      <sourceFolder url="file://$MODULE_DIR$/src/java" isTestSource="false" />
-      <sourceFolder url="file://$MODULE_DIR$/src/resources" isTestSource="false" />
-      <sourceFolder url="file://$MODULE_DIR$/src/test" isTestSource="true" />
-      <excludeFolder url="file://$MODULE_DIR$/work" />
-    </content>
-    <orderEntry type="inheritedJdk" />
-    <orderEntry type="sourceFolder" forTests="false" />
-    <orderEntry type="module-library">
-      <library>
-        <CLASSES>
-          <root url="file://$MODULE_DIR$/lib" />
-        </CLASSES>
-        <JAVADOC />
-        <SOURCES />
-        <jarDirectory url="file://$MODULE_DIR$/lib" recursive="false" />
-      </library>
-    </orderEntry>
-    <orderEntry type="library" scope="TEST" name="JUnit" level="project" />
-    <orderEntry type="module" scope="TEST" module-name="lucene-test-framework" />
-    <orderEntry type="module" module-name="lucene-core" />
-    <orderEntry type="module" module-name="analysis-common" />
-    <orderEntry type="module" module-name="memory" />
-    <orderEntry type="module" module-name="queryparser" />
-  </component>
-</module>
diff --git a/dev-tools/idea/lucene/queries/queries.iml b/dev-tools/idea/lucene/queries/queries.iml
deleted file mode 100644
index e995df2..0000000
--- a/dev-tools/idea/lucene/queries/queries.iml
+++ /dev/null
@@ -1,19 +0,0 @@
-<?xml version="1.0" encoding="UTF-8"?>
-<module type="JAVA_MODULE" version="4">
-  <component name="NewModuleRootManager" inherit-compiler-output="false">
-    <output url="file://$MODULE_DIR$/../../idea-build/lucene/queries/classes/java" />
-    <output-test url="file://$MODULE_DIR$/../../idea-build/lucene/queries/classes/test" />
-    <exclude-output />
-    <content url="file://$MODULE_DIR$">
-      <sourceFolder url="file://$MODULE_DIR$/src/java" isTestSource="false" />
-      <sourceFolder url="file://$MODULE_DIR$/src/test" isTestSource="true" />
-      <excludeFolder url="file://$MODULE_DIR$/work" />
-    </content>
-    <orderEntry type="inheritedJdk" />
-    <orderEntry type="sourceFolder" forTests="false" />
-    <orderEntry type="library" scope="TEST" name="JUnit" level="project" />
-    <orderEntry type="module" scope="TEST" module-name="lucene-test-framework" />
-    <orderEntry type="module" scope="TEST" module-name="expressions" />
-    <orderEntry type="module" module-name="lucene-core" />
-  </component>
-</module>
diff --git a/dev-tools/idea/lucene/queryparser/queryparser.iml b/dev-tools/idea/lucene/queryparser/queryparser.iml
deleted file mode 100644
index cd2915f..0000000
--- a/dev-tools/idea/lucene/queryparser/queryparser.iml
+++ /dev/null
@@ -1,21 +0,0 @@
-<?xml version="1.0" encoding="UTF-8"?>
-<module type="JAVA_MODULE" version="4">
-  <component name="NewModuleRootManager" inherit-compiler-output="false">
-    <output url="file://$MODULE_DIR$/../../idea-build/lucene/queryparser/classes/java" />
-    <output-test url="file://$MODULE_DIR$/../../idea-build/lucene/queryparser/classes/test" />
-    <exclude-output />
-    <content url="file://$MODULE_DIR$">
-      <sourceFolder url="file://$MODULE_DIR$/src/java" isTestSource="false" />
-      <sourceFolder url="file://$MODULE_DIR$/src/resources" type="java-resource" />
-      <sourceFolder url="file://$MODULE_DIR$/src/test" isTestSource="true" />
-      <excludeFolder url="file://$MODULE_DIR$/work" />
-    </content>
-    <orderEntry type="inheritedJdk" />
-    <orderEntry type="sourceFolder" forTests="false" />
-    <orderEntry type="library" scope="TEST" name="JUnit" level="project" />
-    <orderEntry type="module" scope="TEST" module-name="lucene-test-framework" />
-    <orderEntry type="module" module-name="lucene-core" />
-    <orderEntry type="module" module-name="queries" />
-    <orderEntry type="module" module-name="sandbox" />
-  </component>
-</module>
diff --git a/dev-tools/idea/lucene/replicator/replicator.iml b/dev-tools/idea/lucene/replicator/replicator.iml
deleted file mode 100644
index 4922225..0000000
--- a/dev-tools/idea/lucene/replicator/replicator.iml
+++ /dev/null
@@ -1,28 +0,0 @@
-<?xml version="1.0" encoding="UTF-8"?>
-<module type="JAVA_MODULE" version="4">
-  <component name="NewModuleRootManager" inherit-compiler-output="false">
-    <output url="file://$MODULE_DIR$/../../idea-build/lucene/replicator/classes/java" />
-    <output-test url="file://$MODULE_DIR$/../../idea-build/lucene/replicator/classes/test" />
-    <exclude-output />
-    <content url="file://$MODULE_DIR$">
-      <sourceFolder url="file://$MODULE_DIR$/src/java" isTestSource="false" />
-      <sourceFolder url="file://$MODULE_DIR$/src/test" isTestSource="true" />
-    </content>
-    <orderEntry type="inheritedJdk" />
-    <orderEntry type="sourceFolder" forTests="false" />
-    <orderEntry type="module-library">
-      <library>
-        <CLASSES>
-          <root url="file://$MODULE_DIR$/lib" />
-        </CLASSES>
-        <JAVADOC />
-        <SOURCES />
-        <jarDirectory url="file://$MODULE_DIR$/lib" recursive="false" />
-      </library>
-    </orderEntry>
-    <orderEntry type="library" scope="TEST" name="JUnit" level="project" />
-    <orderEntry type="module" scope="TEST" module-name="lucene-test-framework" />
-    <orderEntry type="module" module-name="lucene-core" />
-    <orderEntry type="module" module-name="facet" />
-  </component>
-</module>
diff --git a/dev-tools/idea/lucene/sandbox/sandbox.iml b/dev-tools/idea/lucene/sandbox/sandbox.iml
deleted file mode 100644
index b3c8713..0000000
--- a/dev-tools/idea/lucene/sandbox/sandbox.iml
+++ /dev/null
@@ -1,28 +0,0 @@
-<?xml version="1.0" encoding="UTF-8"?>
-<module type="JAVA_MODULE" version="4">
-  <component name="NewModuleRootManager">
-    <output url="file://$MODULE_DIR$/../../idea-build/lucene/sandbox/classes/java" />
-    <output-test url="file://$MODULE_DIR$/../../idea-build/lucene/sandbox/classes/test" />
-    <exclude-output />
-    <content url="file://$MODULE_DIR$">
-      <sourceFolder url="file://$MODULE_DIR$/src/java" isTestSource="false" />
-      <sourceFolder url="file://$MODULE_DIR$/src/resources" type="java-resource" />
-      <sourceFolder url="file://$MODULE_DIR$/src/test" isTestSource="true" />
-    </content>
-    <orderEntry type="inheritedJdk" />
-    <orderEntry type="sourceFolder" forTests="false" />
-    <orderEntry type="module-library">
-      <library>
-        <CLASSES>
-          <root url="jar://$MODULE_DIR$/lib/jakarta-regexp-1.4.jar!/" />
-        </CLASSES>
-        <JAVADOC />
-        <SOURCES />
-      </library>
-    </orderEntry>
-    <orderEntry type="library" scope="TEST" name="JUnit" level="project" />
-    <orderEntry type="module" module-name="lucene-test-framework" scope="TEST" />
-    <orderEntry type="module" module-name="codecs" scope="TEST" />
-    <orderEntry type="module" module-name="lucene-core" />
-  </component>
-</module>
\ No newline at end of file
diff --git a/dev-tools/idea/lucene/spatial-extras/spatial-extras.iml b/dev-tools/idea/lucene/spatial-extras/spatial-extras.iml
deleted file mode 100644
index 8e9d887..0000000
--- a/dev-tools/idea/lucene/spatial-extras/spatial-extras.iml
+++ /dev/null
@@ -1,30 +0,0 @@
-<?xml version="1.0" encoding="UTF-8"?>
-<module type="JAVA_MODULE" version="4">
-  <component name="NewModuleRootManager" inherit-compiler-output="false">
-    <output url="file://$MODULE_DIR$/../../idea-build/lucene/spatial-extras/classes/java" />
-    <output-test url="file://$MODULE_DIR$/../../idea-build/lucene/spatial-extras/classes/test" />
-    <exclude-output />
-    <content url="file://$MODULE_DIR$">
-      <sourceFolder url="file://$MODULE_DIR$/src/java" isTestSource="false" />
-      <sourceFolder url="file://$MODULE_DIR$/src/test" isTestSource="true" />
-      <sourceFolder url="file://$MODULE_DIR$/src/test-files" type="java-test-resource" />
-    </content>
-    <orderEntry type="inheritedJdk" />
-    <orderEntry type="sourceFolder" forTests="false" />
-    <orderEntry type="module-library" exported="">
-      <library>
-        <CLASSES>
-          <root url="file://$MODULE_DIR$/lib" />
-        </CLASSES>
-        <JAVADOC />
-        <SOURCES />
-        <jarDirectory url="file://$MODULE_DIR$/lib" recursive="false" />
-      </library>
-    </orderEntry>
-    <orderEntry type="library" scope="TEST" name="JUnit" level="project" />
-    <orderEntry type="module" scope="TEST" module-name="lucene-test-framework" />
-    <orderEntry type="module" module-name="lucene-core" />
-    <orderEntry type="module" module-name="spatial3d" />
-    <orderEntry type="module" module-name="analysis-common" scope="TEST"/>
-  </component>
-</module>
diff --git a/dev-tools/idea/lucene/spatial3d/spatial3d.iml b/dev-tools/idea/lucene/spatial3d/spatial3d.iml
deleted file mode 100644
index c52b38b..0000000
--- a/dev-tools/idea/lucene/spatial3d/spatial3d.iml
+++ /dev/null
@@ -1,17 +0,0 @@
-<?xml version="1.0" encoding="UTF-8"?>
-<module type="JAVA_MODULE" version="4">
-  <component name="NewModuleRootManager" inherit-compiler-output="false">
-    <output url="file://$MODULE_DIR$/../../idea-build/lucene/spatial3d/classes/java" />
-    <output-test url="file://$MODULE_DIR$/../../idea-build/lucene/spatial3d/classes/test" />
-    <exclude-output />
-    <content url="file://$MODULE_DIR$">
-      <sourceFolder url="file://$MODULE_DIR$/src/java" isTestSource="false" />
-      <sourceFolder url="file://$MODULE_DIR$/src/test" isTestSource="true" />
-    </content>
-    <orderEntry type="inheritedJdk" />
-    <orderEntry type="sourceFolder" forTests="false" />
-    <orderEntry type="library" scope="TEST" name="JUnit" level="project" />
-    <orderEntry type="module" scope="TEST" module-name="lucene-test-framework" />
-    <orderEntry type="module" module-name="lucene-core" />
-  </component>
-</module>
diff --git a/dev-tools/idea/lucene/suggest/suggest.iml b/dev-tools/idea/lucene/suggest/suggest.iml
deleted file mode 100644
index 5e58bc2..0000000
--- a/dev-tools/idea/lucene/suggest/suggest.iml
+++ /dev/null
@@ -1,19 +0,0 @@
-<?xml version="1.0" encoding="UTF-8"?>
-<module type="JAVA_MODULE" version="4">
-  <component name="NewModuleRootManager" inherit-compiler-output="false">
-    <output url="file://$MODULE_DIR$/../../idea-build/lucene/suggest/classes/java" />
-    <output-test url="file://$MODULE_DIR$/../../idea-build/lucene/suggest/classes/test" />
-    <exclude-output />
-    <content url="file://$MODULE_DIR$">
-      <sourceFolder url="file://$MODULE_DIR$/src/java" isTestSource="false" />
-      <sourceFolder url="file://$MODULE_DIR$/src/test" isTestSource="true" />
-      <sourceFolder url="file://$MODULE_DIR$/src/resources" isTestSource="false" />
-    </content>
-    <orderEntry type="inheritedJdk" />
-    <orderEntry type="sourceFolder" forTests="false" />
-    <orderEntry type="library" scope="TEST" name="JUnit" level="project" />
-    <orderEntry type="module" scope="TEST" module-name="lucene-test-framework" />
-    <orderEntry type="module" module-name="analysis-common" />
-    <orderEntry type="module" module-name="lucene-core" />
-  </component>
-</module>
diff --git a/dev-tools/idea/lucene/test-framework/lucene-test-framework.iml b/dev-tools/idea/lucene/test-framework/lucene-test-framework.iml
deleted file mode 100644
index 91e0dae..0000000
--- a/dev-tools/idea/lucene/test-framework/lucene-test-framework.iml
+++ /dev/null
@@ -1,18 +0,0 @@
-<?xml version="1.0" encoding="UTF-8"?>
-<module type="JAVA_MODULE" version="4">
-  <component name="NewModuleRootManager" inherit-compiler-output="false">
-    <output url="file://$MODULE_DIR$/../../idea-build/lucene/test-framework/classes/java" />
-    <output-test url="file://$MODULE_DIR$/../../idea-build/lucene/test-framework/classes/test" />
-    <exclude-output />
-    <content url="file://$MODULE_DIR$">
-      <sourceFolder url="file://$MODULE_DIR$/src/resources" type="java-resource" />
-      <sourceFolder url="file://$MODULE_DIR$/src/java" isTestSource="false" />
-      <sourceFolder url="file://$MODULE_DIR$/src/test" isTestSource="true" />
-    </content>
-    <orderEntry type="inheritedJdk" />
-    <orderEntry type="sourceFolder" forTests="false" />
-    <orderEntry type="library" name="JUnit" level="project" />
-    <orderEntry type="module" module-name="codecs" />
-    <orderEntry type="module" module-name="lucene-core" />
-  </component>
-</module>
diff --git a/dev-tools/idea/lucene/tools/tools.iml b/dev-tools/idea/lucene/tools/tools.iml
deleted file mode 100644
index 4e878cd..0000000
--- a/dev-tools/idea/lucene/tools/tools.iml
+++ /dev/null
@@ -1,24 +0,0 @@
-<?xml version="1.0" encoding="UTF-8"?>
-<module type="JAVA_MODULE" version="4">
-  <component name="NewModuleRootManager" inherit-compiler-output="false">
-    <output url="file://$MODULE_DIR$/../../idea-build/lucene/tools/classes/java" />
-    <output-test url="file://$MODULE_DIR$/../../idea-build/lucene/tools/classes/test" />
-    <exclude-output />
-    <content url="file://$MODULE_DIR$">
-      <sourceFolder url="file://$MODULE_DIR$/src/java" isTestSource="false" />
-    </content>
-    <orderEntry type="inheritedJdk" />
-    <orderEntry type="sourceFolder" forTests="false" />
-    <orderEntry type="library" name="Ivy" level="project" />
-    <orderEntry type="module-library">
-      <library>
-        <CLASSES>
-          <root url="file://$MODULE_DIR$/lib" />
-        </CLASSES>
-        <JAVADOC />
-        <SOURCES />
-        <jarDirectory url="file://$MODULE_DIR$/lib" recursive="true" />
-      </library>
-    </orderEntry>
-  </component>
-</module>
diff --git a/dev-tools/idea/parent.iml b/dev-tools/idea/parent.iml
deleted file mode 100644
index 913d64f..0000000
--- a/dev-tools/idea/parent.iml
+++ /dev/null
@@ -1,19 +0,0 @@
-<?xml version="1.0" encoding="UTF-8"?>
-<module type="JAVA_MODULE" version="4">
-  <component name="NewModuleRootManager" inherit-compiler-output="true">
-    <content url="file://$MODULE_DIR$">
-      <excludeFolder url="file://$MODULE_DIR$/.idea" />
-      <excludeFolder url="file://$MODULE_DIR$/dist" />
-      <excludeFolder url="file://$MODULE_DIR$/lucene/build" />
-      <excludeFolder url="file://$MODULE_DIR$/lucene/dist" />
-      <excludeFolder url="file://$MODULE_DIR$/lucene/benchmark/temp" />
-      <excludeFolder url="file://$MODULE_DIR$/lucene/benchmark/work" />
-      <excludeFolder url="file://$MODULE_DIR$/solr/build" />
-      <excludeFolder url="file://$MODULE_DIR$/solr/dist" />
-      <excludeFolder url="file://$MODULE_DIR$/solr/package" />
-    </content>
-    <orderEntry type="inheritedJdk" />
-    <orderEntry type="sourceFolder" forTests="false" />
-  </component>
-</module>
-
diff --git a/dev-tools/idea/solr/contrib/analysis-extras/analysis-extras.iml b/dev-tools/idea/solr/contrib/analysis-extras/analysis-extras.iml
deleted file mode 100644
index 7c0c0c1..0000000
--- a/dev-tools/idea/solr/contrib/analysis-extras/analysis-extras.iml
+++ /dev/null
@@ -1,42 +0,0 @@
-<?xml version="1.0" encoding="UTF-8"?>
-<module type="JAVA_MODULE" version="4">
-  <component name="NewModuleRootManager" inherit-compiler-output="false">
-    <output url="file://$MODULE_DIR$/../../../idea-build/solr/contrib/solr-analysis-extras/classes/java" />
-    <output-test url="file://$MODULE_DIR$/../../../idea-build/solr/contrib/solr-analysis-extras/classes/test" />
-    <exclude-output />
-    <content url="file://$MODULE_DIR$">
-      <sourceFolder url="file://$MODULE_DIR$/src/java" isTestSource="false" />
-      <sourceFolder url="file://$MODULE_DIR$/src/test" isTestSource="true" />
-      <sourceFolder url="file://$MODULE_DIR$/src/test-files" type="java-test-resource" />
-    </content>
-    <orderEntry type="inheritedJdk" />
-    <orderEntry type="sourceFolder" forTests="false" />
-    <orderEntry type="module-library">
-      <library>
-        <CLASSES>
-          <root url="file://$MODULE_DIR$/lib" />
-        </CLASSES>
-        <JAVADOC />
-        <SOURCES />
-        <jarDirectory url="file://$MODULE_DIR$/lib" recursive="false" />
-      </library>
-    </orderEntry>
-    <orderEntry type="library" name="Solr core library" level="project" />
-    <orderEntry type="library" name="Solrj library" level="project" />
-    <orderEntry type="library" name="Morfologik library" level="project" />
-    <orderEntry type="library" scope="TEST" name="JUnit" level="project" />
-    <orderEntry type="module" scope="TEST" module-name="solr-test-framework" />
-    <orderEntry type="module" scope="TEST" module-name="lucene-test-framework" />
-    <orderEntry type="module" module-name="morfologik" />
-    <orderEntry type="module" module-name="icu" />
-    <orderEntry type="module" module-name="smartcn" />
-    <orderEntry type="module" module-name="solr-core" />
-    <orderEntry type="module" module-name="solrj" />
-    <orderEntry type="module" module-name="stempel" />
-    <orderEntry type="module" module-name="analysis-common" />
-    <orderEntry type="module" module-name="lucene-core" />
-    <orderEntry type="module" module-name="misc" />
-    <orderEntry type="module" module-name="sandbox" />
-    <orderEntry type="module" module-name="opennlp" />
-  </component>
-</module>
diff --git a/dev-tools/idea/solr/contrib/analytics/analytics.iml b/dev-tools/idea/solr/contrib/analytics/analytics.iml
deleted file mode 100644
index d63d3e2..0000000
--- a/dev-tools/idea/solr/contrib/analytics/analytics.iml
+++ /dev/null
@@ -1,27 +0,0 @@
-<?xml version="1.0" encoding="UTF-8"?>
-<module type="JAVA_MODULE" version="4">
-  <component name="NewModuleRootManager" inherit-compiler-output="false">
-    <output url="file://$MODULE_DIR$/../../../idea-build/solr/contrib/solr-analytics/classes/java" />
-    <output-test url="file://$MODULE_DIR$/../../../idea-build/solr/contrib/solr-analytics/classes/test" />
-    <exclude-output />
-    <content url="file://$MODULE_DIR$">
-      <sourceFolder url="file://$MODULE_DIR$/src/test" isTestSource="true" />
-      <sourceFolder url="file://$MODULE_DIR$/src/test-files" type="java-test-resource" />
-      <sourceFolder url="file://$MODULE_DIR$/src/java" isTestSource="false" />
-    </content>
-    <orderEntry type="inheritedJdk" />
-    <orderEntry type="sourceFolder" forTests="false" />
-    <orderEntry type="library" scope="TEST" name="JUnit" level="project" />
-    <orderEntry type="library" name="Solr core library" level="project" />
-    <orderEntry type="library" name="Solrj library" level="project" />
-    <orderEntry type="library" name="Solr example library" level="project" />
-    <orderEntry type="module" scope="TEST" module-name="lucene-test-framework" />
-    <orderEntry type="module" scope="TEST" module-name="solr-test-framework" />
-    <orderEntry type="module" module-name="lucene-core" />
-    <orderEntry type="module" module-name="queries" />
-    <orderEntry type="module" module-name="solr-core" />
-    <orderEntry type="module" module-name="solrj" />
-    <orderEntry type="module" module-name="backward-codecs" />
-  </component>
-</module>
-
diff --git a/dev-tools/idea/solr/contrib/clustering/clustering.iml b/dev-tools/idea/solr/contrib/clustering/clustering.iml
deleted file mode 100644
index 2c7db4b..0000000
--- a/dev-tools/idea/solr/contrib/clustering/clustering.iml
+++ /dev/null
@@ -1,39 +0,0 @@
-<?xml version="1.0" encoding="UTF-8"?>
-<module type="JAVA_MODULE" version="4">
-  <component name="NewModuleRootManager">
-    <output url="file://$MODULE_DIR$/../../../idea-build/solr/contrib/solr-clustering/classes/java" />
-    <output-test url="file://$MODULE_DIR$/../../../idea-build/solr/contrib/solr-clustering/classes/test" />
-    <exclude-output />
-    <content url="file://$MODULE_DIR$">
-      <sourceFolder url="file://$MODULE_DIR$/src/java" isTestSource="false" />
-      <sourceFolder url="file://$MODULE_DIR$/src/test-files" type="java-test-resource" />
-      <sourceFolder url="file://$MODULE_DIR$/src/test" isTestSource="true" />
-    </content>
-    <orderEntry type="inheritedJdk" />
-    <orderEntry type="sourceFolder" forTests="false" />
-    <orderEntry type="module-library">
-      <library>
-        <CLASSES>
-          <root url="file://$MODULE_DIR$/lib" />
-        </CLASSES>
-        <JAVADOC />
-        <SOURCES />
-        <jarDirectory url="file://$MODULE_DIR$/lib" recursive="false" />
-      </library>
-    </orderEntry>
-    <orderEntry type="library" name="Solr core library" level="project" />
-    <orderEntry type="library" name="Solrj library" level="project" />
-    <orderEntry type="library" scope="TEST" name="JUnit" level="project" />
-    <orderEntry type="module" module-name="lucene-test-framework" scope="TEST" />
-    <orderEntry type="module" module-name="solr-test-framework" scope="TEST" />
-    <orderEntry type="module" module-name="highlighter" />
-    <orderEntry type="module" module-name="memory" />
-    <orderEntry type="module" module-name="misc" />
-    <orderEntry type="module" module-name="phonetic" />
-    <orderEntry type="module" module-name="solr-core" />
-    <orderEntry type="module" module-name="solrj" />
-    <orderEntry type="module" module-name="suggest" />
-    <orderEntry type="module" module-name="lucene-core" />
-    <orderEntry type="module" module-name="analysis-common" />
-  </component>
-</module>
\ No newline at end of file
diff --git a/dev-tools/idea/solr/contrib/dataimporthandler-extras/dataimporthandler-extras.iml b/dev-tools/idea/solr/contrib/dataimporthandler-extras/dataimporthandler-extras.iml
deleted file mode 100644
index 8bc21aa..0000000
--- a/dev-tools/idea/solr/contrib/dataimporthandler-extras/dataimporthandler-extras.iml
+++ /dev/null
@@ -1,29 +0,0 @@
-<?xml version="1.0" encoding="UTF-8"?>
-<module type="JAVA_MODULE" version="4">
-  <component name="NewModuleRootManager" inherit-compiler-output="false">
-    <output url="file://$MODULE_DIR$/../../../idea-build/solr/contrib/solr-dataimporthandler-extras/classes/java" />
-    <output-test url="file://$MODULE_DIR$/../../../idea-build/solr/contrib/solr-dataimporthandler-extras/classes/test" />
-    <exclude-output />
-    <content url="file://$MODULE_DIR$">
-      <sourceFolder url="file://$MODULE_DIR$/src/java" isTestSource="false" />
-      <sourceFolder url="file://$MODULE_DIR$/src/test" isTestSource="true" />
-      <sourceFolder url="file://$MODULE_DIR$/src/test-files" type="java-test-resource" />
-      <sourceFolder url="file://$MODULE_DIR$/src/resources" type="java-resource" />
-    </content>
-    <orderEntry type="inheritedJdk" />
-    <orderEntry type="sourceFolder" forTests="false" />
-    <orderEntry type="library" scope="TEST" name="JUnit" level="project" />
-    <orderEntry type="module" scope="TEST" module-name="lucene-test-framework" />
-    <orderEntry type="module" scope="TEST" module-name="solr-test-framework" />
-    <orderEntry type="module" scope="TEST" module-name="lucene-core" />
-    <orderEntry type="library" name="Solr core library" level="project" />
-    <orderEntry type="library" name="Solrj library" level="project" />
-    <orderEntry type="library" name="Solr DIH extras library" level="project" />
-    <orderEntry type="library" name="Solr extraction library" level="project" />
-    <orderEntry type="module" module-name="solr-core" />
-    <orderEntry type="module" module-name="solrj" />
-    <orderEntry type="module" module-name="dataimporthandler" />
-    <orderEntry type="module" module-name="analysis-common" />
-  </component>
-</module>
-
diff --git a/dev-tools/idea/solr/contrib/dataimporthandler/dataimporthandler.iml b/dev-tools/idea/solr/contrib/dataimporthandler/dataimporthandler.iml
deleted file mode 100644
index 8240ff2..0000000
--- a/dev-tools/idea/solr/contrib/dataimporthandler/dataimporthandler.iml
+++ /dev/null
@@ -1,31 +0,0 @@
-<?xml version="1.0" encoding="UTF-8"?>
-<module type="JAVA_MODULE" version="4">
-  <component name="NewModuleRootManager" inherit-compiler-output="false">
-    <output url="file://$MODULE_DIR$/../../../idea-build/solr/contrib/solr-dataimporthandler/classes/java" />
-    <output-test url="file://$MODULE_DIR$/../../../idea-build/solr/contrib/solr-dataimporthandler/classes/test" />
-    <exclude-output />
-    <content url="file://$MODULE_DIR$">
-      <sourceFolder url="file://$MODULE_DIR$/src/java" isTestSource="false" />
-      <sourceFolder url="file://$MODULE_DIR$/src/webapp" isTestSource="false" />
-      <sourceFolder url="file://$MODULE_DIR$/src/test" isTestSource="true" />
-      <sourceFolder url="file://$MODULE_DIR$/src/test-files" type="java-test-resource" />
-    </content>
-    <orderEntry type="inheritedJdk" />
-    <orderEntry type="sourceFolder" forTests="false" />
-    <orderEntry type="library" scope="TEST" name="JUnit" level="project" />
-    <orderEntry type="library" scope="TEST" name="HSQLDB" level="project" />
-    <orderEntry type="library" scope="TEST" name="Derby" level="project" />
-    <orderEntry type="library" scope="TEST" name="Solr DIH test library" level="project" />
-    <orderEntry type="library" name="Solr example library" level="project" />
-    <orderEntry type="library" name="Solr core library" level="project" />
-    <orderEntry type="library" name="Solrj library" level="project" />
-    <orderEntry type="library" name="Solr DIH core library" level="project" />
-    <orderEntry type="module" scope="TEST" module-name="lucene-test-framework" />
-    <orderEntry type="module" scope="TEST" module-name="solr-test-framework" />
-    <orderEntry type="module" module-name="solr-core" />
-    <orderEntry type="module" module-name="solrj" />
-    <orderEntry type="module" module-name="analysis-common" />
-    <orderEntry type="module" module-name="lucene-core" />
-    <orderEntry type="module" scope="TEST" module-name="join" />
-  </component>
-</module>
diff --git a/dev-tools/idea/solr/contrib/extraction/extraction.iml b/dev-tools/idea/solr/contrib/extraction/extraction.iml
deleted file mode 100644
index 15dad16..0000000
--- a/dev-tools/idea/solr/contrib/extraction/extraction.iml
+++ /dev/null
@@ -1,26 +0,0 @@
-<?xml version="1.0" encoding="UTF-8"?>
-<module type="JAVA_MODULE" version="4">
-  <component name="NewModuleRootManager" inherit-compiler-output="false">
-    <output url="file://$MODULE_DIR$/../../../idea-build/solr/contrib/solr-cell/classes/java" />
-    <output-test url="file://$MODULE_DIR$/../../../idea-build/solr/contrib/solr-cell/classes/test" />
-    <exclude-output />
-    <content url="file://$MODULE_DIR$">
-      <sourceFolder url="file://$MODULE_DIR$/src/test" isTestSource="true" />
-      <sourceFolder url="file://$MODULE_DIR$/src/test-files" type="java-test-resource" />
-      <sourceFolder url="file://$MODULE_DIR$/src/java" isTestSource="false" />
-      <sourceFolder url="file://$MODULE_DIR$/src/resources" type="java-resource" />
-    </content>
-    <orderEntry type="inheritedJdk" />
-    <orderEntry type="sourceFolder" forTests="false" />
-    <orderEntry type="library" scope="TEST" name="JUnit" level="project" />
-    <orderEntry type="module" scope="TEST" module-name="lucene-test-framework" />
-    <orderEntry type="module" scope="TEST" module-name="solr-test-framework" />
-    <orderEntry type="library" name="Solr core library" level="project" />
-    <orderEntry type="library" name="Solrj library" level="project" />
-    <orderEntry type="library" name="Solr extraction library" level="project" />
-    <orderEntry type="module" module-name="solr-core" />
-    <orderEntry type="module" module-name="solrj" />
-    <orderEntry type="module" module-name="lucene-core" />
-    <orderEntry type="module" module-name="analysis-common" />
-  </component>
-</module>
diff --git a/dev-tools/idea/solr/contrib/jaegertracer-configurator/jaegertracer-configurator.iml b/dev-tools/idea/solr/contrib/jaegertracer-configurator/jaegertracer-configurator.iml
deleted file mode 100644
index 7efc6a8..0000000
--- a/dev-tools/idea/solr/contrib/jaegertracer-configurator/jaegertracer-configurator.iml
+++ /dev/null
@@ -1,37 +0,0 @@
-<?xml version="1.0" encoding="UTF-8"?>
-<module type="JAVA_MODULE" version="4">
-  <component name="NewModuleRootManager" inherit-compiler-output="false">
-    <output url="file://$MODULE_DIR$/../../../idea-build/solr/contrib/jaegertracer-configurator/classes/java" />
-    <output-test url="file://$MODULE_DIR$/../../../idea-build/solr/contrib/jaegertracer-configurator/classes/test" />
-    <exclude-output />
-    <content url="file://$MODULE_DIR$">
-      <sourceFolder url="file://$MODULE_DIR$/src/test" isTestSource="true" />
-      <sourceFolder url="file://$MODULE_DIR$/src/test-files" type="java-test-resource" />
-      <sourceFolder url="file://$MODULE_DIR$/src/java" isTestSource="false" />
-    </content>
-    <orderEntry type="inheritedJdk" />
-    <orderEntry type="sourceFolder" forTests="false" />
-    <orderEntry type="library" scope="TEST" name="JUnit" level="project" />
-    <orderEntry type="library" name="Solr core library" level="project" />
-    <orderEntry type="library" name="Solrj library" level="project" />
-    <orderEntry type="module-library" scope="TEST">
-      <library>
-        <CLASSES>
-          <root url="file://$MODULE_DIR$/lib" />
-        </CLASSES>
-        <JAVADOC />
-        <SOURCES />
-        <jarDirectory url="file://$MODULE_DIR$/lib" recursive="false" />
-      </library>
-    </orderEntry>
-    <orderEntry type="module" scope="TEST" module-name="lucene-test-framework" />
-    <orderEntry type="module" scope="TEST" module-name="solr-test-framework" />
-    <orderEntry type="library" name="Solr core library" level="project" />
-    <orderEntry type="library" name="Solrj library" level="project" />
-    <orderEntry type="library" name="Solr jaeger tracer configurator library" level="project" />
-    <orderEntry type="module" module-name="lucene-core" />
-    <orderEntry type="module" module-name="solr-core" />
-    <orderEntry type="module" module-name="solrj" />
-    <orderEntry type="module" module-name="analysis-common" />
-  </component>
-</module>
diff --git a/dev-tools/idea/solr/contrib/langid/langid.iml b/dev-tools/idea/solr/contrib/langid/langid.iml
deleted file mode 100644
index afeb125..0000000
--- a/dev-tools/idea/solr/contrib/langid/langid.iml
+++ /dev/null
@@ -1,36 +0,0 @@
-<?xml version="1.0" encoding="UTF-8"?>
-<module type="JAVA_MODULE" version="4">
-  <component name="NewModuleRootManager" inherit-compiler-output="false">
-    <output url="file://$MODULE_DIR$/../../../idea-build/solr/contrib/solr-langid/classes/java" />
-    <output-test url="file://$MODULE_DIR$/../../../idea-build/solr/contrib/solr-langid/classes/test" />
-    <exclude-output />
-    <content url="file://$MODULE_DIR$">
-      <sourceFolder url="file://$MODULE_DIR$/src/test" isTestSource="true" />
-      <sourceFolder url="file://$MODULE_DIR$/src/test-files" type="java-test-resource" />
-      <sourceFolder url="file://$MODULE_DIR$/src/java" isTestSource="false" />
-      <sourceFolder url="file://$MODULE_DIR$/src/resources" type="java-resource" />
-    </content>
-    <orderEntry type="inheritedJdk" />
-    <orderEntry type="sourceFolder" forTests="false" />
-    <orderEntry type="library" scope="TEST" name="JUnit" level="project" />
-    <orderEntry type="library" name="Solr core library" level="project" />
-    <orderEntry type="library" name="Solrj library" level="project" />
-    <orderEntry type="library" name="Solr extraction library" level="project" />
-    <orderEntry type="module-library">
-      <library>
-        <CLASSES>
-          <root url="file://$MODULE_DIR$/lib" />
-        </CLASSES>
-        <JAVADOC />
-        <SOURCES />
-        <jarDirectory url="file://$MODULE_DIR$/lib" recursive="false" />
-      </library>
-    </orderEntry>
-    <orderEntry type="module" scope="TEST" module-name="lucene-test-framework" />
-    <orderEntry type="module" scope="TEST" module-name="solr-test-framework" />
-    <orderEntry type="module" module-name="lucene-core" />
-    <orderEntry type="module" module-name="solr-core" />
-    <orderEntry type="module" module-name="solrj" />
-    <orderEntry type="module" module-name="analysis-common" />
-  </component>
-</module>
diff --git a/dev-tools/idea/solr/contrib/ltr/ltr.iml b/dev-tools/idea/solr/contrib/ltr/ltr.iml
deleted file mode 100644
index 37369e6..0000000
--- a/dev-tools/idea/solr/contrib/ltr/ltr.iml
+++ /dev/null
@@ -1,37 +0,0 @@
-<?xml version="1.0" encoding="UTF-8"?>
-<module type="JAVA_MODULE" version="4">
-  <component name="NewModuleRootManager" inherit-compiler-output="false">
-    <output url="file://$MODULE_DIR$/../../../idea-build/solr/contrib/ltr/classes/java" />
-    <output-test url="file://$MODULE_DIR$/../../../idea-build/solr/contrib/ltr/classes/test" />
-    <exclude-output />
-    <content url="file://$MODULE_DIR$">
-      <sourceFolder url="file://$MODULE_DIR$/src/test" isTestSource="true" />
-      <sourceFolder url="file://$MODULE_DIR$/src/test-files" type="java-test-resource" />
-      <sourceFolder url="file://$MODULE_DIR$/src/java" isTestSource="false" />
-      <sourceFolder url="file://$MODULE_DIR$/src/resources" type="java-resource" />
-    </content>
-    <orderEntry type="inheritedJdk" />
-    <orderEntry type="sourceFolder" forTests="false" />
-    <orderEntry type="library" scope="TEST" name="JUnit" level="project" />
-    <orderEntry type="library" name="Solr core library" level="project" />
-    <orderEntry type="library" name="Solrj library" level="project" />
-    <orderEntry type="module-library" scope="TEST">
-      <library>
-        <CLASSES>
-          <root url="file://$MODULE_DIR$/test-lib" />
-        </CLASSES>
-        <JAVADOC />
-        <SOURCES />
-        <jarDirectory url="file://$MODULE_DIR$/test-lib" recursive="false" />
-      </library>
-    </orderEntry>
-    <orderEntry type="library" scope="TEST" name="Solr example library" level="project" />
-    <orderEntry type="library" scope="TEST" name="Solr core test library" level="project" />
-    <orderEntry type="module" scope="TEST" module-name="lucene-test-framework" />
-    <orderEntry type="module" scope="TEST" module-name="solr-test-framework" />
-    <orderEntry type="module" module-name="solr-core" />
-    <orderEntry type="module" module-name="solrj" />
-    <orderEntry type="module" module-name="lucene-core" />
-    <orderEntry type="module" module-name="analysis-common" />
-  </component>
-</module>
diff --git a/dev-tools/idea/solr/contrib/prometheus-exporter/prometheus-exporter.iml b/dev-tools/idea/solr/contrib/prometheus-exporter/prometheus-exporter.iml
deleted file mode 100644
index b3d115b..0000000
--- a/dev-tools/idea/solr/contrib/prometheus-exporter/prometheus-exporter.iml
+++ /dev/null
@@ -1,37 +0,0 @@
-<?xml version="1.0" encoding="UTF-8"?>
-<module type="JAVA_MODULE" version="4">
-  <component name="NewModuleRootManager" inherit-compiler-output="false">
-    <output url="file://$MODULE_DIR$/../../../idea-build/solr/contrib/prometheus-exporter/classes/java" />
-    <output-test url="file://$MODULE_DIR$/../../../idea-build/solr/contrib/prometheus-exporter/classes/test" />
-    <exclude-output />
-    <content url="file://$MODULE_DIR$">
-      <sourceFolder url="file://$MODULE_DIR$/src/test" isTestSource="true" />
-      <sourceFolder url="file://$MODULE_DIR$/src/test-files" type="java-test-resource" />
-      <sourceFolder url="file://$MODULE_DIR$/src/java" isTestSource="false" />
-    </content>
-    <orderEntry type="inheritedJdk" />
-    <orderEntry type="sourceFolder" forTests="false" />
-    <orderEntry type="library" scope="TEST" name="JUnit" level="project" />
-    <orderEntry type="library" name="Solr core library" level="project" />
-    <orderEntry type="library" name="Solrj library" level="project" />
-    <orderEntry type="module-library" scope="TEST">
-      <library>
-        <CLASSES>
-          <root url="file://$MODULE_DIR$/lib" />
-        </CLASSES>
-        <JAVADOC />
-        <SOURCES />
-        <jarDirectory url="file://$MODULE_DIR$/lib" recursive="false" />
-      </library>
-    </orderEntry>
-    <orderEntry type="module" scope="TEST" module-name="lucene-test-framework" />
-    <orderEntry type="module" scope="TEST" module-name="solr-test-framework" />
-    <orderEntry type="library" name="Solr core library" level="project" />
-    <orderEntry type="library" name="Solrj library" level="project" />
-    <orderEntry type="library" name="Solr prometheus exporter library" level="project" />
-    <orderEntry type="module" module-name="lucene-core" />
-    <orderEntry type="module" module-name="solr-core" />
-    <orderEntry type="module" module-name="solrj" />
-    <orderEntry type="module" module-name="analysis-common" />
-  </component>
-</module>
diff --git a/dev-tools/idea/solr/contrib/velocity/velocity.iml b/dev-tools/idea/solr/contrib/velocity/velocity.iml
deleted file mode 100644
index 06283ac..0000000
--- a/dev-tools/idea/solr/contrib/velocity/velocity.iml
+++ /dev/null
@@ -1,28 +0,0 @@
-<?xml version="1.0" encoding="UTF-8"?>
-<module type="JAVA_MODULE" version="4">
-  <component name="NewModuleRootManager" inherit-compiler-output="false">
-    <output url="file://$MODULE_DIR$/../../../idea-build/solr/contrib/solr-velocity/classes/java" />
-    <output-test url="file://$MODULE_DIR$/../../../idea-build/solr/contrib/solr-velocity/classes/test" />
-    <exclude-output />
-    <content url="file://$MODULE_DIR$">
-      <sourceFolder url="file://$MODULE_DIR$/src/test" isTestSource="true" />
-      <sourceFolder url="file://$MODULE_DIR$/src/test/velocity" type="java-test-resource" />
-      <sourceFolder url="file://$MODULE_DIR$/src/test-files" type="java-test-resource" />
-      <sourceFolder url="file://$MODULE_DIR$/src/java" isTestSource="false" />
-      <sourceFolder url="file://$MODULE_DIR$/src/resources" type="java-resource" />
-    </content>
-    <orderEntry type="inheritedJdk" />
-    <orderEntry type="sourceFolder" forTests="false" />
-    <orderEntry type="library" scope="TEST" name="JUnit" level="project" />
-    <orderEntry type="library" name="Solr core library" level="project" />
-    <orderEntry type="library" name="Solrj library" level="project" />
-    <orderEntry type="library" name="Solr velocity library" level="project" />
-    <orderEntry type="module" scope="TEST" module-name="lucene-test-framework" />
-    <orderEntry type="module" scope="TEST" module-name="solr-test-framework" />
-    <orderEntry type="module" module-name="lucene-core" />
-    <orderEntry type="module" module-name="solr-core" />
-    <orderEntry type="module" module-name="solrj" />
-    <orderEntry type="module" module-name="analysis-common" />
-  </component>
-</module>
-
diff --git a/dev-tools/idea/solr/core/src/java/solr-core.iml b/dev-tools/idea/solr/core/src/java/solr-core.iml
deleted file mode 100644
index 5ac2c79..0000000
--- a/dev-tools/idea/solr/core/src/java/solr-core.iml
+++ /dev/null
@@ -1,38 +0,0 @@
-<?xml version="1.0" encoding="UTF-8"?>
-<module type="JAVA_MODULE" version="4">
-  <component name="NewModuleRootManager" inherit-compiler-output="false">
-    <output url="file://$MODULE_DIR$/../../../../idea-build/solr/solr-core/classes/java" />
-    <exclude-output />
-    <content url="file://$MODULE_DIR$">
-      <sourceFolder url="file://$MODULE_DIR$" isTestSource="false" />
-    </content>
-    <content url="file://$MODULE_DIR$/../resources">
-      <sourceFolder url="file://$MODULE_DIR$/../resources" type="java-resource" />
-    </content>
-    <orderEntry type="inheritedJdk" />
-    <orderEntry type="sourceFolder" forTests="false" />
-    <orderEntry type="library" name="Solr core library" level="project" />
-    <orderEntry type="library" name="Solrj library" level="project" />
-    <orderEntry type="library" name="Solr example library" level="project" />
-    <orderEntry type="module" module-name="solrj" />
-    <orderEntry type="module" module-name="kuromoji" />
-    <orderEntry type="module" module-name="spatial-extras" />
-    <orderEntry type="module" module-name="grouping" />
-    <orderEntry type="module" module-name="highlighter" />
-    <orderEntry type="module" module-name="icu" />
-    <orderEntry type="module" module-name="queries" />
-    <orderEntry type="module" module-name="misc" />
-    <orderEntry type="module" module-name="phonetic" />
-    <orderEntry type="module" module-name="suggest" />
-    <orderEntry type="module" module-name="expressions" />
-    <orderEntry type="module" module-name="analysis-common" />
-    <orderEntry type="module" module-name="lucene-core" />
-    <orderEntry type="module" module-name="classification" />
-    <orderEntry type="module" module-name="queryparser" />
-    <orderEntry type="module" module-name="join" />
-    <orderEntry type="module" module-name="sandbox" />
-    <orderEntry type="module" module-name="backward-codecs" />
-    <orderEntry type="module" module-name="codecs" />
-    <orderEntry type="module" module-name="nori" />
-  </component>
-</module>
diff --git a/dev-tools/idea/solr/core/src/solr-core-tests.iml b/dev-tools/idea/solr/core/src/solr-core-tests.iml
deleted file mode 100644
index 99297d0..0000000
--- a/dev-tools/idea/solr/core/src/solr-core-tests.iml
+++ /dev/null
@@ -1,37 +0,0 @@
-<?xml version="1.0" encoding="UTF-8"?>
-<module type="JAVA_MODULE" version="4">
-  <component name="NewModuleRootManager" inherit-compiler-output="false">
-    <output-test url="file://$MODULE_DIR$/../../../idea-build/solr/solr-core/classes/test" />
-    <exclude-output />
-    <content url="file://$MODULE_DIR$">
-      <sourceFolder url="file://$MODULE_DIR$/test" isTestSource="true" />
-      <sourceFolder url="file://$MODULE_DIR$/test-files" type="java-test-resource" />
-    </content>
-    <orderEntry type="inheritedJdk" />
-    <orderEntry type="sourceFolder" forTests="false" />
-    <orderEntry type="library" scope="TEST" name="JUnit" level="project" />
-    <orderEntry type="library" scope="TEST" name="Solr core library" level="project" />
-    <orderEntry type="library" scope="TEST" name="Solr core test library" level="project" />
-    <orderEntry type="library" scope="TEST" name="Solrj library" level="project" />
-    <orderEntry type="library" scope="TEST" name="Solr example library" level="project" />
-    <orderEntry type="library" scope="TEST" name="Solr test framework library" level="project" />
-    <orderEntry type="library" scope="TEST" name="ICU library" level="project" />
-    <orderEntry type="module" scope="TEST" module-name="lucene-test-framework" />
-    <orderEntry type="module" scope="TEST" module-name="solr-test-framework" />
-    <orderEntry type="module" scope="TEST" module-name="solr-core" />
-    <orderEntry type="module" scope="TEST" module-name="solrj" />
-    <orderEntry type="module" scope="TEST" module-name="lucene-core" />
-    <orderEntry type="module" scope="TEST" module-name="classification" />
-    <orderEntry type="module" scope="TEST" module-name="analysis-common" />
-    <orderEntry type="module" scope="TEST" module-name="queryparser" />
-    <orderEntry type="module" scope="TEST" module-name="queries" />
-    <orderEntry type="module" scope="TEST" module-name="suggest" />
-    <orderEntry type="module" scope="TEST" module-name="spatial-extras" />
-    <orderEntry type="module" scope="TEST" module-name="misc" />
-    <orderEntry type="module" scope="TEST" module-name="join" />
-    <orderEntry type="module" scope="TEST" module-name="expressions" />
-    <orderEntry type="module" scope="TEST" module-name="icu" />
-    <orderEntry type="module" scope="TEST" module-name="analysis-extras" />
-    <orderEntry type="module" scope="TEST" module-name="backward-codecs" />
-  </component>
-</module>
diff --git a/dev-tools/idea/solr/server/server.iml b/dev-tools/idea/solr/server/server.iml
deleted file mode 100644
index 3b742a6..0000000
--- a/dev-tools/idea/solr/server/server.iml
+++ /dev/null
@@ -1,19 +0,0 @@
-<?xml version="1.0" encoding="UTF-8"?>
-<module type="JAVA_MODULE" version="4">
-  <component name="NewModuleRootManager" inherit-compiler-output="false">
-    <output url="file://$MODULE_DIR$/../../idea-build/solr/server/classes/java" />
-    <exclude-output />
-    <content url="file://$MODULE_DIR$" />
-    <orderEntry type="inheritedJdk" />
-    <orderEntry type="sourceFolder" forTests="false" />
-    <orderEntry type="module-library">
-      <library>
-        <CLASSES>
-          <root url="jar://$MODULE_DIR$/start.jar!/" />
-        </CLASSES>
-        <JAVADOC />
-        <SOURCES />
-      </library>
-    </orderEntry>
-  </component>
-</module>
\ No newline at end of file
diff --git a/dev-tools/idea/solr/solrj/src/java/solrj.iml b/dev-tools/idea/solr/solrj/src/java/solrj.iml
deleted file mode 100644
index e5dde1d..0000000
--- a/dev-tools/idea/solr/solrj/src/java/solrj.iml
+++ /dev/null
@@ -1,16 +0,0 @@
-<?xml version="1.0" encoding="UTF-8"?>
-<module type="JAVA_MODULE" version="4">
-  <component name="NewModuleRootManager" inherit-compiler-output="false">
-    <output url="file://$MODULE_DIR$/../../../../idea-build/solr/solr-solrj/classes/java" />
-    <exclude-output />
-    <content url="file://$MODULE_DIR$">
-      <sourceFolder url="file://$MODULE_DIR$" isTestSource="false" />
-    </content>
-    <content url="file://$MODULE_DIR$/../resources">
-      <sourceFolder url="file://$MODULE_DIR$/../resources" type="java-resource" />
-    </content>
-    <orderEntry type="inheritedJdk" />
-    <orderEntry type="sourceFolder" forTests="false" />
-    <orderEntry type="library" name="Solrj library" level="project" />
-  </component>
-</module>
diff --git a/dev-tools/idea/solr/solrj/src/solrj-tests.iml b/dev-tools/idea/solr/solrj/src/solrj-tests.iml
deleted file mode 100644
index 13cdb58..0000000
--- a/dev-tools/idea/solr/solrj/src/solrj-tests.iml
+++ /dev/null
@@ -1,33 +0,0 @@
-<?xml version="1.0" encoding="UTF-8"?>
-<module type="JAVA_MODULE" version="4">
-  <component name="NewModuleRootManager" inherit-compiler-output="false">
-    <output-test url="file://$MODULE_DIR$/../../../idea-build/solr/solr-solrj/classes/test" />
-    <exclude-output />
-    <content url="file://$MODULE_DIR$">
-      <sourceFolder url="file://$MODULE_DIR$/test" isTestSource="true" />
-      <sourceFolder url="file://$MODULE_DIR$/test-files" type="java-test-resource" />
-    </content>
-    <orderEntry type="inheritedJdk" />
-    <orderEntry type="sourceFolder" forTests="false" />
-    <orderEntry type="library" scope="TEST" name="JUnit" level="project" />
-    <orderEntry type="library" scope="TEST" name="Solr core library" level="project" />
-    <orderEntry type="library" scope="TEST" name="Solrj library" level="project" />
-    <orderEntry type="module-library" scope="TEST">
-      <library>
-        <CLASSES>
-          <root url="file://$MODULE_DIR$/../test-lib" />
-        </CLASSES>
-        <JAVADOC />
-        <SOURCES />
-        <jarDirectory url="file://$MODULE_DIR$/../test-lib" recursive="false" />
-      </library>
-    </orderEntry>
-    <orderEntry type="library" scope="TEST" name="Solr example library" level="project" />
-    <orderEntry type="module" scope="TEST" module-name="lucene-test-framework" />
-    <orderEntry type="module" scope="TEST" module-name="solr-test-framework" />
-    <orderEntry type="module" scope="TEST" module-name="solrj" />
-    <orderEntry type="module" scope="TEST" module-name="solr-core" />
-    <orderEntry type="module" scope="TEST" module-name="analysis-common" />
-    <orderEntry type="module" scope="TEST" module-name="lucene-core" />
-  </component>
-</module>
diff --git a/dev-tools/idea/solr/test-framework/solr-test-framework.iml b/dev-tools/idea/solr/test-framework/solr-test-framework.iml
deleted file mode 100644
index c3da2c9..0000000
--- a/dev-tools/idea/solr/test-framework/solr-test-framework.iml
+++ /dev/null
@@ -1,26 +0,0 @@
-<?xml version="1.0" encoding="UTF-8"?>
-<module type="JAVA_MODULE" version="4">
-  <component name="NewModuleRootManager" inherit-compiler-output="false">
-    <output url="file://$MODULE_DIR$/../../idea-build/solr/solr-test-framework/classes/java" />
-    <output-test url="file://$MODULE_DIR$/../../idea-build/solr/solr-test-framework/classes/test" />
-    <exclude-output />
-    <content url="file://$MODULE_DIR$">
-      <sourceFolder url="file://$MODULE_DIR$/src/java" isTestSource="false" />
-      <sourceFolder url="file://$MODULE_DIR$/src/resources" type="java-resource" />
-      <sourceFolder url="file://$MODULE_DIR$/src/test" isTestSource="true" />
-      <sourceFolder url="file://$MODULE_DIR$/src/test-files" type="java-test-resource" />
-    </content>
-    <orderEntry type="inheritedJdk" />
-    <orderEntry type="sourceFolder" forTests="false" />
-    <orderEntry type="library" name="JUnit" level="project" />
-    <orderEntry type="library" name="Solr core library" level="project" />
-    <orderEntry type="library" name="Solr example library" level="project" />
-    <orderEntry type="library" name="Solrj library" level="project" />
-    <orderEntry type="library" name="Solr test framework library" level="project" />
-    <orderEntry type="module" module-name="lucene-test-framework" />
-    <orderEntry type="module" module-name="analysis-common" />
-    <orderEntry type="module" module-name="solrj" />
-    <orderEntry type="module" module-name="solr-core" />
-    <orderEntry type="module" module-name="lucene-core" />
-  </component>
-</module>
diff --git a/dev-tools/maven/README.maven b/dev-tools/maven/README.maven
deleted file mode 100644
index 0d9c7b7..0000000
--- a/dev-tools/maven/README.maven
+++ /dev/null
@@ -1,159 +0,0 @@
-====================================
-Lucene/Solr Maven build instructions
-====================================
-
-Contents:
-
-A. How to use nightly Jenkins-built Lucene/Solr Maven artifacts
-B. How to generate Maven artifacts
-C. How to deploy Maven artifacts to a repository
-D. How to use Maven to build Lucene/Solr
-
------
-
-A. How to use nightly Jenkins-built Lucene/Solr Maven artifacts
-
-   The most recently produced nightly Jenkins-built Lucene and Solr Maven
-   snapshot artifacts are available in the Apache Snapshot repository here:
-
-      https://repository.apache.org/snapshots
-
-   An example POM snippet:
-
-     <project ...>
-       ...
-       <repositories>
-         ...
-         <repository>
-           <id>apache.snapshots</id>
-           <name>Apache Snapshot Repository</name>
-           <url>https://repository.apache.org/snapshots</url>
-           <releases>
-             <enabled>false</enabled>
-           </releases>
-         </repository>
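-       </repositories>
-       ...
-     </project>
-
-   (The closing tags above are shown only for completeness; the rest of the
-   POM is elided, as the ellipses indicate.)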
-
-
-B. How to generate Lucene/Solr Maven artifacts
-
-   Prerequisites: OpenJDK 11+ and Ant 1.8.2+
-
-   Run 'ant generate-maven-artifacts' to create an internal Maven
-   repository, including POMs, binary .jars, source .jars, and javadoc
-   .jars.
-
-   You can run the above command in three possible places: the top-level
-   directory; under lucene/; or under solr/.  From the top-level directory
-   or from lucene/, the internal repository will be located at dist/maven/.
-   From solr/, the internal repository will be located at package/maven/.
-
-
-C. How to deploy Maven artifacts to a repository
-
-   Prerequisites: OpenJDK 11+ and Ant 1.8.2+
-
-   As in B. above, you can deploy artifacts for all of Lucene/Solr, for
-   Lucene only, or for Solr only.  To deploy to a Maven repository, the
-   command is the same as in B. above, with two additional system properties:
-
-      ant -Dm2.repository.id=my-repo-id \
-          -Dm2.repository.url=https://example.org/my/repo \
-          generate-maven-artifacts
-
-   The repository ID given in the above command corresponds to a <server>
-   entry in either your ~/.m2/settings.xml or ~/.ant/settings.xml.  See
-   <https://maven.apache.org/settings.html#Servers> for more information.
-   (Note that as of version 2.1.3, Maven Ant Tasks cannot handle encrypted
-   passwords.)
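-
-   For illustration, a minimal ~/.m2/settings.xml fragment defining such a
-   <server> entry (the id matches the example command above; the credentials
-   are placeholders, not real values):
-
-      <settings>
-        <servers>
-          <server>
-            <!-- must match the -Dm2.repository.id value -->
-            <id>my-repo-id</id>
-            <!-- placeholder credentials -->
-            <username>deployer</username>
-            <password>changeit</password>
-          </server>
-        </servers>
-      </settings>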
-
-
-D. How to use Maven to build Lucene/Solr
-
-   In summary, to enable Maven builds, perform the following:
-
-         ant get-maven-poms
-         cd maven-build
-
-   The details, followed by some example Maven commands:
-
-   1. Prerequisites: OpenJDK 11+ and Maven 2.2.1 or 3.X
-
-   2. Make sure your sources are up to date.
-
-   3. Copy the Maven POM templates from under dev-tools/maven/ to the
-      maven-build/ directory using the following command from the top-level
-      directory:
-
-         ant get-maven-poms
-
-      Note that you will need to do this whenever changes to the POM
-      templates are committed.  For this reason, it's a good idea to run
-      "ant get-maven-poms" after you update from origin.
-
-      The above command copies all of the POM templates from dev-tools/maven/,
-      filling in the project version with the default "X.X-SNAPSHOT".  If you
-      want the POMs and the Maven-built artifacts to have a version other than
-      the default, you can supply an alternate version on the command line
-      with the above command, e.g. "my-special-version":
-
-         ant -Dversion=my-special-version get-maven-poms
-
-      or to append "my-special-version" to the current base version, e.g. 5.0,
-      resulting in version "5.0-my-special-version":
-
-         ant -Ddev.version.suffix=my-special-version get-maven-poms
-
-      Note: if you change the version in the POMs, there is one test method
-      that will fail under maven-surefire-plugin:
-      o.a.l.index.TestCheckIndex#testLuceneConstantVersion().  It's safe to
-      @Ignore this test method, since it's just comparing the value of the
-      lucene.version system property (set in the maven-surefire-plugin
-      configuration in the lucene-core POM) against a hard-wired official
-      version (o.a.l.util.Constants.LUCENE_MAIN_VERSION).
-
-   4. To remove the maven-build/ directory and its contents, use the following
-      command from the top-level directory:
-
-         ant clean-maven-build
-
-   5. Please keep in mind that this is just a minimal Maven build. The resulting
-      artifacts are not the same as those created by the native Ant-based build.
-      They are fine for enabling Lucene builds in Maven-based IDEs, but they
-      should never be used in Lucene/Solr production, as they may lack
-      optimized class files (e.g., Java 9 MR-JAR support). To install Lucene/Solr
-      in your local repository, see the instructions above.
-
-
-   Some example Maven commands you can use after you perform the above
-   preparatory steps:
-
-   - Compile, package, and install all binary artifacts to your local
-     repository:
-
-         mvn install
-
-     After compiling and packaging, but before installing each module's 
-     artifact, the above command will also run all the module's tests.
-     
-     As noted in point 5 above, the resulting artifacts are not the same as
-     those created by the native Ant-based build and should never be used
-     in production.
-
-   - Compile, package, and install all binary artifacts to your local
-     repository, without running any tests:
-
-         mvn -DskipTests install
-
-   - Compile, package, and install all binary and source artifacts to your
-     local repository, without running any tests:
-
-         mvn -DskipTests source:jar-no-fork install
-
-   - Run all tests:
-
-         mvn test
-
-   - Run all test methods defined in a test class:
-
-         mvn -Dtest=TestClassName test
diff --git a/dev-tools/maven/lucene/analysis/common/pom.xml.template b/dev-tools/maven/lucene/analysis/common/pom.xml.template
deleted file mode 100644
index 86d8202..0000000
--- a/dev-tools/maven/lucene/analysis/common/pom.xml.template
+++ /dev/null
@@ -1,86 +0,0 @@
-<!--
-  Licensed to the Apache Software Foundation (ASF) under one
-  or more contributor license agreements.  See the NOTICE file
-  distributed with this work for additional information
-  regarding copyright ownership.  The ASF licenses this file
-  to you under the Apache License, Version 2.0 (the
-  "License"); you may not use this file except in compliance
-  with the License.  You may obtain a copy of the License at
-  
-  http://www.apache.org/licenses/LICENSE-2.0
-  
-  Unless required by applicable law or agreed to in writing,
-  software distributed under the License is distributed on an
-  "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-  KIND, either express or implied.  See the License for the
-  specific language governing permissions and limitations
-  under the License.
--->
-<project xmlns="http://maven.apache.org/POM/4.0.0"
-         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
-         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
-  <modelVersion>4.0.0</modelVersion>
-  <parent>
-    <groupId>org.apache.lucene</groupId>
-    <artifactId>lucene-parent</artifactId>
-    <version>@version@</version>
-    <relativePath>../../pom.xml</relativePath>
-  </parent>
-  <groupId>org.apache.lucene</groupId>
-  <artifactId>lucene-analyzers-common</artifactId>
-  <packaging>jar</packaging>
-  <name>Lucene Common Analyzers</name>
-  <description>Additional Analyzers</description>
-  <properties>
-    <module-directory>lucene/analysis/common</module-directory>
-    <relative-top-level>../../../..</relative-top-level>
-    <module-path>${relative-top-level}/${module-directory}</module-path>
-  </properties>
-  <scm>
-    <connection>scm:git:${vc-anonymous-base-url}</connection>
-    <developerConnection>scm:git:${vc-dev-base-url}</developerConnection>
-    <url>${vc-browse-base-url};f=${module-directory}</url>
-  </scm>
-  <dependencies>
-    <dependency> 
-      <!-- lucene-test-framework dependency must be declared before lucene-core -->
-      <groupId>org.apache.lucene</groupId>
-      <artifactId>lucene-test-framework</artifactId>
-      <scope>test</scope>
-    </dependency>
-@lucene-analyzers-common.internal.dependencies@
-@lucene-analyzers-common.external.dependencies@
-@lucene-analyzers-common.internal.test.dependencies@
-@lucene-analyzers-common.external.test.dependencies@
-  </dependencies>
-  <build>
-    <sourceDirectory>${module-path}/src/java</sourceDirectory>
-    <testSourceDirectory>${module-path}/src/test</testSourceDirectory>
-    <resources>
-      <resource>
-        <directory>${module-path}/src/resources</directory>
-      </resource>
-    </resources>
-    <testResources>
-      <testResource>
-        <directory>${project.build.testSourceDirectory}</directory>
-        <excludes>
-          <exclude>**/*.java</exclude>
-        </excludes>
-      </testResource>
-    </testResources>
-    <plugins>
-      <plugin>
-        <groupId>org.apache.maven.plugins</groupId>
-        <artifactId>maven-jar-plugin</artifactId>
-        <executions>
-          <execution>
-            <goals>
-              <goal>test-jar</goal>
-            </goals>
-          </execution>
-        </executions>
-      </plugin>
-    </plugins>
-  </build>
-</project>
diff --git a/dev-tools/maven/lucene/analysis/icu/pom.xml.template b/dev-tools/maven/lucene/analysis/icu/pom.xml.template
deleted file mode 100644
index 34c343e..0000000
--- a/dev-tools/maven/lucene/analysis/icu/pom.xml.template
+++ /dev/null
@@ -1,76 +0,0 @@
-<!--
-  Licensed to the Apache Software Foundation (ASF) under one
-  or more contributor license agreements.  See the NOTICE file
-  distributed with this work for additional information
-  regarding copyright ownership.  The ASF licenses this file
-  to you under the Apache License, Version 2.0 (the
-  "License"); you may not use this file except in compliance
-  with the License.  You may obtain a copy of the License at
-  
-  http://www.apache.org/licenses/LICENSE-2.0
-  
-  Unless required by applicable law or agreed to in writing,
-  software distributed under the License is distributed on an
-  "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-  KIND, either express or implied.  See the License for the
-  specific language governing permissions and limitations
-  under the License.
--->
-<project xmlns="http://maven.apache.org/POM/4.0.0"
-         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
-         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
-  <modelVersion>4.0.0</modelVersion>
-  <parent>
-    <groupId>org.apache.lucene</groupId>
-    <artifactId>lucene-parent</artifactId>
-    <version>@version@</version>
-    <relativePath>../../pom.xml</relativePath>
-  </parent>
-  <groupId>org.apache.lucene</groupId>
-  <artifactId>lucene-analyzers-icu</artifactId>
-  <packaging>jar</packaging>
-  <name>Lucene ICU Analysis Components</name>
-  <description>    
-    Provides integration with ICU (International Components for Unicode) for
-    stronger Unicode and internationalization support.
-  </description>
-  <properties>
-    <module-directory>lucene/analysis/icu</module-directory>
-    <relative-top-level>../../../..</relative-top-level>
-    <module-path>${relative-top-level}/${module-directory}</module-path>
-  </properties>
-  <scm>
-    <connection>scm:git:${vc-anonymous-base-url}</connection>
-    <developerConnection>scm:git:${vc-dev-base-url}</developerConnection>
-    <url>${vc-browse-base-url};f=${module-directory}</url>
-  </scm>
-  <dependencies>
-    <dependency> 
-      <!-- lucene-test-framework dependency must be declared before lucene-core -->
-      <groupId>${project.groupId}</groupId>
-      <artifactId>lucene-test-framework</artifactId>
-      <scope>test</scope>
-    </dependency>
-@lucene-analyzers-icu.internal.dependencies@
-@lucene-analyzers-icu.external.dependencies@
-@lucene-analyzers-icu.internal.test.dependencies@
-@lucene-analyzers-icu.external.test.dependencies@
-  </dependencies>
-  <build>
-    <sourceDirectory>${module-path}/src/java</sourceDirectory>
-    <testSourceDirectory>${module-path}/src/test</testSourceDirectory>
-    <resources>
-      <resource>
-        <directory>${module-path}/src/resources</directory>
-      </resource>
-    </resources>
-    <testResources>
-      <testResource>
-        <directory>${project.build.testSourceDirectory}</directory>
-        <excludes>
-          <exclude>**/*.java</exclude>
-        </excludes>
-      </testResource>
-    </testResources>
-  </build>
-</project>
diff --git a/dev-tools/maven/lucene/analysis/kuromoji/pom.xml.template b/dev-tools/maven/lucene/analysis/kuromoji/pom.xml.template
deleted file mode 100644
index 21d92f7..0000000
--- a/dev-tools/maven/lucene/analysis/kuromoji/pom.xml.template
+++ /dev/null
@@ -1,75 +0,0 @@
-<!--
-  Licensed to the Apache Software Foundation (ASF) under one
-  or more contributor license agreements.  See the NOTICE file
-  distributed with this work for additional information
-  regarding copyright ownership.  The ASF licenses this file
-  to you under the Apache License, Version 2.0 (the
-  "License"); you may not use this file except in compliance
-  with the License.  You may obtain a copy of the License at
-  
-  http://www.apache.org/licenses/LICENSE-2.0
-  
-  Unless required by applicable law or agreed to in writing,
-  software distributed under the License is distributed on an
-  "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-  KIND, either express or implied.  See the License for the
-  specific language governing permissions and limitations
-  under the License.
--->
-<project xmlns="http://maven.apache.org/POM/4.0.0"
-         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
-         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
-  <modelVersion>4.0.0</modelVersion>
-  <parent>
-    <groupId>org.apache.lucene</groupId>
-    <artifactId>lucene-parent</artifactId>
-    <version>@version@</version>
-    <relativePath>../../pom.xml</relativePath>
-  </parent>
-  <groupId>org.apache.lucene</groupId>
-  <artifactId>lucene-analyzers-kuromoji</artifactId>
-  <packaging>jar</packaging>
-  <name>Lucene Kuromoji Japanese Morphological Analyzer</name>
-  <description>
-    Lucene Kuromoji Japanese Morphological Analyzer
-  </description>
-  <properties>
-    <module-directory>lucene/analysis/kuromoji</module-directory>
-    <relative-top-level>../../../..</relative-top-level>
-    <module-path>${relative-top-level}/${module-directory}</module-path>
-  </properties>
-  <scm>
-    <connection>scm:git:${vc-anonymous-base-url}</connection>
-    <developerConnection>scm:git:${vc-dev-base-url}</developerConnection>
-    <url>${vc-browse-base-url};f=${module-directory}</url>
-  </scm>
-  <dependencies>
-    <dependency> 
-      <!-- lucene-test-framework dependency must be declared before lucene-core -->
-      <groupId>org.apache.lucene</groupId>
-      <artifactId>lucene-test-framework</artifactId>
-      <scope>test</scope>
-    </dependency>
-@lucene-analyzers-kuromoji.internal.dependencies@
-@lucene-analyzers-kuromoji.external.dependencies@
-@lucene-analyzers-kuromoji.internal.test.dependencies@
-@lucene-analyzers-kuromoji.external.test.dependencies@
-  </dependencies>
-  <build>
-    <sourceDirectory>${module-path}/src/java</sourceDirectory>
-    <testSourceDirectory>${module-path}/src/test</testSourceDirectory>
-    <resources>
-      <resource>
-        <directory>${module-path}/src/resources</directory>
-      </resource>
-    </resources>
-    <testResources>
-      <testResource>
-        <directory>${project.build.testSourceDirectory}</directory>
-        <excludes>
-          <exclude>**/*.java</exclude>
-        </excludes>
-      </testResource>
-    </testResources>
-  </build>
-</project>
diff --git a/dev-tools/maven/lucene/analysis/morfologik/pom.xml.template b/dev-tools/maven/lucene/analysis/morfologik/pom.xml.template
deleted file mode 100644
index bd63d3a..0000000
--- a/dev-tools/maven/lucene/analysis/morfologik/pom.xml.template
+++ /dev/null
@@ -1,78 +0,0 @@
-<!--
-  Licensed to the Apache Software Foundation (ASF) under one
-  or more contributor license agreements.  See the NOTICE file
-  distributed with this work for additional information
-  regarding copyright ownership.  The ASF licenses this file
-  to you under the Apache License, Version 2.0 (the
-  "License"); you may not use this file except in compliance
-  with the License.  You may obtain a copy of the License at
-  
-  http://www.apache.org/licenses/LICENSE-2.0
-  
-  Unless required by applicable law or agreed to in writing,
-  software distributed under the License is distributed on an
-  "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-  KIND, either express or implied.  See the License for the
-  specific language governing permissions and limitations
-  under the License.
--->
-<project xmlns="http://maven.apache.org/POM/4.0.0"
-         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
-         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
-  <modelVersion>4.0.0</modelVersion>
-  <parent>
-    <groupId>org.apache.lucene</groupId>
-    <artifactId>lucene-parent</artifactId>
-    <version>@version@</version>
-    <relativePath>../../pom.xml</relativePath>
-  </parent>
-  <groupId>org.apache.lucene</groupId>
-  <artifactId>lucene-analyzers-morfologik</artifactId>
-  <packaging>jar</packaging>
-  <name>Lucene Morfologik Polish Lemmatizer</name>
-  <description>
-    A dictionary-driven lemmatizer for Polish (includes morphosyntactic annotations)
-  </description>
-  <properties>
-    <module-directory>lucene/analysis/morfologik</module-directory>
-    <relative-top-level>../../../..</relative-top-level>
-    <module-path>${relative-top-level}/${module-directory}</module-path>
-  </properties>
-  <scm>
-    <connection>scm:git:${vc-anonymous-base-url}</connection>
-    <developerConnection>scm:git:${vc-dev-base-url}</developerConnection>
-    <url>${vc-browse-base-url};f=${module-directory}</url>
-  </scm>
-  <dependencies>
-    <dependency> 
-      <!-- lucene-test-framework dependency must be declared before lucene-core -->
-      <groupId>org.apache.lucene</groupId>
-      <artifactId>lucene-test-framework</artifactId>
-      <scope>test</scope>
-    </dependency>
-@lucene-analyzers-morfologik.internal.dependencies@
-@lucene-analyzers-morfologik.external.dependencies@
-@lucene-analyzers-morfologik.internal.test.dependencies@
-@lucene-analyzers-morfologik.external.test.dependencies@
-  </dependencies>
-  <build>
-    <sourceDirectory>${module-path}/src/java</sourceDirectory>
-    <testSourceDirectory>${module-path}/src/test</testSourceDirectory>
-    <resources>
-      <resource>
-        <directory>${module-path}/src/resources</directory>
-      </resource>
-    </resources>
-    <testResources>
-      <testResource>
-        <directory>${project.build.testSourceDirectory}</directory>
-        <excludes>
-          <exclude>**/*.java</exclude>
-        </excludes>
-      </testResource>
-      <testResource>
-        <directory>${module-path}/src/test-files</directory>
-      </testResource>
-    </testResources>
-  </build>
-</project>
diff --git a/dev-tools/maven/lucene/analysis/nori/pom.xml.template b/dev-tools/maven/lucene/analysis/nori/pom.xml.template
deleted file mode 100644
index ac37a08..0000000
--- a/dev-tools/maven/lucene/analysis/nori/pom.xml.template
+++ /dev/null
@@ -1,75 +0,0 @@
-<!--
-  Licensed to the Apache Software Foundation (ASF) under one
-  or more contributor license agreements.  See the NOTICE file
-  distributed with this work for additional information
-  regarding copyright ownership.  The ASF licenses this file
-  to you under the Apache License, Version 2.0 (the
-  "License"); you may not use this file except in compliance
-  with the License.  You may obtain a copy of the License at
-  
-  http://www.apache.org/licenses/LICENSE-2.0
-  
-  Unless required by applicable law or agreed to in writing,
-  software distributed under the License is distributed on an
-  "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-  KIND, either express or implied.  See the License for the
-  specific language governing permissions and limitations
-  under the License.
--->
-<project xmlns="http://maven.apache.org/POM/4.0.0"
-         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
-         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
-  <modelVersion>4.0.0</modelVersion>
-  <parent>
-    <groupId>org.apache.lucene</groupId>
-    <artifactId>lucene-parent</artifactId>
-    <version>@version@</version>
-    <relativePath>../../pom.xml</relativePath>
-  </parent>
-  <groupId>org.apache.lucene</groupId>
-  <artifactId>lucene-analyzers-nori</artifactId>
-  <packaging>jar</packaging>
-  <name>Lucene Nori Korean Morphological Analyzer</name>
-  <description>
-    Lucene Nori Korean Morphological Analyzer
-  </description>
-  <properties>
-    <module-directory>lucene/analysis/nori</module-directory>
-    <relative-top-level>../../../..</relative-top-level>
-    <module-path>${relative-top-level}/${module-directory}</module-path>
-  </properties>
-  <scm>
-    <connection>scm:git:${vc-anonymous-base-url}</connection>
-    <developerConnection>scm:git:${vc-dev-base-url}</developerConnection>
-    <url>${vc-browse-base-url};f=${module-directory}</url>
-  </scm>
-  <dependencies>
-    <dependency> 
-      <!-- lucene-test-framework dependency must be declared before lucene-core -->
-      <groupId>org.apache.lucene</groupId>
-      <artifactId>lucene-test-framework</artifactId>
-      <scope>test</scope>
-    </dependency>
-@lucene-analyzers-nori.internal.dependencies@
-@lucene-analyzers-nori.external.dependencies@
-@lucene-analyzers-nori.internal.test.dependencies@
-@lucene-analyzers-nori.external.test.dependencies@
-  </dependencies>
-  <build>
-    <sourceDirectory>${module-path}/src/java</sourceDirectory>
-    <testSourceDirectory>${module-path}/src/test</testSourceDirectory>
-    <resources>
-      <resource>
-        <directory>${module-path}/src/resources</directory>
-      </resource>
-    </resources>
-    <testResources>
-      <testResource>
-        <directory>${project.build.testSourceDirectory}</directory>
-        <excludes>
-          <exclude>**/*.java</exclude>
-        </excludes>
-      </testResource>
-    </testResources>
-  </build>
-</project>
diff --git a/dev-tools/maven/lucene/analysis/opennlp/pom.xml.template b/dev-tools/maven/lucene/analysis/opennlp/pom.xml.template
deleted file mode 100644
index 4109a0a..0000000
--- a/dev-tools/maven/lucene/analysis/opennlp/pom.xml.template
+++ /dev/null
@@ -1,78 +0,0 @@
-<!--
-  Licensed to the Apache Software Foundation (ASF) under one
-  or more contributor license agreements.  See the NOTICE file
-  distributed with this work for additional information
-  regarding copyright ownership.  The ASF licenses this file
-  to you under the Apache License, Version 2.0 (the
-  "License"); you may not use this file except in compliance
-  with the License.  You may obtain a copy of the License at
-
-  http://www.apache.org/licenses/LICENSE-2.0
-
-  Unless required by applicable law or agreed to in writing,
-  software distributed under the License is distributed on an
-  "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-  KIND, either express or implied.  See the License for the
-  specific language governing permissions and limitations
-  under the License.
--->
-<project xmlns="http://maven.apache.org/POM/4.0.0"
-         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
-         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
-  <modelVersion>4.0.0</modelVersion>
-  <parent>
-    <groupId>org.apache.lucene</groupId>
-    <artifactId>lucene-parent</artifactId>
-    <version>@version@</version>
-    <relativePath>../../pom.xml</relativePath>
-  </parent>
-  <groupId>org.apache.lucene</groupId>
-  <artifactId>lucene-analyzers-opennlp</artifactId>
-  <packaging>jar</packaging>
-  <name>Lucene OpenNLP integration</name>
-  <description>
-    Lucene OpenNLP integration
-  </description>
-  <properties>
-    <module-directory>lucene/analysis/opennlp</module-directory>
-    <relative-top-level>../../../..</relative-top-level>
-    <module-path>${relative-top-level}/${module-directory}</module-path>
-  </properties>
-  <scm>
-    <connection>scm:git:${vc-anonymous-base-url}</connection>
-    <developerConnection>scm:git:${vc-dev-base-url}</developerConnection>
-    <url>${vc-browse-base-url};f=${module-directory}</url>
-  </scm>
-  <dependencies>
-    <dependency>
-      <!-- lucene-test-framework dependency must be declared before lucene-core -->
-      <groupId>org.apache.lucene</groupId>
-      <artifactId>lucene-test-framework</artifactId>
-      <scope>test</scope>
-    </dependency>
-    @lucene-analyzers-opennlp.internal.dependencies@
-    @lucene-analyzers-opennlp.external.dependencies@
-    @lucene-analyzers-opennlp.internal.test.dependencies@
-    @lucene-analyzers-opennlp.external.test.dependencies@
-  </dependencies>
-  <build>
-    <sourceDirectory>${module-path}/src/java</sourceDirectory>
-    <testSourceDirectory>${module-path}/src/test</testSourceDirectory>
-    <resources>
-      <resource>
-        <directory>${module-path}/src/resources</directory>
-      </resource>
-    </resources>
-    <testResources>
-      <testResource>
-        <directory>${project.build.testSourceDirectory}</directory>
-        <excludes>
-          <exclude>**/*.java</exclude>
-        </excludes>
-      </testResource>
-      <testResource>
-        <directory>${module-path}/src/test-files</directory>
-      </testResource>
-    </testResources>
-  </build>
-</project>
diff --git a/dev-tools/maven/lucene/analysis/phonetic/pom.xml.template b/dev-tools/maven/lucene/analysis/phonetic/pom.xml.template
deleted file mode 100644
index a8e8e93..0000000
--- a/dev-tools/maven/lucene/analysis/phonetic/pom.xml.template
+++ /dev/null
@@ -1,75 +0,0 @@
-<!--
-  Licensed to the Apache Software Foundation (ASF) under one
-  or more contributor license agreements.  See the NOTICE file
-  distributed with this work for additional information
-  regarding copyright ownership.  The ASF licenses this file
-  to you under the Apache License, Version 2.0 (the
-  "License"); you may not use this file except in compliance
-  with the License.  You may obtain a copy of the License at
-  
-  http://www.apache.org/licenses/LICENSE-2.0
-  
-  Unless required by applicable law or agreed to in writing,
-  software distributed under the License is distributed on an
-  "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-  KIND, either express or implied.  See the License for the
-  specific language governing permissions and limitations
-  under the License.
--->
-<project xmlns="http://maven.apache.org/POM/4.0.0"
-         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
-         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
-  <modelVersion>4.0.0</modelVersion>
-  <parent>
-    <groupId>org.apache.lucene</groupId>
-    <artifactId>lucene-parent</artifactId>
-    <version>@version@</version>
-    <relativePath>../../pom.xml</relativePath>
-  </parent>
-  <groupId>org.apache.lucene</groupId>
-  <artifactId>lucene-analyzers-phonetic</artifactId>
-  <packaging>jar</packaging>
-  <name>Lucene Phonetic Filters</name>
-  <description>
-    Provides phonetic encoding via Commons Codec.
-  </description>
-  <properties>
-    <module-directory>lucene/analysis/phonetic</module-directory>
-    <relative-top-level>../../../..</relative-top-level>
-    <module-path>${relative-top-level}/${module-directory}</module-path>
-  </properties>
-  <scm>
-    <connection>scm:git:${vc-anonymous-base-url}</connection>
-    <developerConnection>scm:git:${vc-dev-base-url}</developerConnection>
-    <url>${vc-browse-base-url};f=${module-directory}</url>
-  </scm>
-  <dependencies>
-    <dependency> 
-      <!-- lucene-test-framework dependency must be declared before lucene-core -->
-      <groupId>org.apache.lucene</groupId>
-      <artifactId>lucene-test-framework</artifactId>
-      <scope>test</scope>
-    </dependency>
-@lucene-analyzers-phonetic.internal.dependencies@
-@lucene-analyzers-phonetic.external.dependencies@
-@lucene-analyzers-phonetic.internal.test.dependencies@
-@lucene-analyzers-phonetic.external.test.dependencies@
-  </dependencies>
-  <build>
-    <sourceDirectory>${module-path}/src/java</sourceDirectory>
-    <testSourceDirectory>${module-path}/src/test</testSourceDirectory>
-    <resources>
-      <resource>
-        <directory>${module-path}/src/resources</directory>
-      </resource>
-    </resources>
-    <testResources>
-      <testResource>
-        <directory>${project.build.testSourceDirectory}</directory>
-        <excludes>
-          <exclude>**/*.java</exclude>
-        </excludes>
-      </testResource>
-    </testResources>
-  </build>
-</project>
diff --git a/dev-tools/maven/lucene/analysis/pom.xml.template b/dev-tools/maven/lucene/analysis/pom.xml.template
deleted file mode 100644
index f41aa06..0000000
--- a/dev-tools/maven/lucene/analysis/pom.xml.template
+++ /dev/null
@@ -1,55 +0,0 @@
-<!--
-  Licensed to the Apache Software Foundation (ASF) under one
-  or more contributor license agreements.  See the NOTICE file
-  distributed with this work for additional information
-  regarding copyright ownership.  The ASF licenses this file
-  to you under the Apache License, Version 2.0 (the
-  "License"); you may not use this file except in compliance
-  with the License.  You may obtain a copy of the License at
-  
-  http://www.apache.org/licenses/LICENSE-2.0
-  
-  Unless required by applicable law or agreed to in writing,
-  software distributed under the License is distributed on an
-  "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-  KIND, either express or implied.  See the License for the
-  specific language governing permissions and limitations
-  under the License.
--->
-<project xmlns="http://maven.apache.org/POM/4.0.0"
-         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
-         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
-  <modelVersion>4.0.0</modelVersion>
-  <parent>
-    <groupId>org.apache.lucene</groupId>
-    <artifactId>lucene-parent</artifactId>
-    <version>@version@</version>
-    <relativePath>../pom.xml</relativePath>
-  </parent>
-  <groupId>org.apache.lucene</groupId>
-  <artifactId>lucene-analysis-modules-aggregator</artifactId>
-  <name>Lucene Analysis Modules aggregator POM</name>
-  <packaging>pom</packaging>
-  <modules>
-    <module>common</module>
-    <module>icu</module>
-    <module>kuromoji</module>
-    <module>morfologik</module>
-    <module>nori</module>
-    <module>opennlp</module>
-    <module>phonetic</module>
-    <module>smartcn</module>
-    <module>stempel</module>
-  </modules>
-  <build>
-    <plugins>
-      <plugin>
-        <groupId>org.apache.maven.plugins</groupId>
-        <artifactId>maven-deploy-plugin</artifactId>
-        <configuration>
-          <skip>true</skip>
-        </configuration>
-      </plugin>
-    </plugins>
-  </build>
-</project>
diff --git a/dev-tools/maven/lucene/analysis/smartcn/pom.xml.template b/dev-tools/maven/lucene/analysis/smartcn/pom.xml.template
deleted file mode 100644
index a795b76..0000000
--- a/dev-tools/maven/lucene/analysis/smartcn/pom.xml.template
+++ /dev/null
@@ -1,73 +0,0 @@
-<!--
-  Licensed to the Apache Software Foundation (ASF) under one
-  or more contributor license agreements.  See the NOTICE file
-  distributed with this work for additional information
-  regarding copyright ownership.  The ASF licenses this file
-  to you under the Apache License, Version 2.0 (the
-  "License"); you may not use this file except in compliance
-  with the License.  You may obtain a copy of the License at
-  
-  http://www.apache.org/licenses/LICENSE-2.0
-  
-  Unless required by applicable law or agreed to in writing,
-  software distributed under the License is distributed on an
-  "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-  KIND, either express or implied.  See the License for the
-  specific language governing permissions and limitations
-  under the License.
--->
-<project xmlns="http://maven.apache.org/POM/4.0.0"
-         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
-         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
-  <modelVersion>4.0.0</modelVersion>
-  <parent>
-    <groupId>org.apache.lucene</groupId>
-    <artifactId>lucene-parent</artifactId>
-    <version>@version@</version>
-    <relativePath>../../pom.xml</relativePath>
-  </parent>
-  <groupId>org.apache.lucene</groupId>
-  <artifactId>lucene-analyzers-smartcn</artifactId>
-  <packaging>jar</packaging>
-  <name>Lucene Smart Chinese Analyzer</name>
-  <description>Smart Chinese Analyzer</description>
-  <properties>
-    <module-directory>lucene/analysis/smartcn</module-directory>
-    <relative-top-level>../../../..</relative-top-level>
-    <module-path>${relative-top-level}/${module-directory}</module-path>
-  </properties>
-  <scm>
-    <connection>scm:git:${vc-anonymous-base-url}</connection>
-    <developerConnection>scm:git:${vc-dev-base-url}</developerConnection>
-    <url>${vc-browse-base-url};f=${module-directory}</url>
-  </scm>
-  <dependencies>
-    <dependency> 
-      <!-- lucene-test-framework dependency must be declared before lucene-core -->
-      <groupId>org.apache.lucene</groupId>
-      <artifactId>lucene-test-framework</artifactId>
-      <scope>test</scope>
-    </dependency>
-@lucene-analyzers-smartcn.internal.dependencies@
-@lucene-analyzers-smartcn.external.dependencies@
-@lucene-analyzers-smartcn.internal.test.dependencies@
-@lucene-analyzers-smartcn.external.test.dependencies@
-  </dependencies>
-  <build>
-    <sourceDirectory>${module-path}/src/java</sourceDirectory>
-    <testSourceDirectory>${module-path}/src/test</testSourceDirectory>
-    <resources>
-      <resource>
-        <directory>${module-path}/src/resources</directory>
-      </resource>
-    </resources>
-    <testResources>
-      <testResource>
-        <directory>${project.build.testSourceDirectory}</directory>
-        <excludes>
-          <exclude>**/*.java</exclude>
-        </excludes>
-      </testResource>
-    </testResources>
-  </build>
-</project>
diff --git a/dev-tools/maven/lucene/analysis/stempel/pom.xml.template b/dev-tools/maven/lucene/analysis/stempel/pom.xml.template
deleted file mode 100644
index f32e067..0000000
--- a/dev-tools/maven/lucene/analysis/stempel/pom.xml.template
+++ /dev/null
@@ -1,73 +0,0 @@
-<!--
-  Licensed to the Apache Software Foundation (ASF) under one
-  or more contributor license agreements.  See the NOTICE file
-  distributed with this work for additional information
-  regarding copyright ownership.  The ASF licenses this file
-  to you under the Apache License, Version 2.0 (the
-  "License"); you may not use this file except in compliance
-  with the License.  You may obtain a copy of the License at
-  
-  http://www.apache.org/licenses/LICENSE-2.0
-  
-  Unless required by applicable law or agreed to in writing,
-  software distributed under the License is distributed on an
-  "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-  KIND, either express or implied.  See the License for the
-  specific language governing permissions and limitations
-  under the License.
--->
-<project xmlns="http://maven.apache.org/POM/4.0.0"
-         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
-         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
-  <modelVersion>4.0.0</modelVersion>
-  <parent>
-    <groupId>org.apache.lucene</groupId>
-    <artifactId>lucene-parent</artifactId>
-    <version>@version@</version>
-    <relativePath>../../pom.xml</relativePath>
-  </parent>
-  <groupId>org.apache.lucene</groupId>
-  <artifactId>lucene-analyzers-stempel</artifactId>
-  <packaging>jar</packaging>
-  <name>Lucene Stempel Analyzer</name>
-  <description>Stempel Analyzer</description>
-  <properties>
-    <module-directory>lucene/analysis/stempel</module-directory>
-    <relative-top-level>../../../..</relative-top-level>
-    <module-path>${relative-top-level}/${module-directory}</module-path>
-  </properties>
-  <scm>
-    <connection>scm:git:${vc-anonymous-base-url}</connection>
-    <developerConnection>scm:git:${vc-dev-base-url}</developerConnection>
-    <url>${vc-browse-base-url};f=${module-directory}</url>
-  </scm>
-  <dependencies>
-    <dependency> 
-      <!-- lucene-test-framework dependency must be declared before lucene-core -->
-      <groupId>org.apache.lucene</groupId>
-      <artifactId>lucene-test-framework</artifactId>
-      <scope>test</scope>
-    </dependency>
-@lucene-analyzers-stempel.internal.dependencies@
-@lucene-analyzers-stempel.external.dependencies@
-@lucene-analyzers-stempel.internal.test.dependencies@
-@lucene-analyzers-stempel.external.test.dependencies@
-  </dependencies>
-  <build>
-    <sourceDirectory>${module-path}/src/java</sourceDirectory>
-    <testSourceDirectory>${module-path}/src/test</testSourceDirectory>
-    <resources>
-      <resource>
-        <directory>${module-path}/src/resources</directory>
-      </resource>
-    </resources>
-    <testResources>
-      <testResource>
-        <directory>${project.build.testSourceDirectory}</directory>
-        <excludes>
-          <exclude>**/*.java</exclude>
-        </excludes>
-      </testResource>
-    </testResources>
-  </build>
-</project>
diff --git a/dev-tools/maven/lucene/backward-codecs/pom.xml.template b/dev-tools/maven/lucene/backward-codecs/pom.xml.template
deleted file mode 100644
index 271276d..0000000
--- a/dev-tools/maven/lucene/backward-codecs/pom.xml.template
+++ /dev/null
@@ -1,88 +0,0 @@
-<!--
-  Licensed to the Apache Software Foundation (ASF) under one
-  or more contributor license agreements.  See the NOTICE file
-  distributed with this work for additional information
-  regarding copyright ownership.  The ASF licenses this file
-  to you under the Apache License, Version 2.0 (the
-  "License"); you may not use this file except in compliance
-  with the License.  You may obtain a copy of the License at
-  
-  http://www.apache.org/licenses/LICENSE-2.0
-  
-  Unless required by applicable law or agreed to in writing,
-  software distributed under the License is distributed on an
-  "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-  KIND, either express or implied.  See the License for the
-  specific language governing permissions and limitations
-  under the License.
--->
-<project xmlns="http://maven.apache.org/POM/4.0.0"
-         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
-         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
-  <modelVersion>4.0.0</modelVersion>
-  <parent>
-    <groupId>org.apache.lucene</groupId>
-    <artifactId>lucene-parent</artifactId>
-    <version>@version@</version>
-    <relativePath>../pom.xml</relativePath>
-  </parent>
-  <groupId>org.apache.lucene</groupId>
-  <artifactId>lucene-backward-codecs</artifactId>
-  <packaging>jar</packaging>
-  <name>Lucene Backward Codecs</name>
-  <description>
-    Codecs for older versions of Lucene.
-  </description>
-  <properties>
-    <module-directory>lucene/backward-codecs</module-directory>
-    <relative-top-level>../../..</relative-top-level>
-    <module-path>${relative-top-level}/${module-directory}</module-path>
-  </properties>
-  <scm>
-    <connection>scm:git:${vc-anonymous-base-url}</connection>
-    <developerConnection>scm:git:${vc-dev-base-url}</developerConnection>
-    <url>${vc-browse-base-url};f=${module-directory}</url>
-  </scm>
-  <dependencies>
-    <dependency> 
-      <!-- lucene-test-framework dependency must be declared before lucene-core -->
-      <groupId>org.apache.lucene</groupId>
-      <artifactId>lucene-test-framework</artifactId>
-      <scope>test</scope>
-    </dependency>
-@lucene-backward-codecs.internal.dependencies@
-@lucene-backward-codecs.external.dependencies@
-@lucene-backward-codecs.internal.test.dependencies@
-@lucene-backward-codecs.external.test.dependencies@
-  </dependencies>
-  <build>
-    <sourceDirectory>${module-path}/src/java</sourceDirectory>
-    <testSourceDirectory>${module-path}/src/test</testSourceDirectory>
-    <resources>
-      <resource>
-        <directory>${module-path}/src/resources</directory>
-      </resource>
-    </resources>
-    <testResources>
-      <testResource>
-        <directory>${project.build.testSourceDirectory}</directory>
-        <excludes>
-          <exclude>**/*.java</exclude>
-        </excludes>
-      </testResource>
-    </testResources>
-    <plugins>
-      <plugin>
-        <groupId>org.apache.maven.plugins</groupId>
-        <artifactId>maven-jar-plugin</artifactId>
-        <executions>
-          <execution>
-            <goals>
-              <goal>test-jar</goal>
-            </goals>
-          </execution>
-        </executions>
-      </plugin>
-    </plugins>
-  </build>
-</project>
diff --git a/dev-tools/maven/lucene/benchmark/pom.xml.template b/dev-tools/maven/lucene/benchmark/pom.xml.template
deleted file mode 100644
index 6c4a245..0000000
--- a/dev-tools/maven/lucene/benchmark/pom.xml.template
+++ /dev/null
@@ -1,86 +0,0 @@
-<!--
-  Licensed to the Apache Software Foundation (ASF) under one
-  or more contributor license agreements.  See the NOTICE file
-  distributed with this work for additional information
-  regarding copyright ownership.  The ASF licenses this file
-  to you under the Apache License, Version 2.0 (the
-  "License"); you may not use this file except in compliance
-  with the License.  You may obtain a copy of the License at
-  
-  http://www.apache.org/licenses/LICENSE-2.0
-  
-  Unless required by applicable law or agreed to in writing,
-  software distributed under the License is distributed on an
-  "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-  KIND, either express or implied.  See the License for the
-  specific language governing permissions and limitations
-  under the License.
--->
-<project xmlns="http://maven.apache.org/POM/4.0.0"
-         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
-         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
-  <modelVersion>4.0.0</modelVersion>
-  <parent>
-    <groupId>org.apache.lucene</groupId>
-    <artifactId>lucene-parent</artifactId>
-    <version>@version@</version>
-    <relativePath>../pom.xml</relativePath>
-  </parent>
-  <groupId>org.apache.lucene</groupId>
-  <artifactId>lucene-benchmark</artifactId>
-  <packaging>jar</packaging>
-  <name>Lucene Benchmark</name>
-  <description>Lucene Benchmarking Module</description>
-  <properties>
-    <module-directory>lucene/benchmark</module-directory>
-    <relative-top-level>../../..</relative-top-level>
-    <module-path>${relative-top-level}/${module-directory}</module-path>
-  </properties>
-  <scm>
-    <connection>scm:git:${vc-anonymous-base-url}</connection>
-    <developerConnection>scm:git:${vc-dev-base-url}</developerConnection>
-    <url>${vc-browse-base-url};f=${module-directory}</url>
-  </scm>
-  <dependencies>
-    <dependency> 
-      <!-- lucene-test-framework dependency must be declared before lucene-core -->
-      <groupId>org.apache.lucene</groupId>
-      <artifactId>lucene-test-framework</artifactId>
-      <scope>test</scope>
-    </dependency>
-@lucene-benchmark.internal.dependencies@
-@lucene-benchmark.external.dependencies@
-@lucene-benchmark.internal.test.dependencies@
-@lucene-benchmark.external.test.dependencies@
-  </dependencies>
-  <build>
-    <sourceDirectory>${module-path}/src/java</sourceDirectory>
-    <testSourceDirectory>${module-path}/src/test</testSourceDirectory>
-    <testResources>
-      <testResource>
-        <directory>${project.build.testSourceDirectory}</directory>
-        <excludes>
-          <exclude>**/*.java</exclude>
-        </excludes>
-      </testResource>
-      <testResource>
-        <directory>${module-path}</directory>
-        <includes>
-          <include>conf/**/*</include>
-        </includes>
-      </testResource>
-    </testResources>
-    <plugins>
-      <plugin>
-        <groupId>de.thetaphi</groupId>
-        <artifactId>forbiddenapis</artifactId>
-        <executions>
-          <execution>
-            <id>lucene-shared-check-sysout-forbidden-apis</id>
-            <phase>none</phase>  <!-- Block inherited execution -->
-          </execution>
-        </executions>
-      </plugin>
-    </plugins>
-  </build>
-</project>
diff --git a/dev-tools/maven/lucene/classification/pom.xml.template b/dev-tools/maven/lucene/classification/pom.xml.template
deleted file mode 100644
index 397d07d..0000000
--- a/dev-tools/maven/lucene/classification/pom.xml.template
+++ /dev/null
@@ -1,68 +0,0 @@
-<!--
-  Licensed to the Apache Software Foundation (ASF) under one
-  or more contributor license agreements.  See the NOTICE file
-  distributed with this work for additional information
-  regarding copyright ownership.  The ASF licenses this file
-  to you under the Apache License, Version 2.0 (the
-  "License"); you may not use this file except in compliance
-  with the License.  You may obtain a copy of the License at
-  
-  http://www.apache.org/licenses/LICENSE-2.0
-  
-  Unless required by applicable law or agreed to in writing,
-  software distributed under the License is distributed on an
-  "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-  KIND, either express or implied.  See the License for the
-  specific language governing permissions and limitations
-  under the License.
--->
-<project xmlns="http://maven.apache.org/POM/4.0.0"
-         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
-         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
-  <modelVersion>4.0.0</modelVersion>
-  <parent>
-    <groupId>org.apache.lucene</groupId>
-    <artifactId>lucene-parent</artifactId>
-    <version>@version@</version>
-    <relativePath>../pom.xml</relativePath>
-  </parent>
-  <groupId>org.apache.lucene</groupId>
-  <artifactId>lucene-classification</artifactId>
-  <packaging>jar</packaging>
-  <name>Lucene Classification</name>
-  <description>Lucene Classification</description>
-  <properties>
-    <module-directory>lucene/classification</module-directory>
-    <relative-top-level>../../..</relative-top-level>
-    <module-path>${relative-top-level}/${module-directory}</module-path>
-  </properties>
-  <scm>
-    <connection>scm:git:${vc-anonymous-base-url}</connection>
-    <developerConnection>scm:git:${vc-dev-base-url}</developerConnection>
-    <url>${vc-browse-base-url};f=${module-directory}</url>
-  </scm>
-  <dependencies>
-    <dependency>
-      <!-- lucene-test-framework dependency must be declared before lucene-core -->
-      <groupId>org.apache.lucene</groupId>
-      <artifactId>lucene-test-framework</artifactId>
-      <scope>test</scope>
-    </dependency>
-@lucene-classification.internal.dependencies@
-@lucene-classification.external.dependencies@
-@lucene-classification.internal.test.dependencies@
-@lucene-classification.external.test.dependencies@
-  </dependencies>
-  <build>
-    <sourceDirectory>${module-path}/src/java</sourceDirectory>
-    <testSourceDirectory>${module-path}/src/test</testSourceDirectory>
-    <testResources>
-      <testResource>
-        <directory>${project.build.testSourceDirectory}</directory>
-        <excludes>
-          <exclude>**/*.java</exclude>
-        </excludes>
-      </testResource>
-    </testResources>
-  </build>
-</project>
diff --git a/dev-tools/maven/lucene/codecs/pom.xml.template b/dev-tools/maven/lucene/codecs/pom.xml.template
deleted file mode 100644
index 269c23c..0000000
--- a/dev-tools/maven/lucene/codecs/pom.xml.template
+++ /dev/null
@@ -1,48 +0,0 @@
-<!--
-  Licensed to the Apache Software Foundation (ASF) under one
-  or more contributor license agreements.  See the NOTICE file
-  distributed with this work for additional information
-  regarding copyright ownership.  The ASF licenses this file
-  to you under the Apache License, Version 2.0 (the
-  "License"); you may not use this file except in compliance
-  with the License.  You may obtain a copy of the License at
-  
-  http://www.apache.org/licenses/LICENSE-2.0
-  
-  Unless required by applicable law or agreed to in writing,
-  software distributed under the License is distributed on an
-  "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-  KIND, either express or implied.  See the License for the
-  specific language governing permissions and limitations
-  under the License.
--->
-<project xmlns="http://maven.apache.org/POM/4.0.0"
-         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
-         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
-  <modelVersion>4.0.0</modelVersion>
-  <parent>
-    <groupId>org.apache.lucene</groupId>
-    <artifactId>lucene-parent</artifactId>
-    <version>@version@</version>
-    <relativePath>../pom.xml</relativePath>
-  </parent>
-  <groupId>org.apache.lucene</groupId>
-  <artifactId>lucene-codecs-aggregator</artifactId>
-  <packaging>pom</packaging>
-  <name>Lucene codecs aggregator POM</name>
-  <modules>
-    <module>src/java</module>
-    <module>src/test</module>
-  </modules>
-  <build>
-    <plugins>
-      <plugin>
-        <groupId>org.apache.maven.plugins</groupId>
-        <artifactId>maven-deploy-plugin</artifactId>
-        <configuration>
-          <skip>true</skip>
-        </configuration>
-      </plugin>
-    </plugins>
-  </build>
-</project>
diff --git a/dev-tools/maven/lucene/codecs/src/java/pom.xml.template b/dev-tools/maven/lucene/codecs/src/java/pom.xml.template
deleted file mode 100644
index 4d2f027..0000000
--- a/dev-tools/maven/lucene/codecs/src/java/pom.xml.template
+++ /dev/null
@@ -1,85 +0,0 @@
-<!--
-  Licensed to the Apache Software Foundation (ASF) under one
-  or more contributor license agreements.  See the NOTICE file
-  distributed with this work for additional information
-  regarding copyright ownership.  The ASF licenses this file
-  to you under the Apache License, Version 2.0 (the
-  "License"); you may not use this file except in compliance
-  with the License.  You may obtain a copy of the License at
-  
-  http://www.apache.org/licenses/LICENSE-2.0
-  
-  Unless required by applicable law or agreed to in writing,
-  software distributed under the License is distributed on an
-  "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-  KIND, either express or implied.  See the License for the
-  specific language governing permissions and limitations
-  under the License.
--->
-<project xmlns="http://maven.apache.org/POM/4.0.0"
-         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
-         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
-  <modelVersion>4.0.0</modelVersion>
-  <parent>
-    <groupId>org.apache.lucene</groupId>
-    <artifactId>lucene-parent</artifactId>
-    <version>@version@</version>
-    <relativePath>../../../pom.xml</relativePath>
-  </parent>
-  <groupId>org.apache.lucene</groupId>
-  <artifactId>lucene-codecs</artifactId>
-  <packaging>jar</packaging>
-  <name>Lucene codecs</name>
-  <description>
-    Codecs and postings formats for Apache Lucene.
-  </description>
-  <properties>
-    <module-directory>lucene/codecs</module-directory>
-    <relative-top-level>../../../../..</relative-top-level>
-    <module-path>${relative-top-level}/${module-directory}/src/java</module-path>
-  </properties>
-  <scm>
-    <connection>scm:git:${vc-anonymous-base-url}</connection>
-    <developerConnection>scm:git:${vc-dev-base-url}</developerConnection>
-    <url>${vc-browse-base-url};f=${module-directory}</url>
-  </scm>
-  <dependencies>
-@lucene-codecs.internal.dependencies@
-@lucene-codecs.external.dependencies@
-  </dependencies>
-  <build>
-    <sourceDirectory>${module-path}</sourceDirectory>
-    <resources>
-      <resource>
-        <directory>${module-path}/../resources</directory>
-      </resource>
-    </resources>
-    <testSourceDirectory/>
-    <plugins>
-      <plugin>
-        <groupId>org.apache.maven.plugins</groupId>
-        <artifactId>maven-surefire-plugin</artifactId>
-        <configuration>
-          <skip>true</skip> <!-- Tests are run from lucene-codecs-tests module -->
-        </configuration>
-      </plugin>
-      <plugin>
-        <groupId>org.apache.maven.plugins</groupId>
-        <artifactId>maven-compiler-plugin</artifactId>
-        <configuration>
-          <skip>true</skip> <!-- This skips test compilation - tests are run from lucene-codecs-tests module -->
-        </configuration>
-      </plugin>
-      <plugin>
-        <groupId>de.thetaphi</groupId>
-        <artifactId>forbiddenapis</artifactId>
-        <executions>
-          <execution>
-            <id>lucene-shared-test-check-forbidden-apis</id>
-            <phase>none</phase>  <!-- Block inherited execution -->
-          </execution>
-        </executions>
-      </plugin>
-    </plugins>
-  </build>
-</project>
diff --git a/dev-tools/maven/lucene/codecs/src/test/pom.xml.template b/dev-tools/maven/lucene/codecs/src/test/pom.xml.template
deleted file mode 100644
index d64f899..0000000
--- a/dev-tools/maven/lucene/codecs/src/test/pom.xml.template
+++ /dev/null
@@ -1,84 +0,0 @@
-<!--
-  Licensed to the Apache Software Foundation (ASF) under one
-  or more contributor license agreements.  See the NOTICE file
-  distributed with this work for additional information
-  regarding copyright ownership.  The ASF licenses this file
-  to you under the Apache License, Version 2.0 (the
-  "License"); you may not use this file except in compliance
-  with the License.  You may obtain a copy of the License at
-  
-  http://www.apache.org/licenses/LICENSE-2.0
-  
-  Unless required by applicable law or agreed to in writing,
-  software distributed under the License is distributed on an
-  "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-  KIND, either express or implied.  See the License for the
-  specific language governing permissions and limitations
-  under the License.
--->
-<project xmlns="http://maven.apache.org/POM/4.0.0"
-         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
-         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
-  <modelVersion>4.0.0</modelVersion>
-  <parent>
-    <groupId>org.apache.lucene</groupId>
-    <artifactId>lucene-parent</artifactId>
-    <version>@version@</version>
-    <relativePath>../../../pom.xml</relativePath>
-  </parent>
-  <groupId>org.apache.lucene</groupId>
-  <artifactId>lucene-codecs-tests</artifactId>
-  <name>Lucene codecs tests</name>
-  <packaging>jar</packaging>
-  <properties>
-    <module-directory>lucene/codecs</module-directory>
-    <relative-top-level>../../../../..</relative-top-level>
-    <module-path>${relative-top-level}/${module-directory}/src/test</module-path>
-  </properties>
-  <dependencies>
-    <dependency>
-      <!-- lucene-test-framework dependency must be declared before lucene-core -->
-      <groupId>org.apache.lucene</groupId>
-      <artifactId>lucene-test-framework</artifactId>
-      <scope>test</scope>
-    </dependency>
-    <dependency>
-      <groupId>org.apache.lucene</groupId>
-      <artifactId>lucene-codecs</artifactId>
-      <scope>test</scope>
-    </dependency>
-@lucene-codecs.internal.test.dependencies@
-@lucene-codecs.external.test.dependencies@
-  </dependencies>
-  <build>
-    <sourceDirectory/>
-    <testSourceDirectory>${module-path}</testSourceDirectory>
-    <testResources>
-      <testResource>
-        <directory>${project.build.testSourceDirectory}</directory>
-        <excludes>
-          <exclude>**/*.java</exclude>
-        </excludes>
-      </testResource>
-    </testResources>
-    <plugins>
-      <plugin>
-        <groupId>org.apache.maven.plugins</groupId>
-        <artifactId>maven-deploy-plugin</artifactId>
-        <configuration>
-          <skip>true</skip>
-        </configuration>
-      </plugin>
-      <plugin>
-        <groupId>de.thetaphi</groupId>
-        <artifactId>forbiddenapis</artifactId>
-        <executions>
-          <execution>
-            <id>lucene-shared-check-forbidden-apis</id>
-            <phase>none</phase> <!-- Block inherited execution -->
-          </execution>
-        </executions>
-      </plugin>
-    </plugins>
-  </build>
-</project>
diff --git a/dev-tools/maven/lucene/core/pom.xml.template b/dev-tools/maven/lucene/core/pom.xml.template
deleted file mode 100644
index f06e1e2..0000000
--- a/dev-tools/maven/lucene/core/pom.xml.template
+++ /dev/null
@@ -1,48 +0,0 @@
-<!--
-  Licensed to the Apache Software Foundation (ASF) under one
-  or more contributor license agreements.  See the NOTICE file
-  distributed with this work for additional information
-  regarding copyright ownership.  The ASF licenses this file
-  to you under the Apache License, Version 2.0 (the
-  "License"); you may not use this file except in compliance
-  with the License.  You may obtain a copy of the License at
-  
-  http://www.apache.org/licenses/LICENSE-2.0
-  
-  Unless required by applicable law or agreed to in writing,
-  software distributed under the License is distributed on an
-  "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-  KIND, either express or implied.  See the License for the
-  specific language governing permissions and limitations
-  under the License.
--->
-<project xmlns="http://maven.apache.org/POM/4.0.0"
-         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
-         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
-  <modelVersion>4.0.0</modelVersion>
-  <parent>
-    <groupId>org.apache.lucene</groupId>
-    <artifactId>lucene-parent</artifactId>
-    <version>@version@</version>
-    <relativePath>../pom.xml</relativePath>
-  </parent>
-  <groupId>org.apache.lucene</groupId>
-  <artifactId>lucene-core-aggregator</artifactId>
-  <packaging>pom</packaging>
-  <name>Lucene Core aggregator POM</name>
-  <modules>
-    <module>src/java</module>
-    <module>src/test</module>
-  </modules>
-  <build>
-    <plugins>
-      <plugin>
-        <groupId>org.apache.maven.plugins</groupId>
-        <artifactId>maven-deploy-plugin</artifactId>
-        <configuration>
-          <skip>true</skip>
-        </configuration>
-      </plugin>
-    </plugins>
-  </build>
-</project>
diff --git a/dev-tools/maven/lucene/core/src/java/pom.xml.template b/dev-tools/maven/lucene/core/src/java/pom.xml.template
deleted file mode 100644
index 89cdfe8..0000000
--- a/dev-tools/maven/lucene/core/src/java/pom.xml.template
+++ /dev/null
@@ -1,79 +0,0 @@
-<!--
-  Licensed to the Apache Software Foundation (ASF) under one
-  or more contributor license agreements.  See the NOTICE file
-  distributed with this work for additional information
-  regarding copyright ownership.  The ASF licenses this file
-  to you under the Apache License, Version 2.0 (the
-  "License"); you may not use this file except in compliance
-  with the License.  You may obtain a copy of the License at
-  
-  http://www.apache.org/licenses/LICENSE-2.0
-  
-  Unless required by applicable law or agreed to in writing,
-  software distributed under the License is distributed on an
-  "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-  KIND, either express or implied.  See the License for the
-  specific language governing permissions and limitations
-  under the License.
--->
-<project xmlns="http://maven.apache.org/POM/4.0.0"
-         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
-         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
-  <modelVersion>4.0.0</modelVersion>
-  <parent>
-    <groupId>org.apache.lucene</groupId>
-    <artifactId>lucene-parent</artifactId>
-    <version>@version@</version>
-    <relativePath>../../../pom.xml</relativePath>
-  </parent>
-  <groupId>org.apache.lucene</groupId>
-  <artifactId>lucene-core</artifactId>
-  <packaging>jar</packaging>
-  <name>Lucene Core</name>
-  <description>Apache Lucene Java Core</description>
-  <properties>
-    <module-directory>lucene/core</module-directory>
-    <relative-top-level>../../../../..</relative-top-level>
-    <module-path>${relative-top-level}/${module-directory}/src/java</module-path>
-  </properties>
-  <scm>
-    <connection>scm:git:${vc-anonymous-base-url}</connection>
-    <developerConnection>scm:git:${vc-dev-base-url}</developerConnection>
-    <url>${vc-browse-base-url};f=${module-directory}</url>
-  </scm>
-  <build>
-    <sourceDirectory>${module-path}</sourceDirectory>
-    <resources>
-      <resource>
-        <directory>${module-path}/../resources</directory>
-      </resource>
-    </resources>
-    <testSourceDirectory/>
-    <plugins>
-      <plugin>
-        <groupId>org.apache.maven.plugins</groupId>
-        <artifactId>maven-surefire-plugin</artifactId>
-        <configuration>
-          <skip>true</skip> <!-- Tests are run from lucene-core-tests module -->
-        </configuration>
-      </plugin>
-      <plugin>
-        <groupId>org.apache.maven.plugins</groupId>
-        <artifactId>maven-compiler-plugin</artifactId>
-        <configuration>
-          <skip>true</skip> <!-- This skips test compilation - tests are run from lucene-core-tests module -->
-        </configuration>
-      </plugin>
-      <plugin>
-        <groupId>de.thetaphi</groupId>
-        <artifactId>forbiddenapis</artifactId>
-        <executions>
-          <execution>
-            <id>lucene-shared-test-check-forbidden-apis</id>
-            <phase>none</phase>  <!-- Block inherited execution -->
-          </execution>
-        </executions>
-      </plugin>
-    </plugins>
-  </build>
-</project>
diff --git a/dev-tools/maven/lucene/core/src/test/pom.xml.template b/dev-tools/maven/lucene/core/src/test/pom.xml.template
deleted file mode 100644
index d50e7e6..0000000
--- a/dev-tools/maven/lucene/core/src/test/pom.xml.template
+++ /dev/null
@@ -1,84 +0,0 @@
-<!--
-  Licensed to the Apache Software Foundation (ASF) under one
-  or more contributor license agreements.  See the NOTICE file
-  distributed with this work for additional information
-  regarding copyright ownership.  The ASF licenses this file
-  to you under the Apache License, Version 2.0 (the
-  "License"); you may not use this file except in compliance
-  with the License.  You may obtain a copy of the License at
-  
-  http://www.apache.org/licenses/LICENSE-2.0
-  
-  Unless required by applicable law or agreed to in writing,
-  software distributed under the License is distributed on an
-  "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-  KIND, either express or implied.  See the License for the
-  specific language governing permissions and limitations
-  under the License.
--->
-<project xmlns="http://maven.apache.org/POM/4.0.0"
-         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
-         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
-  <modelVersion>4.0.0</modelVersion>
-  <parent>
-    <groupId>org.apache.lucene</groupId>
-    <artifactId>lucene-parent</artifactId>
-    <version>@version@</version>
-    <relativePath>../../../pom.xml</relativePath>
-  </parent>
-  <groupId>org.apache.lucene</groupId>
-  <artifactId>lucene-core-tests</artifactId>
-  <name>Lucene Core tests</name>
-  <packaging>jar</packaging>
-  <properties>
-    <module-directory>lucene/core</module-directory>
-    <relative-top-level>../../../../..</relative-top-level>
-    <module-path>${relative-top-level}/${module-directory}/src/test</module-path>
-  </properties>
-  <dependencies>
-    <dependency>
-      <!-- lucene-test-framework dependency must be declared before lucene-core -->
-      <groupId>org.apache.lucene</groupId>
-      <artifactId>lucene-test-framework</artifactId>
-      <scope>test</scope>
-    </dependency>
-    <dependency>
-      <groupId>org.apache.lucene</groupId>
-      <artifactId>lucene-core</artifactId>
-      <scope>test</scope>
-    </dependency>
-@lucene-core.internal.test.dependencies@
-@lucene-core.external.test.dependencies@
-  </dependencies>
-  <build>
-    <sourceDirectory/>
-    <testSourceDirectory>${module-path}</testSourceDirectory>
-    <testResources>
-      <testResource>
-        <directory>${project.build.testSourceDirectory}</directory>
-        <excludes>
-          <exclude>**/*.java</exclude>
-        </excludes>
-      </testResource>
-    </testResources>
-    <plugins>
-      <plugin>
-        <groupId>org.apache.maven.plugins</groupId>
-        <artifactId>maven-deploy-plugin</artifactId>
-        <configuration>
-          <skip>true</skip>
-        </configuration>
-      </plugin>
-      <plugin>
-        <groupId>de.thetaphi</groupId>
-        <artifactId>forbiddenapis</artifactId>
-        <executions>
-          <execution>
-            <id>lucene-shared-check-forbidden-apis</id>
-            <phase>none</phase> <!-- Block inherited execution -->
-          </execution>
-        </executions>
-      </plugin>
-    </plugins>
-  </build>
-</project>
diff --git a/dev-tools/maven/lucene/demo/pom.xml.template b/dev-tools/maven/lucene/demo/pom.xml.template
deleted file mode 100644
index fc550ad..0000000
--- a/dev-tools/maven/lucene/demo/pom.xml.template
+++ /dev/null
@@ -1,85 +0,0 @@
-<!--
-  Licensed to the Apache Software Foundation (ASF) under one
-  or more contributor license agreements.  See the NOTICE file
-  distributed with this work for additional information
-  regarding copyright ownership.  The ASF licenses this file
-  to you under the Apache License, Version 2.0 (the
-  "License"); you may not use this file except in compliance
-  with the License.  You may obtain a copy of the License at
-  
-  http://www.apache.org/licenses/LICENSE-2.0
-  
-  Unless required by applicable law or agreed to in writing,
-  software distributed under the License is distributed on an
-  "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-  KIND, either express or implied.  See the License for the
-  specific language governing permissions and limitations
-  under the License.
--->
-<project xmlns="http://maven.apache.org/POM/4.0.0"
-         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
-         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
-  <modelVersion>4.0.0</modelVersion>
-  <parent>
-    <groupId>org.apache.lucene</groupId>
-    <artifactId>lucene-parent</artifactId>
-    <version>@version@</version>
-    <relativePath>../pom.xml</relativePath>
-  </parent>
-  <groupId>org.apache.lucene</groupId>
-  <artifactId>lucene-demo</artifactId>
-  <packaging>jar</packaging>
-  <name>Lucene Demo</name>
-  <description>This is the demo for Apache Lucene Java</description>
-  <properties>
-    <module-directory>lucene/demo</module-directory>
-    <relative-top-level>../../..</relative-top-level>
-    <module-path>${relative-top-level}/${module-directory}</module-path>
-  </properties>
-  <scm>
-    <connection>scm:git:${vc-anonymous-base-url}</connection>
-    <developerConnection>scm:git:${vc-dev-base-url}</developerConnection>
-    <url>${vc-browse-base-url};f=${module-directory}</url>
-  </scm>
-  <dependencies>
-    <dependency> 
-      <!-- lucene-test-framework dependency must be declared before lucene-core -->
-      <groupId>org.apache.lucene</groupId>
-      <artifactId>lucene-test-framework</artifactId>
-      <scope>test</scope>
-    </dependency>
-@lucene-demo.internal.dependencies@
-@lucene-demo.external.dependencies@
-@lucene-demo.internal.test.dependencies@
-@lucene-demo.external.test.dependencies@
-  </dependencies>
-  <build>
-    <sourceDirectory>${module-path}/src/java</sourceDirectory>
-    <testSourceDirectory>${module-path}/src/test</testSourceDirectory>
-    <resources>
-      <resource>
-        <directory>${module-path}/src/resources</directory>
-      </resource>
-    </resources>
-    <testResources>
-      <testResource>
-        <directory>${project.build.testSourceDirectory}</directory>
-        <excludes>
-          <exclude>**/*.java</exclude>
-        </excludes>
-      </testResource>
-    </testResources>
-    <plugins>
-      <plugin>
-        <groupId>de.thetaphi</groupId>
-        <artifactId>forbiddenapis</artifactId>
-        <executions>
-          <execution>
-            <id>lucene-shared-check-sysout-forbidden-apis</id>
-            <phase>none</phase>  <!-- Block inherited execution -->
-          </execution>
-        </executions>
-      </plugin>
-    </plugins>
-  </build>
-</project>
diff --git a/dev-tools/maven/lucene/expressions/pom.xml.template b/dev-tools/maven/lucene/expressions/pom.xml.template
deleted file mode 100644
index 18ea18a..0000000
--- a/dev-tools/maven/lucene/expressions/pom.xml.template
+++ /dev/null
@@ -1,62 +0,0 @@
-<!--
-  Licensed to the Apache Software Foundation (ASF) under one
-  or more contributor license agreements.  See the NOTICE file
-  distributed with this work for additional information
-  regarding copyright ownership.  The ASF licenses this file
-  to you under the Apache License, Version 2.0 (the
-  "License"); you may not use this file except in compliance
-  with the License.  You may obtain a copy of the License at
-  
-  http://www.apache.org/licenses/LICENSE-2.0
-  
-  Unless required by applicable law or agreed to in writing,
-  software distributed under the License is distributed on an
-  "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-  KIND, either express or implied.  See the License for the
-  specific language governing permissions and limitations
-  under the License.
--->
-<project xmlns="http://maven.apache.org/POM/4.0.0"
-         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
-         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
-  <modelVersion>4.0.0</modelVersion>
-  <parent>
-    <groupId>org.apache.lucene</groupId>
-    <artifactId>lucene-parent</artifactId>
-    <version>@version@</version>
-    <relativePath>../pom.xml</relativePath>
-  </parent>
-  <groupId>org.apache.lucene</groupId>
-  <artifactId>lucene-expressions</artifactId>
-  <packaging>jar</packaging>
-  <name>Lucene Expressions</name>
-  <description>
-    Dynamically computed values to sort/facet/search on, based on a pluggable grammar.
-  </description>
-  <properties>
-    <module-directory>lucene/expressions</module-directory>
-    <relative-top-level>../../..</relative-top-level>
-    <module-path>${relative-top-level}/${module-directory}</module-path>
-  </properties>
-  <dependencies>
-    <dependency> 
-      <!-- lucene-test-framework dependency must be declared before lucene-core -->
-      <groupId>org.apache.lucene</groupId>
-      <artifactId>lucene-test-framework</artifactId>
-      <scope>test</scope>
-    </dependency>
-@lucene-expressions.internal.dependencies@
-@lucene-expressions.external.dependencies@
-@lucene-expressions.internal.test.dependencies@
-@lucene-expressions.external.test.dependencies@
-  </dependencies>
-  <build>
-    <sourceDirectory>${module-path}/src/java</sourceDirectory>
-    <testSourceDirectory>${module-path}/src/test</testSourceDirectory>
-    <resources>
-      <resource>
-        <directory>${module-path}/src/resources</directory>
-      </resource>
-    </resources>
-  </build>
-</project>
diff --git a/dev-tools/maven/lucene/facet/pom.xml.template b/dev-tools/maven/lucene/facet/pom.xml.template
deleted file mode 100644
index 9d6d80d..0000000
--- a/dev-tools/maven/lucene/facet/pom.xml.template
+++ /dev/null
@@ -1,75 +0,0 @@
-<!--
-  Licensed to the Apache Software Foundation (ASF) under one
-  or more contributor license agreements.  See the NOTICE file
-  distributed with this work for additional information
-  regarding copyright ownership.  The ASF licenses this file
-  to you under the Apache License, Version 2.0 (the
-  "License"); you may not use this file except in compliance
-  with the License.  You may obtain a copy of the License at
-  
-  http://www.apache.org/licenses/LICENSE-2.0
-  
-  Unless required by applicable law or agreed to in writing,
-  software distributed under the License is distributed on an
-  "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-  KIND, either express or implied.  See the License for the
-  specific language governing permissions and limitations
-  under the License.
--->
-<project xmlns="http://maven.apache.org/POM/4.0.0"
-         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
-         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
-  <modelVersion>4.0.0</modelVersion>
-  <parent>
-    <groupId>org.apache.lucene</groupId>
-    <artifactId>lucene-parent</artifactId>
-    <version>@version@</version>
-    <relativePath>../pom.xml</relativePath>
-  </parent>
-  <groupId>org.apache.lucene</groupId>
-  <artifactId>lucene-facet</artifactId>
-  <packaging>jar</packaging>
-  <name>Lucene Facets</name>
-  <description>
-    Package for Faceted Indexing and Search
-  </description>
-  <properties>
-    <module-directory>lucene/facet</module-directory>
-    <relative-top-level>../../..</relative-top-level>
-    <module-path>${relative-top-level}/${module-directory}</module-path>
-  </properties>
-  <scm>
-    <connection>scm:git:${vc-anonymous-base-url}</connection>
-    <developerConnection>scm:git:${vc-dev-base-url}</developerConnection>
-    <url>${vc-browse-base-url};f=${module-directory}</url>
-  </scm>
-  <dependencies>
-    <dependency> 
-      <!-- lucene-test-framework dependency must be declared before lucene-core -->
-      <groupId>org.apache.lucene</groupId>
-      <artifactId>lucene-test-framework</artifactId>
-      <scope>test</scope>
-    </dependency>
-@lucene-facet.internal.dependencies@
-@lucene-facet.external.dependencies@
-@lucene-facet.internal.test.dependencies@
-@lucene-facet.external.test.dependencies@
-  </dependencies>
-  <build>
-    <sourceDirectory>${module-path}/src/java</sourceDirectory>
-    <testSourceDirectory>${module-path}/src/test</testSourceDirectory>
-    <resources>
-      <resource>
-        <directory>${module-path}/src/resources</directory>
-      </resource>
-    </resources>
-    <testResources>
-      <testResource>
-        <directory>${project.build.testSourceDirectory}</directory>
-        <excludes>
-          <exclude>**/*.java</exclude>
-        </excludes>
-      </testResource>
-    </testResources>
-  </build>
-</project>
diff --git a/dev-tools/maven/lucene/grouping/pom.xml.template b/dev-tools/maven/lucene/grouping/pom.xml.template
deleted file mode 100644
index 6314b72..0000000
--- a/dev-tools/maven/lucene/grouping/pom.xml.template
+++ /dev/null
@@ -1,68 +0,0 @@
-<!--
-  Licensed to the Apache Software Foundation (ASF) under one
-  or more contributor license agreements.  See the NOTICE file
-  distributed with this work for additional information
-  regarding copyright ownership.  The ASF licenses this file
-  to you under the Apache License, Version 2.0 (the
-  "License"); you may not use this file except in compliance
-  with the License.  You may obtain a copy of the License at
-  
-  http://www.apache.org/licenses/LICENSE-2.0
-  
-  Unless required by applicable law or agreed to in writing,
-  software distributed under the License is distributed on an
-  "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-  KIND, either express or implied.  See the License for the
-  specific language governing permissions and limitations
-  under the License.
--->
-<project xmlns="http://maven.apache.org/POM/4.0.0"
-         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
-         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
-  <modelVersion>4.0.0</modelVersion>
-  <parent>
-    <groupId>org.apache.lucene</groupId>
-    <artifactId>lucene-parent</artifactId>
-    <version>@version@</version>
-    <relativePath>../pom.xml</relativePath>
-  </parent>
-  <groupId>org.apache.lucene</groupId>
-  <artifactId>lucene-grouping</artifactId>
-  <packaging>jar</packaging>
-  <name>Lucene Grouping</name>
-  <description>Lucene Grouping Module</description>
-  <properties>
-    <module-directory>lucene/grouping</module-directory>
-    <relative-top-level>../../..</relative-top-level>
-    <module-path>${relative-top-level}/${module-directory}</module-path>
-  </properties>
-  <scm>
-    <connection>scm:git:${vc-anonymous-base-url}</connection>
-    <developerConnection>scm:git:${vc-dev-base-url}</developerConnection>
-    <url>${vc-browse-base-url};f=${module-directory}</url>
-  </scm>
-  <dependencies>
-    <dependency> 
-      <!-- lucene-test-framework dependency must be declared before lucene-core -->
-      <groupId>org.apache.lucene</groupId>
-      <artifactId>lucene-test-framework</artifactId>
-      <scope>test</scope>
-    </dependency>
-@lucene-grouping.internal.dependencies@
-@lucene-grouping.external.dependencies@
-@lucene-grouping.internal.test.dependencies@
-@lucene-grouping.external.test.dependencies@
-  </dependencies>
-  <build>
-    <sourceDirectory>${module-path}/src/java</sourceDirectory>
-    <testSourceDirectory>${module-path}/src/test</testSourceDirectory>
-    <testResources>
-      <testResource>
-        <directory>${project.build.testSourceDirectory}</directory>
-        <excludes>
-          <exclude>**/*.java</exclude>
-        </excludes>
-      </testResource>
-    </testResources>
-  </build>
-</project>
diff --git a/dev-tools/maven/lucene/highlighter/pom.xml.template b/dev-tools/maven/lucene/highlighter/pom.xml.template
deleted file mode 100644
index ea31e63..0000000
--- a/dev-tools/maven/lucene/highlighter/pom.xml.template
+++ /dev/null
@@ -1,70 +0,0 @@
-<!--
-  Licensed to the Apache Software Foundation (ASF) under one
-  or more contributor license agreements.  See the NOTICE file
-  distributed with this work for additional information
-  regarding copyright ownership.  The ASF licenses this file
-  to you under the Apache License, Version 2.0 (the
-  "License"); you may not use this file except in compliance
-  with the License.  You may obtain a copy of the License at
-  
-  http://www.apache.org/licenses/LICENSE-2.0
-  
-  Unless required by applicable law or agreed to in writing,
-  software distributed under the License is distributed on an
-  "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-  KIND, either express or implied.  See the License for the
-  specific language governing permissions and limitations
-  under the License.
--->
-<project xmlns="http://maven.apache.org/POM/4.0.0"
-         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
-         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
-  <modelVersion>4.0.0</modelVersion>
-  <parent>
-    <groupId>org.apache.lucene</groupId>
-    <artifactId>lucene-parent</artifactId>
-    <version>@version@</version>
-    <relativePath>../pom.xml</relativePath>
-  </parent>
-  <groupId>org.apache.lucene</groupId>
-  <artifactId>lucene-highlighter</artifactId>
-  <packaging>jar</packaging>
-  <name>Lucene Highlighter</name>
-  <description>
-    This is the highlighter for Apache Lucene Java
-  </description>
-  <properties>
-    <module-directory>lucene/highlighter</module-directory>
-    <relative-top-level>../../..</relative-top-level>
-    <module-path>${relative-top-level}/${module-directory}</module-path>
-  </properties>
-  <scm>
-    <connection>scm:git:${vc-anonymous-base-url}</connection>
-    <developerConnection>scm:git:${vc-dev-base-url}</developerConnection>
-    <url>${vc-browse-base-url};f=${module-directory}</url>
-  </scm>
-  <dependencies>
-    <dependency> 
-      <!-- lucene-test-framework dependency must be declared before lucene-core -->
-      <groupId>org.apache.lucene</groupId>
-      <artifactId>lucene-test-framework</artifactId>
-      <scope>test</scope>
-    </dependency>
-@lucene-highlighter.internal.dependencies@
-@lucene-highlighter.external.dependencies@
-@lucene-highlighter.internal.test.dependencies@
-@lucene-highlighter.external.test.dependencies@
-  </dependencies>
-  <build>
-    <sourceDirectory>${module-path}/src/java</sourceDirectory>
-    <testSourceDirectory>${module-path}/src/test</testSourceDirectory>
-    <testResources>
-      <testResource>
-        <directory>${project.build.testSourceDirectory}</directory>
-        <excludes>
-          <exclude>**/*.java</exclude>
-        </excludes>
-      </testResource>
-    </testResources>
-  </build>
-</project>
diff --git a/dev-tools/maven/lucene/join/pom.xml.template b/dev-tools/maven/lucene/join/pom.xml.template
deleted file mode 100644
index 161e558..0000000
--- a/dev-tools/maven/lucene/join/pom.xml.template
+++ /dev/null
@@ -1,68 +0,0 @@
-<!--
-  Licensed to the Apache Software Foundation (ASF) under one
-  or more contributor license agreements.  See the NOTICE file
-  distributed with this work for additional information
-  regarding copyright ownership.  The ASF licenses this file
-  to you under the Apache License, Version 2.0 (the
-  "License"); you may not use this file except in compliance
-  with the License.  You may obtain a copy of the License at
-  
-  http://www.apache.org/licenses/LICENSE-2.0
-  
-  Unless required by applicable law or agreed to in writing,
-  software distributed under the License is distributed on an
-  "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-  KIND, either express or implied.  See the License for the
-  specific language governing permissions and limitations
-  under the License.
--->
-<project xmlns="http://maven.apache.org/POM/4.0.0"
-         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
-         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
-  <modelVersion>4.0.0</modelVersion>
-  <parent>
-    <groupId>org.apache.lucene</groupId>
-    <artifactId>lucene-parent</artifactId>
-    <version>@version@</version>
-    <relativePath>../pom.xml</relativePath>
-  </parent>
-  <groupId>org.apache.lucene</groupId>
-  <artifactId>lucene-join</artifactId>
-  <packaging>jar</packaging>
-  <name>Lucene Join</name>
-  <description>Lucene Join Module</description>
-  <properties>
-    <module-directory>lucene/join</module-directory>
-    <relative-top-level>../../..</relative-top-level>
-    <module-path>${relative-top-level}/${module-directory}</module-path>
-  </properties>
-  <scm>
-    <connection>scm:git:${vc-anonymous-base-url}</connection>
-    <developerConnection>scm:git:${vc-dev-base-url}</developerConnection>
-    <url>${vc-browse-base-url};f=${module-directory}</url>
-  </scm>
-  <dependencies>
-    <dependency> 
-      <!-- lucene-test-framework dependency must be declared before lucene-core -->
-      <groupId>org.apache.lucene</groupId>
-      <artifactId>lucene-test-framework</artifactId>
-      <scope>test</scope>
-    </dependency>
-@lucene-join.internal.dependencies@
-@lucene-join.external.dependencies@
-@lucene-join.internal.test.dependencies@
-@lucene-join.external.test.dependencies@
-  </dependencies>
-  <build>
-    <sourceDirectory>${module-path}/src/java</sourceDirectory>
-    <testSourceDirectory>${module-path}/src/test</testSourceDirectory>
-    <testResources>
-      <testResource>
-        <directory>${project.build.testSourceDirectory}</directory>
-        <excludes>
-          <exclude>**/*.java</exclude>
-        </excludes>
-      </testResource>
-    </testResources>
-  </build>
-</project>
diff --git a/dev-tools/maven/lucene/memory/pom.xml.template b/dev-tools/maven/lucene/memory/pom.xml.template
deleted file mode 100644
index dcaeb12..0000000
--- a/dev-tools/maven/lucene/memory/pom.xml.template
+++ /dev/null
@@ -1,70 +0,0 @@
-<!--
-  Licensed to the Apache Software Foundation (ASF) under one
-  or more contributor license agreements.  See the NOTICE file
-  distributed with this work for additional information
-  regarding copyright ownership.  The ASF licenses this file
-  to you under the Apache License, Version 2.0 (the
-  "License"); you may not use this file except in compliance
-  with the License.  You may obtain a copy of the License at
-  
-  http://www.apache.org/licenses/LICENSE-2.0
-  
-  Unless required by applicable law or agreed to in writing,
-  software distributed under the License is distributed on an
-  "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-  KIND, either express or implied.  See the License for the
-  specific language governing permissions and limitations
-  under the License.
--->
-<project xmlns="http://maven.apache.org/POM/4.0.0"
-         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
-         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
-  <modelVersion>4.0.0</modelVersion>
-  <parent>
-    <groupId>org.apache.lucene</groupId>
-    <artifactId>lucene-parent</artifactId>
-    <version>@version@</version>
-    <relativePath>../pom.xml</relativePath>
-  </parent>
-  <groupId>org.apache.lucene</groupId>
-  <artifactId>lucene-memory</artifactId>
-  <packaging>jar</packaging>
-  <name>Lucene Memory</name>
-  <description>
-    High-performance single-document index to compare against Query
-  </description>
-  <properties>
-    <module-directory>lucene/memory</module-directory>
-    <relative-top-level>../../..</relative-top-level>
-    <module-path>${relative-top-level}/${module-directory}</module-path>
-  </properties>
-  <scm>
-    <connection>scm:git:${vc-anonymous-base-url}</connection>
-    <developerConnection>scm:git:${vc-dev-base-url}</developerConnection>
-    <url>${vc-browse-base-url};f=${module-directory}</url>
-  </scm>
-  <dependencies>
-    <dependency> 
-      <!-- lucene-test-framework dependency must be declared before lucene-core -->
-      <groupId>org.apache.lucene</groupId>
-      <artifactId>lucene-test-framework</artifactId>
-      <scope>test</scope>
-    </dependency>
-@lucene-memory.internal.dependencies@
-@lucene-memory.external.dependencies@
-@lucene-memory.internal.test.dependencies@
-@lucene-memory.external.test.dependencies@
-  </dependencies>
-  <build>
-    <sourceDirectory>${module-path}/src/java</sourceDirectory>
-    <testSourceDirectory>${module-path}/src/test</testSourceDirectory>
-    <testResources>
-      <testResource>
-        <directory>${project.build.testSourceDirectory}</directory>
-        <excludes>
-          <exclude>**/*.java</exclude>
-        </excludes>
-      </testResource>
-    </testResources>
-  </build>
-</project>
diff --git a/dev-tools/maven/lucene/misc/pom.xml.template b/dev-tools/maven/lucene/misc/pom.xml.template
deleted file mode 100644
index a4bcac2..0000000
--- a/dev-tools/maven/lucene/misc/pom.xml.template
+++ /dev/null
@@ -1,68 +0,0 @@
-<!--
-  Licensed to the Apache Software Foundation (ASF) under one
-  or more contributor license agreements.  See the NOTICE file
-  distributed with this work for additional information
-  regarding copyright ownership.  The ASF licenses this file
-  to you under the Apache License, Version 2.0 (the
-  "License"); you may not use this file except in compliance
-  with the License.  You may obtain a copy of the License at
-  
-  http://www.apache.org/licenses/LICENSE-2.0
-  
-  Unless required by applicable law or agreed to in writing,
-  software distributed under the License is distributed on an
-  "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-  KIND, either express or implied.  See the License for the
-  specific language governing permissions and limitations
-  under the License.
--->
-<project xmlns="http://maven.apache.org/POM/4.0.0"
-         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
-         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
-  <modelVersion>4.0.0</modelVersion>
-  <parent>
-    <groupId>org.apache.lucene</groupId>
-    <artifactId>lucene-parent</artifactId>
-    <version>@version@</version>
-    <relativePath>../pom.xml</relativePath>
-  </parent>
-  <groupId>org.apache.lucene</groupId>
-  <artifactId>lucene-misc</artifactId>
-  <packaging>jar</packaging>
-  <name>Lucene Miscellaneous</name>
-  <description>Miscellaneous Lucene extensions</description>
-  <properties>
-    <module-directory>lucene/misc</module-directory>
-    <relative-top-level>../../..</relative-top-level>
-    <module-path>${relative-top-level}/${module-directory}</module-path>
-  </properties>
-  <scm>
-    <connection>scm:git:${vc-anonymous-base-url}</connection>
-    <developerConnection>scm:git:${vc-dev-base-url}</developerConnection>
-    <url>${vc-browse-base-url};f=${module-directory}</url>
-  </scm>
-  <dependencies>
-    <dependency> 
-      <!-- lucene-test-framework dependency must be declared before lucene-core -->
-      <groupId>org.apache.lucene</groupId>
-      <artifactId>lucene-test-framework</artifactId>
-      <scope>test</scope>
-    </dependency>
-@lucene-misc.internal.dependencies@
-@lucene-misc.external.dependencies@
-@lucene-misc.internal.test.dependencies@
-@lucene-misc.external.test.dependencies@
-  </dependencies>
-  <build>
-    <sourceDirectory>${module-path}/src/java</sourceDirectory>
-    <testSourceDirectory>${module-path}/src/test</testSourceDirectory>
-    <testResources>
-      <testResource>
-        <directory>${project.build.testSourceDirectory}</directory>
-        <excludes>
-          <exclude>**/*.java</exclude>
-        </excludes>
-      </testResource>
-    </testResources>
-  </build>
-</project>
diff --git a/dev-tools/maven/lucene/monitor/pom.xml.template b/dev-tools/maven/lucene/monitor/pom.xml.template
deleted file mode 100644
index 3d915e9..0000000
--- a/dev-tools/maven/lucene/monitor/pom.xml.template
+++ /dev/null
@@ -1,70 +0,0 @@
-<!--
-  Licensed to the Apache Software Foundation (ASF) under one
-  or more contributor license agreements.  See the NOTICE file
-  distributed with this work for additional information
-  regarding copyright ownership.  The ASF licenses this file
-  to you under the Apache License, Version 2.0 (the
-  "License"); you may not use this file except in compliance
-  with the License.  You may obtain a copy of the License at
-  
-  http://www.apache.org/licenses/LICENSE-2.0
-  
-  Unless required by applicable law or agreed to in writing,
-  software distributed under the License is distributed on an
-  "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-  KIND, either express or implied.  See the License for the
-  specific language governing permissions and limitations
-  under the License.
--->
-<project xmlns="http://maven.apache.org/POM/4.0.0"
-         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
-         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
-  <modelVersion>4.0.0</modelVersion>
-  <parent>
-    <groupId>org.apache.lucene</groupId>
-    <artifactId>lucene-parent</artifactId>
-    <version>@version@</version>
-    <relativePath>../pom.xml</relativePath>
-  </parent>
-  <groupId>org.apache.lucene</groupId>
-  <artifactId>lucene-monitor</artifactId>
-  <packaging>jar</packaging>
-  <name>Lucene Monitor</name>
-  <description>
-    Reverse search implementation: monitor a stream of documents for matches against registered queries
-  </description>
-  <properties>
-    <module-directory>lucene/monitor</module-directory>
-    <relative-top-level>../../..</relative-top-level>
-    <module-path>${relative-top-level}/${module-directory}</module-path>
-  </properties>
-  <scm>
-    <connection>scm:git:${vc-anonymous-base-url}</connection>
-    <developerConnection>scm:git:${vc-dev-base-url}</developerConnection>
-    <url>${vc-browse-base-url};f=${module-directory}</url>
-  </scm>
-  <dependencies>
-    <dependency> 
-      <!-- lucene-test-framework dependency must be declared before lucene-core -->
-      <groupId>org.apache.lucene</groupId>
-      <artifactId>lucene-test-framework</artifactId>
-      <scope>test</scope>
-    </dependency>
-@lucene-monitor.internal.dependencies@
-@lucene-monitor.external.dependencies@
-@lucene-monitor.internal.test.dependencies@
-@lucene-monitor.external.test.dependencies@
-  </dependencies>
-  <build>
-    <sourceDirectory>${module-path}/src/java</sourceDirectory>
-    <testSourceDirectory>${module-path}/src/test</testSourceDirectory>
-    <testResources>
-      <testResource>
-        <directory>${project.build.testSourceDirectory}</directory>
-        <excludes>
-          <exclude>**/*.java</exclude>
-        </excludes>
-      </testResource>
-    </testResources>
-  </build>
-</project>
diff --git a/dev-tools/maven/lucene/pom.xml.template b/dev-tools/maven/lucene/pom.xml.template
deleted file mode 100644
index 1093bc4..0000000
--- a/dev-tools/maven/lucene/pom.xml.template
+++ /dev/null
@@ -1,127 +0,0 @@
-<!--
-  Licensed to the Apache Software Foundation (ASF) under one
-  or more contributor license agreements.  See the NOTICE file
-  distributed with this work for additional information
-  regarding copyright ownership.  The ASF licenses this file
-  to you under the Apache License, Version 2.0 (the
-  "License"); you may not use this file except in compliance
-  with the License.  You may obtain a copy of the License at
-  
-  http://www.apache.org/licenses/LICENSE-2.0
-  
-  Unless required by applicable law or agreed to in writing,
-  software distributed under the License is distributed on an
-  "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-  KIND, either express or implied.  See the License for the
-  specific language governing permissions and limitations
-  under the License.
--->
-<project xmlns="http://maven.apache.org/POM/4.0.0"
-         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
-         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
-  <modelVersion>4.0.0</modelVersion>
-  <parent>
-    <groupId>org.apache.lucene</groupId>
-    <artifactId>lucene-solr-grandparent</artifactId>
-    <version>@version@</version>
-    <relativePath>../pom.xml</relativePath>
-  </parent>
-  <groupId>org.apache.lucene</groupId>
-  <artifactId>lucene-parent</artifactId>
-  <packaging>pom</packaging>
-  <name>Lucene parent POM</name>
-  <description>Lucene parent POM</description>
-  <properties>
-    <module-directory>lucene</module-directory>
-  </properties>
-  <scm>
-    <connection>scm:git:${vc-anonymous-base-url}</connection>
-    <developerConnection>scm:git:${vc-dev-base-url}</developerConnection>
-    <url>${vc-browse-base-url};f=${module-directory}</url>
-  </scm>
-  <modules>
-    <module>core</module>
-    <module>backward-codecs</module>
-    <module>codecs</module>
-    <module>test-framework</module>
-    <module>analysis</module>
-    <module>benchmark</module>
-    <module>classification</module>
-    <module>demo</module>
-    <module>expressions</module>
-    <module>facet</module>
-    <module>grouping</module>
-    <module>highlighter</module>
-    <module>join</module>
-    <module>memory</module>
-    <module>misc</module>
-    <module>monitor</module>
-    <module>queries</module>
-    <module>queryparser</module>
-    <module>replicator</module>
-    <module>sandbox</module>
-    <module>spatial-extras</module>
-    <module>spatial3d</module>
-    <module>suggest</module>
-  </modules>
-  <build>
-    <plugins>
-      <plugin>
-        <groupId>de.thetaphi</groupId>
-        <artifactId>forbiddenapis</artifactId>
-        <executions>
-          <execution>
-            <id>lucene-shared-check-forbidden-apis</id>
-            <configuration>
-              <!-- disallow undocumented classes like sun.misc.Unsafe: -->
-              <bundledSignatures>
-                <bundledSignature>jdk-unsafe</bundledSignature>
-                <bundledSignature>jdk-deprecated</bundledSignature>
-                <bundledSignature>jdk-non-portable</bundledSignature>
-                <bundledSignature>jdk-reflection</bundledSignature>
-              </bundledSignatures>
-              <signaturesFiles>
-                <signaturesFile>${top-level}/lucene/tools/forbiddenApis/base.txt</signaturesFile>
-                <signaturesFile>${top-level}/lucene/tools/forbiddenApis/lucene.txt</signaturesFile>
-              </signaturesFiles>
-            </configuration>
-            <goals>
-              <goal>check</goal>
-            </goals>
-          </execution>
-          <execution>
-            <id>lucene-shared-check-sysout-forbidden-apis</id>
-            <configuration>
-              <bundledSignatures>
-                <bundledSignature>jdk-system-out</bundledSignature>
-              </bundledSignatures>
-            </configuration>
-            <goals>
-              <goal>check</goal>
-            </goals>
-          </execution>
-          <execution>
-            <id>lucene-shared-test-check-forbidden-apis</id>
-            <configuration>
-              <!-- disallow undocumented classes like sun.misc.Unsafe: -->
-              <bundledSignatures>
-                <bundledSignature>jdk-unsafe</bundledSignature>
-                <bundledSignature>jdk-deprecated</bundledSignature>
-                <bundledSignature>jdk-non-portable</bundledSignature>
-                <bundledSignature>jdk-reflection</bundledSignature>
-              </bundledSignatures>
-              <signaturesFiles>
-                <signaturesFile>${top-level}/lucene/tools/forbiddenApis/tests.txt</signaturesFile>
-                <signaturesFile>${top-level}/lucene/tools/forbiddenApis/base.txt</signaturesFile>
-                <signaturesFile>${top-level}/lucene/tools/forbiddenApis/lucene.txt</signaturesFile>
-              </signaturesFiles>
-            </configuration>
-            <goals>
-              <goal>testCheck</goal>
-            </goals>
-          </execution>
-        </executions>
-      </plugin>
-    </plugins>
-  </build>
-</project>
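The forbiddenapis executions above combine bundled JDK signature sets (jdk-unsafe, jdk-deprecated, jdk-non-portable, jdk-reflection, jdk-system-out) with project-local signature files. Entries in files such as lucene/tools/forbiddenApis/base.txt follow the forbidden-apis plain-text signature syntax; the lines below illustrate the format only and are not the actual file contents:

    @defaultMessage Use java.nio.file APIs instead
    java.io.File#delete()
    java.io.File#mkdirs()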
diff --git a/dev-tools/maven/lucene/queries/pom.xml.template b/dev-tools/maven/lucene/queries/pom.xml.template
deleted file mode 100644
index 778d75c..0000000
--- a/dev-tools/maven/lucene/queries/pom.xml.template
+++ /dev/null
@@ -1,68 +0,0 @@
-<!--
-  Licensed to the Apache Software Foundation (ASF) under one
-  or more contributor license agreements.  See the NOTICE file
-  distributed with this work for additional information
-  regarding copyright ownership.  The ASF licenses this file
-  to you under the Apache License, Version 2.0 (the
-  "License"); you may not use this file except in compliance
-  with the License.  You may obtain a copy of the License at
-  
-  http://www.apache.org/licenses/LICENSE-2.0
-  
-  Unless required by applicable law or agreed to in writing,
-  software distributed under the License is distributed on an
-  "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-  KIND, either express or implied.  See the License for the
-  specific language governing permissions and limitations
-  under the License.
--->
-<project xmlns="http://maven.apache.org/POM/4.0.0"
-         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
-         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
-  <modelVersion>4.0.0</modelVersion>
-  <parent>
-    <groupId>org.apache.lucene</groupId>
-    <artifactId>lucene-parent</artifactId>
-    <version>@version@</version>
-    <relativePath>../pom.xml</relativePath>
-  </parent>
-  <groupId>org.apache.lucene</groupId>
-  <artifactId>lucene-queries</artifactId>
-  <packaging>jar</packaging>
-  <name>Lucene Queries</name>
-  <description>Lucene Queries Module</description>
-  <properties>
-    <module-directory>lucene/queries</module-directory>
-    <relative-top-level>../../..</relative-top-level>
-    <module-path>${relative-top-level}/${module-directory}</module-path>
-  </properties>
-  <scm>
-    <connection>scm:git:${vc-anonymous-base-url}</connection>
-    <developerConnection>scm:git:${vc-dev-base-url}</developerConnection>
-    <url>${vc-browse-base-url};f=${module-directory}</url>
-  </scm>
-  <dependencies>
-    <dependency> 
-      <!-- lucene-test-framework dependency must be declared before lucene-core -->
-      <groupId>org.apache.lucene</groupId>
-      <artifactId>lucene-test-framework</artifactId>
-      <scope>test</scope>
-    </dependency>
-@lucene-queries.internal.dependencies@
-@lucene-queries.external.dependencies@
-@lucene-queries.internal.test.dependencies@
-@lucene-queries.external.test.dependencies@
-  </dependencies>
-  <build>
-    <sourceDirectory>${module-path}/src/java</sourceDirectory>
-    <testSourceDirectory>${module-path}/src/test</testSourceDirectory>
-    <testResources>
-      <testResource>
-        <directory>${project.build.testSourceDirectory}</directory>
-        <excludes>
-          <exclude>**/*.java</exclude>
-        </excludes>
-      </testResource>
-    </testResources>
-  </build>
-</project>
diff --git a/dev-tools/maven/lucene/queryparser/pom.xml.template b/dev-tools/maven/lucene/queryparser/pom.xml.template
deleted file mode 100644
index 3f42d23..0000000
--- a/dev-tools/maven/lucene/queryparser/pom.xml.template
+++ /dev/null
@@ -1,86 +0,0 @@
-<!--
-  Licensed to the Apache Software Foundation (ASF) under one
-  or more contributor license agreements.  See the NOTICE file
-  distributed with this work for additional information
-  regarding copyright ownership.  The ASF licenses this file
-  to you under the Apache License, Version 2.0 (the
-  "License"); you may not use this file except in compliance
-  with the License.  You may obtain a copy of the License at
-  
-  http://www.apache.org/licenses/LICENSE-2.0
-  
-  Unless required by applicable law or agreed to in writing,
-  software distributed under the License is distributed on an
-  "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-  KIND, either express or implied.  See the License for the
-  specific language governing permissions and limitations
-  under the License.
--->
-<project xmlns="http://maven.apache.org/POM/4.0.0"
-         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
-         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
-  <modelVersion>4.0.0</modelVersion>
-  <parent>
-    <groupId>org.apache.lucene</groupId>
-    <artifactId>lucene-parent</artifactId>
-    <version>@version@</version>
-    <relativePath>../pom.xml</relativePath>
-  </parent>
-  <groupId>org.apache.lucene</groupId>
-  <artifactId>lucene-queryparser</artifactId>
-  <packaging>jar</packaging>
-  <name>Lucene QueryParsers</name>
-  <description>Lucene QueryParsers module</description>
-  <properties>
-    <module-directory>lucene/queryparser</module-directory>
-    <relative-top-level>../../..</relative-top-level>
-    <module-path>${relative-top-level}/${module-directory}</module-path>
-  </properties>
-  <scm>
-    <connection>scm:git:${vc-anonymous-base-url}</connection>
-    <developerConnection>scm:git:${vc-dev-base-url}</developerConnection>
-    <url>${vc-browse-base-url};f=${module-directory}</url>
-  </scm>
-  <dependencies>
-    <dependency> 
-      <!-- lucene-test-framework dependency must be declared before lucene-core -->
-      <groupId>org.apache.lucene</groupId>
-      <artifactId>lucene-test-framework</artifactId>
-      <scope>test</scope>
-    </dependency>
-@lucene-queryparser.internal.dependencies@
-@lucene-queryparser.external.dependencies@
-@lucene-queryparser.internal.test.dependencies@
-@lucene-queryparser.external.test.dependencies@
-  </dependencies>
-  <build>
-    <sourceDirectory>${module-path}/src/java</sourceDirectory>
-    <resources>
-      <resource>
-        <directory>${module-path}/src/resources</directory>
-      </resource>
-    </resources>
-    <testSourceDirectory>${module-path}/src/test</testSourceDirectory>
-    <testResources>
-      <testResource>
-        <directory>${project.build.testSourceDirectory}</directory>
-        <excludes>
-          <exclude>**/*.java</exclude>
-        </excludes>
-      </testResource>
-    </testResources>
-    <plugins>
-      <plugin>
-        <groupId>org.apache.maven.plugins</groupId>
-        <artifactId>maven-jar-plugin</artifactId>
-        <executions>
-          <execution>
-            <goals>
-              <goal>test-jar</goal>
-            </goals>
-          </execution>
-        </executions>
-      </plugin>
-    </plugins>
-  </build>
-</project>
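The maven-jar-plugin execution above, with its test-jar goal, publishes the module's test classes as a secondary -tests.jar artifact. Downstream modules consume it by declaring the dependency with type test-jar, the same pattern the spatial-extras template later in this diff uses for the spatial3d test classes; sketched here for this module:

    <dependency>
      <groupId>org.apache.lucene</groupId>
      <artifactId>lucene-queryparser</artifactId>
      <version>${project.version}</version>
      <type>test-jar</type>
      <scope>test</scope>
    </dependency>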
diff --git a/dev-tools/maven/lucene/replicator/pom.xml.template b/dev-tools/maven/lucene/replicator/pom.xml.template
deleted file mode 100644
index 3fbeb09..0000000
--- a/dev-tools/maven/lucene/replicator/pom.xml.template
+++ /dev/null
@@ -1,74 +0,0 @@
-<!--
-  Licensed to the Apache Software Foundation (ASF) under one
-  or more contributor license agreements.  See the NOTICE file
-  distributed with this work for additional information
-  regarding copyright ownership.  The ASF licenses this file
-  to you under the Apache License, Version 2.0 (the
-  "License"); you may not use this file except in compliance
-  with the License.  You may obtain a copy of the License at
-  
-  http://www.apache.org/licenses/LICENSE-2.0
-  
-  Unless required by applicable law or agreed to in writing,
-  software distributed under the License is distributed on an
-  "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-  KIND, either express or implied.  See the License for the
-  specific language governing permissions and limitations
-  under the License.
--->
-<project xmlns="http://maven.apache.org/POM/4.0.0"
-         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
-         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
-  <modelVersion>4.0.0</modelVersion>
-  <parent>
-    <groupId>org.apache.lucene</groupId>
-    <artifactId>lucene-parent</artifactId>
-    <version>@version@</version>
-    <relativePath>../pom.xml</relativePath>
-  </parent>
-  <groupId>org.apache.lucene</groupId>
-  <artifactId>lucene-replicator</artifactId>
-  <packaging>jar</packaging>
-  <name>Lucene Replicator</name>
-  <description>Lucene Replicator Module</description>
-  <properties>
-    <module-directory>lucene/replicator</module-directory>
-    <relative-top-level>../../..</relative-top-level>
-    <module-path>${relative-top-level}/${module-directory}</module-path>
-  </properties>
-  <scm>
-    <connection>scm:git:${vc-anonymous-base-url}</connection>
-    <developerConnection>scm:git:${vc-dev-base-url}</developerConnection>
-    <url>${vc-browse-base-url};f=${module-directory}</url>
-  </scm>
-  <dependencies>
-    <dependency> 
-      <!-- lucene-test-framework dependency must be declared before lucene-core -->
-      <groupId>org.apache.lucene</groupId>
-      <artifactId>lucene-test-framework</artifactId>
-      <scope>test</scope>
-    </dependency>
-@lucene-replicator.internal.dependencies@
-@lucene-replicator.external.dependencies@
-@lucene-replicator.internal.test.dependencies@
-@lucene-replicator.external.test.dependencies@
-  </dependencies>
-  <build>
-    <sourceDirectory>${module-path}/src/java</sourceDirectory>
-    <testSourceDirectory>${module-path}/src/test</testSourceDirectory>
-    <testResources>
-      <testResource>
-        <directory>${project.build.testSourceDirectory}</directory>
-        <excludes>
-          <exclude>**/*.java</exclude>
-        </excludes>
-      </testResource>
-    </testResources>
-    <plugins>
-      <plugin>
-        <groupId>org.apache.maven.plugins</groupId>
-        <artifactId>maven-surefire-plugin</artifactId>
-      </plugin>
-    </plugins>
-  </build>
-</project>
diff --git a/dev-tools/maven/lucene/sandbox/pom.xml.template b/dev-tools/maven/lucene/sandbox/pom.xml.template
deleted file mode 100644
index a59187b..0000000
--- a/dev-tools/maven/lucene/sandbox/pom.xml.template
+++ /dev/null
@@ -1,73 +0,0 @@
-<!--
-  Licensed to the Apache Software Foundation (ASF) under one
-  or more contributor license agreements.  See the NOTICE file
-  distributed with this work for additional information
-  regarding copyright ownership.  The ASF licenses this file
-  to you under the Apache License, Version 2.0 (the
-  "License"); you may not use this file except in compliance
-  with the License.  You may obtain a copy of the License at
-  
-  http://www.apache.org/licenses/LICENSE-2.0
-  
-  Unless required by applicable law or agreed to in writing,
-  software distributed under the License is distributed on an
-  "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-  KIND, either express or implied.  See the License for the
-  specific language governing permissions and limitations
-  under the License.
--->
-<project xmlns="http://maven.apache.org/POM/4.0.0"
-         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
-         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
-  <modelVersion>4.0.0</modelVersion>
-  <parent>
-    <groupId>org.apache.lucene</groupId>
-    <artifactId>lucene-parent</artifactId>
-    <version>@version@</version>
-    <relativePath>../pom.xml</relativePath>
-  </parent>
-  <groupId>org.apache.lucene</groupId>
-  <artifactId>lucene-sandbox</artifactId>
-  <packaging>jar</packaging>
-  <name>Lucene Sandbox</name>
-  <description>Lucene Sandbox</description>
-  <properties>
-    <module-directory>lucene/sandbox</module-directory>
-    <relative-top-level>../../..</relative-top-level>
-    <module-path>${relative-top-level}/${module-directory}</module-path>
-  </properties>
-  <scm>
-    <connection>scm:git:${vc-anonymous-base-url}</connection>
-    <developerConnection>scm:git:${vc-dev-base-url}</developerConnection>
-    <url>${vc-browse-base-url};f=${module-directory}</url>
-  </scm>
-  <dependencies>
-    <dependency> 
-      <!-- lucene-test-framework dependency must be declared before lucene-core -->
-      <groupId>org.apache.lucene</groupId>
-      <artifactId>lucene-test-framework</artifactId>
-      <scope>test</scope>
-    </dependency>
-@lucene-sandbox.internal.dependencies@
-@lucene-sandbox.external.dependencies@
-@lucene-sandbox.internal.test.dependencies@
-@lucene-sandbox.external.test.dependencies@
-  </dependencies>
-  <build>
-    <sourceDirectory>${module-path}/src/java</sourceDirectory>
-    <resources>
-      <resource>
-        <directory>${module-path}/src/resources</directory>
-      </resource>
-    </resources>
-    <testSourceDirectory>${module-path}/src/test</testSourceDirectory>
-    <testResources>
-      <testResource>
-        <directory>${project.build.testSourceDirectory}</directory>
-        <excludes>
-          <exclude>**/*.java</exclude>
-        </excludes>
-      </testResource>
-    </testResources>
-  </build>
-</project>
diff --git a/dev-tools/maven/lucene/spatial-extras/pom.xml.template b/dev-tools/maven/lucene/spatial-extras/pom.xml.template
deleted file mode 100644
index 9100ab4..0000000
--- a/dev-tools/maven/lucene/spatial-extras/pom.xml.template
+++ /dev/null
@@ -1,69 +0,0 @@
-<!--
-  Licensed to the Apache Software Foundation (ASF) under one
-  or more contributor license agreements.  See the NOTICE file
-  distributed with this work for additional information
-  regarding copyright ownership.  The ASF licenses this file
-  to you under the Apache License, Version 2.0 (the
-  "License"); you may not use this file except in compliance
-  with the License.  You may obtain a copy of the License at
-  
-  http://www.apache.org/licenses/LICENSE-2.0
-  
-  Unless required by applicable law or agreed to in writing,
-  software distributed under the License is distributed on an
-  "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-  KIND, either express or implied.  See the License for the
-  specific language governing permissions and limitations
-  under the License.
--->
-<project xmlns="http://maven.apache.org/POM/4.0.0"
-         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
-         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
-  <modelVersion>4.0.0</modelVersion>
-  <parent>
-    <groupId>org.apache.lucene</groupId>
-    <artifactId>lucene-parent</artifactId>
-    <version>@version@</version>
-    <relativePath>../pom.xml</relativePath>
-  </parent>
-  <groupId>org.apache.lucene</groupId>
-  <artifactId>lucene-spatial-extras</artifactId>
-  <packaging>jar</packaging>
-  <name>Lucene Spatial Extras</name>
-  <description>
-    Advanced Spatial Shape Strategies for Apache Lucene
-  </description>
-  <properties>
-    <module-directory>lucene/spatial-extras</module-directory>
-    <relative-top-level>../../..</relative-top-level>
-    <module-path>${relative-top-level}/${module-directory}</module-path>
-  </properties>
-  <dependencies>
-    <dependency>
-      <!-- lucene-test-framework dependency must be declared before lucene-core -->
-      <groupId>org.apache.lucene</groupId>
-      <artifactId>lucene-test-framework</artifactId>
-      <scope>test</scope>
-    </dependency>
-    <dependency>
-      <groupId>org.apache.lucene</groupId>
-      <artifactId>lucene-spatial3d</artifactId>
-      <version>${project.version}</version>
-      <type>test-jar</type>
-      <scope>test</scope>
-    </dependency>
-@lucene-spatial-extras.internal.dependencies@
-@lucene-spatial-extras.external.dependencies@
-@lucene-spatial-extras.internal.test.dependencies@
-@lucene-spatial-extras.external.test.dependencies@
-  </dependencies>
-  <build>
-    <sourceDirectory>${module-path}/src/java</sourceDirectory>
-    <testSourceDirectory>${module-path}/src/test</testSourceDirectory>
-    <testResources>
-      <testResource>
-        <directory>${module-path}/src/test-files</directory>
-      </testResource>
-    </testResources>
-  </build>
-</project>
diff --git a/dev-tools/maven/lucene/spatial3d/pom.xml.template b/dev-tools/maven/lucene/spatial3d/pom.xml.template
deleted file mode 100644
index 43b29a8..0000000
--- a/dev-tools/maven/lucene/spatial3d/pom.xml.template
+++ /dev/null
@@ -1,70 +0,0 @@
-<!--
-  Licensed to the Apache Software Foundation (ASF) under one
-  or more contributor license agreements.  See the NOTICE file
-  distributed with this work for additional information
-  regarding copyright ownership.  The ASF licenses this file
-  to you under the Apache License, Version 2.0 (the
-  "License"); you may not use this file except in compliance
-  with the License.  You may obtain a copy of the License at
-  
-  http://www.apache.org/licenses/LICENSE-2.0
-  
-  Unless required by applicable law or agreed to in writing,
-  software distributed under the License is distributed on an
-  "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-  KIND, either express or implied.  See the License for the
-  specific language governing permissions and limitations
-  under the License.
--->
-<project xmlns="http://maven.apache.org/POM/4.0.0"
-         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
-         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
-  <modelVersion>4.0.0</modelVersion>
-  <parent>
-    <groupId>org.apache.lucene</groupId>
-    <artifactId>lucene-parent</artifactId>
-    <version>@version@</version>
-    <relativePath>../pom.xml</relativePath>
-  </parent>
-  <groupId>org.apache.lucene</groupId>
-  <artifactId>lucene-spatial3d</artifactId>
-  <packaging>jar</packaging>
-  <name>Lucene Spatial 3D</name>
-  <description>
-    Lucene Spatial shapes implemented using 3D planar geometry
-  </description>
-  <properties>
-    <module-directory>lucene/spatial3d</module-directory>
-    <relative-top-level>../../..</relative-top-level>
-    <module-path>${relative-top-level}/${module-directory}</module-path>
-  </properties>
-  <dependencies>
-    <dependency>
-      <!-- lucene-test-framework dependency must be declared before lucene-core -->
-      <groupId>org.apache.lucene</groupId>
-      <artifactId>lucene-test-framework</artifactId>
-      <scope>test</scope>
-    </dependency>
-@lucene-spatial3d.internal.dependencies@
-@lucene-spatial3d.external.dependencies@
-@lucene-spatial3d.internal.test.dependencies@
-@lucene-spatial3d.external.test.dependencies@
-  </dependencies>
-  <build>
-    <sourceDirectory>${module-path}/src/java</sourceDirectory>
-    <testSourceDirectory>${module-path}/src/test</testSourceDirectory>
-    <plugins>
-      <plugin>
-        <groupId>org.apache.maven.plugins</groupId>
-        <artifactId>maven-jar-plugin</artifactId>
-        <executions>
-          <execution>
-            <goals>
-              <goal>test-jar</goal>
-            </goals>
-          </execution>
-        </executions>
-      </plugin>
-    </plugins>
-  </build>
-</project>
diff --git a/dev-tools/maven/lucene/suggest/pom.xml.template b/dev-tools/maven/lucene/suggest/pom.xml.template
deleted file mode 100644
index f48f264..0000000
--- a/dev-tools/maven/lucene/suggest/pom.xml.template
+++ /dev/null
@@ -1,73 +0,0 @@
-<!--
-  Licensed to the Apache Software Foundation (ASF) under one
-  or more contributor license agreements.  See the NOTICE file
-  distributed with this work for additional information
-  regarding copyright ownership.  The ASF licenses this file
-  to you under the Apache License, Version 2.0 (the
-  "License"); you may not use this file except in compliance
-  with the License.  You may obtain a copy of the License at
-  
-  http://www.apache.org/licenses/LICENSE-2.0
-  
-  Unless required by applicable law or agreed to in writing,
-  software distributed under the License is distributed on an
-  "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-  KIND, either express or implied.  See the License for the
-  specific language governing permissions and limitations
-  under the License.
--->
-<project xmlns="http://maven.apache.org/POM/4.0.0"
-         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
-         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
-  <modelVersion>4.0.0</modelVersion>
-  <parent>
-    <groupId>org.apache.lucene</groupId>
-    <artifactId>lucene-parent</artifactId>
-    <version>@version@</version>
-    <relativePath>../pom.xml</relativePath>
-  </parent>
-  <groupId>org.apache.lucene</groupId>
-  <artifactId>lucene-suggest</artifactId>
-  <packaging>jar</packaging>
-  <name>Lucene Suggest</name>
-  <description>Lucene Suggest Module</description>
-  <properties>
-    <module-directory>lucene/suggest</module-directory>
-    <relative-top-level>../../..</relative-top-level>
-    <module-path>${relative-top-level}/${module-directory}</module-path>
-  </properties>
-  <scm>
-    <connection>scm:git:${vc-anonymous-base-url}</connection>
-    <developerConnection>scm:git:${vc-dev-base-url}</developerConnection>
-    <url>${vc-browse-base-url};f=${module-directory}</url>
-  </scm>
-  <dependencies>
-    <dependency> 
-      <!-- lucene-test-framework dependency must be declared before lucene-core -->
-      <groupId>org.apache.lucene</groupId>
-      <artifactId>lucene-test-framework</artifactId>
-      <scope>test</scope>
-    </dependency>
-@lucene-suggest.internal.dependencies@
-@lucene-suggest.external.dependencies@
-@lucene-suggest.internal.test.dependencies@
-@lucene-suggest.external.test.dependencies@
-  </dependencies>
-  <build>
-    <sourceDirectory>${module-path}/src/java</sourceDirectory>
-    <resources>
-      <resource>
-        <directory>${module-path}/src/resources</directory>
-      </resource>
-    </resources>
-    <testSourceDirectory>${module-path}/src/test</testSourceDirectory>
-    <testResources>
-      <testResource>
-        <directory>${project.build.testSourceDirectory}</directory>
-        <excludes>
-          <exclude>**/*.java</exclude>
-        </excludes>
-      </testResource>
-    </testResources>
-  </build>
-</project>
diff --git a/dev-tools/maven/lucene/test-framework/pom.xml.template b/dev-tools/maven/lucene/test-framework/pom.xml.template
deleted file mode 100644
index df8a7bd..0000000
--- a/dev-tools/maven/lucene/test-framework/pom.xml.template
+++ /dev/null
@@ -1,109 +0,0 @@
-<!--
-  Licensed to the Apache Software Foundation (ASF) under one
-  or more contributor license agreements.  See the NOTICE file
-  distributed with this work for additional information
-  regarding copyright ownership.  The ASF licenses this file
-  to you under the Apache License, Version 2.0 (the
-  "License"); you may not use this file except in compliance
-  with the License.  You may obtain a copy of the License at
-  
-  http://www.apache.org/licenses/LICENSE-2.0
-  
-  Unless required by applicable law or agreed to in writing,
-  software distributed under the License is distributed on an
-  "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-  KIND, either express or implied.  See the License for the
-  specific language governing permissions and limitations
-  under the License.
--->
-<project xmlns="http://maven.apache.org/POM/4.0.0"
-         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
-         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
-  <modelVersion>4.0.0</modelVersion>
-  <parent>
-    <groupId>org.apache.lucene</groupId>
-    <artifactId>lucene-parent</artifactId>
-    <version>@version@</version>
-    <relativePath>../pom.xml</relativePath>
-  </parent>
-  <groupId>org.apache.lucene</groupId>
-  <artifactId>lucene-test-framework</artifactId>
-  <packaging>jar</packaging>
-  <name>Lucene Test Framework</name>
-  <description>Apache Lucene Java Test Framework</description>
-  <properties>
-    <module-directory>lucene/test-framework</module-directory>
-    <relative-top-level>../../..</relative-top-level>
-    <module-path>${relative-top-level}/${module-directory}</module-path>
-  </properties>
-  <scm>
-    <connection>scm:git:${vc-anonymous-base-url}</connection>
-    <developerConnection>scm:git:${vc-dev-base-url}</developerConnection>
-    <url>${vc-browse-base-url};f=${module-directory}</url>
-  </scm>
-  <dependencies>
-@lucene-test-framework.internal.dependencies@
-@lucene-test-framework.external.dependencies@
-@lucene-test-framework.internal.test.dependencies@
-@lucene-test-framework.external.test.dependencies@
-  </dependencies>
-  <build>
-    <sourceDirectory>${module-path}/src/java</sourceDirectory>
-    <resources>
-      <resource>
-        <directory>${module-path}/src/resources</directory>
-      </resource>
-      <resource>
-        <directory>${project.build.sourceDirectory}</directory>
-        <excludes>
-          <exclude>**/*.java</exclude>
-        </excludes>
-      </resource>
-    </resources>
-    <testSourceDirectory>${module-path}/src/test</testSourceDirectory>
-    <testResources>
-      <testResource>
-        <directory>${project.build.testSourceDirectory}</directory>
-        <excludes>
-          <exclude>**/*.java</exclude>
-        </excludes>
-      </testResource>
-    </testResources>
-    <plugins>
-      <plugin>
-        <groupId>de.thetaphi</groupId>
-        <artifactId>forbiddenapis</artifactId>
-        <executions>
-          <execution>
-            <id>lucene-shared-check-forbidden-apis</id>
-            <phase>none</phase> <!-- Block inherited execution -->
-          </execution>
-          <execution>
-            <id>lucene-shared-test-check-forbidden-apis</id>
-            <goals>
-              <goal>check</goal>
-              <goal>testCheck</goal>
-            </goals>
-          </execution>
-          <execution>
-            <id>lucene-shared-check-sysout-forbidden-apis</id>
-            <phase>none</phase>  <!-- Block inherited execution -->
-          </execution>
-        </executions>
-      </plugin>
-    </plugins>
-  </build>
-  <repositories>
-    <repository>
-      <id>sonatype.releases</id>
-      <name>Sonatype Releases Repository</name>
-      <url>https://oss.sonatype.org/content/repositories/releases</url>
-      <releases>
-        <enabled>true</enabled>
-      </releases>
-      <snapshots>
-        <updatePolicy>never</updatePolicy>
-      </snapshots>
-    </repository>
-  </repositories>
-</project>
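The <phase>none</phase> entries above are the standard Maven idiom for disabling executions inherited from a parent POM: re-declaring an execution by id and binding it to no phase unbinds it, which is how the test framework opts out of the shared forbidden-apis checks (it must itself provide APIs those checks reject). The general form, with a hypothetical id:

    <execution>
      <id>some-inherited-execution-id</id>
      <phase>none</phase> <!-- block the inherited execution -->
    </execution>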
diff --git a/dev-tools/maven/pom.xml.template b/dev-tools/maven/pom.xml.template
deleted file mode 100644
index b5e98b1..0000000
--- a/dev-tools/maven/pom.xml.template
+++ /dev/null
@@ -1,475 +0,0 @@
-<!--
-  Licensed to the Apache Software Foundation (ASF) under one
-  or more contributor license agreements.  See the NOTICE file
-  distributed with this work for additional information
-  regarding copyright ownership.  The ASF licenses this file
-  to you under the Apache License, Version 2.0 (the
-  "License"); you may not use this file except in compliance
-  with the License.  You may obtain a copy of the License at
-  
-  http://www.apache.org/licenses/LICENSE-2.0
-  
-  Unless required by applicable law or agreed to in writing,
-  software distributed under the License is distributed on an
-  "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-  KIND, either express or implied.  See the License for the
-  specific language governing permissions and limitations
-  under the License.
--->
-<project xmlns="http://maven.apache.org/POM/4.0.0"
-         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
-         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
-  <modelVersion>4.0.0</modelVersion>
-  <parent>
-    <groupId>org.apache</groupId>
-    <artifactId>apache</artifactId>
-    <version>21</version>
-    <relativePath/>
-  </parent>
-  <groupId>org.apache.lucene</groupId>
-  <artifactId>lucene-solr-grandparent</artifactId>
-  <version>@version@</version>
-  <packaging>pom</packaging>
-  <name>Grandparent POM for Apache Lucene Core and Apache Solr</name>
-  <description>Grandparent POM for Apache Lucene Core and Apache Solr</description>
-  <url>https://lucene.apache.org</url>
-  <modules>
-    <module>lucene</module>
-    <module>solr</module>
-  </modules>
-  <properties>
-    <vc-anonymous-base-url>https://gitbox.apache.org/repos/asf/lucene-solr.git</vc-anonymous-base-url>
-    <vc-dev-base-url>https://gitbox.apache.org/repos/asf/lucene-solr.git</vc-dev-base-url>
-    <vc-browse-base-url>https://gitbox.apache.org/repos/asf?p=lucene-solr.git</vc-browse-base-url>
-    <specification.version>@spec.version@</specification.version>
-    <maven.build.timestamp.format>yyyy-MM-dd HH:mm:ss</maven.build.timestamp.format>
-    <java.compat.version>11</java.compat.version>
-    <jetty.version>9.3.8.v20160314</jetty.version>
-
-    <!-- RandomizedTesting library system properties -->
-    <tests.iters>1</tests.iters>
-    <tests.seed/>
-    <tests.nightly/>
-    <tests.weekly/>
-    <tests.awaitsfix/>
-    <tests.slow/>
-
-    <!-- Lucene/Solr-specific test system properties -->
-    <tests.codec>random</tests.codec>
-    <tests.directory>random</tests.directory>
-    <tests.locale>random</tests.locale>
-    <tests.luceneMatchVersion>@version.base@</tests.luceneMatchVersion>
-    <tests.multiplier>1</tests.multiplier>
-    <tests.nightly>false</tests.nightly>
-    <tests.postingsformat>random</tests.postingsformat>
-    <tests.timezone>random</tests.timezone>
-    <tests.verbose>false</tests.verbose>
-    <tests.infostream>${tests.verbose}</tests.infostream>
-  </properties>
-  <issueManagement>
-    <system>JIRA</system>
-    <url>https://issues.apache.org/jira/browse/LUCENE</url>
-  </issueManagement>
-  <ciManagement>
-    <system>Jenkins</system>
-    <url>https://builds.apache.org/computer/lucene/</url>
-  </ciManagement>
-  <mailingLists>
-    <mailingList>
-      <name>General List</name>
-      <subscribe>general-subscribe@lucene.apache.org</subscribe>
-      <unsubscribe>general-unsubscribe@lucene.apache.org</unsubscribe>
-      <archive>
-        https://mail-archives.apache.org/mod_mbox/lucene-general/
-      </archive>
-    </mailingList>
-    <mailingList>
-      <name>Java User List</name>
-      <subscribe>java-user-subscribe@lucene.apache.org</subscribe>
-      <unsubscribe>java-user-unsubscribe@lucene.apache.org</unsubscribe>
-      <archive>
-        https://mail-archives.apache.org/mod_mbox/lucene-java-user/
-      </archive>
-    </mailingList>
-    <mailingList>
-      <name>Java Developer List</name>
-      <subscribe>dev-subscribe@lucene.apache.org</subscribe>
-      <unsubscribe>dev-unsubscribe@lucene.apache.org</unsubscribe>
-      <archive>https://mail-archives.apache.org/mod_mbox/lucene-dev/</archive>
-    </mailingList>
-    <mailingList>
-      <name>Java Commits List</name>
-      <subscribe>commits-subscribe@lucene.apache.org</subscribe>
-      <unsubscribe>commits-unsubscribe@lucene.apache.org</unsubscribe>
-      <archive>
-        https://mail-archives.apache.org/mod_mbox/lucene-java-commits/
-      </archive>
-    </mailingList>
-  </mailingLists>
-  <inceptionYear>2000</inceptionYear>
-  <scm>
-    <connection>scm:git:${vc-anonymous-base-url}</connection>
-    <developerConnection>scm:git:${vc-dev-base-url}</developerConnection>
-    <url>${vc-browse-base-url}</url>
-  </scm>
-  <licenses>
-    <license>
-      <name>Apache 2</name>
-      <url>http://www.apache.org/licenses/LICENSE-2.0.txt</url>
-    </license>
-  </licenses>
-  <repositories>
-    <repository>
-      <id>apache.snapshots</id>
-      <name>Apache Snapshot Repository</name>
-      <url>https://repository.apache.org/snapshots</url>
-      <releases>
-        <enabled>false</enabled>
-      </releases>
-      <snapshots>
-        <!-- Disable the Apache snapshot repository, overriding declaration in parent Apache POM. -->
-        <enabled>false</enabled>
-        <updatePolicy>never</updatePolicy>
-      </snapshots>
-    </repository>
-  </repositories>
-  <dependencyManagement>
-    <dependencies>
-@lucene.solr.dependency.management@
-    </dependencies>
-  </dependencyManagement>
-  <prerequisites>
-    <maven>3.5.0</maven>
-  </prerequisites>
-  <dependencies>
-    <dependency> 
-      <groupId>junit</groupId>
-      <artifactId>junit</artifactId>
-      <scope>test</scope>
-    </dependency>
-    <dependency>
-      <groupId>com.carrotsearch.randomizedtesting</groupId>
-      <artifactId>randomizedtesting-runner</artifactId>
-      <scope>test</scope>
-    </dependency>
-  </dependencies>
-  <build>
-    <pluginManagement>
-      <plugins>
-        <plugin>
-          <groupId>de.thetaphi</groupId>
-          <artifactId>forbiddenapis</artifactId>
-          <version>3.0.1</version>
-          <configuration>
-            <!--
-              This is the default setting, we don't support too new Java versions.
-              The checker simply passes by default and only prints a warning.
-             -->
-            <failOnUnsupportedJava>false</failOnUnsupportedJava>
-            <targetVersion>${java.compat.version}</targetVersion>
-            <suppressAnnotations>
-              <suppressAnnotation>**.SuppressForbidden</suppressAnnotation>
-            </suppressAnnotations>
-          </configuration>
-        </plugin>
-        <plugin>
-          <groupId>org.apache.maven.plugins</groupId>
-          <artifactId>maven-antrun-plugin</artifactId>
-          <version>1.8</version>
-        </plugin>
-        <plugin>
-          <groupId>org.apache.maven.plugins</groupId>
-          <artifactId>maven-clean-plugin</artifactId>
-          <version>3.1.0</version>
-        </plugin>
-        <plugin>
-          <groupId>org.apache.maven.plugins</groupId>
-          <artifactId>maven-compiler-plugin</artifactId>
-          <version>3.8.0</version>
-          <configuration>
-            <source>${java.compat.version}</source>
-            <target>${java.compat.version}</target>
-            <compilerArgs>
-              <!-- -proc:none was added because of LOG4J2-1925, JDK-8186647, https://github.com/apache/zookeeper/pull/317, JDK-8055048 -->
-              <arg>-proc:none</arg>
-            </compilerArgs>
-          </configuration>
-        </plugin>
-        <plugin>
-          <groupId>org.apache.maven.plugins</groupId>
-          <artifactId>maven-dependency-plugin</artifactId>
-          <version>3.1.1</version>
-        </plugin>
-        <plugin>
-          <groupId>org.apache.maven.plugins</groupId>
-          <artifactId>maven-deploy-plugin</artifactId>
-          <version>3.0.0-M1</version>
-        </plugin>
-        <plugin>
-          <groupId>org.apache.maven.plugins</groupId>
-          <artifactId>maven-enforcer-plugin</artifactId>
-          <version>3.0.0-M2</version>
-        </plugin>
-        <plugin>
-          <groupId>org.apache.maven.plugins</groupId>
-          <artifactId>maven-install-plugin</artifactId>
-          <version>3.0.0-M1</version>
-        </plugin>
-        <plugin>
-          <groupId>org.apache.maven.plugins</groupId>
-          <artifactId>maven-jar-plugin</artifactId>
-          <version>3.1.1</version>
-          <configuration>
-            <archive>
-              <manifest>
-                <addDefaultSpecificationEntries>false</addDefaultSpecificationEntries>
-                <addDefaultImplementationEntries>false</addDefaultImplementationEntries>
-              </manifest>
-            </archive>
-          </configuration>
-        </plugin>
-        <plugin>
-          <groupId>org.apache.maven.plugins</groupId>
-          <artifactId>maven-javadoc-plugin</artifactId>
-          <version>3.1.0</version>
-          <configuration>
-            <quiet>true</quiet>
-            <additionalparam>-Xdoclint:all -Xdoclint:-missing</additionalparam>
-          </configuration>
-        </plugin>
-        <plugin>
-          <groupId>org.apache.maven.plugins</groupId>
-          <artifactId>maven-resources-plugin</artifactId>
-          <version>3.1.0</version>
-        </plugin>
-        <plugin>
-          <groupId>org.apache.maven.plugins</groupId>
-          <artifactId>maven-remote-resources-plugin</artifactId>
-          <version>1.6.0</version>
-        </plugin>
-        <plugin>
-          <groupId>org.apache.maven.plugins</groupId>
-          <artifactId>maven-site-plugin</artifactId>
-          <version>3.7.1</version>
-        </plugin>
-        <plugin>
-          <groupId>org.apache.maven.plugins</groupId>
-          <artifactId>maven-surefire-plugin</artifactId>
-          <version>2.17</version>
-          <configuration>
-            <runOrder>random</runOrder>
-            <reportFormat>plain</reportFormat>
-            <workingDirectory>${project.build.directory}/test</workingDirectory>
-            <redirectTestOutputToFile>true</redirectTestOutputToFile>
-            <argLine>-Xmx512M</argLine>
-            <systemPropertyVariables>
-              <tempDir>.</tempDir>
-              <java.awt.headless>true</java.awt.headless>
-
-              <!-- See <https://cwiki.apache.org/confluence/display/lucene/RunningTests>
-                   for a description of the tests.* system properties. -->
-
-              <!-- RandomizedTesting library system properties -->
-              <tests.iters>${tests.iters}</tests.iters>
-              <tests.seed>${tests.seed}</tests.seed>
-              <tests.nightly>${tests.nightly}</tests.nightly>
-              <tests.weekly>${tests.weekly}</tests.weekly>
-              <tests.awaitsfix>${tests.awaitsfix}</tests.awaitsfix>
-              <tests.slow>${tests.slow}</tests.slow>
-
-              <!-- Lucene/Solr-specific test system properties -->
-              <jetty.testMode>1</jetty.testMode>
-              <tests.codec>${tests.codec}</tests.codec>
-              <tests.directory>${tests.directory}</tests.directory>
-              <tests.infostream>${tests.infostream}</tests.infostream>
-              <tests.locale>${tests.locale}</tests.locale>
-              <tests.luceneMatchVersion>${tests.luceneMatchVersion}</tests.luceneMatchVersion>
-              <tests.multiplier>${tests.multiplier}</tests.multiplier>
-              <tests.postingsformat>${tests.postingsformat}</tests.postingsformat>
-              <tests.timezone>${tests.timezone}</tests.timezone>
-              <tests.verbose>${tests.verbose}</tests.verbose>
-              <java.security.egd>file:/dev/./urandom</java.security.egd>
-            </systemPropertyVariables>
-          </configuration>
-        </plugin>
-        <plugin>
-          <groupId>org.apache.maven.plugins</groupId>
-          <artifactId>maven-war-plugin</artifactId>
-          <version>3.2.2</version>
-          <configuration>
-            <archive>
-              <!-- This section should be *exactly* the same under -->
-              <!-- maven-bundle-plugin and maven-war-plugin.          -->
-              <!-- If you make changes here, make the same changes -->
-              <!-- in the other location as well.                  -->
-              <manifestEntries>
-                <Extension-Name>${project.groupId}</Extension-Name>
-                <Implementation-Title>${project.groupId}</Implementation-Title>
-                <Specification-Title>${project.name}</Specification-Title>
-                <Specification-Version>${specification.version}</Specification-Version>
-                <Specification-Vendor>The Apache Software Foundation</Specification-Vendor>
-                <!-- impl version can be any string -->
-                <Implementation-Version>${project.version} ${checkoutid} - ${user.name} - ${now.timestamp}</Implementation-Version>
-                <Implementation-Vendor>The Apache Software Foundation</Implementation-Vendor>
-                <Implementation-Vendor-Id>${project.groupId}</Implementation-Vendor-Id>
-                <X-Compile-Source-JDK>${java.compat.version}</X-Compile-Source-JDK>
-                <X-Compile-Target-JDK>${java.compat.version}</X-Compile-Target-JDK>
-              </manifestEntries>
-            </archive>
-          </configuration>
-        </plugin>
-        <plugin>
-          <groupId>org.codehaus.mojo</groupId>
-          <artifactId>build-helper-maven-plugin</artifactId>
-          <version>3.0.0</version>
-        </plugin>
-        <plugin>
-          <groupId>org.codehaus.mojo</groupId>
-          <artifactId>buildnumber-maven-plugin</artifactId>
-          <version>1.4</version>
-        </plugin>
-        <plugin>
-          <groupId>org.mortbay.jetty</groupId>
-          <artifactId>jetty-maven-plugin</artifactId>
-          <version>${jetty.version}</version>
-        </plugin>
-        <plugin>
-          <groupId>org.codehaus.gmaven</groupId>
-          <artifactId>gmaven-plugin</artifactId>
-          <version>1.5</version>
-        </plugin>
-      </plugins>
-    </pluginManagement>
-    <plugins>
-      <plugin>
-        <groupId>org.codehaus.gmaven</groupId>
-        <artifactId>gmaven-plugin</artifactId>
-        <executions>
-          <execution>
-            <id>generate-timestamps-and-get-top-level-basedir</id>
-            <phase>validate</phase>
-            <goals>
-              <goal>execute</goal>
-            </goals>
-            <configuration>
-              <source>
-                project.properties['now.timestamp'] = "${maven.build.timestamp}"
-                project.properties['now.version'] = ("${maven.build.timestamp}" =~ /[- :]/).replaceAll(".")
-                project.properties['now.year'] = "${maven.build.timestamp}".substring(0, 4)
-                project.properties['top-level'] = (project.basedir.getAbsolutePath() =~ /[\\\\\/]maven-build.*/).replaceAll("")
-              </source>
-            </configuration>
-          </execution>
-        </executions>
-      </plugin>
-      <plugin>
-        <groupId>org.codehaus.mojo</groupId>
-        <artifactId>buildnumber-maven-plugin</artifactId>
-        <executions>
-          <execution>
-            <phase>validate</phase>
-            <goals>
-              <goal>create</goal>
-            </goals>
-          </execution>
-        </executions>
-        <configuration>
-          <doCheck>false</doCheck>
-          <doUpdate>false</doUpdate>
-          <getRevisionOnlyOnce>true</getRevisionOnlyOnce>
-          <revisionOnScmFailure>NO-REVISION-AVAILABLE</revisionOnScmFailure>
-          <buildNumberPropertyName>checkoutid</buildNumberPropertyName>
-          <scmDirectory>${top-level}</scmDirectory>
-        </configuration>
-      </plugin>
-      <plugin>
-        <groupId>org.apache.maven.plugins</groupId>
-        <artifactId>maven-enforcer-plugin</artifactId>
-        <executions>
-          <execution>
-            <id>enforce-java-compat-version-and-maven-3.5.0</id>
-            <goals>
-              <goal>enforce</goal>
-            </goals>
-            <configuration>
-              <rules>
-                <requireJavaVersion>
-                  <message>Java ${java.compat.version}+ is required.</message>
-                  <version>[${java.compat.version},)</version>
-                </requireJavaVersion>
-                <requireMavenVersion>
-                  <message>Maven 3.5.0+ is required.</message>
-                  <version>[3.5.0,)</version>
-                </requireMavenVersion>
-                <requirePluginVersions/>
-              </rules>
-            </configuration>
-          </execution>
-        </executions>
-      </plugin>
-      
-      <!-- Adding OSGI metadata to the JAR without changing the packaging type. -->
-      <plugin>
-        <artifactId>maven-jar-plugin</artifactId>
-        <configuration>
-          <archive>
-            <manifestFile>${project.build.outputDirectory}/META-INF/MANIFEST.MF</manifestFile>
-          </archive>
-        </configuration>
-      </plugin>
-      <plugin>
-        <groupId>org.apache.felix</groupId>
-        <artifactId>maven-bundle-plugin</artifactId>
-        <version>4.2.0</version>
-        <configuration>
-          <instructions>
-            <Export-Package>*;-split-package:=merge-first</Export-Package>
-            
-            <!-- This section should be *exactly* the same under -->
-            <!-- maven-bundle-plugin and maven-war-plugin.          -->
-            <!-- If you make changes here, make the same changes -->
-            <!-- in the other location as well.                  -->
-            <Extension-Name>${project.groupId}</Extension-Name>
-            <Implementation-Title>${project.groupId}</Implementation-Title>
-            <Specification-Title>${project.name}</Specification-Title>
-            <Specification-Version>${specification.version}</Specification-Version>
-            <Specification-Vendor>The Apache Software Foundation</Specification-Vendor>
-            <!-- impl version can be any string -->
-            <Implementation-Version>${project.version} ${checkoutid} - ${user.name} - ${now.timestamp}</Implementation-Version>
-            <Implementation-Vendor>The Apache Software Foundation</Implementation-Vendor>
-            <Implementation-Vendor-Id>${project.groupId}</Implementation-Vendor-Id>
-            <X-Compile-Source-JDK>${java.compat.version}</X-Compile-Source-JDK>
-            <X-Compile-Target-JDK>${java.compat.version}</X-Compile-Target-JDK>
-          </instructions>
-        </configuration>
-        <executions>
-          <execution>
-            <id>bundle-manifest</id>
-            <phase>process-classes</phase>
-            <goals>
-              <goal>manifest</goal>
-            </goals>
-          </execution>
-        </executions>
-      </plugin>
-    </plugins>
-  </build>
-  <profiles>
-    <profile>
-      <!-- Although currently a no-op, this profile remains here to reserve
-           the ability to perform Maven build initialization tasks. -->
-      <id>bootstrap</id>
-      <build>
-        <plugins>
-          <plugin>
-            <groupId>org.apache.maven.plugins</groupId>
-            <artifactId>maven-install-plugin</artifactId>
-            <executions>
-            </executions>
-          </plugin>
-        </plugins>
-      </build>
-    </profile>
-  </profiles>
-</project>
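The tests.* properties declared in this grandparent template flow into every forked test JVM through surefire's systemPropertyVariables block, so a randomized-testing failure can be reproduced by passing the same values on the command line, e.g. mvn test -Dtests.seed=DEADBEEF -Dtests.iters=3 (seed value illustrative). And because suppressAnnotations is set to **.SuppressForbidden, any annotation with that simple name exempts the annotated code from the forbidden-apis checks; a minimal sketch using Lucene's own annotation:

    import org.apache.lucene.util.SuppressForbidden;

    public final class DumpTool {
      @SuppressForbidden(reason = "command-line tool writes to stdout by design")
      public static void main(String[] args) {
        System.out.println("ok"); // jdk-system-out is otherwise forbidden
      }
    }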
diff --git a/dev-tools/maven/solr/contrib/analysis-extras/pom.xml.template b/dev-tools/maven/solr/contrib/analysis-extras/pom.xml.template
deleted file mode 100644
index a0b3702..0000000
--- a/dev-tools/maven/solr/contrib/analysis-extras/pom.xml.template
+++ /dev/null
@@ -1,85 +0,0 @@
-<!--
-  Licensed to the Apache Software Foundation (ASF) under one
-  or more contributor license agreements.  See the NOTICE file
-  distributed with this work for additional information
-  regarding copyright ownership.  The ASF licenses this file
-  to you under the Apache License, Version 2.0 (the
-  "License"); you may not use this file except in compliance
-  with the License.  You may obtain a copy of the License at
-  
-  http://www.apache.org/licenses/LICENSE-2.0
-  
-  Unless required by applicable law or agreed to in writing,
-  software distributed under the License is distributed on an
-  "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-  KIND, either express or implied.  See the License for the
-  specific language governing permissions and limitations
-  under the License.
--->
-<project xmlns="http://maven.apache.org/POM/4.0.0"
-         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
-         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
-  <modelVersion>4.0.0</modelVersion>
-  <parent>
-    <groupId>org.apache.solr</groupId>
-    <artifactId>solr-parent</artifactId>
-    <version>@version@</version>
-    <relativePath>../../pom.xml</relativePath>
-  </parent>
-  <groupId>org.apache.solr</groupId>
-  <artifactId>solr-analysis-extras</artifactId>
-  <packaging>jar</packaging>
-  <name>Apache Solr Analysis Extras</name>
-  <description>Apache Solr Analysis Extras</description>
-  <properties>
-    <module-directory>solr/contrib/analysis-extras</module-directory>
-    <relative-top-level>../../../..</relative-top-level>
-    <module-path>${relative-top-level}/${module-directory}</module-path>
-  </properties>
-  <scm>
-    <connection>scm:git:${vc-anonymous-base-url}</connection>
-    <developerConnection>scm:git:${vc-dev-base-url}</developerConnection>
-    <url>${vc-browse-base-url};f=${module-directory}</url>
-  </scm>
-  <dependencies>
-    <dependency>
-      <groupId>org.apache.lucene</groupId>
-      <artifactId>lucene-analyzers-common</artifactId>
-      <version>${project.version}</version>
-      <type>test-jar</type>
-      <scope>test</scope>
-    </dependency>
-    <dependency>
-      <!-- lucene-test-framework dependency must be declared before lucene-core -->
-      <!-- This dependency cannot be put into solr-parent, because local        -->
-      <!-- dependencies are always ordered before inherited dependencies.       -->
-      <groupId>org.apache.lucene</groupId>
-      <artifactId>lucene-test-framework</artifactId>
-      <scope>test</scope>
-    </dependency>
-    <dependency>
-      <groupId>org.apache.solr</groupId>
-      <artifactId>solr-test-framework</artifactId>
-      <scope>test</scope>
-    </dependency>
-@solr-analysis-extras.internal.dependencies@
-@solr-analysis-extras.external.dependencies@
-@solr-analysis-extras.internal.test.dependencies@
-@solr-analysis-extras.external.test.dependencies@
-  </dependencies>
-  <build>
-    <sourceDirectory>${module-path}/src/java</sourceDirectory>
-    <testSourceDirectory>${module-path}/src/test</testSourceDirectory>
-    <testResources>
-      <testResource>
-        <directory>${module-path}/src/test-files</directory>
-      </testResource>
-      <testResource>
-        <directory>${top-level}/dev-tools/maven/solr</directory>
-        <includes>
-          <include>maven.testlogging.properties</include>
-        </includes>
-      </testResource>
-    </testResources>
-  </build>
-</project>
diff --git a/dev-tools/maven/solr/contrib/analytics/pom.xml.template b/dev-tools/maven/solr/contrib/analytics/pom.xml.template
deleted file mode 100644
index afd53c1..0000000
--- a/dev-tools/maven/solr/contrib/analytics/pom.xml.template
+++ /dev/null
@@ -1,80 +0,0 @@
-<!--
-  Licensed to the Apache Software Foundation (ASF) under one
-  or more contributor license agreements.  See the NOTICE file
-  distributed with this work for additional information
-  regarding copyright ownership.  The ASF licenses this file
-  to you under the Apache License, Version 2.0 (the
-  "License"); you may not use this file except in compliance
-  with the License.  You may obtain a copy of the License at
-  
-  http://www.apache.org/licenses/LICENSE-2.0
-  
-  Unless required by applicable law or agreed to in writing,
-  software distributed under the License is distributed on an
-  "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-  KIND, either express or implied.  See the License for the
-  specific language governing permissions and limitations
-  under the License.
--->
-<project xmlns="http://maven.apache.org/POM/4.0.0"
-         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
-         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
-  <modelVersion>4.0.0</modelVersion>
-  <parent>
-    <groupId>org.apache.solr</groupId>
-    <artifactId>solr-parent</artifactId>
-    <version>@version@</version>
-    <relativePath>../../pom.xml</relativePath>
-  </parent>
-  <groupId>org.apache.solr</groupId>
-  <artifactId>solr-analytics</artifactId>
-  <packaging>jar</packaging>
-  <name>Apache Solr Analytics Package</name>
-  <description>
-    Apache Solr Content Analytics Package
-  </description>
-  <properties>
-    <module-directory>solr/contrib/analytics</module-directory>
-    <relative-top-level>../../../..</relative-top-level>
-    <module-path>${relative-top-level}/${module-directory}</module-path>
-  </properties>
-  <scm>
-    <connection>scm:git:${vc-anonymous-base-url}</connection>
-    <developerConnection>scm:git:${vc-dev-base-url}</developerConnection>
-    <url>${vc-browse-base-url};f=${module-directory}</url>
-  </scm>
-  <dependencies>
-    <dependency>
-      <!-- lucene-test-framework dependency must be declared before lucene-core -->
-      <!-- This dependency cannot be put into solr-parent, because local        -->
-      <!-- dependencies are always ordered before inherited dependencies.       -->
-      <groupId>org.apache.lucene</groupId>
-      <artifactId>lucene-test-framework</artifactId>
-      <scope>test</scope>
-    </dependency>
-    <dependency>
-      <groupId>org.apache.solr</groupId>
-      <artifactId>solr-test-framework</artifactId>
-      <scope>test</scope>
-    </dependency>
-    @solr-analytics.internal.dependencies@
-    @solr-analytics.external.dependencies@
-    @solr-analytics.internal.test.dependencies@
-    @solr-analytics.external.test.dependencies@
-  </dependencies>
-  <build>
-    <sourceDirectory>${module-path}/src/java</sourceDirectory>
-    <testSourceDirectory>${module-path}/src/test</testSourceDirectory>
-    <testResources>
-      <testResource>
-        <directory>${module-path}/src/test-files</directory>
-      </testResource>
-      <testResource>
-        <directory>${top-level}/dev-tools/maven/solr</directory>
-        <includes>
-          <include>maven.testlogging.properties</include>
-        </includes>
-      </testResource>
-    </testResources>
-  </build>
-</project>
diff --git a/dev-tools/maven/solr/contrib/clustering/pom.xml.template b/dev-tools/maven/solr/contrib/clustering/pom.xml.template
deleted file mode 100644
index 831dede..0000000
--- a/dev-tools/maven/solr/contrib/clustering/pom.xml.template
+++ /dev/null
@@ -1,78 +0,0 @@
-<!--
-  Licensed to the Apache Software Foundation (ASF) under one
-  or more contributor license agreements.  See the NOTICE file
-  distributed with this work for additional information
-  regarding copyright ownership.  The ASF licenses this file
-  to you under the Apache License, Version 2.0 (the
-  "License"); you may not use this file except in compliance
-  with the License.  You may obtain a copy of the License at
-  
-  http://www.apache.org/licenses/LICENSE-2.0
-  
-  Unless required by applicable law or agreed to in writing,
-  software distributed under the License is distributed on an
-  "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-  KIND, either express or implied.  See the License for the
-  specific language governing permissions and limitations
-  under the License.
--->
-<project xmlns="http://maven.apache.org/POM/4.0.0"
-         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
-         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
-  <modelVersion>4.0.0</modelVersion>
-  <parent>
-    <groupId>org.apache.solr</groupId>
-    <artifactId>solr-parent</artifactId>
-    <version>@version@</version>
-    <relativePath>../../pom.xml</relativePath>
-  </parent>
-  <groupId>org.apache.solr</groupId>
-  <artifactId>solr-clustering</artifactId>
-  <packaging>jar</packaging>
-  <name>Apache Solr Clustering</name>
-  <description>Apache Solr Clustering</description>
-  <properties>
-    <module-directory>solr/contrib/clustering</module-directory>
-    <relative-top-level>../../../..</relative-top-level>
-    <module-path>${relative-top-level}/${module-directory}</module-path>
-  </properties>
-  <scm>
-    <connection>scm:git:${vc-anonymous-base-url}</connection>
-    <developerConnection>scm:git:${vc-dev-base-url}</developerConnection>
-    <url>${vc-browse-base-url};f=${module-directory}</url>
-  </scm>
-  <dependencies>
-    <dependency>
-      <!-- lucene-test-framework dependency must be declared before lucene-core -->
-      <!-- This dependency cannot be put into solr-parent, because local        -->
-      <!-- dependencies are always ordered before inherited dependencies.       -->
-      <groupId>org.apache.lucene</groupId>
-      <artifactId>lucene-test-framework</artifactId>
-      <scope>test</scope>
-    </dependency>
-    <dependency>
-      <groupId>org.apache.solr</groupId>
-      <artifactId>solr-test-framework</artifactId>
-      <scope>test</scope>
-    </dependency>
-@solr-clustering.internal.dependencies@
-@solr-clustering.external.dependencies@
-@solr-clustering.internal.test.dependencies@
-@solr-clustering.external.test.dependencies@
-  </dependencies>
-  <build>
-    <sourceDirectory>${module-path}/src/java</sourceDirectory>
-    <testSourceDirectory>${module-path}/src/test</testSourceDirectory>
-    <testResources>
-      <testResource>
-        <directory>${module-path}/src/test-files</directory>
-      </testResource>
-      <testResource>
-        <directory>${top-level}/dev-tools/maven/solr</directory>
-        <includes>
-          <include>maven.testlogging.properties</include>
-        </includes>
-      </testResource>
-    </testResources>
-  </build>
-</project>
diff --git a/dev-tools/maven/solr/contrib/dataimporthandler-extras/pom.xml.template b/dev-tools/maven/solr/contrib/dataimporthandler-extras/pom.xml.template
deleted file mode 100644
index 21ccf5e..0000000
--- a/dev-tools/maven/solr/contrib/dataimporthandler-extras/pom.xml.template
+++ /dev/null
@@ -1,85 +0,0 @@
-<!--
-  Licensed to the Apache Software Foundation (ASF) under one
-  or more contributor license agreements.  See the NOTICE file
-  distributed with this work for additional information
-  regarding copyright ownership.  The ASF licenses this file
-  to you under the Apache License, Version 2.0 (the
-  "License"); you may not use this file except in compliance
-  with the License.  You may obtain a copy of the License at
-  
-  http://www.apache.org/licenses/LICENSE-2.0
-  
-  Unless required by applicable law or agreed to in writing,
-  software distributed under the License is distributed on an
-  "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-  KIND, either express or implied.  See the License for the
-  specific language governing permissions and limitations
-  under the License.
--->
-<project xmlns="http://maven.apache.org/POM/4.0.0"
-         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
-         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
-  <modelVersion>4.0.0</modelVersion>
-  <parent>
-    <groupId>org.apache.solr</groupId>
-    <artifactId>solr-parent</artifactId>
-    <version>@version@</version>
-    <relativePath>../../pom.xml</relativePath>
-  </parent>
-  <groupId>org.apache.solr</groupId>
-  <artifactId>solr-dataimporthandler-extras</artifactId>
-  <packaging>jar</packaging>
-  <name>Apache Solr DataImportHandler Extras</name>
-  <description>Apache Solr DataImportHandler Extras</description>
-  <properties>
-    <module-directory>solr/contrib/dataimporthandler-extras</module-directory>
-    <relative-top-level>../../../..</relative-top-level>
-    <module-path>${relative-top-level}/${module-directory}</module-path>
-  </properties>
-  <scm>
-    <connection>scm:git:${vc-anonymous-base-url}</connection>
-    <developerConnection>scm:git:${vc-dev-base-url}</developerConnection>
-    <url>${vc-browse-base-url};f=${module-directory}</url>
-  </scm>
-  <dependencies>
-    <dependency>
-      <!-- lucene-test-framework dependency must be declared before lucene-core -->
-      <!-- This dependency cannot be put into solr-parent, because local        -->
-      <!-- dependencies are always ordered before inherited dependencies.       -->
-      <groupId>org.apache.lucene</groupId>
-      <artifactId>lucene-test-framework</artifactId>
-      <scope>test</scope>
-    </dependency>
-    <dependency>
-      <groupId>org.apache.solr</groupId>
-      <artifactId>solr-test-framework</artifactId>
-      <scope>test</scope>
-    </dependency>
-    <dependency>
-      <groupId>org.apache.solr</groupId>
-      <artifactId>solr-dataimporthandler</artifactId>
-      <version>${project.version}</version>
-      <type>test-jar</type>
-      <scope>test</scope>
-    </dependency>
-@solr-dataimporthandler-extras.internal.dependencies@
-@solr-dataimporthandler-extras.external.dependencies@
-@solr-dataimporthandler-extras.internal.test.dependencies@
-@solr-dataimporthandler-extras.external.test.dependencies@
-  </dependencies>
-  <build>
-    <sourceDirectory>${module-path}/src/java</sourceDirectory>
-    <testSourceDirectory>${module-path}/src/test</testSourceDirectory>
-    <testResources>
-      <testResource>
-        <directory>${module-path}/src/test-files</directory>
-      </testResource>
-      <testResource>
-        <directory>${top-level}/dev-tools/maven/solr</directory>
-        <includes>
-          <include>maven.testlogging.properties</include>
-        </includes>
-      </testResource>
-    </testResources>
-  </build>
-</project>
diff --git a/dev-tools/maven/solr/contrib/dataimporthandler/pom.xml.template b/dev-tools/maven/solr/contrib/dataimporthandler/pom.xml.template
deleted file mode 100644
index 3992f9d..0000000
--- a/dev-tools/maven/solr/contrib/dataimporthandler/pom.xml.template
+++ /dev/null
@@ -1,91 +0,0 @@
-<!--
-  Licensed to the Apache Software Foundation (ASF) under one
-  or more contributor license agreements.  See the NOTICE file
-  distributed with this work for additional information
-  regarding copyright ownership.  The ASF licenses this file
-  to you under the Apache License, Version 2.0 (the
-  "License"); you may not use this file except in compliance
-  with the License.  You may obtain a copy of the License at
-  
-  http://www.apache.org/licenses/LICENSE-2.0
-  
-  Unless required by applicable law or agreed to in writing,
-  software distributed under the License is distributed on an
-  "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-  KIND, either express or implied.  See the License for the
-  specific language governing permissions and limitations
-  under the License.
--->
-<project xmlns="http://maven.apache.org/POM/4.0.0"
-         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
-         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
-  <modelVersion>4.0.0</modelVersion>
-  <parent>
-    <groupId>org.apache.solr</groupId>
-    <artifactId>solr-parent</artifactId>
-    <version>@version@</version>
-    <relativePath>../../pom.xml</relativePath>
-  </parent>
-  <groupId>org.apache.solr</groupId>
-  <artifactId>solr-dataimporthandler</artifactId>
-  <packaging>jar</packaging>
-  <name>Apache Solr DataImportHandler</name>
-  <description>Apache Solr DataImportHandler</description>
-  <properties>
-    <module-directory>solr/contrib/dataimporthandler</module-directory>
-    <relative-top-level>../../../..</relative-top-level>
-    <module-path>${relative-top-level}/${module-directory}</module-path>
-  </properties>
-  <scm>
-    <connection>scm:git:${vc-anonymous-base-url}</connection>
-    <developerConnection>scm:git:${vc-dev-base-url}</developerConnection>
-    <url>${vc-browse-base-url};f=${module-directory}</url>
-  </scm>
-  <dependencies>
-    <dependency>
-      <!-- lucene-test-framework dependency must be declared before lucene-core -->
-      <!-- This dependency cannot be put into solr-parent, because local        -->
-      <!-- dependencies are always ordered before inherited dependencies.       -->
-      <groupId>org.apache.lucene</groupId>
-      <artifactId>lucene-test-framework</artifactId>
-      <scope>test</scope>
-    </dependency>
-    <dependency>
-      <groupId>org.apache.solr</groupId>
-      <artifactId>solr-test-framework</artifactId>
-      <scope>test</scope>
-    </dependency>
-@solr-dataimporthandler.internal.dependencies@
-@solr-dataimporthandler.external.dependencies@
-@solr-dataimporthandler.internal.test.dependencies@
-@solr-dataimporthandler.external.test.dependencies@
-  </dependencies>
-  <build>
-    <sourceDirectory>${module-path}/src/java</sourceDirectory>
-    <testSourceDirectory>${module-path}/src/test</testSourceDirectory>
-    <testResources>
-      <testResource>
-        <directory>${module-path}/src/test-files</directory>
-      </testResource>
-      <testResource>
-        <directory>${top-level}/dev-tools/maven/solr</directory>
-        <includes>
-          <include>maven.testlogging.properties</include>
-        </includes>
-      </testResource>
-    </testResources>
-    <plugins>
-      <plugin>
-        <groupId>org.apache.maven.plugins</groupId>
-        <artifactId>maven-jar-plugin</artifactId>
-        <executions>
-          <execution>
-            <goals>
-              <goal>test-jar</goal>
-            </goals>
-          </execution>
-        </executions>
-      </plugin>
-    </plugins>
-  </build>
-</project>
diff --git a/dev-tools/maven/solr/contrib/extraction/pom.xml.template b/dev-tools/maven/solr/contrib/extraction/pom.xml.template
deleted file mode 100644
index 3ceaabe..0000000
--- a/dev-tools/maven/solr/contrib/extraction/pom.xml.template
+++ /dev/null
@@ -1,81 +0,0 @@
-<!--
-  Licensed to the Apache Software Foundation (ASF) under one
-  or more contributor license agreements.  See the NOTICE file
-  distributed with this work for additional information
-  regarding copyright ownership.  The ASF licenses this file
-  to you under the Apache License, Version 2.0 (the
-  "License"); you may not use this file except in compliance
-  with the License.  You may obtain a copy of the License at
-  
-  http://www.apache.org/licenses/LICENSE-2.0
-  
-  Unless required by applicable law or agreed to in writing,
-  software distributed under the License is distributed on an
-  "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-  KIND, either express or implied.  See the License for the
-  specific language governing permissions and limitations
-  under the License.
--->
-<project xmlns="http://maven.apache.org/POM/4.0.0"
-         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
-         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
-  <modelVersion>4.0.0</modelVersion>
-  <parent>
-    <groupId>org.apache.solr</groupId>
-    <artifactId>solr-parent</artifactId>
-    <version>@version@</version>
-    <relativePath>../../pom.xml</relativePath>
-  </parent>
-  <groupId>org.apache.solr</groupId>
-  <artifactId>solr-cell</artifactId>
-  <packaging>jar</packaging>
-  <name>Apache Solr Content Extraction Library</name>
-  <description>
-    Apache Solr Content Extraction Library integrates the Apache Tika
-    content extraction framework into Solr
-  </description>
-  <properties>
-    <module-directory>solr/contrib/extraction</module-directory>
-    <relative-top-level>../../../..</relative-top-level>
-    <module-path>${relative-top-level}/${module-directory}</module-path>
-  </properties>
-  <scm>
-    <connection>scm:git:${vc-anonymous-base-url}</connection>
-    <developerConnection>scm:git:${vc-dev-base-url}</developerConnection>
-    <url>${vc-browse-base-url};f=${module-directory}</url>
-  </scm>
-  <dependencies>
-    <dependency>
-      <!-- lucene-test-framework dependency must be declared before lucene-core -->
-      <!-- This dependency cannot be put into solr-parent, because local        -->
-      <!-- dependencies are always ordered before inherited dependencies.       -->
-      <groupId>org.apache.lucene</groupId>
-      <artifactId>lucene-test-framework</artifactId>
-      <scope>test</scope>
-    </dependency>
-    <dependency>
-      <groupId>org.apache.solr</groupId>
-      <artifactId>solr-test-framework</artifactId>
-      <scope>test</scope>
-    </dependency>
-@solr-cell.internal.dependencies@
-@solr-cell.external.dependencies@
-@solr-cell.internal.test.dependencies@
-@solr-cell.external.test.dependencies@
-  </dependencies>
-  <build>
-    <sourceDirectory>${module-path}/src/java</sourceDirectory>
-    <testSourceDirectory>${module-path}/src/test</testSourceDirectory>
-    <testResources>
-      <testResource>
-        <directory>${module-path}/src/test-files</directory>
-      </testResource>
-      <testResource>
-        <directory>${top-level}/dev-tools/maven/solr</directory>
-        <includes>
-          <include>maven.testlogging.properties</include>
-        </includes>
-      </testResource>
-    </testResources>
-  </build>
-</project>
diff --git a/dev-tools/maven/solr/contrib/jaegertracer-configurator/pom.xml.template b/dev-tools/maven/solr/contrib/jaegertracer-configurator/pom.xml.template
deleted file mode 100644
index 7234185..0000000
--- a/dev-tools/maven/solr/contrib/jaegertracer-configurator/pom.xml.template
+++ /dev/null
@@ -1,80 +0,0 @@
-<!--
-  Licensed to the Apache Software Foundation (ASF) under one
-  or more contributor license agreements.  See the NOTICE file
-  distributed with this work for additional information
-  regarding copyright ownership.  The ASF licenses this file
-  to you under the Apache License, Version 2.0 (the
-  "License"); you may not use this file except in compliance
-  with the License.  You may obtain a copy of the License at
-
-  http://www.apache.org/licenses/LICENSE-2.0
-
-  Unless required by applicable law or agreed to in writing,
-  software distributed under the License is distributed on an
-  "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-  KIND, either express or implied.  See the License for the
-  specific language governing permissions and limitations
-  under the License.
--->
-<project xmlns="http://maven.apache.org/POM/4.0.0"
-         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
-         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
-  <modelVersion>4.0.0</modelVersion>
-  <parent>
-    <groupId>org.apache.solr</groupId>
-    <artifactId>solr-parent</artifactId>
-    <version>@version@</version>
-    <relativePath>../../pom.xml</relativePath>
-  </parent>
-  <groupId>org.apache.solr</groupId>
-  <artifactId>solr-jaegertracer-configurator</artifactId>
-  <packaging>jar</packaging>
-  <name>Apache Solr Jaeger Tracer Configurator Package</name>
-  <description>
-    Apache Solr Jaeger Tracer Configurator Package
-  </description>
-  <properties>
-    <module-directory>solr/contrib/jaegertracer-configurator</module-directory>
-    <relative-top-level>../../../..</relative-top-level>
-    <module-path>${relative-top-level}/${module-directory}</module-path>
-  </properties>
-  <scm>
-    <connection>scm:git:${vc-anonymous-base-url}</connection>
-    <developerConnection>scm:git:${vc-dev-base-url}</developerConnection>
-    <url>${vc-browse-base-url};f=${module-directory}</url>
-  </scm>
-  <dependencies>
-    <dependency>
-      <!-- lucene-test-framework dependency must be declared before lucene-core -->
-      <!-- This dependency cannot be put into solr-parent, because local        -->
-      <!-- dependencies are always ordered before inherited dependencies.       -->
-      <groupId>org.apache.lucene</groupId>
-      <artifactId>lucene-test-framework</artifactId>
-      <scope>test</scope>
-    </dependency>
-    <dependency>
-      <groupId>org.apache.solr</groupId>
-      <artifactId>solr-test-framework</artifactId>
-      <scope>test</scope>
-    </dependency>
-    @solr-jaegertracer-configurator.internal.dependencies@
-    @solr-jaegertracer-configurator.external.dependencies@
-    @solr-jaegertracer-configurator.internal.test.dependencies@
-    @solr-jaegertracer-configurator.external.test.dependencies@
-  </dependencies>
-  <build>
-    <sourceDirectory>${module-path}/src/java</sourceDirectory>
-    <testSourceDirectory>${module-path}/src/test</testSourceDirectory>
-    <testResources>
-      <testResource>
-        <directory>${module-path}/src/test-files</directory>
-      </testResource>
-      <testResource>
-        <directory>${top-level}/dev-tools/maven/solr</directory>
-        <includes>
-          <include>maven.testlogging.properties</include>
-        </includes>
-      </testResource>
-    </testResources>
-  </build>
-</project>
diff --git a/dev-tools/maven/solr/contrib/langid/pom.xml.template b/dev-tools/maven/solr/contrib/langid/pom.xml.template
deleted file mode 100644
index 6543481..0000000
--- a/dev-tools/maven/solr/contrib/langid/pom.xml.template
+++ /dev/null
@@ -1,87 +0,0 @@
-<!--
-  Licensed to the Apache Software Foundation (ASF) under one
-  or more contributor license agreements.  See the NOTICE file
-  distributed with this work for additional information
-  regarding copyright ownership.  The ASF licenses this file
-  to you under the Apache License, Version 2.0 (the
-  "License"); you may not use this file except in compliance
-  with the License.  You may obtain a copy of the License at
-  
-  http://www.apache.org/licenses/LICENSE-2.0
-  
-  Unless required by applicable law or agreed to in writing,
-  software distributed under the License is distributed on an
-  "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-  KIND, either express or implied.  See the License for the
-  specific language governing permissions and limitations
-  under the License.
--->
-<project xmlns="http://maven.apache.org/POM/4.0.0"
-         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
-         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
-  <modelVersion>4.0.0</modelVersion>
-  <parent>
-    <groupId>org.apache.solr</groupId>
-    <artifactId>solr-parent</artifactId>
-    <version>@version@</version>
-    <relativePath>../../pom.xml</relativePath>
-  </parent>
-  <groupId>org.apache.solr</groupId>
-  <artifactId>solr-langid</artifactId>
-  <packaging>jar</packaging>
-  <name>Apache Solr Language Identifier</name>
-  <description>
-    This module is intended to be used while indexing documents.
-    It is implemented as an UpdateProcessor to be placed in an UpdateChain.
-    Its purpose is to identify the language of each document and tag the document with its language code.
-  </description>
-  <properties>
-    <module-directory>solr/contrib/langid</module-directory>
-    <relative-top-level>../../../..</relative-top-level>
-    <module-path>${relative-top-level}/${module-directory}</module-path>
-  </properties>
-  <scm>
-    <connection>scm:git:${vc-anonymous-base-url}</connection>
-    <developerConnection>scm:git:${vc-dev-base-url}</developerConnection>
-    <url>${vc-browse-base-url};f=${module-directory}</url>
-  </scm>
-  <dependencies>
-    <dependency>
-      <!-- lucene-test-framework dependency must be declared before lucene-core -->
-      <!-- This dependency cannot be put into solr-parent, because local        -->
-      <!-- dependencies are always ordered before inherited dependencies.       -->
-      <groupId>org.apache.lucene</groupId>
-      <artifactId>lucene-test-framework</artifactId>
-      <scope>test</scope>
-    </dependency>
-    <dependency>
-      <groupId>org.apache.solr</groupId>
-      <artifactId>solr-test-framework</artifactId>
-      <scope>test</scope>
-    </dependency>
-@solr-langid.internal.dependencies@
-@solr-langid.external.dependencies@
-@solr-langid.internal.test.dependencies@
-@solr-langid.external.test.dependencies@
-  </dependencies>
-  <build>
-    <sourceDirectory>${module-path}/src/java</sourceDirectory>
-    <testSourceDirectory>${module-path}/src/test</testSourceDirectory>
-    <resources>
-      <resource>
-        <directory>${module-path}/src/resources</directory>
-      </resource>
-    </resources>
-    <testResources>
-      <testResource>
-        <directory>${module-path}/src/test-files</directory>
-      </testResource>
-      <testResource>
-        <directory>${top-level}/dev-tools/maven/solr</directory>
-        <includes>
-          <include>maven.testlogging.properties</include>
-        </includes>
-      </testResource>
-    </testResources>
-  </build>
-</project>
diff --git a/dev-tools/maven/solr/contrib/ltr/pom.xml.template b/dev-tools/maven/solr/contrib/ltr/pom.xml.template
deleted file mode 100644
index 9506cb4..0000000
--- a/dev-tools/maven/solr/contrib/ltr/pom.xml.template
+++ /dev/null
@@ -1,80 +0,0 @@
-<!--
-  Licensed to the Apache Software Foundation (ASF) under one
-  or more contributor license agreements.  See the NOTICE file
-  distributed with this work for additional information
-  regarding copyright ownership.  The ASF licenses this file
-  to you under the Apache License, Version 2.0 (the
-  "License"); you may not use this file except in compliance
-  with the License.  You may obtain a copy of the License at
-
-  http://www.apache.org/licenses/LICENSE-2.0
-
-  Unless required by applicable law or agreed to in writing,
-  software distributed under the License is distributed on an
-  "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-  KIND, either express or implied.  See the License for the
-  specific language governing permissions and limitations
-  under the License.
--->
-<project xmlns="http://maven.apache.org/POM/4.0.0"
-         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
-         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
-  <modelVersion>4.0.0</modelVersion>
-  <parent>
-    <groupId>org.apache.solr</groupId>
-    <artifactId>solr-parent</artifactId>
-    <version>@version@</version>
-    <relativePath>../../pom.xml</relativePath>
-  </parent>
-  <groupId>org.apache.solr</groupId>
-  <artifactId>solr-ltr</artifactId>
-  <packaging>jar</packaging>
-  <name>Apache Solr Learning to Rank Package</name>
-  <description>
-    Apache Solr Learning to Rank Package
-  </description>
-  <properties>
-    <module-directory>solr/contrib/ltr</module-directory>
-    <relative-top-level>../../../..</relative-top-level>
-    <module-path>${relative-top-level}/${module-directory}</module-path>
-  </properties>
-  <scm>
-    <connection>scm:git:${vc-anonymous-base-url}</connection>
-    <developerConnection>scm:git:${vc-dev-base-url}</developerConnection>
-    <url>${vc-browse-base-url};f=${module-directory}</url>
-  </scm>
-  <dependencies>
-    <dependency>
-      <!-- lucene-test-framework dependency must be declared before lucene-core -->
-      <!-- This dependency cannot be put into solr-parent, because local        -->
-      <!-- dependencies are always ordered before inherited dependencies.       -->
-      <groupId>org.apache.lucene</groupId>
-      <artifactId>lucene-test-framework</artifactId>
-      <scope>test</scope>
-    </dependency>
-    <dependency>
-      <groupId>org.apache.solr</groupId>
-      <artifactId>solr-test-framework</artifactId>
-      <scope>test</scope>
-    </dependency>
-    @solr-ltr.internal.dependencies@
-    @solr-ltr.external.dependencies@
-    @solr-ltr.internal.test.dependencies@
-    @solr-ltr.external.test.dependencies@
-  </dependencies>
-  <build>
-    <sourceDirectory>${module-path}/src/java</sourceDirectory>
-    <testSourceDirectory>${module-path}/src/test</testSourceDirectory>
-    <testResources>
-      <testResource>
-        <directory>${module-path}/src/test-files</directory>
-      </testResource>
-      <testResource>
-        <directory>${top-level}/dev-tools/maven/solr</directory>
-        <includes>
-          <include>maven.testlogging.properties</include>
-        </includes>
-      </testResource>
-    </testResources>
-  </build>
-</project>
diff --git a/dev-tools/maven/solr/contrib/pom.xml.template b/dev-tools/maven/solr/contrib/pom.xml.template
deleted file mode 100644
index d8280e4..0000000
--- a/dev-tools/maven/solr/contrib/pom.xml.template
+++ /dev/null
@@ -1,57 +0,0 @@
-<!--
-  Licensed to the Apache Software Foundation (ASF) under one
-  or more contributor license agreements.  See the NOTICE file
-  distributed with this work for additional information
-  regarding copyright ownership.  The ASF licenses this file
-  to you under the Apache License, Version 2.0 (the
-  "License"); you may not use this file except in compliance
-  with the License.  You may obtain a copy of the License at
-  
-  http://www.apache.org/licenses/LICENSE-2.0
-  
-  Unless required by applicable law or agreed to in writing,
-  software distributed under the License is distributed on an
-  "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-  KIND, either express or implied.  See the License for the
-  specific language governing permissions and limitations
-  under the License.
--->
-<project xmlns="http://maven.apache.org/POM/4.0.0"
-         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
-         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
-  <modelVersion>4.0.0</modelVersion>
-  <parent>
-    <groupId>org.apache.solr</groupId>
-    <artifactId>solr-parent</artifactId>
-    <version>@version@</version>
-    <relativePath>../pom.xml</relativePath>
-  </parent>
-  <groupId>org.apache.solr</groupId>
-  <artifactId>solr-contrib-aggregator</artifactId>
-  <name>Apache Solr Contrib aggregator POM</name>
-  <packaging>pom</packaging>
-  <modules>
-    <module>analysis-extras</module>
-    <module>analytics</module>
-    <module>clustering</module>
-    <module>dataimporthandler</module>
-    <module>dataimporthandler-extras</module>
-    <module>extraction</module>
-    <module>jaegertracer-configurator</module>
-    <module>langid</module>
-    <module>ltr</module>
-    <module>prometheus-exporter</module>
-    <module>velocity</module>
-  </modules>
-  <build>
-    <plugins>
-      <plugin>
-        <groupId>org.apache.maven.plugins</groupId>
-        <artifactId>maven-deploy-plugin</artifactId>
-        <configuration>
-          <skip>true</skip>
-        </configuration>
-      </plugin>
-    </plugins>
-  </build>
-</project>
diff --git a/dev-tools/maven/solr/contrib/prometheus-exporter/pom.xml.template b/dev-tools/maven/solr/contrib/prometheus-exporter/pom.xml.template
deleted file mode 100644
index 1d2d508..0000000
--- a/dev-tools/maven/solr/contrib/prometheus-exporter/pom.xml.template
+++ /dev/null
@@ -1,80 +0,0 @@
-<!--
-  Licensed to the Apache Software Foundation (ASF) under one
-  or more contributor license agreements.  See the NOTICE file
-  distributed with this work for additional information
-  regarding copyright ownership.  The ASF licenses this file
-  to you under the Apache License, Version 2.0 (the
-  "License"); you may not use this file except in compliance
-  with the License.  You may obtain a copy of the License at
-
-  http://www.apache.org/licenses/LICENSE-2.0
-
-  Unless required by applicable law or agreed to in writing,
-  software distributed under the License is distributed on an
-  "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-  KIND, either express or implied.  See the License for the
-  specific language governing permissions and limitations
-  under the License.
--->
-<project xmlns="http://maven.apache.org/POM/4.0.0"
-         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
-         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
-  <modelVersion>4.0.0</modelVersion>
-  <parent>
-    <groupId>org.apache.solr</groupId>
-    <artifactId>solr-parent</artifactId>
-    <version>@version@</version>
-    <relativePath>../../pom.xml</relativePath>
-  </parent>
-  <groupId>org.apache.solr</groupId>
-  <artifactId>solr-prometheus-exporter</artifactId>
-  <packaging>jar</packaging>
-  <name>Apache Solr Prometheus Exporter Package</name>
-  <description>
-    Apache Solr Prometheus Exporter Package
-  </description>
-  <properties>
-    <module-directory>solr/contrib/prometheus-exporter</module-directory>
-    <relative-top-level>../../../..</relative-top-level>
-    <module-path>${relative-top-level}/${module-directory}</module-path>
-  </properties>
-  <scm>
-    <connection>scm:git:${vc-anonymous-base-url}</connection>
-    <developerConnection>scm:git:${vc-dev-base-url}</developerConnection>
-    <url>${vc-browse-base-url};f=${module-directory}</url>
-  </scm>
-  <dependencies>
-    <dependency>
-      <!-- lucene-test-framework dependency must be declared before lucene-core -->
-      <!-- This dependency cannot be put into solr-parent, because local        -->
-      <!-- dependencies are always ordered before inherited dependencies.       -->
-      <groupId>org.apache.lucene</groupId>
-      <artifactId>lucene-test-framework</artifactId>
-      <scope>test</scope>
-    </dependency>
-    <dependency>
-      <groupId>org.apache.solr</groupId>
-      <artifactId>solr-test-framework</artifactId>
-      <scope>test</scope>
-    </dependency>
-    @solr-prometheus-exporter.internal.dependencies@
-    @solr-prometheus-exporter.external.dependencies@
-    @solr-prometheus-exporter.internal.test.dependencies@
-    @solr-prometheus-exporter.external.test.dependencies@
-  </dependencies>
-  <build>
-    <sourceDirectory>${module-path}/src/java</sourceDirectory>
-    <testSourceDirectory>${module-path}/src/test</testSourceDirectory>
-    <testResources>
-      <testResource>
-        <directory>${module-path}/src/test-files</directory>
-      </testResource>
-      <testResource>
-        <directory>${top-level}/dev-tools/maven/solr</directory>
-        <includes>
-          <include>maven.testlogging.properties</include>
-        </includes>
-      </testResource>
-    </testResources>
-  </build>
-</project>
diff --git a/dev-tools/maven/solr/contrib/velocity/pom.xml.template b/dev-tools/maven/solr/contrib/velocity/pom.xml.template
deleted file mode 100644
index 82f12ee..0000000
--- a/dev-tools/maven/solr/contrib/velocity/pom.xml.template
+++ /dev/null
@@ -1,89 +0,0 @@
-<!--
-  Licensed to the Apache Software Foundation (ASF) under one
-  or more contributor license agreements.  See the NOTICE file
-  distributed with this work for additional information
-  regarding copyright ownership.  The ASF licenses this file
-  to you under the Apache License, Version 2.0 (the
-  "License"); you may not use this file except in compliance
-  with the License.  You may obtain a copy of the License at
-  
-  http://www.apache.org/licenses/LICENSE-2.0
-  
-  Unless required by applicable law or agreed to in writing,
-  software distributed under the License is distributed on an
-  "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-  KIND, either express or implied.  See the License for the
-  specific language governing permissions and limitations
-  under the License.
--->
-<project xmlns="http://maven.apache.org/POM/4.0.0"
-         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
-         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
-  <modelVersion>4.0.0</modelVersion>
-  <parent>
-    <groupId>org.apache.solr</groupId>
-    <artifactId>solr-parent</artifactId>
-    <version>@version@</version>
-    <relativePath>../../pom.xml</relativePath>
-  </parent>
-  <groupId>org.apache.solr</groupId>
-  <artifactId>solr-velocity</artifactId>
-  <packaging>jar</packaging>
-  <name>Apache Solr Velocity</name>
-  <description>Apache Solr Velocity</description>
-  <properties>
-    <module-directory>solr/contrib/velocity</module-directory>
-    <relative-top-level>../../../..</relative-top-level>
-    <module-path>${relative-top-level}/${module-directory}</module-path>
-  </properties>
-  <scm>
-    <connection>scm:git:${vc-anonymous-base-url}</connection>
-    <developerConnection>scm:git:${vc-dev-base-url}</developerConnection>
-    <url>${vc-browse-base-url};f=${module-directory}</url>
-  </scm>
-  <dependencies>
-    <dependency>
-      <!-- lucene-test-framework dependency must be declared before lucene-core -->
-      <!-- This dependency cannot be put into solr-parent, because local        -->
-      <!-- dependencies are always ordered before inherited dependencies.       -->
-      <groupId>org.apache.lucene</groupId>
-      <artifactId>lucene-test-framework</artifactId>
-      <scope>test</scope>
-    </dependency>
-    <dependency>
-      <groupId>org.apache.solr</groupId>
-      <artifactId>solr-test-framework</artifactId>
-      <scope>test</scope>
-    </dependency>
-@solr-velocity.internal.dependencies@
-@solr-velocity.external.dependencies@
-@solr-velocity.internal.test.dependencies@
-@solr-velocity.external.test.dependencies@
-  </dependencies>
-  <build>
-    <sourceDirectory>${module-path}/src/java</sourceDirectory>
-    <resources>
-      <resource>
-        <directory>${module-path}/src/resources</directory>
-      </resource>
-    </resources>
-    <testSourceDirectory>${module-path}/src/test</testSourceDirectory>
-    <testResources>
-      <testResource>
-        <directory>${module-path}/src/test-files</directory>
-      </testResource>
-      <testResource>
-        <directory>${module-path}/src/test</directory>
-        <includes>
-          <include>velocity/*.properties</include>
-        </includes>
-      </testResource>
-      <testResource>
-        <directory>${top-level}/dev-tools/maven/solr</directory>
-        <includes>
-          <include>maven.testlogging.properties</include>
-        </includes>
-      </testResource>
-    </testResources>
-  </build>
-</project>
diff --git a/dev-tools/maven/solr/core/pom.xml.template b/dev-tools/maven/solr/core/pom.xml.template
deleted file mode 100644
index 7c4942a..0000000
--- a/dev-tools/maven/solr/core/pom.xml.template
+++ /dev/null
@@ -1,48 +0,0 @@
-<!--
-  Licensed to the Apache Software Foundation (ASF) under one
-  or more contributor license agreements.  See the NOTICE file
-  distributed with this work for additional information
-  regarding copyright ownership.  The ASF licenses this file
-  to you under the Apache License, Version 2.0 (the
-  "License"); you may not use this file except in compliance
-  with the License.  You may obtain a copy of the License at
-  
-  http://www.apache.org/licenses/LICENSE-2.0
-  
-  Unless required by applicable law or agreed to in writing,
-  software distributed under the License is distributed on an
-  "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-  KIND, either express or implied.  See the License for the
-  specific language governing permissions and limitations
-  under the License.
--->
-<project xmlns="http://maven.apache.org/POM/4.0.0"
-         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
-         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
-  <modelVersion>4.0.0</modelVersion>
-  <parent>
-    <groupId>org.apache.solr</groupId>
-    <artifactId>solr-parent</artifactId>
-    <version>@version@</version>
-    <relativePath>../pom.xml</relativePath>
-  </parent>
-  <groupId>org.apache.solr</groupId>
-  <artifactId>solr-core-aggregator</artifactId>
-  <packaging>pom</packaging>
-  <name>Apache Solr Core aggregator POM</name>
-  <modules>
-    <module>src/java</module>
-    <module>src/test</module>
-  </modules>
-  <build>
-    <plugins>
-      <plugin>
-        <groupId>org.apache.maven.plugins</groupId>
-        <artifactId>maven-deploy-plugin</artifactId>
-        <configuration>
-          <skip>true</skip>
-        </configuration>
-      </plugin>
-    </plugins>
-  </build>
-</project>
diff --git a/dev-tools/maven/solr/core/src/java/pom.xml.template b/dev-tools/maven/solr/core/src/java/pom.xml.template
deleted file mode 100644
index 5c63aef..0000000
--- a/dev-tools/maven/solr/core/src/java/pom.xml.template
+++ /dev/null
@@ -1,84 +0,0 @@
-<!--
-  Licensed to the Apache Software Foundation (ASF) under one
-  or more contributor license agreements.  See the NOTICE file
-  distributed with this work for additional information
-  regarding copyright ownership.  The ASF licenses this file
-  to you under the Apache License, Version 2.0 (the
-  "License"); you may not use this file except in compliance
-  with the License.  You may obtain a copy of the License at
-  
-  http://www.apache.org/licenses/LICENSE-2.0
-  
-  Unless required by applicable law or agreed to in writing,
-  software distributed under the License is distributed on an
-  "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-  KIND, either express or implied.  See the License for the
-  specific language governing permissions and limitations
-  under the License.
--->
-<project xmlns="http://maven.apache.org/POM/4.0.0"
-         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
-         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
-  <modelVersion>4.0.0</modelVersion>
-  <parent>
-    <groupId>org.apache.solr</groupId>
-    <artifactId>solr-parent</artifactId>
-    <version>@version@</version>
-    <relativePath>../../../pom.xml</relativePath>
-  </parent>
-  <groupId>org.apache.solr</groupId>
-  <artifactId>solr-core</artifactId>
-  <packaging>jar</packaging>
-  <name>Apache Solr Core</name>
-  <description>Apache Solr Core</description>
-  <properties>
-    <module-directory>solr/core</module-directory>
-    <relative-top-level>../../../../..</relative-top-level>
-    <module-path>${relative-top-level}/${module-directory}/src/java</module-path>
-  </properties>
-  <scm>
-    <connection>scm:git:${vc-anonymous-base-url}</connection>
-    <developerConnection>scm:git:${vc-dev-base-url}</developerConnection>
-    <url>${vc-browse-base-url};f=${module-directory}</url>
-  </scm>
-  <dependencies>
-@solr-core.internal.dependencies@
-@solr-core.external.dependencies@
-  </dependencies>
-  <build>
-    <sourceDirectory>${module-path}</sourceDirectory>
-    <resources>
-      <resource>
-        <directory>${module-path}/../resources</directory>
-      </resource>
-    </resources>
-    <testSourceDirectory/>
-    <testResources/>
-    <plugins>
-      <plugin>
-        <groupId>org.apache.maven.plugins</groupId>
-        <artifactId>maven-surefire-plugin</artifactId>
-        <configuration>
-          <skip>true</skip> <!-- Tests are run from solr-core-tests module -->
-        </configuration>
-      </plugin>
-      <plugin>
-        <groupId>org.apache.maven.plugins</groupId>
-        <artifactId>maven-compiler-plugin</artifactId>
-        <configuration>
-          <skip>true</skip> <!-- This skips test compilation - tests are run from solr-core-tests module -->
-        </configuration>
-      </plugin>
-      <plugin>
-        <groupId>de.thetaphi</groupId>
-        <artifactId>forbiddenapis</artifactId>
-        <executions>
-          <execution>
-            <id>solr-shared-test-check-forbidden-apis</id>
-            <phase>none</phase> <!-- Block inherited execution -->
-          </execution>
-        </executions>
-      </plugin>
-    </plugins>
-  </build>
-</project>
diff --git a/dev-tools/maven/solr/core/src/test/pom.xml.template b/dev-tools/maven/solr/core/src/test/pom.xml.template
deleted file mode 100644
index 154f904..0000000
--- a/dev-tools/maven/solr/core/src/test/pom.xml.template
+++ /dev/null
@@ -1,155 +0,0 @@
-<!--
-  Licensed to the Apache Software Foundation (ASF) under one
-  or more contributor license agreements.  See the NOTICE file
-  distributed with this work for additional information
-  regarding copyright ownership.  The ASF licenses this file
-  to you under the Apache License, Version 2.0 (the
-  "License"); you may not use this file except in compliance
-  with the License.  You may obtain a copy of the License at
-  
-  http://www.apache.org/licenses/LICENSE-2.0
-  
-  Unless required by applicable law or agreed to in writing,
-  software distributed under the License is distributed on an
-  "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-  KIND, either express or implied.  See the License for the
-  specific language governing permissions and limitations
-  under the License.
--->
-<project xmlns="http://maven.apache.org/POM/4.0.0"
-         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
-         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
-  <modelVersion>4.0.0</modelVersion>
-  <parent>
-    <groupId>org.apache.solr</groupId>
-    <artifactId>solr-parent</artifactId>
-    <version>@version@</version>
-    <relativePath>../../../pom.xml</relativePath>
-  </parent>
-  <groupId>org.apache.solr</groupId>
-  <artifactId>solr-core-tests</artifactId>
-  <name>Apache Solr Core tests</name>
-  <packaging>jar</packaging>
-  <properties>
-    <module-directory>solr/core</module-directory>
-    <relative-top-level>../../../../..</relative-top-level>
-    <module-path>${relative-top-level}/${module-directory}/src/test</module-path>
-  </properties>
-  <dependencies>
-    <dependency>
-      <!-- lucene-test-framework dependency must be declared before lucene-core -->
-      <!-- This dependency cannot be put into solr-parent, because local        -->
-      <!-- dependencies are always ordered before inherited dependencies.       -->
-      <groupId>org.apache.lucene</groupId>
-      <artifactId>lucene-test-framework</artifactId>
-      <scope>test</scope>
-    </dependency>
-    <dependency>
-      <groupId>org.apache.solr</groupId>
-      <artifactId>solr-test-framework</artifactId>
-      <scope>test</scope>
-    </dependency>
-    <dependency>
-      <groupId>org.apache.solr</groupId>
-      <artifactId>solr-core</artifactId>
-      <scope>test</scope>
-    </dependency>
-    <dependency>
-      <groupId>org.apache.lucene</groupId>
-      <artifactId>lucene-queryparser</artifactId>
-      <version>${project.version}</version>
-      <type>test-jar</type>
-      <scope>test</scope>
-    </dependency>
-    <dependency>
-      <groupId>org.apache.lucene</groupId>
-      <artifactId>lucene-backward-codecs</artifactId>
-      <version>${project.version}</version>
-      <type>test-jar</type>
-      <scope>test</scope>
-    </dependency>
-@solr-core.internal.test.dependencies@
-@solr-core.external.test.dependencies@
-  </dependencies>
-  <build>
-    <sourceDirectory/>
-    <!-- Instead of depending on solr-core module, use its output directory -->
-    <outputDirectory>../java/target/classes</outputDirectory>
-    <testSourceDirectory>${module-path}</testSourceDirectory>
-    <testResources>
-      <testResource>
-        <directory>${module-path}/../test-files</directory>
-        <excludes>
-          <exclude>**/*.java</exclude>
-        </excludes>
-      </testResource>
-      <testResource>
-        <directory>${project.build.testSourceDirectory}</directory>
-        <excludes>
-          <exclude>**/*.java</exclude>
-        </excludes>
-      </testResource>
-      <testResource>
-        <directory>${top-level}/dev-tools/maven/solr</directory>
-        <includes>
-          <include>maven.testlogging.properties</include>
-        </includes>
-      </testResource>
-    </testResources>
-    <plugins>
-      <plugin>
-        <groupId>org.apache.maven.plugins</groupId>
-        <artifactId>maven-deploy-plugin</artifactId>
-        <configuration>
-          <skip>true</skip>
-        </configuration>
-      </plugin>
-      <plugin>
-        <groupId>org.apache.maven.plugins</groupId>
-        <artifactId>maven-jar-plugin</artifactId>
-        <executions>
-          <execution>
-            <id>default-jar</id>
-            <!-- Skipping by binding the default execution ID to a non-existent phase only works in Maven 3, not 2. -->
-            <phase>none</phase>
-          </execution>
-        </executions>
-      </plugin>
-      <plugin>
-        <groupId>org.apache.maven.plugins</groupId>
-        <artifactId>maven-install-plugin</artifactId>
-        <configuration>
-          <skip>true</skip>
-        </configuration>
-      </plugin>
-      <plugin>
-        <groupId>org.apache.felix</groupId>
-        <artifactId>maven-bundle-plugin</artifactId>
-        <version>2.5.3</version>
-        <configuration>
-          <skip>true</skip>
-        </configuration>
-      </plugin>
-      <plugin>
-        <groupId>de.thetaphi</groupId>
-        <artifactId>forbiddenapis</artifactId>
-        <executions>
-          <execution>
-            <id>solr-shared-check-forbidden-apis</id>
-            <phase>none</phase> <!-- Block inherited execution -->
-          </execution>
-          <execution>
-            <id>solr-shared-test-check-forbidden-apis</id>
-            <configuration>
-              <excludes>
-                <!-- TODO: remove this - imported code -->
-                <exclude>org/apache/solr/internal/**/*.class</exclude>
-                <exclude>org/apache/hadoop/**</exclude>
-              </excludes>
-            </configuration>
-          </execution>
-        </executions>
-      </plugin>
-    </plugins>
-  </build>
-</project>
diff --git a/dev-tools/maven/solr/maven.testlogging.properties b/dev-tools/maven/solr/maven.testlogging.properties
deleted file mode 100644
index 4a4df0ea..0000000
--- a/dev-tools/maven/solr/maven.testlogging.properties
+++ /dev/null
@@ -1,2 +0,0 @@
-handlers=java.util.logging.ConsoleHandler
-.level=SEVERE
diff --git a/dev-tools/maven/solr/pom.xml.template b/dev-tools/maven/solr/pom.xml.template
deleted file mode 100644
index 827eb26..0000000
--- a/dev-tools/maven/solr/pom.xml.template
+++ /dev/null
@@ -1,186 +0,0 @@
-<!--
-  Licensed to the Apache Software Foundation (ASF) under one
-  or more contributor license agreements.  See the NOTICE file
-  distributed with this work for additional information
-  regarding copyright ownership.  The ASF licenses this file
-  to you under the Apache License, Version 2.0 (the
-  "License"); you may not use this file except in compliance
-  with the License.  You may obtain a copy of the License at
-  
-  http://www.apache.org/licenses/LICENSE-2.0
-  
-  Unless required by applicable law or agreed to in writing,
-  software distributed under the License is distributed on an
-  "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-  KIND, either express or implied.  See the License for the
-  specific language governing permissions and limitations
-  under the License.
--->
-<project xmlns="http://maven.apache.org/POM/4.0.0"
-         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
-         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
-  <modelVersion>4.0.0</modelVersion>
-  <parent>
-    <groupId>org.apache.lucene</groupId>
-    <artifactId>lucene-solr-grandparent</artifactId>
-    <version>@version@</version>
-    <relativePath>../pom.xml</relativePath>
-  </parent>
-  <groupId>org.apache.solr</groupId>
-  <artifactId>solr-parent</artifactId>
-  <packaging>pom</packaging>
-  <name>Apache Solr parent POM</name>
-  <description>Apache Solr parent POM</description>
-  <modules>
-    <module>core</module>
-    <module>solrj</module>
-    <module>test-framework</module>
-    <module>contrib</module>
-  </modules>
-  <properties>
-    <module-directory>solr</module-directory>
-  </properties>
-  <scm>
-    <connection>scm:git:${vc-anonymous-base-url}</connection>
-    <developerConnection>scm:git:${vc-dev-base-url}</developerConnection>
-    <url>${vc-browse-base-url};f=${module-directory}</url>
-  </scm>
-  <issueManagement>
-    <system>JIRA</system>
-    <url>https://issues.apache.org/jira/browse/SOLR</url>
-  </issueManagement>
-  <mailingLists>
-    <mailingList>
-      <name>Solr User List</name>
-      <subscribe>solr-user-subscribe@lucene.apache.org</subscribe>
-      <unsubscribe>solr-user-unsubscribe@lucene.apache.org</unsubscribe>
-      <archive>
-        https://mail-archives.apache.org/mod_mbox/solr-user/
-      </archive>
-    </mailingList>
-    <mailingList>
-      <name>Java Developer List</name>
-      <subscribe>dev-subscribe@lucene.apache.org</subscribe>
-      <unsubscribe>dev-unsubscribe@lucene.apache.org</unsubscribe>
-      <archive>https://mail-archives.apache.org/mod_mbox/lucene-dev/</archive>
-    </mailingList>
-    <mailingList>
-      <name>Java Commits List</name>
-      <subscribe>commits-subscribe@lucene.apache.org</subscribe>
-      <unsubscribe>commits-unsubscribe@lucene.apache.org</unsubscribe>
-      <archive>
-        https://mail-archives.apache.org/mod_mbox/lucene-java-commits/
-      </archive>
-    </mailingList>
-  </mailingLists>
-  <inceptionYear>2006</inceptionYear>
-  <repositories>
-    <repository>
-      <id>maven-restlet</id>
-      <name>Public online Restlet repository</name>
-      <url>https://maven.restlet.com</url>
-    </repository>
-    <repository>
-      <id>releases.cloudera.com</id>
-      <name>Cloudera Releases</name>
-      <url>https://repository.cloudera.com/artifactory/libs-release-local/</url>
-    </repository>
-  </repositories>
-  <build>
-    <pluginManagement>
-      <plugins>
-        <plugin>
-          <groupId>org.apache.maven.plugins</groupId>
-          <artifactId>maven-javadoc-plugin</artifactId>
-          <configuration>
-            <overview/>
-            <windowtitle>${project.name} ${project.version} API (${now.version})</windowtitle>
-            <doctitle>${project.name} ${project.version} API (${now.version})</doctitle>
-          </configuration>
-        </plugin>
-        <plugin>
-          <groupId>org.apache.maven.plugins</groupId>
-          <artifactId>maven-surefire-plugin</artifactId>
-          <configuration>
-            <systemPropertyVariables>
-              <tests.disableHdfs>${tests.disableHdfs}</tests.disableHdfs>
-            </systemPropertyVariables>
-          </configuration>
-        </plugin>
-      </plugins>
-    </pluginManagement>
-    <plugins>
-      <plugin>
-        <groupId>de.thetaphi</groupId>
-        <artifactId>forbiddenapis</artifactId>
-        <executions>
-          <execution>
-            <id>solr-shared-check-forbidden-apis</id>
-            <configuration>
-              <!-- for simplicity with servlet-api and commons-io checks, disable this: -->
-              <failOnUnresolvableSignatures>false</failOnUnresolvableSignatures>
-              <bundledSignatures>
-                <bundledSignature>jdk-unsafe</bundledSignature>
-                <bundledSignature>jdk-deprecated</bundledSignature>
-                <bundledSignature>jdk-non-portable</bundledSignature>
-                <bundledSignature>jdk-reflection</bundledSignature>
-                <bundledSignature>commons-io-unsafe-@commons-io:commons-io.version@</bundledSignature>
-              </bundledSignatures>
-              <signaturesFiles>
-                <signaturesFile>${top-level}/lucene/tools/forbiddenApis/base.txt</signaturesFile>
-                <signaturesFile>${top-level}/lucene/tools/forbiddenApis/servlet-api.txt</signaturesFile>
-                <signaturesFile>${top-level}/lucene/tools/forbiddenApis/solr.txt</signaturesFile>
-              </signaturesFiles>
-            </configuration>
-            <goals>
-              <goal>check</goal>
-            </goals>
-          </execution>
-          <execution>
-            <id>solr-shared-test-check-forbidden-apis</id>
-            <configuration>
-              <!-- for simplicity with servlet-api and commons-io checks, disable this: -->
-              <failOnUnresolvableSignatures>false</failOnUnresolvableSignatures>
-              <bundledSignatures>
-                <bundledSignature>jdk-unsafe</bundledSignature>
-                <bundledSignature>jdk-deprecated</bundledSignature>
-                <bundledSignature>jdk-non-portable</bundledSignature>
-                <bundledSignature>jdk-reflection</bundledSignature>
-                <bundledSignature>commons-io-unsafe-@commons-io:commons-io.version@</bundledSignature>
-              </bundledSignatures>
-              <signaturesFiles>
-                <signaturesFile>${top-level}/lucene/tools/forbiddenApis/base.txt</signaturesFile>
-                <signaturesFile>${top-level}/lucene/tools/forbiddenApis/servlet-api.txt</signaturesFile>
-                <signaturesFile>${top-level}/lucene/tools/forbiddenApis/solr.txt</signaturesFile>
-                <signaturesFile>${top-level}/lucene/tools/forbiddenApis/tests.txt</signaturesFile>
-              </signaturesFiles>
-            </configuration>
-            <goals>
-              <goal>testCheck</goal>
-            </goals>
-          </execution>
-        </executions>
-      </plugin>
-      <plugin>
-        <groupId>org.apache.maven.plugins</groupId>
-        <artifactId>maven-surefire-plugin</artifactId>
-        <configuration>
-          <systemPropertyVariables>
-            <java.util.logging.config.file>${top-level}/solr/testlogging.properties</java.util.logging.config.file>
-          </systemPropertyVariables>
-        </configuration>
-      </plugin>
-    </plugins>
-  </build>
-  <profiles>
-    <profile>
-      <id>windows-tests-disableHdfs</id>
-      <activation>
-        <os><family>windows</family></os>
-      </activation>
-      <properties>
-        <tests.disableHdfs>true</tests.disableHdfs>
-      </properties>
-    </profile>
-  </profiles>
-</project>
diff --git a/dev-tools/maven/solr/solrj/pom.xml.template b/dev-tools/maven/solr/solrj/pom.xml.template
deleted file mode 100644
index 8dbccd5..0000000
--- a/dev-tools/maven/solr/solrj/pom.xml.template
+++ /dev/null
@@ -1,48 +0,0 @@
-<!--
-  Licensed to the Apache Software Foundation (ASF) under one
-  or more contributor license agreements.  See the NOTICE file
-  distributed with this work for additional information
-  regarding copyright ownership.  The ASF licenses this file
-  to you under the Apache License, Version 2.0 (the
-  "License"); you may not use this file except in compliance
-  with the License.  You may obtain a copy of the License at
-  
-  http://www.apache.org/licenses/LICENSE-2.0
-  
-  Unless required by applicable law or agreed to in writing,
-  software distributed under the License is distributed on an
-  "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-  KIND, either express or implied.  See the License for the
-  specific language governing permissions and limitations
-  under the License.
--->
-<project xmlns="http://maven.apache.org/POM/4.0.0"
-         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
-         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
-  <modelVersion>4.0.0</modelVersion>
-  <parent>
-    <groupId>org.apache.solr</groupId>
-    <artifactId>solr-parent</artifactId>
-    <version>@version@</version>
-    <relativePath>../pom.xml</relativePath>
-  </parent>
-  <groupId>org.apache.solr</groupId>
-  <artifactId>solr-solrj-aggregator</artifactId>
-  <packaging>pom</packaging>
-  <name>Apache Solr Solrj aggregator POM</name>
-  <modules>
-    <module>src/java</module>
-    <module>src/test</module>
-  </modules>
-  <build>
-    <plugins>
-      <plugin>
-        <groupId>org.apache.maven.plugins</groupId>
-        <artifactId>maven-deploy-plugin</artifactId>
-        <configuration>
-          <skip>true</skip>
-        </configuration>
-      </plugin>
-    </plugins>
-  </build>
-</project>
diff --git a/dev-tools/maven/solr/solrj/src/java/pom.xml.template b/dev-tools/maven/solr/solrj/src/java/pom.xml.template
deleted file mode 100644
index bb792c4..0000000
--- a/dev-tools/maven/solr/solrj/src/java/pom.xml.template
+++ /dev/null
@@ -1,78 +0,0 @@
-<!--
-  Licensed to the Apache Software Foundation (ASF) under one
-  or more contributor license agreements.  See the NOTICE file
-  distributed with this work for additional information
-  regarding copyright ownership.  The ASF licenses this file
-  to you under the Apache License, Version 2.0 (the
-  "License"); you may not use this file except in compliance
-  with the License.  You may obtain a copy of the License at
-  
-  http://www.apache.org/licenses/LICENSE-2.0
-  
-  Unless required by applicable law or agreed to in writing,
-  software distributed under the License is distributed on an
-  "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-  KIND, either express or implied.  See the License for the
-  specific language governing permissions and limitations
-  under the License.
--->
-<project xmlns="http://maven.apache.org/POM/4.0.0"
-         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
-         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
-  <modelVersion>4.0.0</modelVersion>
-  <parent>
-    <groupId>org.apache.solr</groupId>
-    <artifactId>solr-parent</artifactId>
-    <version>@version@</version>
-    <relativePath>../../../pom.xml</relativePath>
-  </parent>
-  <groupId>org.apache.solr</groupId>
-  <artifactId>solr-solrj</artifactId>
-  <packaging>jar</packaging>
-  <name>Apache Solr Solrj</name>
-  <description>Apache Solr Solrj</description>
-  <properties>
-    <module-directory>solr/solrj</module-directory>
-    <relative-top-level>../../../../..</relative-top-level>
-    <module-path>${relative-top-level}/${module-directory}/src/java</module-path>
-  </properties>
-  <scm>
-    <connection>scm:git:${vc-anonymous-base-url}</connection>
-    <developerConnection>scm:git:${vc-dev-base-url}</developerConnection>
-    <url>${vc-browse-base-url};f=${module-directory}</url>
-  </scm>
-  <dependencies>
-@solr-solrj.internal.dependencies@
-@solr-solrj.external.dependencies@
-  </dependencies>
-  <build>
-    <sourceDirectory>${module-path}</sourceDirectory>
-    <testSourceDirectory/>
-    <plugins>
-      <plugin>
-        <groupId>org.apache.maven.plugins</groupId>
-        <artifactId>maven-surefire-plugin</artifactId>
-        <configuration>
-          <skip>true</skip> <!-- Tests are run from solr-solrj-tests module -->
-        </configuration>
-      </plugin>
-      <plugin>
-        <groupId>org.apache.maven.plugins</groupId>
-        <artifactId>maven-compiler-plugin</artifactId>
-        <configuration>
-          <skip>true</skip> <!-- This skips test compilation - tests are run from solr-solrj-tests module -->
-        </configuration>
-      </plugin>
-      <plugin>
-        <groupId>de.thetaphi</groupId>
-        <artifactId>forbiddenapis</artifactId>
-        <executions>
-          <execution>
-            <id>solr-shared-test-check-forbidden-apis</id>
-            <phase>none</phase> <!-- Block inherited execution -->
-          </execution>
-        </executions>
-      </plugin>
-    </plugins>
-  </build>
-</project>
diff --git a/dev-tools/maven/solr/solrj/src/test/pom.xml.template b/dev-tools/maven/solr/solrj/src/test/pom.xml.template
deleted file mode 100644
index 93c14b3..0000000
--- a/dev-tools/maven/solr/solrj/src/test/pom.xml.template
+++ /dev/null
@@ -1,122 +0,0 @@
-<!--
-  Licensed to the Apache Software Foundation (ASF) under one
-  or more contributor license agreements.  See the NOTICE file
-  distributed with this work for additional information
-  regarding copyright ownership.  The ASF licenses this file
-  to you under the Apache License, Version 2.0 (the
-  "License"); you may not use this file except in compliance
-  with the License.  You may obtain a copy of the License at
-  
-  http://www.apache.org/licenses/LICENSE-2.0
-  
-  Unless required by applicable law or agreed to in writing,
-  software distributed under the License is distributed on an
-  "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-  KIND, either express or implied.  See the License for the
-  specific language governing permissions and limitations
-  under the License.
--->
-<project xmlns="http://maven.apache.org/POM/4.0.0"
-         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
-         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
-  <modelVersion>4.0.0</modelVersion>
-  <parent>
-    <groupId>org.apache.solr</groupId>
-    <artifactId>solr-parent</artifactId>
-    <version>@version@</version>
-    <relativePath>../../../pom.xml</relativePath>
-  </parent>
-  <groupId>org.apache.solr</groupId>
-  <artifactId>solr-solrj-tests</artifactId>
-  <name>Apache Solr Solrj tests</name>
-  <packaging>jar</packaging>
-  <properties>
-    <module-directory>solr/solrj</module-directory>
-    <relative-top-level>../../../../..</relative-top-level>
-    <module-path>${relative-top-level}/${module-directory}/src/test</module-path>
-  </properties>
-  <dependencies>
-    <dependency>
-      <!-- lucene-test-framework dependency must be declared before lucene-core -->
-      <!-- This dependency cannot be put into solr-parent, because local        -->
-      <!-- dependencies are always ordered before inherited dependencies.       -->
-      <groupId>org.apache.lucene</groupId>
-      <artifactId>lucene-test-framework</artifactId>
-      <scope>test</scope>
-    </dependency>
-    <dependency>
-      <groupId>org.apache.solr</groupId>
-      <artifactId>solr-test-framework</artifactId>
-      <scope>test</scope>
-    </dependency>
-    <dependency>
-      <groupId>org.apache.solr</groupId>
-      <artifactId>solr-solrj</artifactId>
-      <scope>test</scope>
-    </dependency>
-@solr-solrj.internal.test.dependencies@
-@solr-solrj.external.test.dependencies@
-  </dependencies>
-  <build>
-    <sourceDirectory/>
-    <!-- Instead of depending on solr-solrj module, use its output directory -->
-    <outputDirectory>../java/target/classes</outputDirectory>
-    <testSourceDirectory>${module-path}</testSourceDirectory>
-    <testResources>
-      <testResource>
-        <directory>${module-path}/../test-files</directory>
-      </testResource>
-      <testResource>
-        <directory>${top-level}/dev-tools/maven/solr</directory>
-        <includes>
-          <include>maven.testlogging.properties</include>
-        </includes>
-      </testResource>
-    </testResources>
-    <plugins>
-      <plugin>
-        <groupId>org.apache.maven.plugins</groupId>
-        <artifactId>maven-deploy-plugin</artifactId>
-        <configuration>
-          <skip>true</skip>
-        </configuration>
-      </plugin>
-      <plugin>
-        <groupId>org.apache.maven.plugins</groupId>
-        <artifactId>maven-jar-plugin</artifactId>
-        <executions>
-          <execution>
-            <id>default-jar</id>
-            <!-- Skipping by binding the default execution ID to a non-existent phase only works in Maven 3, not 2. -->
-            <phase>none</phase>
-          </execution>
-        </executions>
-      </plugin>
-      <plugin>
-        <groupId>org.apache.maven.plugins</groupId>
-        <artifactId>maven-install-plugin</artifactId>
-        <configuration>
-          <skip>true</skip>
-        </configuration>
-      </plugin>
-      <plugin>
-        <groupId>org.apache.felix</groupId>
-        <artifactId>maven-bundle-plugin</artifactId>
-        <version>2.5.3</version>
-        <configuration>
-          <skip>true</skip>
-        </configuration>
-      </plugin>
-      <plugin>
-        <groupId>de.thetaphi</groupId>
-        <artifactId>forbiddenapis</artifactId>
-        <executions>
-          <execution>
-            <id>solr-shared-check-forbidden-apis</id>
-            <phase>none</phase> <!-- Block inherited execution -->
-          </execution>
-        </executions>
-      </plugin>
-    </plugins>
-  </build>
-</project>
diff --git a/dev-tools/maven/solr/test-framework/pom.xml.template b/dev-tools/maven/solr/test-framework/pom.xml.template
deleted file mode 100644
index 3f55647..0000000
--- a/dev-tools/maven/solr/test-framework/pom.xml.template
+++ /dev/null
@@ -1,95 +0,0 @@
-<!--
-  Licensed to the Apache Software Foundation (ASF) under one
-  or more contributor license agreements.  See the NOTICE file
-  distributed with this work for additional information
-  regarding copyright ownership.  The ASF licenses this file
-  to you under the Apache License, Version 2.0 (the
-  "License"); you may not use this file except in compliance
-  with the License.  You may obtain a copy of the License at
-  
-  http://www.apache.org/licenses/LICENSE-2.0
-  
-  Unless required by applicable law or agreed to in writing,
-  software distributed under the License is distributed on an
-  "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-  KIND, either express or implied.  See the License for the
-  specific language governing permissions and limitations
-  under the License.
--->
-<project xmlns="http://maven.apache.org/POM/4.0.0"
-         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
-         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
-  <modelVersion>4.0.0</modelVersion>
-  <parent>
-    <groupId>org.apache.solr</groupId>
-    <artifactId>solr-parent</artifactId>
-    <version>@version@</version>
-    <relativePath>../pom.xml</relativePath>
-  </parent>
-  <groupId>org.apache.solr</groupId>
-  <artifactId>solr-test-framework</artifactId>
-  <packaging>jar</packaging>
-  <name>Apache Solr Test Framework</name>
-  <description>Apache Solr Test Framework</description>
-  <properties>
-    <module-directory>solr/test-framework</module-directory>
-    <relative-top-level>../../..</relative-top-level>
-    <module-path>${relative-top-level}/${module-directory}</module-path>
-  </properties>
-  <scm>
-    <connection>scm:git:${vc-anonymous-base-url}</connection>
-    <developerConnection>scm:git:${vc-dev-base-url}</developerConnection>
-    <url>${vc-browse-base-url};f=${module-directory}</url>
-  </scm>
-  <dependencies>
-    <!-- These dependencies are compile scope because this is a test framework. -->
-    <dependency>
-      <!-- lucene-test-framework dependency must come before lucene-core -->
-      <groupId>org.apache.lucene</groupId>
-      <artifactId>lucene-test-framework</artifactId>
-    </dependency>
-@solr-test-framework.internal.dependencies@
-@solr-test-framework.external.dependencies@
-@solr-test-framework.internal.test.dependencies@
-@solr-test-framework.external.test.dependencies@
-  </dependencies>
-  <build>
-    <sourceDirectory>${module-path}/src/java</sourceDirectory>
-    <testSourceDirectory>${module-path}/src/test</testSourceDirectory>
-    <resources>
-      <resource>
-        <directory>${module-path}/src/resources</directory>
-      </resource>
-    </resources>
-    <testResources>
-      <testResource>
-        <directory>${module-path}/src/test-files</directory>
-      </testResource>
-    </testResources>
-    <plugins>
-      <plugin>
-        <groupId>org.apache.maven.plugins</groupId>
-        <artifactId>maven-surefire-plugin</artifactId>
-        <configuration>
-          <skip>true</skip>
-        </configuration>
-      </plugin>
-       <plugin>
-        <groupId>de.thetaphi</groupId>
-        <artifactId>forbiddenapis</artifactId>
-        <executions>
-          <execution>
-            <id>solr-shared-check-forbidden-apis</id>
-            <phase>none</phase> <!-- Block inherited execution -->
-          </execution>
-          <execution>
-            <id>solr-shared-test-check-forbidden-apis</id>
-            <goals>
-              <goal>check</goal> <!-- NOT testCheck -->
-            </goals>
-          </execution>
-        </executions>
-      </plugin>
-    </plugins>
-  </build>
-</project>
diff --git a/dev-tools/netbeans/nb-project.xsl b/dev-tools/netbeans/nb-project.xsl
deleted file mode 100644
index 69b1944..0000000
--- a/dev-tools/netbeans/nb-project.xsl
+++ /dev/null
@@ -1,165 +0,0 @@
-<?xml version="1.0" encoding="UTF-8"?>
-<!--
-  Licensed to the Apache Software Foundation (ASF) under one or more
-  contributor license agreements.  See the NOTICE file distributed with
-  this work for additional information regarding copyright ownership.
-  The ASF licenses this file to You under the Apache License, Version 2.0
-  (the "License"); you may not use this file except in compliance with
-  the License.  You may obtain a copy of the License at
-
-      http://www.apache.org/licenses/LICENSE-2.0
-
-  Unless required by applicable law or agreed to in writing, software
-  distributed under the License is distributed on an "AS IS" BASIS,
-  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-  See the License for the specific language governing permissions and
-  limitations under the License.
--->
-<xsl:stylesheet version="1.0" 
-                xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
-                xmlns:str="http://exslt.org/strings"
-                xmlns:common="http://exslt.org/common"
-                extension-element-prefixes="str common">
-  <xsl:param name="netbeans.fileset.sourcefolders"/>
-  <xsl:param name="netbeans.path.libs"/>
-  <xsl:param name="netbeans.source-level"/>
-  
-  <xsl:variable name="netbeans.fileset.sourcefolders.sortedfrag">
-    <xsl:for-each select="str:split($netbeans.fileset.sourcefolders,'|')">
-      <!-- hack to sort **/src/java before **/src/test before **/src/resources : contains() returns "true" which sorts before "false" if descending: -->
-      <xsl:sort select="string(contains(text(), '/src/java'))" order="descending" lang="en"/>
-      <xsl:sort select="string(contains(text(), '/src/test'))" order="descending" lang="en"/>
-      <xsl:sort select="string(contains(text(), '/src/resources'))" order="descending" lang="en"/>
-      <!-- hack to sort the list, starts-with() returns "true" which sorts before "false" if descending: -->
-      <xsl:sort select="string(starts-with(text(), 'lucene/core/'))" order="descending" lang="en"/>
-      <xsl:sort select="string(starts-with(text(), 'lucene/test-framework/'))" order="descending" lang="en"/>
-      <xsl:sort select="string(starts-with(text(), 'lucene/'))" order="descending" lang="en"/>
-      <xsl:sort select="string(starts-with(text(), 'solr/core/'))" order="descending" lang="en"/>
-      <xsl:sort select="string(starts-with(text(), 'solr/solrj/'))" order="descending" lang="en"/>
-      <xsl:sort select="string(starts-with(text(), 'solr/test-framework/'))" order="descending" lang="en"/>
-      <xsl:sort select="string(starts-with(text(), 'solr/'))" order="descending" lang="en"/>
-      <!-- all others in one group above are sorted by path name: -->
-      <xsl:sort select="text()" order="ascending" lang="en"/>
-      <xsl:copy-of select="."/>
-    </xsl:for-each>
-  </xsl:variable>
-  <xsl:variable name="netbeans.fileset.sourcefolders.sorted" select="common:node-set($netbeans.fileset.sourcefolders.sortedfrag)/*"/>
-  
-  <xsl:variable name="netbeans.full.classpath.frag">
-    <classpath mode="compile" xmlns="http://www.netbeans.org/ns/freeform-project-java/3">
-      <xsl:value-of select="$netbeans.path.libs"/>
-      <xsl:for-each select="$netbeans.fileset.sourcefolders.sorted[contains(text(), '/src/java')]">
-        <xsl:text>:</xsl:text>
-        <xsl:value-of select="."/>
-      </xsl:for-each>
-    </classpath>
-  </xsl:variable>
-
-  <!--
-      NOTE: This template matches the root element of any given input XML document!
-      The XSL input file is ignored completely.
-    --> 
-  <xsl:template match="/">
-    <project xmlns="http://www.netbeans.org/ns/project/1">
-      <type>org.netbeans.modules.ant.freeform</type>
-      <configuration>
-        <general-data xmlns="http://www.netbeans.org/ns/freeform-project/1">
-          <name>lucene</name>
-          <properties/>
-          <folders>
-            <xsl:for-each select="$netbeans.fileset.sourcefolders.sorted">
-              <source-folder>
-                <label>
-                  <xsl:value-of select="."/>
-                </label>
-                <xsl:if test="contains(text(), '/src/java') or contains(text(), '/src/test')">
-                  <type>java</type>
-                </xsl:if>
-                <location>
-                  <xsl:value-of select="."/>
-                </location>
-              </source-folder>
-            </xsl:for-each>
-          </folders>
-          <ide-actions>
-            <action name="build">
-              <target>compile</target>
-            </action>
-            <action name="clean">
-              <target>clean</target>
-            </action>
-            <action name="javadoc">
-              <target>documentation</target>
-            </action>
-            <action name="test">
-              <target>test</target>
-            </action>
-            <action name="rebuild">
-              <target>clean</target>
-              <target>compile</target>
-            </action>
-          </ide-actions>
-          <view>
-            <items>
-              <xsl:for-each select="$netbeans.fileset.sourcefolders.sorted">
-                <source-folder>
-                  <xsl:attribute name="style">
-                    <xsl:choose>
-                      <xsl:when test="contains(text(), '/src/java') or contains(text(), '/src/test')">packages</xsl:when>
-                      <xsl:otherwise>tree</xsl:otherwise>
-                    </xsl:choose>
-                  </xsl:attribute>
-                  <label>
-                    <xsl:value-of select="."/>
-                  </label>
-                  <location>
-                    <xsl:value-of select="."/>
-                  </location>
-                </source-folder>
-              </xsl:for-each>
-              <source-file>
-                <label>Project Build Script</label>
-                <location>build.xml</location>
-              </source-file>
-            </items>
-            <context-menu>
-              <ide-action name="build"/>
-              <ide-action name="rebuild"/>
-              <ide-action name="clean"/>
-              <ide-action name="javadoc"/>
-              <ide-action name="test"/>
-            </context-menu>
-          </view>
-          <subprojects/>
-        </general-data>
-        <java-data xmlns="http://www.netbeans.org/ns/freeform-project-java/3">
-          <compilation-unit>
-            <xsl:for-each select="$netbeans.fileset.sourcefolders.sorted[contains(text(), '/src/java')]">
-              <package-root>
-                <xsl:value-of select="."/>
-              </package-root>
-            </xsl:for-each>
-            <xsl:copy-of select="$netbeans.full.classpath.frag"/>
-            <built-to>nb-build/classes</built-to>
-            <source-level>
-              <xsl:value-of select="$netbeans.source-level"/>
-            </source-level>
-          </compilation-unit>
-          <compilation-unit>
-            <xsl:for-each select="$netbeans.fileset.sourcefolders.sorted[contains(text(), '/src/test')]">
-              <package-root>
-                <xsl:value-of select="."/>
-              </package-root>
-            </xsl:for-each>
-            <unit-tests/>
-            <xsl:copy-of select="$netbeans.full.classpath.frag"/>
-            <built-to>nb-build/test-classes</built-to>
-            <source-level>
-              <xsl:value-of select="$netbeans.source-level"/>
-            </source-level>
-          </compilation-unit>
-        </java-data>
-      </configuration>
-    </project>
-  </xsl:template>
-</xsl:stylesheet>
diff --git a/dev-tools/netbeans/nbproject/project.properties b/dev-tools/netbeans/nbproject/project.properties
deleted file mode 100644
index aef5cba..0000000
--- a/dev-tools/netbeans/nbproject/project.properties
+++ /dev/null
@@ -1,9 +0,0 @@
-auxiliary.org-netbeans-modules-editor-indent.CodeStyle.project.expand-tabs=true
-auxiliary.org-netbeans-modules-editor-indent.CodeStyle.project.indent-shift-width=2
-auxiliary.org-netbeans-modules-editor-indent.CodeStyle.project.spaces-per-tab=2
-auxiliary.org-netbeans-modules-editor-indent.CodeStyle.project.tab-size=2
-auxiliary.org-netbeans-modules-editor-indent.CodeStyle.project.text-limit-width=120
-auxiliary.org-netbeans-modules-editor-indent.CodeStyle.project.text-line-wrap=none
-auxiliary.org-netbeans-modules-editor-indent.CodeStyle.usedProfile=project
-auxiliary.org-netbeans-modules-editor-indent.text.x-java.CodeStyle.project.continuationIndentSize=4
-auxiliary.org-netbeans-modules-editor-indent.text.x-java.CodeStyle.project.spaceAfterTypeCast=false
diff --git a/dev-tools/scripts/SOLR-2452.patch.hack.pl b/dev-tools/scripts/SOLR-2452.patch.hack.pl
deleted file mode 100755
index 7da5c54..0000000
--- a/dev-tools/scripts/SOLR-2452.patch.hack.pl
+++ /dev/null
@@ -1,215 +0,0 @@
-#!/usr/bin/perl
-#
-# This script can be used to fix up paths that were moved as a result
-# of the structural changes committed as part of SOLR-2452.
-#
-# Input is on STDIN, output is to STDOUT
-#
-# Example use:
-#
-#    perl SOLR-2452.patch.hack.pl <my.pre-SOLR-2452.patch >my.post-SOLR-2452.patch
-#
-#
-# Licensed to the Apache Software Foundation (ASF) under one or more
-# contributor license agreements.  See the NOTICE file distributed with
-# this work for additional information regarding copyright ownership.
-# The ASF licenses this file to You under the Apache License, Version 2.0
-# (the "License"); you may not use this file except in compliance with
-# the License.  You may obtain a copy of the License at
-#
-#     http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-#
-
-use strict;
-use warnings;
-
-my @moves = (
-    'solr/contrib/analysis-extras/src/test-files/solr-analysis-extras'
- => 'solr/contrib/analysis-extras/src/test-files/analysis-extras/solr',
-
-    'solr/contrib/analysis-extras/src/test-files'
- => 'solr/contrib/analysis-extras/src/test-files/analysis-extras',
-
-    'solr/contrib/clustering/src/test/java'
- => 'solr/contrib/clustering/src/test',
-
-    'solr/contrib/clustering/src/test/resources/solr-clustering'
- => 'solr/contrib/clustering/src/test-files/clustering/solr',
-
-    'solr/contrib/clustering/src/test/resources'
- => 'solr/contrib/clustering/src/test-files/clustering',
-
-    'solr/contrib/clustering/src/main/java'
- => 'solr/contrib/clustering/src/java',
-
-    'solr/contrib/dataimporthandler/src/test/java'
- => 'solr/contrib/dataimporthandler/src/test',
-
-    'solr/contrib/dataimporthandler/src/test/resources/solr-dih'
- => 'solr/contrib/dataimporthandler/src/test-files/dih/solr',
-
-    'solr/contrib/dataimporthandler/src/test/resources'
- => 'solr/contrib/dataimporthandler/src/test-files/dih',
-
-    'solr/contrib/dataimporthandler/src/main/java'
- => 'solr/contrib/dataimporthandler/src/java',
-
-    'solr/contrib/dataimporthandler/src/main/webapp'
- => 'solr/contrib/dataimporthandler/src/webapp',
-
-    'solr/contrib/dataimporthandler/src/extras/test/java'
- => 'solr/contrib/dataimporthandler-extras/src/test',
-
-    'solr/contrib/dataimporthandler/src/extras/test/resources/solr-dihextras'
- => 'solr/contrib/dataimporthandler-extras/src/test-files/dihextras/solr',
-
-    'solr/contrib/dataimporthandler/src/extras/test/resources'
- => 'solr/contrib/dataimporthandler-extras/src/test-files/dihextras',
-
-    'solr/contrib/dataimporthandler/src/extras/main/java'
- => 'solr/contrib/dataimporthandler-extras/src/java',
-
-    'solr/contrib/extraction/src/test/java'
- => 'solr/contrib/extraction/src/test',
-
-    'solr/contrib/extraction/src/test/resources/solr-extraction'
- => 'solr/contrib/extraction/src/test-files/extraction/solr',
-
-    'solr/contrib/extraction/src/test/resources'
- => 'solr/contrib/extraction/src/test-files/extraction',
-
-    'solr/contrib/extraction/src/main/java'
- => 'solr/contrib/extraction/src/java',
-
-    'solr/src/test-files/books.csv'
- => 'solr/solrj/src/test-files/solrj/books.csv',
-
-    'solr/src/test-files/sampleDateFacetResponse.xml'
- => 'solr/solrj/src/test-files/solrj/sampleDateFacetResponse.xml',
-
-    'solr/src/test-files/solr/shared'
- => 'solr/solrj/src/test-files/solrj/solr/shared',
-
-    'solr/src/solrj'
- => 'solr/solrj/src/java',
-
-    'solr/src/common'
- => 'solr/solrj/src/java',
-
-    'solr/src/test/org/apache/solr/common'
- => 'solr/solrj/src/test/org/apache/solr/common',
-
-    'solr/src/test/org/apache/solr/client/solrj/SolrJettyTestBase.java'
- => 'solr/test-framework/src/java/org/apache/solr/SolrJettyTestBase.java',
-
-    'solr/src/test/org/apache/solr/client/solrj'
- => 'solr/solrj/src/test/org/apache/solr/client/solrj',
-
-    'solr/src/test-framework'
- => 'solr/test-framework/src/java',
-
-    'solr/src/test/org/apache/solr/util/ExternalPaths.java'
- => 'solr/test-framework/src/java/org/apache/solr/util/ExternalPaths.java',
-
-    'solr/src/java'
- => 'solr/core/src/java',
-
-    'solr/src/test'
- => 'solr/core/src/test',
-
-    'solr/src/test-files'
- => 'solr/core/src/test-files',
-
-    'solr/src/webapp/src'
- => 'solr/core/src/java',
-
-    'solr/src/webapp/web'
- => 'solr/webapp/web',
-
-    'solr/src/scripts'
- => 'solr/scripts',
-
-    'solr/src/dev-tools'
- => 'solr/dev-tools',
-
-    'solr/src/site'
- => 'solr/site-src',
-
-    'dev-tools/maven/solr/src/pom.xml.template'
- => 'dev-tools/maven/solr/core/pom.xml.template',
-
-    'dev-tools/maven/solr/src/test-framework/pom.xml.template'
- => 'dev-tools/maven/solr/test-framework/pom.xml.template',
-
-    'dev-tools/maven/solr/src/solrj/pom.xml.template'
- => 'dev-tools/maven/solr/solrj/pom.xml.template',
-
-    'dev-tools/maven/solr/src/webapp/pom.xml.template'
- => 'dev-tools/maven/solr/webapp/pom.xml.template',
-);
-
-my @copies = (
-    'solr/core/src/test-files/README'
- => 'solr/solrj/src/test-files/solrj/README',
-
-    'solr/core/src/test-files/solr/crazy-path-to-schema.xml'
- => 'solr/solrj/src/test-files/solrj/solr/crazy-path-to-schema.xml',
-
-    'solr/core/src/test-files/solr/conf/schema.xml'
- => 'solr/solrj/src/test-files/solrj/solr/conf/schema.xml',
-
-    'solr/core/src/test-files/solr/conf/schema-replication1.xml'
- => 'solr/solrj/src/test-files/solrj/solr/conf/schema-replication1.xml',
-
-    'solr/core/src/test-files/solr/conf/solrconfig-slave1.xml'
- => 'solr/solrj/src/test-files/solrj/solr/conf/solrconfig-slave1.xml',
-);
-
-my $diff;
-
-while (<>) {
-  if (/^Index/) {
-    my $next_diff = $_;
-    &fixup_paths if ($diff);
-    $diff = $next_diff;
-  } else {
-    $diff .= $_;
-  }
-}
-
-&fixup_paths; # Handle the final diff
-
-sub fixup_paths {
-  for (my $move_pos = 0 ; $move_pos < $#moves ; $move_pos += 2) {
-    my $source = $moves[$move_pos];
-    my $target = $moves[$move_pos + 1];
-    if ($diff =~ /^Index: \Q$source\E/) {
-      $diff =~ s/^Index: \Q$source\E/Index: $target/;
-      $diff =~ s/\n--- \Q$source\E/\n--- $target/;
-      $diff =~ s/\n\+\+\+ \Q$source\E/\n+++ $target/;
-      $diff =~ s/\nProperty changes on: \Q$source\E/\nProperty changes on: $target/;
-      last;
-    }
-  }
-  print $diff;
-
-  for (my $copy_pos = 0 ; $copy_pos < $#copies ; $copy_pos += 2) {
-    my $source = $copies[$copy_pos];
-    my $target = $copies[$copy_pos + 1];
-    if ($diff =~ /^Index: \Q$source\E/) {
-      my $new_diff = $diff;
-      $new_diff =~ s/^Index: \Q$source\E/Index: $target/;
-      $new_diff =~ s/\n--- \Q$source\E/\n--- $target/;
-      $new_diff =~ s/\n\+\+\+ \Q$source\E/\n+++ $target/;
-      $new_diff =~ s/\nProperty changes on: \Q$source\E/\nProperty changes on: $target/;
-      print $new_diff;
-      last;
-    }
-  }
-}
diff --git a/dev-tools/scripts/buildAndPushRelease.py b/dev-tools/scripts/buildAndPushRelease.py
index 6cf5db6..58dbe8b 100755
--- a/dev-tools/scripts/buildAndPushRelease.py
+++ b/dev-tools/scripts/buildAndPushRelease.py
@@ -108,8 +108,8 @@
   print('  Check DOAP files')
   checkDOAPfiles(version)
 
-  print('  ant -Dtests.badapples=false clean test validate documentation-lint')
-  run('ant -Dtests.badapples=false clean test validate documentation-lint')
+  print('  ant -Dtests.badapples=false clean validate documentation-lint test')
+  run('ant -Dtests.badapples=false clean validate documentation-lint test')
 
   open('rev.txt', mode='wb').write(rev.encode('UTF-8'))
   
diff --git a/dev-tools/scripts/jenkins.build.ref.guide.sh b/dev-tools/scripts/jenkins.build.ref.guide.sh
deleted file mode 100755
index 34fb778..0000000
--- a/dev-tools/scripts/jenkins.build.ref.guide.sh
+++ /dev/null
@@ -1,73 +0,0 @@
-#!/usr/bin/env bash
-
-# This shell script will download the software required to build the ref
-# guide using RVM (Ruby Version Manager), and then run the following
-# under solr/solr-ref-guide: "ant clean build-site build-pdf".
-#
-# The following will be downloaded and installed into $HOME/.rvm/:
-# RVM, Ruby, and Ruby gems jekyll, asciidoctor, jekyll-asciidoc,
-# and pygments.rb.
-#
-# The script expects to be run in the top-level project directory.
-#
-# RVM will attempt to verify the signature on downloaded RVM software if
-# you have gpg or gpg2 installed.  If you do, as a one-time operation you
-# must import two keys (substitute gpg2 below if you have it installed):
-#
-#    gpg --keyserver hkp://keys.gnupg.net --recv-keys \
-#        409B6B1796C275462A1703113804BB82D39DC0E3     \
-#        7D2BAF1CF37B13E2069D6956105BD0E739499BDB
-#
-# Licensed to the Apache Software Foundation (ASF) under one or more
-# contributor license agreements.  See the NOTICE file distributed with
-# this work for additional information regarding copyright ownership.
-# The ASF licenses this file to You under the Apache License, Version 2.0
-# (the "License"); you may not use this file except in compliance with
-# the License.  You may obtain a copy of the License at
-#
-#     http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-set -x                                   # Echo commands to the console
-set -e                                   # Fail the script if any command fails
-
-RVM_PATH=$HOME/.rvm
-RUBY_VERSION=ruby-2.5.1
-GEMSET=solr-refguide-gemset
-
-# Install the "stable" RVM release to ~/.rvm/, and don't mess with .bash_profile etc.
-\curl -sSL https://get.rvm.io | bash -s -- --ignore-dotfiles stable
-
-set +x                                   # Temporarily disable command echoing to reduce clutter
-
-function echoRun() {
-    local cmd="$1"
-    echo "Running '$cmd'"
-    $cmd
-}
-
-echoRun "source $RVM_PATH/scripts/rvm"   # Load RVM into a shell session *as a Bash function*
-echoRun "rvm cleanup all"                # Remove old stuff
-echoRun "rvm autolibs disable"           # Enable single-user mode
-echoRun "rvm install $RUBY_VERSION"      # Install Ruby
-echoRun "rvm gemset create $GEMSET"      # Create this project's gemset
-echoRun "rvm $RUBY_VERSION@$GEMSET"      # Activate this project's gemset
-
-# Install gems in the gemset.  Param --force disables dependency conflict detection.
-echoRun "gem install --force --version 3.5.0 jekyll"
-echoRun "gem uninstall --all --ignore-dependencies asciidoctor"  # Get rid of all versions
-echoRun "gem install --force --version 2.0.10 asciidoctor"
-echoRun "gem install --force --version 3.0.0 jekyll-asciidoc"
-echoRun "gem install --force --version 4.0.1 slim"
-echoRun "gem install --force --version 2.0.10 tilt"
-echoRun "gem install --force --version 1.1.5 concurrent-ruby"
-
-cd solr/solr-ref-guide
-
-set -x                                   # Re-enable command echoing
-ant clean build-site
diff --git a/dev-tools/scripts/releaseWizard.py b/dev-tools/scripts/releaseWizard.py
index 9be2897..9c70237 100755
--- a/dev-tools/scripts/releaseWizard.py
+++ b/dev-tools/scripts/releaseWizard.py
@@ -97,6 +97,7 @@
         'release_version_major': state.release_version_major,
         'release_version_minor': state.release_version_minor,
         'release_version_bugfix': state.release_version_bugfix,
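+        # Consumed by templates that link to the version-specific Solr Ref Guide (see releaseWizard.yaml).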
+        'release_version_refguide': state.get_refguide_release(),
         'state': state,
         'gpg_key' : state.get_gpg_key(),
         'epoch': unix_time_millis(datetime.utcnow()),
@@ -582,6 +583,9 @@
         if self.release_type == 'bugfix':
             return "%s.%s.%s" % (self.release_version_major, self.release_version_minor, self.release_version_bugfix + 1)
 
+    def get_refguide_release(self):
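+        # Ref Guide URLs use an underscore-separated major_minor path, e.g. "8_5" for an 8.5.x release.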
+        return "%s_%s" % (self.release_version_major, self.release_version_minor)
+
     def get_java_home(self):
         return self.get_java_home_for_version(self.release_version)
 
diff --git a/dev-tools/scripts/releaseWizard.yaml b/dev-tools/scripts/releaseWizard.yaml
index 41a3a74..09be477 100644
--- a/dev-tools/scripts/releaseWizard.yaml
+++ b/dev-tools/scripts/releaseWizard.yaml
@@ -125,11 +125,15 @@
      * Feature 1 pasted from WIKI release notes
      * Feature 2 ...
 
-    Please read CHANGES.txt for a full list of {% if is_feature_release %}new features and {% endif %}changes:
+    Please refer to the Upgrade Notes in the Solr Ref Guide for information on upgrading from previous Solr versions:
+
+      <https://lucene.apache.org/solr/guide/{{ release_version_refguide }}/solr-upgrade-notes.html>
+
+    Please read CHANGES.txt for a full list of {% if is_feature_release %}new features, changes and {% endif %}bugfixes:
 
       <https://lucene.apache.org/solr/{{ release_version_underscore }}/changes/Changes.html>
 
-    Solr {{ release_version }} also includes {% if is_feature_release %}features, optimizations {% endif %} and bugfixes in the corresponding Apache Lucene release:
+    Solr {{ release_version }} also includes {% if is_feature_release %}features, optimizations and {% endif %}bugfixes in the corresponding Apache Lucene release:
 
       <https://lucene.apache.org/core/{{ release_version_underscore }}/changes/Changes.html>
   announce_lucene_mail: |
@@ -327,6 +331,16 @@
     - http://www.apache.org/dev/openpgp.html#apache-wot
     - https://id.apache.org
     - https://dist.apache.org/repos/dist/release/lucene/KEYS
+  - !Todo
+    id: jira_permissions
+    title: Obtain the necessary permissions for Apache Jira
+    description: |-
+      If you are not a PMC member and this is your first time as RM, please ask to be granted extra permissions in Jira in order to complete all steps of the release.
+
+      If you are a PMC member, you will already have the necessary permissions.
+    links:
+      - https://issues.apache.org/jira/projects/LUCENE
+      - https://issues.apache.org/jira/projects/SOLR
 - !TodoGroup
   id: preparation
   title: Prepare for the release
@@ -1278,6 +1292,43 @@
         cmd: git push origin
         logfile: push.log
         stdout: true
+        comment: Push the master branch
+      - !Command
+        cmd: "git checkout {{ stable_branch }} && git pull"
+        stdout: true
+        comment: Checkout the stable branch
+      - !Command
+        cmd: "git cherry-pick master"
+        logfile: commit.log
+        stdout: true
+        comment: Cherry-pick the DOAP changes from master onto the stable branch.
+      - !Command
+        cmd: git show HEAD
+        stdout: true
+        comment: Ensure the only change is adding the new version.
+      - !Command
+        cmd: git push origin
+        logfile: push.log
+        stdout: true
+        comment: Push the stable branch
+      - !Command
+        cmd: "git checkout {{ release_branch }} && git pull"
+        stdout: true
+        comment: Checkout the release branch
+      - !Command
+        cmd: "git cherry-pick {{ stable_branch }}"
+        logfile: commit.log
+        stdout: true
+        comment: Cherry-pick the DOAP changes from the stable branch onto the release branch.
+      - !Command
+        cmd: git show HEAD
+        stdout: true
+        comment: Ensure the only change is adding the new version.
+      - !Command
+        cmd: git push origin
+        logfile: push.log
+        stdout: true
+        comment: Push the release branch
 - !TodoGroup
   id: announce
   title: Announce the release
@@ -1339,8 +1390,8 @@
     - https://en.wikipedia.org/wiki/Apache_Solr
 - !TodoGroup
   id: post_release
-  title: Tasks to do after release
-  description: There are many more tasks to do now that the new version is out there, so hang in there for a few more hours :)
+  title: Tasks to do after release.
+  description: There are many more tasks to do now that the new version is out there, so hang in there for a few more hours.
   todos:
   - !Todo
     id: add_version_bugfix
@@ -1589,7 +1640,7 @@
     id: jira_close_resolved
     title: Close all issues resolved in the release
     description: |-
-      Go to JIRA search in both Solr and Lucene and find all issues that were fixed in the release 
+      Go to JIRA search in both Solr and Lucene and find all issues that were fixed in the release
       you just made, whose Status is Resolved.
 
       . Go to https://issues.apache.org/jira/issues/?jql=project+in+(LUCENE,SOLR)+AND+status=Resolved+AND+fixVersion={{ release_version }}
@@ -1631,7 +1682,7 @@
     id: jira_clear_security
     title: Clear Security Level of Public Solr JIRA Issues
     description: |-
-      ASF JIRA has a deficiency in which issues that have a security level of "Public" are nonetheless not searchable. 
+      ASF JIRA has a deficiency in which issues that have a security level of "Public" are nonetheless not searchable.
       As a maintenance task, we'll clear the security flag for all public Solr JIRAs, even if it is not a task directly
       related to the release:
 
diff --git a/dev-tools/scripts/smokeTestRelease.py b/dev-tools/scripts/smokeTestRelease.py
index 263287b..e2d336d 100755
--- a/dev-tools/scripts/smokeTestRelease.py
+++ b/dev-tools/scripts/smokeTestRelease.py
@@ -225,8 +225,7 @@
     for file in files:
       if file.lower().endswith('.jar'):
         if project == 'solr':
-          if ((normRoot.endswith('/contrib/dataimporthandler-extras/lib') and (file.startswith('javax.mail-') or file.startswith('activation-')))
-              or (normRoot.endswith('/test-framework/lib') and file.startswith('jersey-'))
+          if ((normRoot.endswith('/test-framework/lib') and file.startswith('jersey-'))
               or (normRoot.endswith('/contrib/extraction/lib') and file.startswith('xml-apis-'))):
             print('      **WARNING**: skipping check of %s/%s: it has javax.* classes' % (root, file))
             continue
@@ -796,7 +795,12 @@
     startupEvent.set()
   finally:
     f.close()
-    
+
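+# True if something is already listening on localhost:port; socket.connect_ex() returns 0 on a successful connect.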
+def is_port_in_use(port):
+    import socket
+    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
+        return s.connect_ex(('localhost', port)) == 0
+
 def testSolrExample(unpackPath, javaPath, isSrc):
   # test solr using some examples it comes with
   logFile = '%s/solr-example.log' % unpackPath
@@ -1317,7 +1321,7 @@
   command = 'ant test -Dtestcase=TestBackwardsCompatibility -Dtests.verbose=true'
   p = subprocess.Popen(command, shell=True, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
   stdout, stderr = p.communicate()
-  if p.returncode is not 0:
+  if p.returncode != 0:
     # Not good: the test failed!
     raise RuntimeError('%s failed:\n%s' % (command, stdout))
   stdout = stdout.decode('utf-8',errors='replace').replace('\r\n','\n')
@@ -1456,6 +1460,9 @@
     download('KEYS', keysFileURL, tmpDir, force_clean=FORCE_CLEAN)
     keysFile = '%s/KEYS' % (tmpDir)
 
+  if is_port_in_use(8983):
+    raise RuntimeError('Port 8983 is already in use. The smoketester needs it to test Solr')
+
   print()
   print('Test Lucene...')
   checkSigs('lucene', lucenePath, version, tmpDir, isSigned, keysFile)
diff --git a/gradle/ant-compat/forbidden-api-rules-in-sync.gradle b/gradle/ant-compat/forbidden-api-rules-in-sync.gradle
deleted file mode 100644
index c36847f..0000000
--- a/gradle/ant-compat/forbidden-api-rules-in-sync.gradle
+++ /dev/null
@@ -1,54 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-
-// Just make sure the forbidden API rules are in sync between gradle and ant versions until
-// we get rid of ant build.
-
-def linesOf(FileTree ftree) {
-  return ftree.collectMany { path ->
-    path.readLines("UTF-8")
-      .collect { line -> line.trim() }
-      .findAll { line -> !line.startsWith("#") }
-      .unique()
-      .collect { line -> [path: path, line: line] }
-  }.groupBy { e -> e.line }
-}
-
-configure(rootProject) {
-  task verifyForbiddenApiRulesInSync() {
-    doFirst {
-      // Read all rules line by line from ant, gradle, remove comments, uniq.
-      // Rule sets should be identical.
-      def gradleRules = linesOf(fileTree("gradle/validation/forbidden-apis", { include "**/*.txt" }))
-      def antRules = linesOf(project(":lucene").fileTree("tools/forbiddenApis", { include "**/*.txt" }))
-
-      def antOnlyLines = antRules.keySet() - gradleRules.keySet()
-      def gradleOnlyLines = gradleRules.keySet() - antRules.keySet()
-
-      if (!gradleOnlyLines.isEmpty() || !antOnlyLines.isEmpty()) {
-        project.logger.log(LogLevel.ERROR, "The following rules don't have counterparts:\n" +
-          (gradleRules.findAll { gradleOnlyLines.contains(it.key) } + antRules.findAll { antOnlyLines.contains(it.key)})
-            .collectMany { it.value }
-            .join("\n"))
-        throw new GradleException("Forbidden APIs rules out of sync.")
-      }
-    }
-  }
-
-  check.dependsOn verifyForbiddenApiRulesInSync
-}
diff --git a/gradle/ant-compat/jar-checks.gradle b/gradle/ant-compat/jar-checks.gradle
deleted file mode 100644
index a93a002..0000000
--- a/gradle/ant-compat/jar-checks.gradle
+++ /dev/null
@@ -1,26 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-
-// This file is not included but is kept in ant-compat so that cleanup can be done later
-
-// Remove special handling of dependency checksum validation/ collection for Solr where
-// transitive Lucene dependencies are sucked in to licenses/ folder. We can just copy
-// Lucene licenses as a whole (they're joint projects after all).
-//
-// the hack is in 'jar-checks.gradle' under:
-// def isSolr = project.path.startsWith(":solr")
diff --git a/gradle/ant-compat/resolve.gradle b/gradle/ant-compat/resolve.gradle
deleted file mode 100644
index ee18aa8..0000000
--- a/gradle/ant-compat/resolve.gradle
+++ /dev/null
@@ -1,227 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-
-// For Lucene, a 'resolve' task that copies any (non-project) dependencies
-// under lib/ folder.
-configure(allprojects.findAll {project -> project.path.startsWith(":lucene") }) {
-  plugins.withType(JavaPlugin) {
-    configurations {
-      runtimeLibs {
-        extendsFrom runtimeElements
-        extendsFrom testRuntimeClasspath
-      }
-    }
-
-    task resolve(type: Sync) {
-      from({
-        return configurations.runtimeLibs.copyRecursive { dep ->
-          !(dep instanceof org.gradle.api.artifacts.ProjectDependency)
-        }
-      })
-
-      into 'lib'
-    }
-  }
-}
-
-// For Solr, a 'resolve' task is much more complex. There are three folders:
-// lib/
-// test-lib/
-// lucene-libs/
-//
-// There doesn't seem to be one ideal set of rules on how these should be created, but
-// I tried to imitate the current (master) logic present in ivy and ant files in this way:
-//
-// The "solr platform" set of dependencies is a union of all deps for (core, solrj, server).
-//
-// Then:
-// lib - these are module's "own" dependencies, excluding Lucene's that are not present in the
-//       solr platform.
-// lucene-libs - these are lucene modules declared as module's dependencies and not
-//       present in solr platform.
-// test-lib/ - libs not present in solr platform and not included in solr:test-framework.
-//
-// None of these are really needed with gradle... they should be collected just in the distribution
-// package, not at each project's level.
-//
-// Unfortunately this "resolution" process is also related to how the final Solr packaging is assembled.
-// I don't know how to untie these two cleanly.
-//
-
-configure(allprojects.findAll {project -> project.path.startsWith(":solr:contrib") }) {
-  plugins.withType(JavaPlugin) {
-    ext {
-      packagingDir = file("${buildDir}/packaging")
-      deps = file("${packagingDir}/${project.name}")
-    }
-
-    configurations {
-      solrPlatformLibs
-      solrTestPlatformLibs
-      runtimeLibs {
-        extendsFrom runtimeElements
-      }
-      packaging
-    }
-
-    dependencies {
-      solrPlatformLibs project(":solr:core")
-      solrPlatformLibs project(":solr:solrj")
-      solrPlatformLibs project(":solr:server")
-
-      solrTestPlatformLibs project(":solr:test-framework")
-    }
-
-    // An aggregate that configures lib, lucene-libs and test-lib in a temporary location.
-    task assemblePackaging(type: Sync) {
-      from "README.txt"
-
-      from ({
-        def externalLibs = configurations.runtimeLibs.copyRecursive { dep ->
-          if (dep instanceof org.gradle.api.artifacts.ProjectDependency) {
-            return !dep.dependencyProject.path.startsWith(":solr")
-          } else {
-            return true
-          }
-        }
-        return externalLibs - configurations.solrPlatformLibs
-      }, {
-        exclude "lucene-*"
-        into "lib"
-      })
-
-      from ({
-        def projectLibs = configurations.runtimeLibs.copyRecursive { dep ->
-          (dep instanceof org.gradle.api.artifacts.ProjectDependency)
-        }
-        return projectLibs - configurations.solrPlatformLibs
-      }, {
-        include "lucene-*"
-        into "lucene-libs"
-      })
-
-      into deps
-    }
-
-    task syncLib(type: Sync) {
-      dependsOn assemblePackaging
-
-      from(file("${deps}/lib"), {
-        include "**"
-      })
-      into file("${projectDir}/lib")
-    }
-
-    task syncTestLib(type: Sync) {
-      // From test runtime classpath exclude:
-      // 1) project dependencies (and their dependencies)
-      // 2) runtime dependencies
-      // What remains is this module's "own" test dependency.
-      from({
-        def testRuntimeLibs = configurations.testRuntimeClasspath.copyRecursive { dep ->
-          !(dep instanceof org.gradle.api.artifacts.ProjectDependency)
-        }
-
-        return testRuntimeLibs - configurations.runtimeLibs - configurations.solrTestPlatformLibs
-      })
-
-      into file("${projectDir}/test-lib")
-    }
-
-    task resolve() {
-      dependsOn syncLib, syncTestLib
-    }
-
-    // Contrib packaging currently depends on internal resolve.
-    artifacts {
-      packaging packagingDir, {
-        builtBy assemblePackaging
-      }
-    }
-  }
-}
-
-configure(project(":solr:example")) {
-  evaluationDependsOn(":solr:example") // explicitly wait for other configs to be applied
-
-  task resolve(type: Copy) {
-    from(configurations.postJar, {
-      into "exampledocs/"
-    })
-
-    from(configurations.dih, {
-      into "example-DIH/solr/db/lib"
-    })
-
-    into projectDir
-  }
-}
-
-configure(project(":solr:server")) {
-  evaluationDependsOn(":solr:server")
-
-  task resolve(type: Copy) {
-    dependsOn assemblePackaging
-
-    from({ packagingDir }, {
-      include "**/*.jar"
-      include "solr-webapp/webapp/**"
-      includeEmptyDirs false
-    })
-
-    into projectDir
-  }
-}
-
-configure(project(":solr:core")) {
-  evaluationDependsOn(":solr:core")
-
-  configurations {
-    runtimeLibs {
-      extendsFrom runtimeElements
-    }
-  }
-
-  task resolve(type: Sync) {
-    from({
-      def ownDeps = configurations.runtimeLibs.copyRecursive { dep ->
-        if (dep instanceof org.gradle.api.artifacts.ProjectDependency) {
-          return !dep.dependencyProject.path.startsWith(":solr")
-        } else {
-          return true
-        }
-      }
-      return ownDeps
-    }, {
-      exclude "lucene-*"
-    })
-
-    into "lib"
-  }
-}
-
-configure(project(":solr:solrj")) {
-  evaluationDependsOn(":solr:solrj")
-
-  task resolve(type: Sync) {
-    from({ configurations.runtimeClasspath }, {
-    })
-
-    into "lib"
-  }
-}
\ No newline at end of file
diff --git a/gradle/ant-compat/solr-forbidden-apis.gradle b/gradle/ant-compat/solr-forbidden-apis.gradle
deleted file mode 100644
index e55c745..0000000
--- a/gradle/ant-compat/solr-forbidden-apis.gradle
+++ /dev/null
@@ -1,26 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-
-// Why does solr exclude these from forbidden API checks?
-
-configure(project(":solr:core")) {
-  configure([forbiddenApisMain, forbiddenApisTest]) {
-    exclude "org/apache/solr/internal/**"
-    exclude "org/apache/hadoop/**"
-  }
-}
\ No newline at end of file
diff --git a/gradle/ant-compat/test-classes-cross-deps.gradle b/gradle/ant-compat/test-classes-cross-deps.gradle
index 1c32dba..d0985eb 100644
--- a/gradle/ant-compat/test-classes-cross-deps.gradle
+++ b/gradle/ant-compat/test-classes-cross-deps.gradle
@@ -20,8 +20,7 @@
 configure([project(":lucene:spatial3d"),
            project(":lucene:analysis:common"),
            project(":lucene:backward-codecs"),
-           project(":lucene:queryparser"),
-           project(":solr:contrib:dataimporthandler")]) {
+           project(":lucene:queryparser")]) {
   plugins.withType(JavaPlugin) {
     configurations {
       testClassesExported
@@ -56,15 +55,6 @@
   plugins.withType(JavaPlugin) {
     dependencies {
       testImplementation project(path: ':lucene:analysis:common', configuration: 'testClassesExported')
-      testImplementation project(path: ':solr:contrib:dataimporthandler', configuration: 'testClassesExported')
-    }
-  }
-}
-
-configure(project(":solr:contrib:dataimporthandler-extras")) {
-  plugins.withType(JavaPlugin) {
-    dependencies {
-      testImplementation project(path: ':solr:contrib:dataimporthandler', configuration: 'testClassesExported')
     }
   }
 }
diff --git a/gradle/defaults-java.gradle b/gradle/defaults-java.gradle
index 5ff2301..3614ee8 100644
--- a/gradle/defaults-java.gradle
+++ b/gradle/defaults-java.gradle
@@ -19,12 +19,12 @@
 
 allprojects {
   plugins.withType(JavaPlugin) {
-    sourceCompatibility = "11"
-    targetCompatibility = "11"
+    sourceCompatibility = rootProject.minJavaVersion
+    targetCompatibility = rootProject.minJavaVersion
 
     // Use 'release' flag instead of 'source' and 'target'
     tasks.withType(JavaCompile) {
-      options.compilerArgs += ["--release", "11"]
+      options.compilerArgs += ["--release", rootProject.minJavaVersion.toString()]
     }
 
     // Configure warnings.
@@ -42,7 +42,6 @@
         "-Xlint:finally",
         "-Xlint:options",
         "-Xlint:overrides",
-        "-Xlint:path",
         "-Xlint:processing",
         "-Xlint:rawtypes",
         "-Xlint:static",
diff --git a/gradle/defaults.gradle b/gradle/defaults.gradle
index bf64968..a011add 100644
--- a/gradle/defaults.gradle
+++ b/gradle/defaults.gradle
@@ -40,10 +40,28 @@
         result = project.getProperty(propName)
       } else if (System.properties.containsKey(propName)) {
         result = System.properties.get(propName)
+      } else if (defValue instanceof Closure) {
+        result = defValue.call()
       } else {
         result = defValue
       }
       return result
     }
+
+    // System environment variable or default.
+    envOrDefault = { envName, defValue ->
+      return Objects.requireNonNullElse(System.getenv(envName), defValue);
+    }
+
+    // Either a project, system property, environment variable or default value.
+    propertyOrEnvOrDefault = { propName, envName, defValue ->
+      return propertyOrDefault(propName, envOrDefault(envName, defValue));
+    }
+
+    // Locate script-relative resource folder. This is context-sensitive so pass
+    // the right buildscript (top-level).
+    scriptResources = { buildscript ->
+      return file(buildscript.sourceFile.absolutePath.replaceAll('.gradle$', ""))
+    }
   }
 }
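
These helpers establish one lookup order for build options: a -P project property
wins, then a JVM system property, then (for propertyOrEnvOrDefault) an environment
variable, and finally the default, which may be a lazily evaluated Closure. A brief
sketch of how later scripts in this patch use them (local variable names are
illustrative):

    // -Ptests.jvmargs, then the system property, then $TEST_JVM_ARGS, then the
    // literal default (this exact triple appears in defaults-tests.gradle below).
    def jvmArgsSpec = propertyOrEnvOrDefault("tests.jvmargs", "TEST_JVM_ARGS", "-XX:TieredStopAtLevel=1")

    // Closure defaults are invoked only when nothing overrides them, which keeps
    // context-dependent defaults lazy.
    def testWorkDir = file(propertyOrDefault("tests.workDir", { -> "${buildDir}/tmp/tests-tmp" }))
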
diff --git a/gradle/documentation/changes-to-html.gradle b/gradle/documentation/changes-to-html.gradle
index 8d8c02e..b49ae93 100644
--- a/gradle/documentation/changes-to-html.gradle
+++ b/gradle/documentation/changes-to-html.gradle
@@ -54,7 +54,7 @@
   def toHtml(File versionsFile) {
     def output = new ByteArrayOutputStream()
     def result = project.exec {
-      executable "perl"
+      executable project.externalTool("perl")
       standardInput changesFile.newInputStream()
       standardOutput project.file("${targetDir.get().getAsFile()}/Changes.html").newOutputStream()
       errorOutput = output
diff --git a/gradle/documentation/render-javadoc.gradle b/gradle/documentation/render-javadoc.gradle
index 0a9ddb2..baae066 100644
--- a/gradle/documentation/render-javadoc.gradle
+++ b/gradle/documentation/render-javadoc.gradle
@@ -20,6 +20,8 @@
 // generate javadocs by calling javadoc tool
 // see https://docs.oracle.com/en/java/javase/11/tools/javadoc.html
 
+def resources = scriptResources(buildscript)
+
 allprojects {
   plugins.withType(JavaPlugin) {
     ext {
@@ -39,17 +41,19 @@
       description "Generates Javadoc API documentation for the main source code. This directly invokes javadoc tool."
       group "documentation"
 
+      taskResources = resources
       dependsOn sourceSets.main.compileClasspath
-      classpath = sourceSets.main.compileClasspath;
+      classpath = sourceSets.main.compileClasspath
       srcDirSet = sourceSets.main.java;
 
-      outputDir = project.javadoc.destinationDir;
+      outputDir = project.javadoc.destinationDir
     }
 
     task renderSiteJavadoc(type: RenderJavadocTask) {
       description "Generates Javadoc API documentation for the site (relative links)."
       group "documentation"
 
+      taskResources = resources
       dependsOn sourceSets.main.compileClasspath
       classpath = sourceSets.main.compileClasspath;
       srcDirSet = sourceSets.main.java;
@@ -65,8 +69,9 @@
 
 // Set up titles and link up some offline docs for all documentation
 // (they may be unused but this doesn't do any harm).
-def javaJavadocPackages = project.project(':lucene').file('tools/javadoc/java11/')
-def junitJavadocPackages = project.project(':lucene').file('tools/javadoc/junit/')
+
+def javaJavadocPackages = rootProject.file("${resources}/java11/")
+def junitJavadocPackages = rootProject.file("${resources}/junit/")
 allprojects {
   project.tasks.withType(RenderJavadocTask) {
     title = "${project.path.startsWith(':lucene') ? 'Lucene' : 'Solr'} ${project.version} ${project.name} API"
@@ -157,7 +162,10 @@
   @Optional
   @Input
   def executable
-  
+
+  @Input
+  def taskResources
+
   /** Utility method to recursively collect all tasks with same name like this one that we depend on */
   private Set findRenderTasksInDependencies() {
     Set found = []
@@ -312,13 +320,15 @@
 
     // append some special table css, prettify css
     ant.concat(destfile: "${outputDir}/stylesheet.css", append: "true", fixlastline: "true", encoding: "UTF-8") {
-      filelist(dir: project.project(":lucene").file("tools/javadoc"), files: "table_padding.css")
-      filelist(dir: project.project(":lucene").file("tools/prettify"), files: "prettify.css")
+      filelist(dir: taskResources, files: "table_padding.css")
+      filelist(dir: project.file("${taskResources}/prettify"), files: "prettify.css")
     }
+
     // append prettify to scripts
     ant.concat(destfile: "${outputDir}/script.js", append: "true", fixlastline: "true", encoding: "UTF-8") {
-      filelist(dir: project.project(':lucene').file("tools/prettify"), files: "prettify.js inject-javadocs.js")
+      filelist(dir: project.file("${taskResources}/prettify"), files: "prettify.js inject-javadocs.js")
     }
+
     ant.fixcrlf(srcdir: outputDir, includes: "stylesheet.css script.js", eol: "lf", fixlast: "true", encoding: "UTF-8")
   }
 }
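
Note how scriptResources(buildscript) resolves: it strips the '.gradle' suffix from
the script's own path, so resources for gradle/documentation/render-javadoc.gradle
are expected under gradle/documentation/render-javadoc/ - exactly where the files
renamed below now live. Roughly:

    // For render-javadoc.gradle this returns file("gradle/documentation/render-javadoc"),
    // a sibling folder named after the script itself.
    def resources = scriptResources(buildscript)
    def prettifyDir = file("${resources}/prettify") // prettify.js, prettify.css, inject-javadocs.js
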
diff --git a/lucene/tools/javadoc/java11/package-list b/gradle/documentation/render-javadoc/java11/package-list
similarity index 100%
rename from lucene/tools/javadoc/java11/package-list
rename to gradle/documentation/render-javadoc/java11/package-list
diff --git a/lucene/tools/javadoc/junit/package-list b/gradle/documentation/render-javadoc/junit/package-list
similarity index 100%
rename from lucene/tools/javadoc/junit/package-list
rename to gradle/documentation/render-javadoc/junit/package-list
diff --git a/lucene/tools/prettify/inject-javadocs.js b/gradle/documentation/render-javadoc/prettify/inject-javadocs.js
similarity index 100%
rename from lucene/tools/prettify/inject-javadocs.js
rename to gradle/documentation/render-javadoc/prettify/inject-javadocs.js
diff --git a/lucene/tools/prettify/prettify.css b/gradle/documentation/render-javadoc/prettify/prettify.css
similarity index 100%
rename from lucene/tools/prettify/prettify.css
rename to gradle/documentation/render-javadoc/prettify/prettify.css
diff --git a/lucene/tools/prettify/prettify.js b/gradle/documentation/render-javadoc/prettify/prettify.js
similarity index 100%
rename from lucene/tools/prettify/prettify.js
rename to gradle/documentation/render-javadoc/prettify/prettify.js
diff --git a/lucene/tools/javadoc/table_padding.css b/gradle/documentation/render-javadoc/table_padding.css
similarity index 100%
rename from lucene/tools/javadoc/table_padding.css
rename to gradle/documentation/render-javadoc/table_padding.css
diff --git a/gradle/generation/jflex.gradle b/gradle/generation/jflex.gradle
index 9bdbc2c..e7c65df 100644
--- a/gradle/generation/jflex.gradle
+++ b/gradle/generation/jflex.gradle
@@ -149,7 +149,7 @@
       def target = file('src/java/org/apache/lucene/analysis/charfilter/HTMLCharacterEntities.jflex')
       target.withOutputStream { output ->
         project.exec {
-          executable = "python"
+          executable = project.externalTool("python2")
           workingDir = target.parentFile
           standardOutput = output
           args += [
diff --git a/gradle/generation/kuromoji.gradle b/gradle/generation/kuromoji.gradle
index 2f55c1a..c865a13 100644
--- a/gradle/generation/kuromoji.gradle
+++ b/gradle/generation/kuromoji.gradle
@@ -73,7 +73,7 @@
       // Apply patch via local git.
       project.exec {
         workingDir = unpackedDir
-        executable "git"
+        executable "git" // TODO: better use jgit to apply patch, this is not portable!!!
         args += [
             "apply",
             file("src/tools/patches/Noun.proper.csv.patch").absolutePath
diff --git a/gradle/generation/snowball.gradle b/gradle/generation/snowball.gradle
index 6f7049e..b7b37c4 100644
--- a/gradle/generation/snowball.gradle
+++ b/gradle/generation/snowball.gradle
@@ -15,6 +15,8 @@
  * limitations under the License.
  */
 
+import org.apache.tools.ant.taskdefs.condition.Os
+
 apply plugin: "de.undercouch.download"
 
 configure(rootProject) {
@@ -99,6 +101,10 @@
     dependsOn downloadSnowballData
 
     doLast {
+      if (Os.isFamily(Os.FAMILY_WINDOWS)) {
+        throw new GradleException("Snowball generation does not work on Windows; use a platform where bash is available.")
+      }
+
       project.exec {
         executable "bash"
         args = [snowballScript, snowballStemmerDir, snowballWebsiteDir, snowballDataDir, projectDir]
diff --git a/gradle/generation/util.gradle b/gradle/generation/util.gradle
index 597b60a..36ad86b 100644
--- a/gradle/generation/util.gradle
+++ b/gradle/generation/util.gradle
@@ -57,7 +57,7 @@
         logger.lifecycle("Executing: ${prog} in ${targetDir}")
         project.exec {
           workingDir targetDir
-          executable "python3"
+          executable project.externalTool("python3")
           args = ['-B', "${prog}"]
         }
       }
@@ -82,7 +82,7 @@
         ['True', 'False'].each { transpose ->
           project.exec {
             workingDir targetDir
-            executable "python3"
+            executable project.externalTool("python3")
             args = ['-B', 'createLevAutomata.py', num, transpose, "${momanDir}/finenight/python"]
           }
         }
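
project.externalTool(...) is defined elsewhere in this patch, so only its call sites
are visible here. A purely hypothetical sketch of what such a helper could look like
(the 'runtime.<tool>' property name below is invented for illustration):

    // Hypothetical sketch only - not the actual helper from this patch. It lets a
    // tool's location be overridden via a project property, otherwise falling back
    // to whatever resolves on PATH.
    ext.externalTool = { String name ->
      return propertyOrDefault("runtime.${name}", name)
    }
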
diff --git a/gradle/hacks/gradle.gradle b/gradle/hacks/gradle.gradle
new file mode 100644
index 0000000..acb9394
--- /dev/null
+++ b/gradle/hacks/gradle.gradle
@@ -0,0 +1,51 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */ 
+
+// See LUCENE-9471. We redirect gradle's temporary location
+// so that it doesn't pollute the user's tmp. Wipe it during a clean, though.
+
+configure(rootProject) {
+  task cleanGradleTmp(type: Delete) {
+    delete fileTree(".gradle/tmp").matching {
+      include "gradle-worker-classpath*"
+    }
+  }
+
+  clean.dependsOn cleanGradleTmp
+}
+
+// Make sure we clean up after running tests.
+allprojects {
+  plugins.withType(JavaPlugin) {
+    def temps = []
+
+    task cleanTaskTmp() {
+      doLast {
+        temps.each { temp ->
+          project.delete fileTree(temp).matching {
+            include "jar_extract*"
+          }
+        }
+      }
+    }
+
+    tasks.withType(Test) {
+      finalizedBy rootProject.cleanGradleTmp, cleanTaskTmp
+      temps += temporaryDir
+    }
+  }
+}
diff --git a/gradle/hacks/hashmapAssertions.gradle b/gradle/hacks/hashmapAssertions.gradle
new file mode 100644
index 0000000..095726c
--- /dev/null
+++ b/gradle/hacks/hashmapAssertions.gradle
@@ -0,0 +1,33 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+// Disable assertions for HashMap due to: LUCENE-8991 / JDK-8205399
+def vmName = System.getProperty("java.vm.name")
+def spec = System.getProperty("java.specification.version")
+if (vmName =~ /(?i)(hotspot|openjdk|jrockit)/ &&
+    spec =~ /^(1\.8|9|10|11)$/ &&
+    !Boolean.parseBoolean(propertyOrDefault('tests.asserts.hashmap', 'false'))) {
+  logger.info("Enabling HashMap assertions.")
+  allprojects {
+    plugins.withType(JavaPlugin) {
+      tasks.withType(Test) { task ->
+        jvmArgs("-da:java.util.HashMap")
+      }
+    }
+  }
+}
+
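
For reference, the '-da:<class>' JVM flag disables assertions for that single class
only, so the rest of the test JVM keeps its assertion settings. On an affected JVM
the workaround can be overridden per invocation with -Ptests.asserts.hashmap=true,
which leaves HashMap assertions enabled.
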
diff --git a/gradle/help.gradle b/gradle/help.gradle
index edee1c3..51435e1 100644
--- a/gradle/help.gradle
+++ b/gradle/help.gradle
@@ -28,6 +28,7 @@
       ["LocalSettings", "help/localSettings.txt", "Local settings, overrides and build performance tweaks."],
       ["Git", "help/git.txt", "Git assistance and guides."],
       ["ValidateLogCalls", "help/validateLogCalls.txt", "How to use logging calls efficiently."],
+      ["IDEs", "help/IDEs.txt", "IDE support."],
   ]
 
   helpFiles.each { section, path, sectionInfo ->
diff --git a/gradle/ide/eclipse.gradle b/gradle/ide/eclipse.gradle
new file mode 100644
index 0000000..e414112
--- /dev/null
+++ b/gradle/ide/eclipse.gradle
@@ -0,0 +1,104 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+import org.gradle.plugins.ide.eclipse.model.SourceFolder
+import org.gradle.plugins.ide.eclipse.model.ClasspathEntry
+
+def resources = scriptResources(buildscript)
+
+configure(rootProject) {
+  apply plugin: "eclipse"
+
+  def relativize = { other -> rootProject.relativePath(other).toString() }
+
+  eclipse {
+    project {
+      name = "Apache Lucene Solr ${version}"
+    }
+
+    classpath {
+      defaultOutputDir = file('build/eclipse')
+
+      file {
+        beforeMerged { classpath -> classpath.entries.removeAll { it.kind == "src" } }
+
+        whenMerged { classpath ->
+          def projects = allprojects.findAll { prj ->
+            return prj.plugins.hasPlugin(JavaPlugin) &&
+                   prj.path != ":solr:solr-ref-guide"
+          }
+
+          Set<String> sources = []
+          Set<File> jars = []
+          projects.each { prj ->
+            prj.sourceSets.each { sourceSet ->
+              sources += sourceSet.java.srcDirs.findAll { dir -> dir.exists() }.collect { dir -> relativize(dir) }
+            }
+
+            // This is hacky - we take the resolved compile classpath and just
+            // include JAR files from there. We should probably make it smarter
+            // by looking at real dependencies. But then this Eclipse configuration
+            // doesn't really separate source sets anyway, so why bother.
+            jars += prj.configurations.compileClasspath.resolve()
+            jars += prj.configurations.testCompileClasspath.resolve()
+          }
+
+          classpath.entries += sources.sort().collect {name -> new SourceFolder(name, "build/eclipse/" + name) }
+          classpath.entries += jars.unique().findAll { location -> location.isFile() }.collect { location ->
+            new LibEntry(location.toString())
+          }
+        }
+      }
+    }
+
+    jdt {
+      sourceCompatibility = rootProject.minJavaVersion
+      targetCompatibility = rootProject.minJavaVersion
+      javaRuntimeName = "JavaSE-${rootProject.minJavaVersion}"
+    }
+  }
+
+  eclipseJdt {
+    doLast {
+      project.sync {
+        from rootProject.file("${resources}/dot.settings")
+        into rootProject.file(".settings")
+      }
+    }
+  }
+}
+
+public class LibEntry implements ClasspathEntry {
+  private String path;
+
+  LibEntry(String path) {
+    this.path = path;
+  }
+
+  @Override
+  String getKind() {
+    return "lib"
+  }
+
+  @Override
+  void appendNode(Node node) {
+    node.appendNode("classpathentry", Map.of(
+        "kind", "lib",
+        "path", path
+    ));
+  }
+}
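
With Map.of("kind", "lib", "path", path) as the attribute map, each LibEntry should
serialize into .classpath as a flat library entry along the lines of
<classpathentry kind="lib" path="/path/to/some-dependency.jar"/> (the path shown is
illustrative).
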
diff --git a/dev-tools/eclipse/dot.classpath.xsl b/gradle/ide/eclipse/dot.classpath.xsl
similarity index 100%
rename from dev-tools/eclipse/dot.classpath.xsl
rename to gradle/ide/eclipse/dot.classpath.xsl
diff --git a/dev-tools/eclipse/dot.project b/gradle/ide/eclipse/dot.project
similarity index 100%
rename from dev-tools/eclipse/dot.project
rename to gradle/ide/eclipse/dot.project
diff --git a/dev-tools/eclipse/dot.settings/org.eclipse.core.resources.prefs b/gradle/ide/eclipse/dot.settings/org.eclipse.core.resources.prefs
similarity index 100%
rename from dev-tools/eclipse/dot.settings/org.eclipse.core.resources.prefs
rename to gradle/ide/eclipse/dot.settings/org.eclipse.core.resources.prefs
diff --git a/dev-tools/eclipse/dot.settings/org.eclipse.jdt.core.prefs b/gradle/ide/eclipse/dot.settings/org.eclipse.jdt.core.prefs
similarity index 100%
rename from dev-tools/eclipse/dot.settings/org.eclipse.jdt.core.prefs
rename to gradle/ide/eclipse/dot.settings/org.eclipse.jdt.core.prefs
diff --git a/dev-tools/eclipse/dot.settings/org.eclipse.jdt.ui.prefs b/gradle/ide/eclipse/dot.settings/org.eclipse.jdt.ui.prefs
similarity index 100%
rename from dev-tools/eclipse/dot.settings/org.eclipse.jdt.ui.prefs
rename to gradle/ide/eclipse/dot.settings/org.eclipse.jdt.ui.prefs
diff --git a/dev-tools/eclipse/run-solr-cloud.launch b/gradle/ide/eclipse/run-solr-cloud.launch
similarity index 100%
rename from dev-tools/eclipse/run-solr-cloud.launch
rename to gradle/ide/eclipse/run-solr-cloud.launch
diff --git a/dev-tools/eclipse/run-solr.launch b/gradle/ide/eclipse/run-solr.launch
similarity index 100%
rename from dev-tools/eclipse/run-solr.launch
rename to gradle/ide/eclipse/run-solr.launch
diff --git a/dev-tools/eclipse/run-test-cases.launch b/gradle/ide/eclipse/run-test-cases.launch
similarity index 100%
rename from dev-tools/eclipse/run-test-cases.launch
rename to gradle/ide/eclipse/run-test-cases.launch
diff --git a/gradle/maven/defaults-maven.gradle b/gradle/maven/defaults-maven.gradle
index 6c4b458..570d011 100644
--- a/gradle/maven/defaults-maven.gradle
+++ b/gradle/maven/defaults-maven.gradle
@@ -60,8 +60,6 @@
         ":solr:core",
         ":solr:solrj",
         ":solr:contrib:analysis-extras",
-        ":solr:contrib:dataimporthandler",
-        ":solr:contrib:dataimporthandler-extras",
         ":solr:contrib:analytics",
         ":solr:contrib:clustering",
         ":solr:contrib:extraction",
diff --git a/gradle/solr/packaging.gradle b/gradle/solr/packaging.gradle
new file mode 100644
index 0000000..3b1ea92
--- /dev/null
+++ b/gradle/solr/packaging.gradle
@@ -0,0 +1,200 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+
+// For Solr, a 'resolve' task is much more complex than Lucene's. There are three folders:
+// lib/
+// test-lib/
+// lucene-libs/
+//
+// There doesn't seem to be one ideal set of rules for how these should be created, but
+// I tried to imitate the current (master) logic present in the ivy and ant files in this way:
+//
+// The "solr platform" set of dependencies is a union of all deps for (core, solrj, server).
+//
+// Then:
+// lib/ - the module's "own" dependencies (excluding Lucene's) that are not present in the
+//       solr platform.
+// lucene-libs/ - lucene modules declared as the module's dependencies and not
+//       present in the solr platform.
+// test-lib/ - test libs not present in the solr platform and not included in solr:test-framework.
+//
+// None of these are really needed with gradle... they should be collected just in the distribution
+// package, not at each project's level.
+//
+// Unfortunately this "resolution" process is also related to how the final Solr packaging is assembled.
+// I don't know how to untie these two cleanly.
+//
+
+configure(allprojects.findAll {project -> project.path.startsWith(":solr:contrib") }) {
+  plugins.withType(JavaPlugin) {
+    ext {
+      packagingDir = file("${buildDir}/packaging")
+      deps = file("${packagingDir}/${project.name}")
+    }
+
+    configurations {
+      solrPlatformLibs
+      solrTestPlatformLibs
+      runtimeLibs {
+        extendsFrom runtimeElements
+      }
+      packaging
+    }
+
+    dependencies {
+      solrPlatformLibs project(":solr:core")
+      solrPlatformLibs project(":solr:solrj")
+      solrPlatformLibs project(":solr:server")
+
+      solrTestPlatformLibs project(":solr:test-framework")
+    }
+
+    // An aggregate that configures lib, lucene-libs and test-lib in a temporary location.
+    task assemblePackaging(type: Sync) {
+      from "README.txt"
+
+      from ({
+        def externalLibs = configurations.runtimeLibs.copyRecursive { dep ->
+          if (dep instanceof org.gradle.api.artifacts.ProjectDependency) {
+            return !dep.dependencyProject.path.startsWith(":solr")
+          } else {
+            return true
+          }
+        }
+        return externalLibs - configurations.solrPlatformLibs
+      }, {
+        exclude "lucene-*"
+        into "lib"
+      })
+
+      from ({
+        def projectLibs = configurations.runtimeLibs.copyRecursive { dep ->
+          (dep instanceof org.gradle.api.artifacts.ProjectDependency)
+        }
+        return projectLibs - configurations.solrPlatformLibs
+      }, {
+        include "lucene-*"
+        into "lucene-libs"
+      })
+
+      into deps
+    }
+
+    task syncLib(type: Sync) {
+      dependsOn assemblePackaging
+
+      from(file("${deps}/lib"), {
+        include "**"
+      })
+      into file("${projectDir}/lib")
+    }
+
+    task syncTestLib(type: Sync) {
+      // From the test runtime classpath, exclude:
+      // 1) project dependencies (and their dependencies)
+      // 2) runtime dependencies
+      // What remains are this module's "own" test dependencies.
+      from({
+        def testRuntimeLibs = configurations.testRuntimeClasspath.copyRecursive { dep ->
+          !(dep instanceof org.gradle.api.artifacts.ProjectDependency)
+        }
+
+        return testRuntimeLibs - configurations.runtimeLibs - configurations.solrTestPlatformLibs
+      })
+
+      into file("${projectDir}/test-lib")
+    }
+
+    task resolve() {
+      dependsOn syncLib, syncTestLib
+    }
+
+    // Contrib packaging currently depends on internal resolve.
+    artifacts {
+      packaging packagingDir, {
+        builtBy assemblePackaging
+      }
+    }
+  }
+}
+
+configure(project(":solr:example")) {
+  evaluationDependsOn(":solr:example") // explicitly wait for other configs to be applied
+
+  task resolve(type: Copy) {
+    from(configurations.postJar, {
+      into "exampledocs/"
+    })
+
+    into projectDir
+  }
+}
+
+configure(project(":solr:server")) {
+  evaluationDependsOn(":solr:server")
+
+  task resolve(type: Copy) {
+    dependsOn assemblePackaging
+
+    from({ packagingDir }, {
+      include "**/*.jar"
+      include "solr-webapp/webapp/**"
+      includeEmptyDirs false
+    })
+
+    into projectDir
+  }
+}
+
+configure(project(":solr:core")) {
+  evaluationDependsOn(":solr:core")
+
+  configurations {
+    runtimeLibs {
+      extendsFrom runtimeElements
+    }
+  }
+
+  task resolve(type: Sync) {
+    from({
+      def ownDeps = configurations.runtimeLibs.copyRecursive { dep ->
+        if (dep instanceof org.gradle.api.artifacts.ProjectDependency) {
+          return !dep.dependencyProject.path.startsWith(":solr")
+        } else {
+          return true
+        }
+      }
+      return ownDeps
+    }, {
+      exclude "lucene-*"
+    })
+
+    into "lib"
+  }
+}
+
+configure(project(":solr:solrj")) {
+  evaluationDependsOn(":solr:solrj")
+
+  task resolve(type: Sync) {
+    from({ configurations.runtimeClasspath }, {
+    })
+
+    into "lib"
+  }
+}
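
Given the rules in the header comment, running the aggregate task on a contrib module
(the module name below is illustrative - any of those listed in defaults-maven.gradle)
should produce roughly:

    build/packaging/extraction/
      README.txt
      lib/          # the module's own third-party jars, minus the solr platform
      lucene-libs/  # lucene-* jars the module depends on, minus the solr platform

and 'gradlew :solr:contrib:extraction:resolve' additionally syncs lib/ and test-lib/
into the project directory itself.
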
diff --git a/gradle/solr/solr-forbidden-apis.gradle b/gradle/solr/solr-forbidden-apis.gradle
new file mode 100644
index 0000000..bb4dc60
--- /dev/null
+++ b/gradle/solr/solr-forbidden-apis.gradle
@@ -0,0 +1,26 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+
+// Why does solr exclude these from forbidden API checks?
+
+configure(project(":solr:core")) {
+  tasks.matching { it.name == "forbiddenApisMain" || it.name == "forbiddenApisTest" }.all {
+    exclude "org/apache/solr/internal/**"
+    exclude "org/apache/hadoop/**"
+  }
+}
\ No newline at end of file
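
Unlike the deleted ant-compat variant above, which referenced forbiddenApisMain and
forbiddenApisTest directly, tasks.matching { ... }.all { ... } is a live query: the
exclusions apply to matching tasks whether they already exist or are registered later,
so this script no longer depends on the order in which plugins are applied. The same
idiom in isolation:

    // 'all' fires once per existing match and again for every future match.
    tasks.matching { it.name.startsWith("forbiddenApis") }.all {
      exclude "org/apache/hadoop/**"
    }
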
diff --git a/gradle/testing/alternative-jdk-support.gradle b/gradle/testing/alternative-jdk-support.gradle
index 1e69291..f864b64 100644
--- a/gradle/testing/alternative-jdk-support.gradle
+++ b/gradle/testing/alternative-jdk-support.gradle
@@ -32,6 +32,9 @@
   }
 }()
 
+// Set up root project's property.
+rootProject.ext.runtimeJava = altJvm
+
 if (!currentJvm.javaExecutable.equals(altJvm.javaExecutable)) {
   // Set up java toolchain tasks to use the alternative Java.
   // This is a related Gradle issue for the future:
@@ -63,6 +66,7 @@
       options.forkOptions.javaHome = altJvm.installationDirectory.asFile
     }
 
+    // Javadoc compilation.
     def javadocExecutable = altJvm.jdk.get().javadocExecutable.asFile
     tasks.matching { it.name == "renderJavadoc" || it.name == "renderSiteJavadoc" }.all {
       dependsOn ":altJvmWarning"
diff --git a/gradle/testing/beasting.gradle b/gradle/testing/beasting.gradle
new file mode 100644
index 0000000..8934100
--- /dev/null
+++ b/gradle/testing/beasting.gradle
@@ -0,0 +1,92 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+// This adds a 'beast' task which clones the 'test' task a given number of times (preferably
+// constrained with a filtering pattern passed via '--tests').
+
+// TODO: subtasks are not run in parallel (sigh, gradle removed this capability for intra-project tasks).
+// TODO: maybe it would be better to take a deeper approach and just feed the task
+//       runner duplicated suite names (much like https://github.com/gradle/test-retry-gradle-plugin)
+// TODO: this is a somewhat related issue: https://github.com/gradle/test-retry-gradle-plugin/issues/29
+
+def beastingMode = gradle.startParameter.taskNames.any{ name -> name == 'beast' || name.endsWith(':beast') }
+
+allprojects {
+  plugins.withType(JavaPlugin) {
+    ext {
+      testOptions += [
+          [propName: 'tests.dups', value: 0, description: "Number of times to re-run entire test suites ('beast' task)."]
+      ]
+    }
+  }
+}
+
+if (beastingMode) {
+  if (rootProject.rootSeedUserProvided) {
+    logger.warn("Root randomization seed is externally provided, all duplicated runs will use the same starting seed.")
+  }
+
+  allprojects {
+    plugins.withType(JavaPlugin) {
+      task beast(type: BeastTask) {
+        description "Run a test suite (or a set of tests) many times over (duplicate 'test' task)."
+        group "Verification"
+      }
+
+      def dups = Integer.parseInt(resolvedTestOption("tests.dups") as String)
+      if (dups <= 0) {
+        throw new GradleException("Specify -Ptests.dups=[count] for beast task.")
+      }
+
+      // generate N test tasks and attach them to the beasting task for this project;
+      // the test filter will be applied by the beast task once it is received from
+      // the command line.
+      def subtasks = (1..dups).collect { value ->
+        return tasks.create(name: "test_${value}", type: Test, {
+          failFast = true
+          doFirst {
+            // If there is a global root seed, use it (all duplicated tasks will run
+            // from the same starting seed). Otherwise pick a sequential derivative.
+            if (!rootProject.rootSeedUserProvided) {
+              systemProperty("tests.seed",
+                  String.format("%08X", new Random(rootProject.rootSeedLong + value).nextLong()))
+            }
+          }
+        })
+      }
+
+      beast.dependsOn subtasks
+    }
+  }
+}
+
+/**
+ * We have to declare a dummy task here to be able to reuse the same '--tests' filter
+ * option syntax as the 'test' task.
+ */
+class BeastTask extends DefaultTask {
+  @Option(option = "tests", description = "Sets test class or method name to be included, '*' is supported.")
+  public void setTestNamePatterns(List<String> patterns) {
+    taskDependencies.getDependencies(this).each { subtask ->
+      subtask.filter.setCommandLineIncludePatterns(patterns)
+    }
+  }
+
+  @TaskAction
+  void run() {
+  }
+}
\ No newline at end of file
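
Example invocation (module and test pattern are illustrative):
'gradlew -p solr/core beast -Ptests.dups=10 --tests TestSomeClass' creates test_1
through test_10 clones of the test task, derives a distinct tests.seed for each one
unless a seed was given explicitly, and applies the '--tests' filter to all of them.
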
diff --git a/gradle/testing/defaults-tests.gradle b/gradle/testing/defaults-tests.gradle
index 583b76e..2ceb436 100644
--- a/gradle/testing/defaults-tests.gradle
+++ b/gradle/testing/defaults-tests.gradle
@@ -20,19 +20,54 @@
 import org.gradle.api.tasks.testing.logging.*
 import org.apache.lucene.gradle.ErrorReportingTestListener
 
+def resources = scriptResources(buildscript)
 def verboseModeHookInstalled = false
 
 allprojects {
   plugins.withType(JavaPlugin) {
-    def verboseMode = Boolean.parseBoolean(propertyOrDefault("tests.verbose", "false"))
-
     project.ext {
+      // This array collects all test options, including default values and option descriptions.
+      // The actual values of these properties (defaults, project properties) are resolved lazily
+      // after evaluation completes. An example entry:
+      // [propName: 'tests.foo', value: "bar", description: "Sets foo in tests."],
+      testOptions = [
+          // asserts, debug output.
+          [propName: 'tests.verbose', value: false, description: "Enables verbose mode (emits full test outputs immediately)."],
+          [propName: 'tests.workDir',
+           value: { -> file("${buildDir}/tmp/tests-tmp") },
+           description: "Working directory for forked test JVMs",
+           includeInReproLine: false
+          ],
+          // JVM settings
+          [propName: 'tests.minheapsize', value: "256m", description: "Minimum heap size for test JVMs"],
+          [propName: 'tests.heapsize', value: "512m", description: "Heap size for test JVMs"],
+          // Test forks
+          [propName: 'tests.jvms',
+           value: { -> ((int) Math.max(1, Math.min(Runtime.runtime.availableProcessors() / 2.0, 4.0))) },
+           description: "Number of forked test JVMs"],
+          [propName: 'tests.haltonfailure', value: true, description: "Halt processing on test failure."],
+          [propName: 'tests.jvmargs',
+           value: { -> propertyOrEnvOrDefault("tests.jvmargs", "TEST_JVM_ARGS", "-XX:TieredStopAtLevel=1") },
+           description: "Arguments passed to each forked JVM."],
+      ]
+
+      // Resolves test option's value.
+      resolvedTestOption = { propName ->
+        def option = testOptions.find { entry -> entry.propName == propName }
+        if (option == null) {
+          throw new GradleException("No such test option: " + propName)
+        }
+        return propertyOrDefault(option.propName, option.value)
+      }
+
       testsCwd = file("${buildDir}/tmp/tests-cwd")
-      testsTmpDir = file(propertyOrDefault("tests.workDir", "${buildDir}/tmp/tests-tmp"))
+      testsTmpDir = file(resolvedTestOption("tests.workDir"))
       commonDir = project(":lucene").projectDir
       commonSolrDir = project(":solr").projectDir
     }
 
+    def verboseMode = resolvedTestOption("tests.verbose").toBoolean()
+
     // If we're running in verbose mode and:
     // 1) worker count > 1
     // 2) number of 'test' tasks in the build is > 1
@@ -50,26 +85,28 @@
       }
     }
 
-    test {
+    tasks.withType(Test) {
       ext {
         testOutputsDir = file("${reports.junitXml.destination}/outputs")
       }
 
-      if (verboseMode) {
+      maxParallelForks = resolvedTestOption("tests.jvms") as Integer
+      if (verboseMode && maxParallelForks != 1) {
+        logger.lifecycle("tests.jvm forced to 1 in verbose mode.")
         maxParallelForks = 1
-      } else {
-        maxParallelForks = propertyOrDefault("tests.jvms", (int) Math.max(1, Math.min(Runtime.runtime.availableProcessors() / 2.0, 4.0))) as Integer
       }
 
       workingDir testsCwd
       useJUnit()
 
-      minHeapSize = propertyOrDefault("tests.minheapsize", "256m")
-      maxHeapSize = propertyOrDefault("tests.heapsize", "512m")
+      minHeapSize = resolvedTestOption("tests.minheapsize")
+      maxHeapSize = resolvedTestOption("tests.heapsize")
 
-      jvmArgs Commandline.translateCommandline(propertyOrDefault("tests.jvmargs", "-XX:TieredStopAtLevel=1"))
+      ignoreFailures = resolvedTestOption("tests.haltonfailure").toBoolean() == false
 
-      systemProperty 'java.util.logging.config.file', file("${commonDir}/tools/junit4/logging.properties")
+      jvmArgs Commandline.translateCommandline(resolvedTestOption("tests.jvmargs"))
+
+      systemProperty 'java.util.logging.config.file', file("${resources}/logging.properties")
       systemProperty 'java.awt.headless', 'true'
       systemProperty 'jdk.map.althashing.threshold', '0'
 
diff --git a/lucene/tools/junit4/logging.properties b/gradle/testing/defaults-tests/logging.properties
similarity index 100%
rename from lucene/tools/junit4/logging.properties
rename to gradle/testing/defaults-tests/logging.properties
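
resolvedTestOption funnels each option through propertyOrDefault, so -P project
properties and system properties override the table's defaults, and Closure-valued
defaults (like tests.jvms) stay unevaluated until actually needed. For example:

    // -Ptests.heapsize=1g wins over the "512m" default:
    maxHeapSize = resolvedTestOption("tests.heapsize")

    // The closure default computes roughly half the available cores, capped at 4:
    maxParallelForks = resolvedTestOption("tests.jvms") as Integer
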
diff --git a/gradle/testing/fail-on-no-tests.gradle b/gradle/testing/fail-on-no-tests.gradle
index 4851b47..db763d8 100644
--- a/gradle/testing/fail-on-no-tests.gradle
+++ b/gradle/testing/fail-on-no-tests.gradle
@@ -19,7 +19,7 @@
 
 configure(allprojects) {
   plugins.withType(JavaPlugin) {
-    test {
+    tasks.withType(Test) {
       filter {
         failOnNoMatchingTests = false
       }
diff --git a/gradle/testing/policies/solr-tests.policy b/gradle/testing/policies/solr-tests.policy
deleted file mode 100644
index 1290a38..0000000
--- a/gradle/testing/policies/solr-tests.policy
+++ /dev/null
@@ -1,217 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-// Policy file for solr. Please keep minimal and avoid wildcards.
-
-// permissions needed for tests to pass, based on properties set by the build system
-// NOTE: if the property is not set, the permission entry is ignored.
-grant {
-  // 3rd party jar resources (where symlinks are not supported), test-files/ resources
-  permission java.io.FilePermission "${common.dir}${/}-", "read";
-  permission java.io.FilePermission "${common.dir}${/}..${/}solr${/}-", "read";
-
-  // system jar resources
-  permission java.io.FilePermission "${java.home}${/}-", "read";
-
-  // Test launchers (randomizedtesting, etc.)
-  permission java.io.FilePermission "${java.io.tmpdir}", "read,write";
-  permission java.io.FilePermission "${java.io.tmpdir}${/}-", "read,write,delete";
-
-  permission java.io.FilePermission "${tests.linedocsfile}", "read";
-  // DirectoryFactoryTest messes with these (wtf?)
-  permission java.io.FilePermission "/tmp/inst1/conf/solrcore.properties", "read";
-  permission java.io.FilePermission "/path/to/myinst/conf/solrcore.properties", "read";
-  // TestConfigSets messes with these (wtf?)
-  permission java.io.FilePermission "/path/to/solr/home/lib", "read";
-
-  permission java.nio.file.LinkPermission "hard";
-
-  // all possibilities of accepting/binding/connections on localhost with ports >=1024:
-  permission java.net.SocketPermission "localhost:1024-", "accept,listen,connect,resolve";
-  permission java.net.SocketPermission "127.0.0.1:1024-", "accept,listen,connect,resolve";
-  permission java.net.SocketPermission "[::1]:1024-", "accept,listen,connect,resolve";
-  // "dead hosts", we try to keep it fast
-  permission java.net.SocketPermission "[::1]:4", "connect,resolve";
-  permission java.net.SocketPermission "[::1]:6", "connect,resolve";
-  permission java.net.SocketPermission "[::1]:8", "connect,resolve";
-  
-  // Basic permissions needed for Lucene to work:
-  permission java.util.PropertyPermission "*", "read,write";
-
-  // needed by randomizedtesting runner to identify test methods.
-  permission java.lang.reflect.ReflectPermission "suppressAccessChecks";
-  permission java.lang.RuntimePermission "accessDeclaredMembers";
-  // needed by certain tests to redirect sysout/syserr:
-  permission java.lang.RuntimePermission "setIO";
-  // needed by randomized runner to catch failures from other threads:
-  permission java.lang.RuntimePermission "setDefaultUncaughtExceptionHandler";
-  // needed by randomized runner getTopThreadGroup:
-  permission java.lang.RuntimePermission "modifyThreadGroup";
-  // needed by tests e.g. shutting down executors:
-  permission java.lang.RuntimePermission "modifyThread";
-  // needed for tons of test hacks etc
-  permission java.lang.RuntimePermission "getStackTrace";
-  // needed for mock filesystems in tests
-  permission java.lang.RuntimePermission "fileSystemProvider";
-  // needed for test of IOUtils.spins (maybe it can be avoided)
-  permission java.lang.RuntimePermission "getFileStoreAttributes";
-  // analyzers/uima: needed by lucene expressions' JavascriptCompiler
-  permission java.lang.RuntimePermission "createClassLoader";
-  // needed to test unmap hack on platforms that support it
-  permission java.lang.RuntimePermission "accessClassInPackage.sun.misc";
-  // needed by jacoco to dump coverage
-  permission java.lang.RuntimePermission "shutdownHooks";
-  // needed by org.apache.logging.log4j
-  permission java.lang.RuntimePermission "getenv.*";
-  permission java.lang.RuntimePermission "getClassLoader";
-  permission java.lang.RuntimePermission "setContextClassLoader";
-  permission java.lang.RuntimePermission "getStackWalkerWithClassReference";
-  // needed by bytebuddy
-  permission java.lang.RuntimePermission "defineClass";
-  // needed by mockito
-  permission java.lang.RuntimePermission "accessClassInPackage.sun.reflect";
-  permission java.lang.RuntimePermission "reflectionFactoryAccess";
-  // needed by SolrResourceLoader
-  permission java.lang.RuntimePermission "closeClassLoader";
-  // needed by HttpSolrClient
-  permission java.lang.RuntimePermission "getFileSystemAttributes";
-  // needed by hadoop auth (TODO: there is a cleaner way to handle this)
-  permission java.lang.RuntimePermission "loadLibrary.jaas";
-  permission java.lang.RuntimePermission "loadLibrary.jaas_unix";
-  permission java.lang.RuntimePermission "loadLibrary.jaas_nt";
-  // needed by hadoop common RawLocalFileSystem for java nio getOwner
-  permission java.lang.RuntimePermission "accessUserInformation";
-  // needed by hadoop hdfs
-  permission java.lang.RuntimePermission "readFileDescriptor";
-  permission java.lang.RuntimePermission "writeFileDescriptor";
-  // needed by hadoop http
-  permission java.lang.RuntimePermission "getProtectionDomain";
-
-  // These two *have* to be spelled out a separate
-  permission java.lang.management.ManagementPermission "control";
-  permission java.lang.management.ManagementPermission "monitor";
-
-  // needed by hadoop htrace
-  permission java.net.NetPermission "getNetworkInformation";
-
-  // needed by DIH
-  permission java.sql.SQLPermission "deregisterDriver";
-
-  permission java.util.logging.LoggingPermission "control";
-
-  // needed by solr mbeans feature/tests
-  // TODO: can we remove wildcard for class names/members?
-  permission javax.management.MBeanPermission "*", "getAttribute";
-  permission javax.management.MBeanPermission "*", "getMBeanInfo";
-  permission javax.management.MBeanPermission "*", "queryMBeans";
-  permission javax.management.MBeanPermission "*", "queryNames";
-  permission javax.management.MBeanPermission "*", "registerMBean";
-  permission javax.management.MBeanPermission "*", "unregisterMBean";
-  permission javax.management.MBeanServerPermission "createMBeanServer";
-  permission javax.management.MBeanServerPermission "findMBeanServer";
-  permission javax.management.MBeanServerPermission "releaseMBeanServer";
-  permission javax.management.MBeanTrustPermission "register";
-
-  // needed by hadoop auth
-  permission javax.security.auth.AuthPermission "getSubject";
-  permission javax.security.auth.AuthPermission "modifyPrincipals";
-  permission javax.security.auth.AuthPermission "doAs";
-  permission javax.security.auth.AuthPermission "getLoginConfiguration";
-  permission javax.security.auth.AuthPermission "setLoginConfiguration";
-  permission javax.security.auth.AuthPermission "modifyPrivateCredentials";
-  permission javax.security.auth.AuthPermission "modifyPublicCredentials";
-  permission javax.security.auth.PrivateCredentialPermission "org.apache.hadoop.security.Credentials * \"*\"", "read";
-
-  // needed by hadoop security
-  permission java.security.SecurityPermission "putProviderProperty.SaslPlainServer";
-  permission java.security.SecurityPermission "insertProvider";
-
-  permission javax.xml.bind.JAXBPermission "setDatatypeConverter";
-
-  // SSL related properties for Solr tests
-  permission javax.net.ssl.SSLPermission "setDefaultSSLContext";
-
-  // SASL/Kerberos related properties for Solr tests
-  permission javax.security.auth.PrivateCredentialPermission "javax.security.auth.kerberos.KerberosTicket * \"*\"", "read";
-  
-  // may only be necessary with Java 7?
-  permission javax.security.auth.PrivateCredentialPermission "javax.security.auth.kerberos.KeyTab * \"*\"", "read";
-  permission javax.security.auth.PrivateCredentialPermission "sun.security.jgss.krb5.Krb5Util$KeysFromKeyTab * \"*\"", "read";
-  
-  permission javax.security.auth.kerberos.ServicePermission "*", "initiate";
-  permission javax.security.auth.kerberos.ServicePermission "*", "accept";
-  permission javax.security.auth.kerberos.DelegationPermission "\"*\" \"krbtgt/EXAMPLE.COM@EXAMPLE.COM\"";
-  
-  // java 8 accessibility requires this perm - should not after 8 I believe (rrd4j is the root reason we hit an accessibility code path)
-  permission java.awt.AWTPermission "*";
-
-  // used by solr to create sandboxes (e.g. script execution)
-  permission java.security.SecurityPermission "createAccessControlContext";
-};
-
-// additional permissions based on system properties set by /bin/solr
-// NOTE: if the property is not set, the permission entry is ignored.
-grant {
-  permission java.io.FilePermission "${hadoop.security.credential.provider.path}", "read,write,delete,readlink";
-  permission java.io.FilePermission "${hadoop.security.credential.provider.path}${/}-", "read,write,delete,readlink";
-
-  permission java.io.FilePermission "${solr.jetty.keystore}", "read,write,delete,readlink";
-  permission java.io.FilePermission "${solr.jetty.keystore}${/}-", "read,write,delete,readlink";
-
-  permission java.io.FilePermission "${solr.jetty.truststore}", "read,write,delete,readlink";
-  permission java.io.FilePermission "${solr.jetty.truststore}${/}-", "read,write,delete,readlink";
-
-  permission java.io.FilePermission "${solr.install.dir}", "read,write,delete,readlink";
-  permission java.io.FilePermission "${solr.install.dir}${/}-", "read,write,delete,readlink";
-
-  permission java.io.FilePermission "${jetty.home}", "read,write,delete,readlink";
-  permission java.io.FilePermission "${jetty.home}${/}-", "read,write,delete,readlink";
-
-  permission java.io.FilePermission "${solr.solr.home}", "read,write,delete,readlink";
-  permission java.io.FilePermission "${solr.solr.home}${/}-", "read,write,delete,readlink";
-
-  permission java.io.FilePermission "${solr.data.home}", "read,write,delete,readlink";
-  permission java.io.FilePermission "${solr.data.home}${/}-", "read,write,delete,readlink";
-
-  permission java.io.FilePermission "${solr.default.confdir}", "read,write,delete,readlink";
-  permission java.io.FilePermission "${solr.default.confdir}${/}-", "read,write,delete,readlink";
-
-  permission java.io.FilePermission "${solr.log.dir}", "read,write,delete,readlink";
-  permission java.io.FilePermission "${solr.log.dir}${/}-", "read,write,delete,readlink";
-
-  permission java.io.FilePermission "${log4j.configurationFile}", "read,write,delete,readlink";
-
-  // expanded to a wildcard if set, allows all networking everywhere
-  permission java.net.SocketPermission "${solr.internal.network.permission}", "accept,listen,connect,resolve";
-};
-
-// Grant all permissions to Gradle test runner classes.
-
-grant codeBase "file:${gradle.lib.dir}${/}-" {
-  permission java.security.AllPermission;
-};
-
-grant codeBase "file:${gradle.worker.jar}" {
-  permission java.security.AllPermission;
-};
-
-grant {
-  // Allow reading gradle worker JAR.
-  permission java.io.FilePermission "${gradle.worker.jar}", "read";
-  // Allow reading from classpath JARs (resources).
-  permission java.io.FilePermission "${gradle.user.home}${/}-", "read";
-};
\ No newline at end of file
diff --git a/gradle/testing/profiling.gradle b/gradle/testing/profiling.gradle
index 01938f9..6655df7 100644
--- a/gradle/testing/profiling.gradle
+++ b/gradle/testing/profiling.gradle
@@ -19,29 +19,39 @@
 
 def recordings = files()
 
-if (Boolean.parseBoolean(propertyOrDefault("tests.profile", "false"))) {
-  allprojects {
-    tasks.withType(Test) {
-      jvmArgs("-XX:StartFlightRecording=dumponexit=true,maxsize=250M,settings=" + rootProject.file("gradle/testing/profiling.jfc"),
+allprojects {
+  plugins.withType(JavaPlugin) {
+    ext {
+      testOptions += [
+          [propName: 'tests.profile', value: false, description: "Enable java flight recorder profiling."]
+      ]
+    }
+
+    if (resolvedTestOption("tests.profile").toBoolean()) {
+      allprojects {
+        tasks.withType(Test) {
+          jvmArgs("-XX:StartFlightRecording=dumponexit=true,maxsize=250M,settings=" + rootProject.file("gradle/testing/profiling.jfc"),
               "-XX:+UnlockDiagnosticVMOptions",
               "-XX:+DebugNonSafepoints")
-      // delete any previous profile results
-      doFirst {
-        project.delete fileTree(dir: workingDir, include: '*.jfr')
+          // delete any previous profile results
+          doFirst {
+            project.delete fileTree(dir: workingDir, include: '*.jfr')
+          }
+          doLast {
+            recordings = recordings.plus fileTree(dir: workingDir, include: '*.jfr')
+          }
+        }
       }
-      doLast {
-        recordings = recordings.plus fileTree(dir: workingDir, include: '*.jfr')
-      }
-    }
-  }
 
-  gradle.buildFinished {
-    if (!recordings.isEmpty()) {
-      ProfileResults.printReport(recordings.getFiles().collect { it.toString() },
-                                 propertyOrDefault(ProfileResults.MODE_KEY, ProfileResults.MODE_DEFAULT) as String,
-                                 Integer.parseInt(propertyOrDefault(ProfileResults.STACKSIZE_KEY, ProfileResults.STACKSIZE_DEFAULT)),
-                                 Integer.parseInt(propertyOrDefault(ProfileResults.COUNT_KEY, ProfileResults.COUNT_DEFAULT)),
-                                 Boolean.parseBoolean(propertyOrDefault(ProfileResults.LINENUMBERS_KEY, ProfileResults.LINENUMBERS_DEFAULT)))
+      gradle.buildFinished {
+        if (!recordings.isEmpty()) {
+          ProfileResults.printReport(recordings.getFiles().collect { it.toString() },
+              propertyOrDefault(ProfileResults.MODE_KEY, ProfileResults.MODE_DEFAULT) as String,
+              Integer.parseInt(propertyOrDefault(ProfileResults.STACKSIZE_KEY, ProfileResults.STACKSIZE_DEFAULT)),
+              Integer.parseInt(propertyOrDefault(ProfileResults.COUNT_KEY, ProfileResults.COUNT_DEFAULT)),
+              Boolean.parseBoolean(propertyOrDefault(ProfileResults.LINENUMBERS_KEY, ProfileResults.LINENUMBERS_DEFAULT)))
+        }
+      }
     }
   }
-}
+}
\ No newline at end of file
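
With tests.profile registered as a regular test option, profiling is toggled per
invocation, e.g. 'gradlew -p lucene/core test -Ptests.profile=true': each Test task
starts Java Flight Recorder via -XX:StartFlightRecording, the resulting *.jfr files
are collected from the tasks' working directories, and a summary report is printed
when the build finishes.
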
diff --git a/gradle/testing/randomization.gradle b/gradle/testing/randomization.gradle
index 0f36bae..298bfe1 100644
--- a/gradle/testing/randomization.gradle
+++ b/gradle/testing/randomization.gradle
@@ -33,10 +33,13 @@
   }
 }
 
+def resources = scriptResources(buildscript)
+
 // Pick the "root" seed from which everything else is derived.
 configure(rootProject) {
   ext {
     rootSeed = propertyOrDefault('tests.seed', String.format("%08X", new Random().nextLong()))
+    rootSeedUserProvided = (propertyOrDefault('tests.seed', null) != null)
     rootSeedLong = SeedUtils.parseSeedChain(rootSeed)[0]
     projectSeedLong = rootSeedLong ^ project.path.hashCode()
   }
@@ -59,19 +62,18 @@
 allprojects {
   plugins.withType(JavaPlugin) {
     ext {
-      testOptions = [
+      testOptions += [
           // seed, repetition and amplification.
-          [propName: 'tests.seed', value: "random", description: "Sets the master randomization seed."],
-          [propName: 'tests.iters', value: null, description: "Duplicate (re-run) each test N times."],
+          [propName: 'tests.seed', value: { -> rootSeed }, description: "Sets the master randomization seed."],
+          [propName: 'tests.iters', value: null, description: "Duplicate (re-run) each test case N times."],
           [propName: 'tests.multiplier', value: 1, description: "Value multiplier for randomized tests."],
           [propName: 'tests.maxfailures', value: null, description: "Skip tests after a given number of failures."],
           [propName: 'tests.timeoutSuite', value: null, description: "Timeout (in millis) for an entire suite."],
           [propName: 'tests.failfast', value: "false", description: "Stop the build early on failure.", buildOnly: true],
           // asserts, debug output.
           [propName: 'tests.asserts', value: "true", description: "Enables or disables assertions mode."],
-          [propName: 'tests.verbose', value: false, description: "Emit verbose debug information from tests."],
           [propName: 'tests.infostream', value: false, description: "Enables or disables infostream logs."],
-          [propName: 'tests.leaveTemporary', value: null, description: "Leave temporary directories after tests complete."],
+          [propName: 'tests.leaveTemporary', value: false, description: "Leave temporary directories after tests complete."],
           [propName: 'tests.useSecurityManager', value: true, description: "Control security manager in tests.", buildOnly: true],
           // component randomization
           [propName: 'tests.codec', value: "random", description: "Sets the codec tests should run with."],
@@ -87,7 +89,12 @@
           [propName: 'tests.weekly', value: false, description: "Enables or disables @Weekly tests."],
           [propName: 'tests.monster', value: false, description: "Enables or disables @Monster tests."],
           [propName: 'tests.awaitsfix', value: null, description: "Enables or disables @AwaitsFix tests."],
-          [propName: 'tests.file.encoding', value: "random", description: "Sets the default file.encoding on test JVM.", buildOnly: true],
+          [propName: 'tests.badapples', value: null, description: "Enables or disables @BadApple tests."],
+          [propName: 'tests.file.encoding',
+           value: { ->
+             RandomPicks.randomFrom(new Random(projectSeedLong), ["US-ASCII", "ISO-8859-1", "UTF-8"])
+           },
+           description: "Sets the default file.encoding on test JVM.", buildOnly: true],
           // test data
           [propName: 'tests.linedocsfile', value: 'europarl.lines.txt.gz', description: "Test data file path."],
           // miscellaneous; some of them very weird.
@@ -104,7 +111,11 @@
     ext {
       testOptions += [
           [propName: 'tests.luceneMatchVersion', value: baseVersion, description: "Base Lucene version."],
-          [propName: 'common-solr.dir', value: file("${commonDir}/../solr").path, description: "Solr base dir."],
+          [propName: 'common-solr.dir',
+           value: { -> file("${commonDir}/../solr").path },
+           description: "Solr base dir.",
+           includeInReproLine: false
+          ],
           [propName: 'solr.directoryFactory', value: "org.apache.solr.core.MockDirectoryFactory", description: "Solr directory factory."],
           [propName: 'tests.src.home', value: null, description: "See SOLR-14023."],
           [propName: 'solr.tests.use.numeric.points', value: null, description: "Point implementation to use (true=numerics, false=trie)."],
@@ -120,23 +131,15 @@
       ext.testOptionsResolved = testOptions.findAll { opt ->
         propertyOrDefault(opt.propName, opt.value) != null
       }.collectEntries { opt ->
-        [(opt.propName): Objects.toString(propertyOrDefault(opt.propName, opt.value))]
-      }
-
-      // These are not official options or dynamically seed-derived options.
-      if (testOptionsResolved['tests.file.encoding'] == 'random') {
-        testOptionsResolved['tests.file.encoding'] = RandomPicks.randomFrom(
-            new Random(projectSeedLong), [
-                "US-ASCII", "ISO-8859-1", "UTF-8"
-            ])
-      }
-
-      if (testOptionsResolved['tests.seed'] == 'random') {
-        testOptionsResolved['tests.seed'] = rootSeed
+        [(opt.propName): Objects.toString(resolvedTestOption(opt.propName))]
       }
 
       // Compute the "reproduce with" string.
       ext.testOptionsForReproduceLine = testOptions.findAll { opt ->
+        if (opt["includeInReproLine"] == false) {
+          return false
+        }
+
         def defValue = Objects.toString(opt.value, null)
         def value = testOptionsResolved[opt.propName]
         return defValue != value
@@ -151,14 +154,19 @@
          "tests.leavetmpdir",
          "solr.test.leavetmpdir",
       ].find { prop ->
-        Boolean.parseBoolean(propertyOrDefault(prop, "false"))
+        def v = Boolean.parseBoolean(propertyOrDefault(prop, "false"))
+        if (v) {
+          logger.lifecycle("Update your code to use the official 'tests.leaveTemporary' option (you used '${prop}').")
+        }
+        return v
       }) {
         testOptionsResolved['tests.leaveTemporary'] = "true"
       }
 
       // Append resolved test properties to the test task.
-      test {
-        // TODO: we could remove opts with "buildOnly: true" (?)
+      tasks.withType(Test) { task ->
+        // TODO: we could remove some options that are only relevant to the build environment
+        // and not the test JVM itself.
         systemProperties testOptionsResolved
 
         if (Boolean.parseBoolean(testOptionsResolved['tests.asserts'])) {
@@ -176,14 +184,14 @@
         if (Boolean.parseBoolean(testOptionsResolved["tests.useSecurityManager"])) {
           if (project.path == ":lucene:replicator") {
             systemProperty 'java.security.manager', "org.apache.lucene.util.TestSecurityManager"
-            systemProperty 'java.security.policy', rootProject.file("lucene/tools/junit4/replicator-tests.policy")
+            systemProperty 'java.security.policy', file("${resources}/policies/replicator-tests.policy")
           } else if (project.path.startsWith(":lucene")) {
             systemProperty 'java.security.manager', "org.apache.lucene.util.TestSecurityManager"
-            systemProperty 'java.security.policy', rootProject.file("lucene/tools/junit4/tests.policy")
+            systemProperty 'java.security.policy', file("${resources}/policies/tests.policy")
           } else {
             systemProperty 'common-solr.dir', commonSolrDir
             systemProperty 'java.security.manager', "org.apache.lucene.util.TestSecurityManager"
-            systemProperty 'java.security.policy', rootProject.file("gradle/testing/policies/solr-tests.policy")
+            systemProperty 'java.security.policy', file("${resources}/policies/solr-tests.policy")
           }
 
           systemProperty 'common.dir', commonDir
@@ -214,32 +222,24 @@
         println "Test options for project ${project.path} and seed \"${rootSeed}\":"
 
         testOptions.sort { a, b -> a.propName.compareTo(b.propName) }.each { opt ->
-          def defValue = Objects.toString(opt.value, null)
+          def defValue
+          def computedValue = false
+          if (opt.value instanceof Closure) {
+            defValue = Objects.toString(opt.value(), null)
+            computedValue = true
+          } else {
+            defValue = Objects.toString(opt.value, null)
+          }
+
           def value = testOptionsResolved[opt.propName]
           println String.format(Locale.ROOT,
-              "%s%-23s = %-8s # %s",
-              (defValue != value ? "! " : "  "),
+              "%s%-24s = %-8s # %s",
+              (defValue != value ? "! " : computedValue ? "C " : "  "),
               opt.propName,
               value,
-              (defValue != value ? "(!= default: ${defValue}) " : "") + opt.description)
+              (computedValue ? "(!= default: computed) " : (defValue != value ? "(!= default: ${defValue}) " : "")) + opt.description)
         }
       }
     }
   }
 }
-
-// Disable assertions for HashMap due to: LUCENE-8991 / JDK-8205399
-def vmName = System.getProperty("java.vm.name")
-def spec = System.getProperty("java.specification.version")
-if (vmName =~ /(?i)(hotspot|openjdk|jrockit)/ &&
-    spec =~ /^(1\.8|9|10|11)$/ &&
-    !Boolean.parseBoolean(propertyOrDefault('tests.asserts.hashmap', 'false'))) {
-  logger.debug("Enabling HashMap assertions.")
-  allprojects {
-    plugins.withType(JavaPlugin) {
-      test {
-        jvmArgs("-da:java.util.HashMap")
-      }
-    }
-  }
-}
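
As a reading aid for the option plumbing above: each entry is a map of
propName/value/description, and a 'value' given as a Groovy Closure is
resolved lazily (such options are flagged "C " in the printed summary).
A minimal hypothetical registration following the same convention:

  allprojects {
    plugins.withType(JavaPlugin) {
      ext {
        testOptions += [
            // plain default (hypothetical option name)
            [propName: 'tests.myflag', value: false, description: "Example flag."],
            // lazily computed default, derived from the per-project seed
            [propName: 'tests.myseedhex',
             value: { -> String.format("%08X", projectSeedLong) },
             description: "Example computed default."]
        ]
      }
    }
  }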
diff --git a/lucene/tools/junit4/replicator-tests.policy b/gradle/testing/randomization/policies/replicator-tests.policy
similarity index 100%
rename from lucene/tools/junit4/replicator-tests.policy
rename to gradle/testing/randomization/policies/replicator-tests.policy
diff --git a/gradle/testing/randomization/policies/solr-tests.policy b/gradle/testing/randomization/policies/solr-tests.policy
new file mode 100644
index 0000000..35b3e84
--- /dev/null
+++ b/gradle/testing/randomization/policies/solr-tests.policy
@@ -0,0 +1,217 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+// Policy file for solr. Please keep minimal and avoid wildcards.
+
+// permissions needed for tests to pass, based on properties set by the build system
+// NOTE: if the property is not set, the permission entry is ignored.
+grant {
+  // 3rd party jar resources (where symlinks are not supported), test-files/ resources
+  permission java.io.FilePermission "${common.dir}${/}-", "read";
+  permission java.io.FilePermission "${common.dir}${/}..${/}solr${/}-", "read";
+
+  // system jar resources
+  permission java.io.FilePermission "${java.home}${/}-", "read";
+
+  // Test launchers (randomizedtesting, etc.)
+  permission java.io.FilePermission "${java.io.tmpdir}", "read,write";
+  permission java.io.FilePermission "${java.io.tmpdir}${/}-", "read,write,delete";
+
+  permission java.io.FilePermission "${tests.linedocsfile}", "read";
+  // DirectoryFactoryTest messes with these (wtf?)
+  permission java.io.FilePermission "/tmp/inst1/conf/solrcore.properties", "read";
+  permission java.io.FilePermission "/path/to/myinst/conf/solrcore.properties", "read";
+  // TestConfigSets messes with these (wtf?)
+  permission java.io.FilePermission "/path/to/solr/home/lib", "read";
+
+  permission java.nio.file.LinkPermission "hard";
+
+  // all possibilities of accepting/binding/connections on localhost with ports >=1024:
+  permission java.net.SocketPermission "localhost:1024-", "accept,listen,connect,resolve";
+  permission java.net.SocketPermission "127.0.0.1:1024-", "accept,listen,connect,resolve";
+  permission java.net.SocketPermission "[::1]:1024-", "accept,listen,connect,resolve";
+  // "dead hosts", we try to keep it fast
+  permission java.net.SocketPermission "[::1]:4", "connect,resolve";
+  permission java.net.SocketPermission "[::1]:6", "connect,resolve";
+  permission java.net.SocketPermission "[::1]:8", "connect,resolve";
+  
+  // Basic permissions needed for Lucene to work:
+  permission java.util.PropertyPermission "*", "read,write";
+
+  // needed by randomizedtesting runner to identify test methods.
+  permission java.lang.reflect.ReflectPermission "suppressAccessChecks";
+  permission java.lang.RuntimePermission "accessDeclaredMembers";
+  // needed by certain tests to redirect sysout/syserr:
+  permission java.lang.RuntimePermission "setIO";
+  // needed by randomized runner to catch failures from other threads:
+  permission java.lang.RuntimePermission "setDefaultUncaughtExceptionHandler";
+  // needed by randomized runner getTopThreadGroup:
+  permission java.lang.RuntimePermission "modifyThreadGroup";
+  // needed by tests e.g. shutting down executors:
+  permission java.lang.RuntimePermission "modifyThread";
+  // needed for tons of test hacks etc
+  permission java.lang.RuntimePermission "getStackTrace";
+  // needed for mock filesystems in tests
+  permission java.lang.RuntimePermission "fileSystemProvider";
+  // needed for test of IOUtils.spins (maybe it can be avoided)
+  permission java.lang.RuntimePermission "getFileStoreAttributes";
+  // analyzers/uima: needed by lucene expressions' JavascriptCompiler
+  permission java.lang.RuntimePermission "createClassLoader";
+  // needed to test unmap hack on platforms that support it
+  permission java.lang.RuntimePermission "accessClassInPackage.sun.misc";
+  // needed by jacoco to dump coverage
+  permission java.lang.RuntimePermission "shutdownHooks";
+  // needed by org.apache.logging.log4j
+  permission java.lang.RuntimePermission "getenv.*";
+  permission java.lang.RuntimePermission "getClassLoader";
+  permission java.lang.RuntimePermission "setContextClassLoader";
+  permission java.lang.RuntimePermission "getStackWalkerWithClassReference";
+  // needed by bytebuddy
+  permission java.lang.RuntimePermission "defineClass";
+  // needed by mockito
+  permission java.lang.RuntimePermission "accessClassInPackage.sun.reflect";
+  permission java.lang.RuntimePermission "reflectionFactoryAccess";
+  // needed by SolrResourceLoader
+  permission java.lang.RuntimePermission "closeClassLoader";
+  // needed by HttpSolrClient
+  permission java.lang.RuntimePermission "getFileSystemAttributes";
+  // needed by hadoop auth (TODO: there is a cleaner way to handle this)
+  permission java.lang.RuntimePermission "loadLibrary.jaas";
+  permission java.lang.RuntimePermission "loadLibrary.jaas_unix";
+  permission java.lang.RuntimePermission "loadLibrary.jaas_nt";
+  // needed by hadoop common RawLocalFileSystem for java nio getOwner
+  permission java.lang.RuntimePermission "accessUserInformation";
+  // needed by hadoop hdfs
+  permission java.lang.RuntimePermission "readFileDescriptor";
+  permission java.lang.RuntimePermission "writeFileDescriptor";
+  // needed by hadoop http
+  permission java.lang.RuntimePermission "getProtectionDomain";
+
+  // These two *have* to be spelled out as separate permissions
+  permission java.lang.management.ManagementPermission "control";
+  permission java.lang.management.ManagementPermission "monitor";
+
+  // needed by hadoop htrace
+  permission java.net.NetPermission "getNetworkInformation";
+
+  // needed by DIH - possibly even after DIH is a package
+  permission java.sql.SQLPermission "deregisterDriver";
+
+  permission java.util.logging.LoggingPermission "control";
+
+  // needed by solr mbeans feature/tests
+  // TODO: can we remove wildcard for class names/members?
+  permission javax.management.MBeanPermission "*", "getAttribute";
+  permission javax.management.MBeanPermission "*", "getMBeanInfo";
+  permission javax.management.MBeanPermission "*", "queryMBeans";
+  permission javax.management.MBeanPermission "*", "queryNames";
+  permission javax.management.MBeanPermission "*", "registerMBean";
+  permission javax.management.MBeanPermission "*", "unregisterMBean";
+  permission javax.management.MBeanServerPermission "createMBeanServer";
+  permission javax.management.MBeanServerPermission "findMBeanServer";
+  permission javax.management.MBeanServerPermission "releaseMBeanServer";
+  permission javax.management.MBeanTrustPermission "register";
+
+  // needed by hadoop auth
+  permission javax.security.auth.AuthPermission "getSubject";
+  permission javax.security.auth.AuthPermission "modifyPrincipals";
+  permission javax.security.auth.AuthPermission "doAs";
+  permission javax.security.auth.AuthPermission "getLoginConfiguration";
+  permission javax.security.auth.AuthPermission "setLoginConfiguration";
+  permission javax.security.auth.AuthPermission "modifyPrivateCredentials";
+  permission javax.security.auth.AuthPermission "modifyPublicCredentials";
+  permission javax.security.auth.PrivateCredentialPermission "org.apache.hadoop.security.Credentials * \"*\"", "read";
+
+  // needed by hadoop security
+  permission java.security.SecurityPermission "putProviderProperty.SaslPlainServer";
+  permission java.security.SecurityPermission "insertProvider";
+
+  permission javax.xml.bind.JAXBPermission "setDatatypeConverter";
+
+  // SSL related properties for Solr tests
+  permission javax.net.ssl.SSLPermission "setDefaultSSLContext";
+
+  // SASL/Kerberos related properties for Solr tests
+  permission javax.security.auth.PrivateCredentialPermission "javax.security.auth.kerberos.KerberosTicket * \"*\"", "read";
+  
+  // may only be necessary with Java 7?
+  permission javax.security.auth.PrivateCredentialPermission "javax.security.auth.kerberos.KeyTab * \"*\"", "read";
+  permission javax.security.auth.PrivateCredentialPermission "sun.security.jgss.krb5.Krb5Util$KeysFromKeyTab * \"*\"", "read";
+  
+  permission javax.security.auth.kerberos.ServicePermission "*", "initiate";
+  permission javax.security.auth.kerberos.ServicePermission "*", "accept";
+  permission javax.security.auth.kerberos.DelegationPermission "\"*\" \"krbtgt/EXAMPLE.COM@EXAMPLE.COM\"";
+  
+  // java 8 accessibility requires this perm - should not be needed after 8, I believe (rrd4j is the root reason we hit an accessibility code path)
+  permission java.awt.AWTPermission "*";
+
+  // used by solr to create sandboxes (e.g. script execution)
+  permission java.security.SecurityPermission "createAccessControlContext";
+};
+
+// additional permissions based on system properties set by /bin/solr
+// NOTE: if the property is not set, the permission entry is ignored.
+grant {
+  permission java.io.FilePermission "${hadoop.security.credential.provider.path}", "read,write,delete,readlink";
+  permission java.io.FilePermission "${hadoop.security.credential.provider.path}${/}-", "read,write,delete,readlink";
+
+  permission java.io.FilePermission "${solr.jetty.keystore}", "read,write,delete,readlink";
+  permission java.io.FilePermission "${solr.jetty.keystore}${/}-", "read,write,delete,readlink";
+
+  permission java.io.FilePermission "${solr.jetty.truststore}", "read,write,delete,readlink";
+  permission java.io.FilePermission "${solr.jetty.truststore}${/}-", "read,write,delete,readlink";
+
+  permission java.io.FilePermission "${solr.install.dir}", "read,write,delete,readlink";
+  permission java.io.FilePermission "${solr.install.dir}${/}-", "read,write,delete,readlink";
+
+  permission java.io.FilePermission "${jetty.home}", "read,write,delete,readlink";
+  permission java.io.FilePermission "${jetty.home}${/}-", "read,write,delete,readlink";
+
+  permission java.io.FilePermission "${solr.solr.home}", "read,write,delete,readlink";
+  permission java.io.FilePermission "${solr.solr.home}${/}-", "read,write,delete,readlink";
+
+  permission java.io.FilePermission "${solr.data.home}", "read,write,delete,readlink";
+  permission java.io.FilePermission "${solr.data.home}${/}-", "read,write,delete,readlink";
+
+  permission java.io.FilePermission "${solr.default.confdir}", "read,write,delete,readlink";
+  permission java.io.FilePermission "${solr.default.confdir}${/}-", "read,write,delete,readlink";
+
+  permission java.io.FilePermission "${solr.log.dir}", "read,write,delete,readlink";
+  permission java.io.FilePermission "${solr.log.dir}${/}-", "read,write,delete,readlink";
+
+  permission java.io.FilePermission "${log4j.configurationFile}", "read,write,delete,readlink";
+
+  // expanded to a wildcard if set, allows all networking everywhere
+  permission java.net.SocketPermission "${solr.internal.network.permission}", "accept,listen,connect,resolve";
+};
+
+// Grant all permissions to Gradle test runner classes.
+
+grant codeBase "file:${gradle.lib.dir}${/}-" {
+  permission java.security.AllPermission;
+};
+
+grant codeBase "file:${gradle.worker.jar}" {
+  permission java.security.AllPermission;
+};
+
+grant {
+  // Allow reading gradle worker JAR.
+  permission java.io.FilePermission "${gradle.worker.jar}", "read";
+  // Allow reading from classpath JARs (resources).
+  permission java.io.FilePermission "${gradle.user.home}${/}-", "read";
+};
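
A note on the idiom used throughout this policy: path properties such as
${solr.data.home} are expanded from system properties, and a permission entry
whose property is unset is ignored (as the comments above state). A minimal
hypothetical grant in the same style:

  grant {
    // read-only access below a build-provided directory; the entry is
    // ignored entirely when my.custom.dir is not set (hypothetical property).
    permission java.io.FilePermission "${my.custom.dir}${/}-", "read";
  };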
diff --git a/lucene/tools/junit4/tests.policy b/gradle/testing/randomization/policies/tests.policy
similarity index 100%
rename from lucene/tools/junit4/tests.policy
rename to gradle/testing/randomization/policies/tests.policy
diff --git a/gradle/validation/check-broken-links.gradle b/gradle/validation/check-broken-links.gradle
index bebfe45..26e9e8a 100644
--- a/gradle/validation/check-broken-links.gradle
+++ b/gradle/validation/check-broken-links.gradle
@@ -16,7 +16,6 @@
  */
 
 configure(rootProject) {
-
   task checkBrokenLinks {
     group 'Verification'
     description 'Check broken links in the entire documentation'
@@ -24,15 +23,11 @@
     dependsOn ':lucene:checkBrokenLinks'
     dependsOn ':solr:checkBrokenLinks'
   }
-
 }
+
 configure(subprojects.findAll { it.path in [':lucene', ':solr'] }) {
-
   task checkBrokenLinks(type: CheckBrokenLinksTask, 'dependsOn': 'documentation')
-
-  // TODO: uncomment this line after fixing all broken links.
-  // (we can't fix the cross-project links until ant build is disabled.)
-  // check.dependsOn checkBrokenLinks
+  check.dependsOn checkBrokenLinks
 }
 
 class CheckBrokenLinksTask extends DefaultTask {
@@ -52,7 +47,7 @@
     def result
     outputFile.withOutputStream { output ->
       result = project.exec {
-        executable "python3"
+        executable project.externalTool("python3")
         ignoreExitValue = true
         standardOutput = output
         errorOutput = output
diff --git a/gradle/validation/config-file-sanity.gradle b/gradle/validation/config-file-sanity.gradle
index e7ae048..30f6391 100644
--- a/gradle/validation/config-file-sanity.gradle
+++ b/gradle/validation/config-file-sanity.gradle
@@ -20,7 +20,7 @@
 configure(project(":solr")) {
   task validateConfigFileSanity() {
     doFirst {
-      def matchVersion = project(":solr:core").testOptionsResolved['tests.luceneMatchVersion']
+      def matchVersion = project(":solr:core").resolvedTestOption('tests.luceneMatchVersion')
       if (!matchVersion) {
         throw new GradleException("tests.luceneMatchVersion not defined?")
       }
diff --git a/gradle/validation/ecj-lint.gradle b/gradle/validation/ecj-lint.gradle
index 5ae13c1..3dcb2c0 100644
--- a/gradle/validation/ecj-lint.gradle
+++ b/gradle/validation/ecj-lint.gradle
@@ -27,6 +27,8 @@
   }
 }
 
+def resources = scriptResources(buildscript)
+
 allprojects {
   plugins.withType(JavaPlugin) {
     // Create a [sourceSetName]EcjLint task for each source set
@@ -69,7 +71,7 @@
         args += [ "-proc:none" ]
         args += [ "-nowarn" ]
         args += [ "-enableJavadoc" ]
-        args += [ "-properties", project(":lucene").file("tools/javadoc/ecj.javadocs.prefs").absolutePath ]
+        args += [ "-properties", file("${resources}/ecj.javadocs.prefs").absolutePath ]
 
         doFirst {
           tmpDst.mkdirs()
diff --git a/lucene/tools/javadoc/ecj.javadocs.prefs b/gradle/validation/ecj-lint/ecj.javadocs.prefs
similarity index 100%
rename from lucene/tools/javadoc/ecj.javadocs.prefs
rename to gradle/validation/ecj-lint/ecj.javadocs.prefs
diff --git a/gradle/validation/forbidden-apis.gradle b/gradle/validation/forbidden-apis.gradle
index 33c1d68..4767d0f 100644
--- a/gradle/validation/forbidden-apis.gradle
+++ b/gradle/validation/forbidden-apis.gradle
@@ -1,4 +1,4 @@
-/*
+  /*
  * Licensed to the Apache Software Foundation (ASF) under one or more
  * contributor license agreements.  See the NOTICE file distributed with
  * this work for additional information regarding copyright ownership.
@@ -18,6 +18,8 @@
 // This configures application of forbidden API rules
 // via https://github.com/policeman-tools/forbidden-apis
 
+def resources = scriptResources(buildscript)
+
 // Only apply forbidden-apis to java projects.
 allprojects { prj ->
   plugins.withId("java", {
@@ -26,8 +28,9 @@
     // This helper method appends signature files based on a set of true
     // dependencies from a given configuration.
     def dynamicSignatures = { configuration, suffix ->
-      def deps = configuration.resolvedConfiguration.resolvedArtifacts
+      def resolvedMods = configuration.resolvedConfiguration.resolvedArtifacts
           .collect { a -> a.moduleVersion.id }
+      def deps = resolvedMods
           .collect { id -> [
               "${id.group}.${id.name}.all.txt",
               "${id.group}.${id.name}.${suffix}.txt",
@@ -38,7 +41,7 @@
       deps += ["defaults.all.txt", "defaults.${suffix}.txt"]
 
       deps.each { sig ->
-        def signaturesFile = rootProject.file("gradle/validation/forbidden-apis/${sig}")
+        def signaturesFile = file("${resources}/${sig}")
         if (signaturesFile.exists()) {
           logger.info("Signature file applied: ${sig}")
           signaturesFiles += files(signaturesFile)
@@ -46,6 +49,11 @@
           logger.debug("Signature file omitted (does not exist): ${sig}")
         }
       }
+      
+      // commons-io is special: forbiddenapis has a versioned bundledSignature.
+      bundledSignatures += resolvedMods
+        .findAll { id -> id.group == 'commons-io' && id.name == 'commons-io' }
+        .collect { id -> "${id.name}-unsafe-${id.version}" as String }
     }
 
     // Configure defaults for sourceSets.main
@@ -73,7 +81,7 @@
       ]
 
       signaturesFiles = files(
-          rootProject.file("gradle/validation/forbidden-apis/defaults.tests.txt")
+          file("${resources}/defaults.tests.txt")
       )
 
       suppressAnnotations += [
@@ -127,7 +135,7 @@
     // This is the simplest workaround possible: just point at all the rule files and indicate
     // them as inputs. This way if a rule is modified, checks will be reapplied.
     configure([forbiddenApisMain, forbiddenApisTest]) { task ->
-      task.inputs.dir(rootProject.file("gradle/validation/forbidden-apis"))
+      task.inputs.dir(file(resources))
     }
   })
 }
\ No newline at end of file
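
To illustrate the file-naming convention probed by the dynamicSignatures
helper above (hypothetical dependency; the hunk elides further name variants):
for a resolved module com.example:foo with the "tests" suffix (the suffix
apparently used for test source sets, judging by defaults.tests.txt), at least
these candidates would be looked up in the script's resources folder:

  com.example.foo.all.txt
  com.example.foo.tests.txt
  defaults.all.txt
  defaults.tests.txt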
diff --git a/gradle/validation/missing-docs-check.gradle b/gradle/validation/missing-docs-check.gradle
index e465997..c0ee56e 100644
--- a/gradle/validation/missing-docs-check.gradle
+++ b/gradle/validation/missing-docs-check.gradle
@@ -14,6 +14,18 @@
  * See the License for the specific language governing permissions and
  * limitations under the License.
  */
+ 
+def javaVersionCheck = {
+  def maxSupported = JavaVersion.VERSION_14
+  def runtimeVersion = runtimeJava.javaVersion
+  if (runtimeVersion > maxSupported) {
+    logger.warn("Skipping task because runtime Java version ${runtimeVersion} is " +
+        "higher than Java ${maxSupported}.")
+    return false
+  } else {
+    return true
+  }
+}
 
 allprojects {
   plugins.withType(JavaPlugin) {
@@ -37,6 +49,8 @@
     task checkMissingDocsDefault(type: CheckMissingDocsTask, dependsOn: 'renderJavadoc') {
       dirs += [ project.javadoc.destinationDir ]
 
+      onlyIf javaVersionCheck
+
       // TODO: add missing docs for all classes and bump this to level=class
       if (project.path.startsWith(":solr")) {
         level = 'package'
@@ -60,6 +74,7 @@
   // Defer until java plugin has been applied, otherwise we can't resolve project.javadoc.
   plugins.withType(JavaPlugin) {
     task checkMissingDocsMethod(type: CheckMissingDocsTask, dependsOn: 'renderJavadoc') {
+      onlyIf javaVersionCheck
       level = 'method'
     }
 
@@ -72,7 +87,6 @@
         "org/apache/lucene/index",
         "org/apache/lucene/codecs"
     ].collect { path -> file("${project.javadoc.destinationDir}/${path}") }
-
     checkMissingDocs {
       dependsOn checkMissingDocsMethod
     }
@@ -89,7 +103,7 @@
   def checkMissingJavadocs(File dir, String level) {
     def output = new ByteArrayOutputStream()
     def result = project.exec {
-      executable "python3"
+      executable project.externalTool("python3")
       ignoreExitValue = true
       standardOutput = output
       errorOutput = output
diff --git a/gradle/validation/owasp-dependency-check.gradle b/gradle/validation/owasp-dependency-check.gradle
index 4d300b8..d58fd8b 100644
--- a/gradle/validation/owasp-dependency-check.gradle
+++ b/gradle/validation/owasp-dependency-check.gradle
@@ -20,13 +20,15 @@
 
 // If -Pvalidation.owasp=true is set the validation will also run as part of the check task.
 
+def resources = scriptResources(buildscript)
+
 configure(rootProject) {
   dependencyCheck {
     failBuildOnCVSS = propertyOrDefault("validation.owasp.threshold", 7) as Integer
     formats = ['HTML', 'JSON']
     skipProjects = [':solr:solr-ref-guide']
     skipConfigurations = ['unifiedClasspath']
-    suppressionFile = rootProject.file('gradle/validation/owasp-dependency-check/exclusions.xml')
+    suppressionFile = file("${resources}/exclusions.xml")
   }
 
   task owasp() {
diff --git a/gradle/validation/owasp-dependency-check/exclusions.xml b/gradle/validation/owasp-dependency-check/exclusions.xml
index d6de0e4..0a77b99 100644
--- a/gradle/validation/owasp-dependency-check/exclusions.xml
+++ b/gradle/validation/owasp-dependency-check/exclusions.xml
@@ -48,30 +48,6 @@
   </suppress>
   <suppress>
     <notes><![CDATA[
-   file name: derby-10.9.1.0.jar
-   Only used in tests and dih-example
-   ]]></notes>
-    <packageUrl regex="true">^pkg:maven/org\.apache\.derby/derby@.*$</packageUrl>
-    <cpe>cpe:/a:apache:derby</cpe>
-  </suppress>
-  <suppress>
-    <notes><![CDATA[
-   file name: derby-10.9.1.0.jar
-   Only used in tests and dih-example
-   ]]></notes>
-    <packageUrl regex="true">^pkg:maven/org\.apache\.derby/derby@.*$</packageUrl>
-    <vulnerabilityName>CVE-2015-1832</vulnerabilityName>
-  </suppress>
-  <suppress>
-    <notes><![CDATA[
-   file name: derby-10.9.1.0.jar
-   Only used in tests and dih-example
-   ]]></notes>
-    <packageUrl regex="true">^pkg:maven/org\.apache\.derby/derby@.*$</packageUrl>
-    <vulnerabilityName>CVE-2018-1313</vulnerabilityName>
-  </suppress>
-  <suppress>
-    <notes><![CDATA[
    file name: carrot2-guava-18.0.jar
    Only used with clustering engine, and the risk is DOS attack
    ]]></notes>
diff --git a/gradle/validation/validate-source-patterns.gradle b/gradle/validation/validate-source-patterns.gradle
index 53630ed..827f1d3 100644
--- a/gradle/validation/validate-source-patterns.gradle
+++ b/gradle/validation/validate-source-patterns.gradle
@@ -15,29 +15,237 @@
  * limitations under the License.
  */
 
+import org.apache.rat.Defaults;
+import org.apache.rat.document.impl.FileDocument;
+import org.apache.rat.api.MetaData;
 
-// Equivalent of ant's "validate-source-patterns".
-// This should be eventually rewritten in plain gradle. For now, delegate to
-// the ant/groovy script we already have.
-
-configure(rootProject) {
-  configurations {
-    checkSourceDeps
+buildscript {
+  repositories {
+    mavenCentral()
   }
 
   dependencies {
-    checkSourceDeps "org.codehaus.groovy:groovy-all:2.4.17"
-    checkSourceDeps "org.apache.rat:apache-rat:${scriptDepVersions['apache-rat']}"
+    classpath "org.apache.rat:apache-rat:${scriptDepVersions['apache-rat']}"
   }
+}
 
-  task validateSourcePatterns() {
-    doFirst {
-      ant.taskdef(
-          name: "groovy",
-          classname: "org.codehaus.groovy.ant.Groovy",
-          classpath: configurations.checkSourceDeps.asPath)
-
-      ant.groovy(src: project(":lucene").file("tools/src/groovy/check-source-patterns.groovy"))
+configure(rootProject) {
+  task("validateSourcePatterns", type: ValidateSourcePatternsTask) {
+    group = 'Verification'
+    description = 'Validate Source Patterns'
+    
+    sourceFiles = project.fileTree(project.rootDir) {
+      [
+        'java', 'jflex', 'py', 'pl', 'g4', 'jj', 'html', 'js',
+        'css', 'xml', 'xsl', 'vm', 'sh', 'cmd', 'bat', 'policy',
+        'properties', 'mdtext', 'groovy', 'gradle',
+        'template', 'adoc', 'json',
+      ].each{
+        include "lucene/**/*.${it}"
+        include "solr/**/*.${it}"
+        include "dev-tools/**/*.${it}"
+        include "gradle/**/*.${it}"
+        include "*.${it}"
+      }
+      // TODO: For now we don't scan txt / md files, so we
+      // check licenses in top-level folders separately:
+      include '*.txt'
+      include '*/*.txt'
+      include '*.md'
+      include '*/*.md'
+      // excludes:
+      exclude '**/build/**'
+      exclude '**/dist/**'
+      exclude 'lucene/benchmark/work/**'
+      exclude 'lucene/benchmark/temp/**'
+      exclude '**/CheckLoggingConfiguration.java'
+      exclude 'solr/core/src/test/org/apache/hadoop/**'
+      exclude '**/validate-source-patterns.gradle' // ourselves :-)
     }
   }
-}
\ No newline at end of file
+}
+
+class ValidateSourcePatternsTask extends DefaultTask {
+
+  ValidateSourcePatternsTask() {
+    // this task has no outputs, so it's always up-to-date (if inputs don't change).
+    outputs.upToDateWhen { true }
+  }
+
+  @InputFiles
+  ConfigurableFileTree sourceFiles
+  
+  @TaskAction
+  public void check() {
+    def invalidPatterns = [
+      (~$/@author\b/$) : '@author javadoc tag',
+      (~$/(?i)\bno(n|)commit\b/$) : 'nocommit',
+      (~$/\bTOOD:/$) : 'TOOD instead of TODO',
+      (~$/\t/$) : 'tabs instead spaces',
+      (~$/\Q/**\E((?:\s)|(?:\*))*\Q{@inheritDoc}\E((?:\s)|(?:\*))*\Q*/\E/$) : '{@inheritDoc} on its own is unnecessary',
+      (~$/\$$(?:LastChanged)?Date\b/$) : 'svn keyword',
+      (~$/\$$(?:(?:LastChanged)?Revision|Rev)\b/$) : 'svn keyword',
+      (~$/\$$(?:LastChangedBy|Author)\b/$) : 'svn keyword',
+      (~$/\$$(?:Head)?URL\b/$) : 'svn keyword',
+      (~$/\$$Id\b/$) : 'svn keyword',
+      (~$/\$$Header\b/$) : 'svn keyword',
+      (~$/\$$Source\b/$) : 'svn keyword',
+      (~$/^\uFEFF/$) : 'UTF-8 byte order mark',
+      (~$/import java\.lang\.\w+;/$) : 'java.lang import is unnecessary'
+    ]
+
+    // Python and others merrily use var declarations; this is a problem _only_ in Java, at least for 8x where we're forbidding var declarations
+    def invalidJavaOnlyPatterns = [
+      (~$/\n\s*var\s+.*=.*<>.*/$) : 'Diamond operators should not be used with var',
+      (~$/\n\s*var\s+/$) : 'var is not allowed until we stop development on the 8x code line'
+    ]
+
+    def baseDirLen = sourceFiles.dir.toString().length() + 1;
+
+    def found = 0;
+    def violations = new TreeSet();
+    def reportViolation = { f, name ->
+      logger.error('{}: {}', name, f.toString().substring(baseDirLen).replace(File.separatorChar, (char)'/'));
+      violations.add(name);
+      found++;
+    }
+
+    def javadocsPattern = ~$/(?sm)^\Q/**\E(.*?)\Q*/\E/$;
+    def javaCommentPattern = ~$/(?sm)^\Q/*\E(.*?)\Q*/\E/$;
+    def xmlCommentPattern = ~$/(?sm)\Q<!--\E(.*?)\Q-->\E/$;
+    def lineSplitter = ~$/[\r\n]+/$;
+    def singleLineSplitter = ~$/\r?\n/$;
+    def licenseMatcher = Defaults.createDefaultMatcher();
+    def validLoggerPattern = ~$/(?s)\b(private\s|static\s|final\s){3}+\s*Logger\s+\p{javaJavaIdentifierStart}+\s+=\s+\QLoggerFactory.getLogger(MethodHandles.lookup().lookupClass());\E/$;
+    def validLoggerNamePattern = ~$/(?s)\b(private\s|static\s|final\s){3}+\s*Logger\s+log+\s+=\s+\QLoggerFactory.getLogger(MethodHandles.lookup().lookupClass());\E/$;
+    def packagePattern = ~$/(?m)^\s*package\s+org\.apache.*;/$;
+    def xmlTagPattern = ~$/(?m)\s*<[a-zA-Z].*/$;
+    def sourceHeaderPattern = ~$/\[source\b.*/$;
+    def blockBoundaryPattern = ~$/----\s*/$;
+    def blockTitlePattern = ~$/\..*/$;
+    def unescapedSymbolPattern = ~$/(?<=[^\\]|^)([-=]>|<[-=])/$; // SOLR-10883
+    def extendsLuceneTestCasePattern = ~$/public.*?class.*?extends.*?LuceneTestCase[^\n]*?\n/$;
+    def validSPINameJavadocTag = ~$/(?s)\s*\*\s*@lucene\.spi\s+\{@value #NAME\}/$;
+
+    def isLicense = { matcher, ratDocument ->
+      licenseMatcher.reset();
+      return lineSplitter.split(matcher.group(1)).any{ licenseMatcher.match(ratDocument, it) };
+    }
+
+    def checkLicenseHeaderPrecedes = { f, description, contentPattern, commentPattern, text, ratDocument ->
+      def contentMatcher = contentPattern.matcher(text);
+      if (contentMatcher.find()) {
+        def contentStartPos = contentMatcher.start();
+        def commentMatcher = commentPattern.matcher(text);
+        while (commentMatcher.find()) {
+          if (isLicense(commentMatcher, ratDocument)) {
+            if (commentMatcher.start() < contentStartPos) {
+              break; // This file is all good, so break loop: license header precedes 'description' definition
+            } else {
+              reportViolation(f, description+' declaration precedes license header');
+            }
+          }
+        }
+      }
+    }
+
+    def checkMockitoAssume = { f, text ->
+      if (text.contains("mockito") && !text.contains("assumeWorkingMockito()")) {
+        reportViolation(f, 'File uses Mockito but has no assumeWorkingMockito() call');
+      }
+    }
+
+    def checkForUnescapedSymbolSubstitutions = { f, text ->
+      def inCodeBlock = false;
+      def underSourceHeader = false;
+      def lineNumber = 0;
+      singleLineSplitter.split(text).each {
+        ++lineNumber;
+        if (underSourceHeader) { // This line is either a single source line, or the boundary of a code block
+          inCodeBlock = blockBoundaryPattern.matcher(it).matches();
+          if ( ! blockTitlePattern.matcher(it).matches()) {
+            underSourceHeader = false;
+          }
+        } else {
+          if (inCodeBlock) {
+            inCodeBlock = ! blockBoundaryPattern.matcher(it).matches();
+          } else {
+            underSourceHeader = sourceHeaderPattern.matcher(it).lookingAt();
+            if ( ! underSourceHeader) {
+              def unescapedSymbolMatcher = unescapedSymbolPattern.matcher(it);
+              if (unescapedSymbolMatcher.find()) {
+                reportViolation(f, 'Unescaped symbol "' + unescapedSymbolMatcher.group(1) + '" on line #' + lineNumber);
+              }
+            }
+          }
+        }
+      }
+    }
+
+    sourceFiles.each{ f ->
+      logger.debug('Scanning source file: {}', f);
+      def text = f.getText('UTF-8');
+      invalidPatterns.each{ pattern,name ->
+        if (pattern.matcher(text).find()) {
+          reportViolation(f, name);
+        }
+      }
+      def javadocsMatcher = javadocsPattern.matcher(text);
+      def ratDocument = new FileDocument(f);
+      while (javadocsMatcher.find()) {
+        if (isLicense(javadocsMatcher, ratDocument)) {
+          reportViolation(f, String.format(Locale.ENGLISH, 'javadoc-style license header [%s]',
+            ratDocument.getMetaData().value(MetaData.RAT_URL_LICENSE_FAMILY_NAME)));
+        }
+      }
+      if (f.name.endsWith('.java')) {
+        if (text.contains('org.slf4j.LoggerFactory')) {
+          if (!validLoggerPattern.matcher(text).find()) {
+            reportViolation(f, 'invalid logging pattern [not private static final, uses static class name]');
+          }
+          if (!validLoggerNamePattern.matcher(text).find()) {
+            reportViolation(f, 'invalid logger name [log, uses static class name, not specialized logger]')
+          }
+        }
+        // make sure that SPI names of all tokenizers/charfilters/tokenfilters are documented
+        if (!f.name.contains("Test") && !f.name.contains("Mock") && !text.contains("abstract class") &&
+            !f.name.equals("TokenizerFactory.java") && !f.name.equals("CharFilterFactory.java") && !f.name.equals("TokenFilterFactory.java") &&
+            (f.name.contains("TokenizerFactory") && text.contains("extends TokenizerFactory") ||
+                f.name.contains("CharFilterFactory") && text.contains("extends CharFilterFactory") ||
+                f.name.contains("FilterFactory") && text.contains("extends TokenFilterFactory"))) {
+          if (!validSPINameJavadocTag.matcher(text).find()) {
+            reportViolation(f, 'invalid spi name documentation')
+          }
+        }
+        checkLicenseHeaderPrecedes(f, 'package', packagePattern, javaCommentPattern, text, ratDocument);
+        if (f.name.contains("Test")) {
+          checkMockitoAssume(f, text);
+        }
+
+        if (f.path.substring(baseDirLen).contains("solr/")
+            && f.name.equals("SolrTestCase.java") == false
+            && f.name.equals("TestXmlQParser.java") == false) {
+          if (extendsLuceneTestCasePattern.matcher(text).find()) {
+            reportViolation(f, "Solr test cases should extend SolrTestCase rather than LuceneTestCase");
+          }
+        }
+        invalidJavaOnlyPatterns.each { pattern,name ->
+          if (pattern.matcher(text).find()) {
+            reportViolation(f, name);
+          }
+        }
+      }
+      if (f.name.endsWith('.xml') || f.name.endsWith('.xml.template')) {
+        checkLicenseHeaderPrecedes(f, '<tag>', xmlTagPattern, xmlCommentPattern, text, ratDocument);
+      }
+      if (f.name.endsWith('.adoc')) {
+        checkForUnescapedSymbolSubstitutions(f, text);
+      }
+    };
+
+    if (found) {
+      throw new GradleException(String.format(Locale.ENGLISH, 'Found %d violations in source files (%s).',
+        found, violations.join(', ')));
+    }
+  }
+}
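
Usage note: with the task registered on the root project as above, the source
pattern checks can be run directly, now in pure Gradle rather than through the
previous ant/groovy bridge:

  gradlew validateSourcePatterns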
diff --git a/gradlew b/gradlew
index 32be99f..0ccd1fb 100755
--- a/gradlew
+++ b/gradlew
@@ -102,12 +102,21 @@
 location of your Java installation."
 fi
 
+# LUCENE-9471: workaround for gradle leaving junk temp. files behind.
+GRADLE_TEMPDIR="$APP_HOME/.gradle/tmp"
+mkdir -p "$GRADLE_TEMPDIR"
+if [ "$cygwin" = "true" -o "$msys" = "true" ] ; then
+    GRADLE_TEMPDIR=`cygpath --path --mixed "$GRADLE_TEMPDIR"`
+fi
+DEFAULT_JVM_OPTS="$DEFAULT_JVM_OPTS \"-Djava.io.tmpdir=$GRADLE_TEMPDIR\""
+
 # LUCENE-9266: verify and download the gradle wrapper jar if we don't have one.
 if [ "$cygwin" = "true" -o "$msys" = "true" ] ; then
     APP_HOME=`cygpath --path --mixed "$APP_HOME"`
 fi
-GRADLE_WRAPPER_JAR=$APP_HOME/gradle/wrapper/gradle-wrapper.jar
-if ! $JAVACMD --source 11 $APP_HOME/buildSrc/src/main/java/org/apache/lucene/gradle/WrapperDownloader.java $GRADLE_WRAPPER_JAR ; then
+GRADLE_WRAPPER_JAR="$APP_HOME/gradle/wrapper/gradle-wrapper.jar"
+if ! "$JAVACMD" --source 11 "$APP_HOME/buildSrc/src/main/java/org/apache/lucene/gradle/WrapperDownloader.java" "$GRADLE_WRAPPER_JAR" ; then
+    echo "\nSomething went wrong. Make sure you're using Java 11 or later."
     exit $?
 fi
 
diff --git a/gradlew.bat b/gradlew.bat
index 1f58301..7ff625a 100644
--- a/gradlew.bat
+++ b/gradlew.bat
@@ -32,6 +32,11 @@
 @rem Add default JVM options here. You can also use JAVA_OPTS and GRADLE_OPTS to pass JVM options to this script.
 set DEFAULT_JVM_OPTS="-Xmx64m" "-Xms64m"
 
+@rem LUCENE-9471: workaround for gradle leaving junk temp. files behind.
+SET GRADLE_TEMPDIR=%DIRNAME%\.gradle\tmp
+IF NOT EXIST "%GRADLE_TEMPDIR%" MKDIR "%GRADLE_TEMPDIR%"
+SET DEFAULT_JVM_OPTS=%DEFAULT_JVM_OPTS% "-Djava.io.tmpdir=%GRADLE_TEMPDIR%"
+
 @rem Find java.exe
 if defined JAVA_HOME goto findJavaFromJavaHome
 
@@ -88,7 +93,7 @@
 
 @rem Don't fork a daemon mode on initial run that generates local defaults.
 SET GRADLE_DAEMON_CTRL=
-IF NOT EXIST %DIRNAME%\gradle.properties SET GRADLE_DAEMON_CTRL=--no-daemon
+IF NOT EXIST "%DIRNAME%\gradle.properties" SET GRADLE_DAEMON_CTRL=--no-daemon
 
 @rem Execute Gradle
 "%JAVA_EXE%" %DEFAULT_JVM_OPTS% %JAVA_OPTS% %GRADLE_OPTS% "-Dorg.gradle.appname=%APP_BASE_NAME%" -classpath "%CLASSPATH%" org.gradle.wrapper.GradleWrapperMain %GRADLE_DAEMON_CTRL% %CMD_LINE_ARGS%
diff --git a/help/IDEs.txt b/help/IDEs.txt
new file mode 100644
index 0000000..3ad5b85
--- /dev/null
+++ b/help/IDEs.txt
@@ -0,0 +1,20 @@
+IntelliJ IDEA
+=============
+
+Importing the project as a Gradle project should just work out of the box.
+
+
+Eclipse
+=======
+
+Run the following to set up Eclipse project files:
+
+./gradlew eclipse
+
+then import the project into Eclipse with:
+
+File -> Import... -> Existing Project into Workspace
+
+Please note that Eclipse does not distinguish between sub-projects
+and source sets (main/test), so pretty much all sources and dependencies
+end up in one large bin.
diff --git a/help/tests.txt b/help/tests.txt
index 30b1f4a..5054c0e 100644
--- a/help/tests.txt
+++ b/help/tests.txt
@@ -101,6 +101,21 @@
 
 gradlew -p lucene/core cleanTest test -Ptests.seed=deadbeef
 
+The 'tests.iters' option should be sufficient for individual test cases
+and is *much* faster than re-running entire test suites. When it is
+absolutely necessary to re-run an entire suite (because of randomization
+in static initialization, for example), you can do so by running the
+'beast' task with the 'tests.dups' option:
+
+gradlew -p lucene/core beast -Ptests.dups=10 --tests TestPerFieldDocValuesFormat
+
+Note the filter (--tests) used to narrow down the repeated runs to a particular
+class. You can use any filter, including no filter at all, but that rarely makes
+sense (it will take ages). By default the test tasks generated by the 'beast' mode
+use a random starting seed for randomization. If you pass an explicit seed, this
+won't be the case (all tasks will use exactly the same starting seed):
+
+gradlew -p lucene/core beast -Ptests.dups=10 --tests TestPerFieldDocValuesFormat -Dtests.seed=deadbeef
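+
+For comparison, repeating an individual test case with 'tests.iters' (no
+beasting involved) could look like this:
+
+gradlew -p lucene/core test --tests TestPerFieldDocValuesFormat -Ptests.iters=5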
 
 Verbose mode and debugging
 --------------------------
diff --git a/lucene/BUILD.md b/lucene/BUILD.md
index 6810225..9846ed7 100644
--- a/lucene/BUILD.md
+++ b/lucene/BUILD.md
@@ -2,78 +2,67 @@
 
 ## Basic steps:
   
-  0. Install OpenJDK 11 (or greater), Ant 1.8.2+, Ivy 2.2.0
-  1. Download Lucene from Apache and unpack it
-  2. Connect to the top-level of your Lucene installation
+  0. Install OpenJDK 11 (or greater)
+  1. Download Lucene/Solr from Apache and unpack it
+  2. Connect to the top-level of your installation (parent of the lucene top-level directory)
   3. Install JavaCC (optional)
-  4. Run ant
+  4. Run gradle
 
-## Step 0) Set up your development environment (OpenJDK 11 or greater, Ant 1.8.2+, Ivy 2.2.0)
+## Step 0) Set up your development environment (OpenJDK 11 or greater)
 
 We'll assume that you know how to get and set up the JDK - if you
 don't, then we suggest starting at https://www.oracle.com/java/ and learning
 more about Java, before returning to this README. Lucene runs with
 Java 11 and later.
 
-Like many Open Source java projects, Lucene uses Apache Ant for build
-control.  Specifically, you MUST use Ant version 1.8.2+.
+Lucene uses [Gradle](https://gradle.org/) for build control and includes a Gradle wrapper script that downloads the correct version of it.
 
-Ant is "kind of like make without make's wrinkles".  Ant is
-implemented in java and uses XML-based configuration files.  You can
-get it at:
+NOTE: When Solr moves to a Top Level Project, it will no longer
+be necessary to download Solr to build Lucene. You can track
+progress at: https://issues.apache.org/jira/browse/SOLR-14497 
 
-  https://ant.apache.org
+NOTE: Lucene changed from Ant to Gradle as of release 9.0. Prior releases
+still use Ant.
 
-You'll need to download the Ant binary distribution.  Install it
-according to the instructions at:
-
-  https://ant.apache.org/manual
-
-Finally, you'll need to install ivy into your ant lib folder
-(~/.ant/lib). You can get it from http://ant.apache.org/ivy/.
-If you skip this step, the Lucene build system will offer to do it 
-for you.
-
-## Step 1) Download Lucene from Apache
+## Step 1) Download/Checkout Lucene source code
 
 We'll assume you already did this, or you wouldn't be reading this
 file.  However, you might have received this file by some alternate
 route, or you might have an incomplete copy of Lucene, so: Lucene
-releases are available for download at:
+releases are available as part of Solr for download at:
 
-  https://www.apache.org/dyn/closer.cgi/lucene/java/
+  https://lucene.apache.org/solr/downloads.html
+  
+See the note above for why it is currently necessary to download Solr.
 
 Download either a zip or a tarred/gzipped version of the archive, and
 uncompress it into a directory of your choice.
 
-## Step 2) From the command line, change (cd) into the top-level directory of your Lucene installation
+Or you can check out the source code directly from GitHub:
 
-Lucene's top-level directory contains the build.xml file. By default,
-you do not need to change any of the settings in this file, but you do
-need to run ant from this location so it knows where to find build.xml.
+  https://github.com/apache/lucene-solr
 
-If you would like to change settings you can do so by creating one 
-or more of the following files and placing your own property settings
-in there:
+## Step 2) From the command line, change (cd) into the top-level directory of your Lucene/Solr installation
 
-    ~/lucene.build.properties
-    ~/build.properties
-    lucene-x.y/build.properties
+The parent directory for both Lucene and Solr contains the base configuration
+file for the combined build, as well as the "gradle wrapper" (gradlew) that
+makes invocation of Gradle easier. By default, you do not need to change any of 
+the settings in this file, but you do need to run Gradle from this location so 
+it knows where to find the necessary configurations.
 
-The first property which is found in the order with which the files are
-loaded becomes the property setting which is used by the Ant build
-system.
+The first time you run Gradle, it will create a file "gradle.properties" that
+contains machine-specific settings. Normally you can use this file as-is, but it
+can be modified if necessary. 
 
-NOTE: the ~ character represents your user account home directory.
+## Step 4) Run Gradle
 
-## Step 4) Run ant
+Executing "./gradlew help" should show you the main tasks that can be
+executed to display help on specific sub-topics.
 
-Assuming you have ant in your PATH and have set ANT_HOME to the
-location of your ant installation, typing "ant" at the shell prompt
-and command prompt should run ant.  Ant will by default look for the
-"build.xml" file in your current directory, and compile Lucene.
+If you want to build Lucene independently of Solr, type:
+  ./gradlew -p lucene assemble
 
-If you want to build the documentation, type "ant documentation".
+If you want to build the documentation, type "./gradlew buildSite".
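+
+To run the Lucene test suite (see help/tests.txt in the checkout for many more
+options), you can use, for example:
+
+  ./gradlew -p lucene test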
 
 For further information on Lucene, go to:
 
@@ -86,7 +75,3 @@
 Please post suggestions, questions, corrections or additions to this
 document to the lucene-user mailing list.
 
-This file was originally written by Steven J. Owens <puff@darksleep.com>.
-This file was modified by Jon S. Stevens <jon@latchkey.com>.
-
-Copyright (c) 2001-2020 The Apache Software Foundation.  All rights reserved.
diff --git a/lucene/CHANGES.txt b/lucene/CHANGES.txt
index 4d15969..ac3025b 100644
--- a/lucene/CHANGES.txt
+++ b/lucene/CHANGES.txt
@@ -12,7 +12,7 @@
 
 API Changes
 
-* LUCENE-8474: RAMDirectory and associated deprecated classes have been 
+* LUCENE-8474: RAMDirectory and associated deprecated classes have been
   removed. (Dawid Weiss)
 
 * LUCENE-3041: The deprecated Weight#extractTerms() method has been 
@@ -58,8 +58,14 @@
 
 * LUCENE-9340: SimpleBindings#add(SortField) has been removed. (Alan Woodward)
 
+* LUCENE-9462: Fields without positions should still return MatchIterator.
+  (Alan Woodward, Dawid Weiss)
+
 Improvements
 
+* LUCENE-9463: Query match region retrieval component, passage scoring and formatting
+  for building custom highlighters. (Alan Woodward, Dawid Weiss)
+
 * LUCENE-9370: RegExp query is no longer lenient about inappropriate backslashes and
   follows the Java Pattern policy for rejecting illegal syntax.  (Mark Harwood)
 
@@ -114,6 +120,7 @@
   with doc values and points. In this case, there is an assumption that the same data is
   stored in these points and doc values (Mayya Sharipova, Jim Ferenczi, Adrien Grand)
 
+* LUCENE-9313: Add SerbianAnalyzer based on the snowball stemmer. (Dragan Ivanovic)
 
 Bug fixes
 
@@ -153,11 +160,12 @@
 * LUCENE-9411: Fail compilation on warnings, 9x gradle-only (Erick Erickson, Dawid Weiss)
   Deserves mention here as well as Lucene CHANGES.txt since it affects both.
 
+* LUCENE-9433: Remove Ant support from trunk (Erick Erickson, Uwe Schindler et al.)
+
 ======================= Lucene 8.7.0 =======================
 
 API Changes
 ---------------------
-(No changes)
 
 * LUCENE-9437: Lucene's facet module's DocValuesOrdinalsReader.decode method
   is now public, making it easier for applications to decode facet
@@ -168,6 +176,10 @@
 
 * LUCENE-9386: RegExpQuery added case insensitive matching option. (Mark Harwood)
 
+* LUCENE-8962: Add IndexWriter merge-on-refresh feature to selectively merge
+  small segments on getReader, subject to a configurable timeout, to improve
+  search performance by reducing the number of small segments for searching. (Simon Willnauer)
+
 Improvements
 ---------------------
 
@@ -181,15 +193,30 @@
 * LUCENE-9440: FieldInfo#checkConsistency called twice from Lucene50(60)FieldInfosFormat#read;
   Removed the (redundant?) assert and do these checks for real. (Yauheni Putsykovich)
 
+* LUCENE-9446: In BooleanQuery rewrite, always remove MatchAllDocsQuery filter clauses
+  when possible. (Julie Tibshirani)
+
 Optimizations
 ---------------------
 
 * LUCENE-9395: ConstantValuesSource now shares a single DoubleValues
   instance across all segments (Tony Xu)
 
+* LUCENE-9447: BEST_COMPRESSION now provides higher compression ratios on highly
+  compressible data. (Adrien Grand)
+
+* LUCENE-9373: FunctionMatchQuery now accepts a "matchCost" optimization hint.
+  (Maxim Glazkov, David Smiley)
+
 Bug Fixes
 ---------------------
-(No changes)
+
+* LUCENE-9427: Fix a regression where the unified highlighter didn't produce
+  highlights on fuzzy queries that correspond to exact matches. (Julie Tibshirani)
+
+* LUCENE-9467: Fix NRTCachingDirectory to use Directory#fileLength to check if a file
+  already exists instead of opening an IndexInput on the file, which might throw an AccessDeniedException
+  in some Directory implementations. (Simon Willnauer)
 
 Documentation
 ---------------------
@@ -201,6 +228,14 @@
 ---------------------
 (No changes)
 
+======================= Lucene 8.6.2 =======================
+
+Bug Fixes
+---------------------
+* LUCENE-9478: Prevent DWPTDeleteQueue from referencing itself and leaking memory. The queue
+  passed an implicit this reference to the next queue instance on flush, which leaked about
+  500 bytes of memory on each full flush, commit or getReader call. (Simon Willnauer)
+
 ======================= Lucene 8.6.1 =======================
 
 Bug Fixes
diff --git a/lucene/analysis/analysis-module-build.xml b/lucene/analysis/analysis-module-build.xml
deleted file mode 100644
index b6c88f7..0000000
--- a/lucene/analysis/analysis-module-build.xml
+++ /dev/null
@@ -1,44 +0,0 @@
-<?xml version="1.0"?>
-
-<!--
-    Licensed to the Apache Software Foundation (ASF) under one or more
-    contributor license agreements.  See the NOTICE file distributed with
-    this work for additional information regarding copyright ownership.
-    The ASF licenses this file to You under the Apache License, Version 2.0
-    the "License"); you may not use this file except in compliance with
-    the License.  You may obtain a copy of the License at
- 
-        http://www.apache.org/licenses/LICENSE-2.0
- 
-    Unless required by applicable law or agreed to in writing, software
-    distributed under the License is distributed on an "AS IS" BASIS,
-    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-    See the License for the specific language governing permissions and
-    limitations under the License.
- -->
-
-<project name="analysis-module-build">
-
-  <!-- submodule build for analyzers.
-       ensures that each submodule is built under build/
-       consistent with its source structure so that
-       binary packaging makes sense -->
-
-  <basename property="submodule.project.name" file="${basedir}"/>
-  <dirname file="${ant.file.analysis-module-build}" property="analysis.dir"/>
-
-  <property name="build.dir" 
-        location="${analysis.dir}/../build/analysis/${submodule.project.name}"/>
-
-  <import file="../module-build.xml"/>
-
-  <target name="javadocs" depends="javadocs-analyzers-common, compile-core, check-javadocs-uptodate"
-          unless="javadocs-uptodate-${name}">
-    <invoke-module-javadoc>
-      <links>
-        <link href="../analyzers-common"/>
-      </links>
-    </invoke-module-javadoc>
-  </target>
-
-</project>
diff --git a/lucene/analysis/build.xml b/lucene/analysis/build.xml
deleted file mode 100644
index 6dc1500..0000000
--- a/lucene/analysis/build.xml
+++ /dev/null
@@ -1,172 +0,0 @@
-<?xml version="1.0"?>
-
-<!--
-    Licensed to the Apache Software Foundation (ASF) under one or more
-    contributor license agreements.  See the NOTICE file distributed with
-    this work for additional information regarding copyright ownership.
-    The ASF licenses this file to You under the Apache License, Version 2.0
-    the "License"); you may not use this file except in compliance with
-    the License.  You may obtain a copy of the License at
- 
-        http://www.apache.org/licenses/LICENSE-2.0
- 
-    Unless required by applicable law or agreed to in writing, software
-    distributed under the License is distributed on an "AS IS" BASIS,
-    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-    See the License for the specific language governing permissions and
-    limitations under the License.
- -->
-
-<project name="analyzers" default="default">
-
-  <description>
-    Additional Analyzers
-      - common: Additional Analyzers
-      - icu: Analyzers that use functionality from ICU
-      - kuromoji: Japanese Morphological Analyzer
-      - morfologik: Morfologik Stemmer
-      - nori: Korean Morphological Analyzer
-      - smartcn: Smart Analyzer for Simplified Chinese Text
-      - stempel: Algorithmic Stemmer for Polish
-  </description>
-
-  <dirname file="${ant.file.analyzers}" property="analyzers.dir"/>
-
-  <macrodef name="forall-analyzers">
-    <attribute name="target" />
-    <sequential>
-      <subant target="@{target}" inheritall="false" failonerror="true">
-         <propertyset refid="uptodate.and.compiled.properties"/>
-        <fileset dir="${analyzers.dir}" includes="*/build.xml" />
-      </subant>
-    </sequential>
-  </macrodef>
-
-  <propertyset id="uptodate.and.compiled.properties" dynamic="true">
-    <propertyref regex=".*\.uptodate$$"/>
-    <propertyref regex=".*\.compiled$$"/>
-    <propertyref regex=".*\.loaded$$"/>
-    <propertyref name="tests.totals.tmpfile" />
-  </propertyset>
-
-  <target name="common">
-    <ant dir="common" />
-  </target>
-
-  <target name="icu">
-    <ant dir="icu" />
-  </target>
-
-  <target name="kuromoji">
-    <ant dir="kuromoji" />
-  </target>
-
-  <target name="morfologik">
-    <ant dir="morfologik" />
-  </target>
-
-  <target name="nori">
-    <ant dir="nori" />
-  </target>
-
-  <target name="opennlp">
-    <ant dir="opennlp" />
-  </target>
-
-  <target name="phonetic">
-    <ant dir="phonetic" />
-  </target>
-
-  <target name="smartcn">
-    <ant dir="smartcn" />
-  </target>
-
-  <target name="stempel">
-    <ant dir="stempel" />
-  </target>
-
-  <target name="default" depends="compile"/>
-  <target name="compile" depends="common,icu,kuromoji,morfologik,nori,opennlp,phonetic,smartcn,stempel" />
-
-  <target name="clean">
-    <forall-analyzers target="clean"/>
-  </target>
-  <target name="resolve">
-    <forall-analyzers target="resolve"/>
-  </target>
-  <target name="validate">
-    <forall-analyzers target="validate"/>
-  </target>
-  <target name="compile-core">
-    <forall-analyzers target="compile-core"/>
-  </target>
-  <target name="compile-test">
-    <forall-analyzers target="compile-test"/>
-  </target>
-  <target name="compile-tools">
-    <forall-analyzers target="compile-tools"/>
-  </target>
-  <target name="test">
-    <forall-analyzers target="test"/>
-  </target>
-  <target name="test-nocompile">
-    <fail message="Target 'test-nocompile' will not run recursively.  First change directory to the module you want to test."/>
-  </target>
-  <target name="beast">
-    <fail message="The Beast only works inside of individual modules"/>
-  </target>
-  <target name="jar">
-    <forall-analyzers target="jar-core"/>
-  </target>
-  <target name="jar-src">
-    <forall-analyzers target="jar-src"/>
-  </target>
-  <target name="jar-core" depends="jar"/>
-
-  <target name="build-artifacts-and-tests" depends="default,compile-test" />
-
-  <target name="-dist-maven">
-    <forall-analyzers target="-dist-maven"/>
-  </target>
-
-  <target name="-install-to-maven-local-repo">
-    <forall-analyzers target="-install-to-maven-local-repo"/>
-  </target>
-
-  <target name="-validate-maven-dependencies">
-    <forall-analyzers target="-validate-maven-dependencies"/>
-  </target>
-
-  <target name="javadocs">
-    <forall-analyzers target="javadocs"/>
-  </target>
-
-  <target name="javadocs-index.html">
-    <forall-analyzers target="javadocs-index.html"/>
-  </target>
-
-  <target name="rat-sources">
-    <forall-analyzers target="rat-sources"/>
-  </target>
-
-  <target name="-ecj-javadoc-lint">
-    <forall-analyzers target="-ecj-javadoc-lint"/>
-  </target>
-
-  <target name="regenerate">
-    <forall-analyzers target="regenerate"/>
-  </target>
-  
-  <target name="-append-module-dependencies-properties">
-    <forall-analyzers target="-append-module-dependencies-properties"/>
-  </target>
-  
-  <target name="check-forbidden-apis">
-    <forall-analyzers target="check-forbidden-apis"/>
-  </target>
-
-  <target name="jacoco">
-    <forall-analyzers target="jacoco"/>
-  </target>
-
-</project>
diff --git a/lucene/analysis/common/build.xml b/lucene/analysis/common/build.xml
deleted file mode 100644
index 21c5fda..0000000
--- a/lucene/analysis/common/build.xml
+++ /dev/null
@@ -1,125 +0,0 @@
-<?xml version="1.0"?>
-
-<!--
-    Licensed to the Apache Software Foundation (ASF) under one or more
-    contributor license agreements.  See the NOTICE file distributed with
-    this work for additional information regarding copyright ownership.
-    The ASF licenses this file to You under the Apache License, Version 2.0
-    the "License"); you may not use this file except in compliance with
-    the License.  You may obtain a copy of the License at
-
-        http://www.apache.org/licenses/LICENSE-2.0
-
-    Unless required by applicable law or agreed to in writing, software
-    distributed under the License is distributed on an "AS IS" BASIS,
-    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-    See the License for the specific language governing permissions and
-    limitations under the License.
- -->
-
-<project name="analyzers-common" default="default">
-
-  <description>
-   Analyzers for indexing content in different languages and domains.
-  </description>
-
-  <!-- some files for testing that do not have license headers -->
-  <property name="rat.excludes" value="**/*.aff,**/*.dic,**/*.txt,**/charfilter/*.htm*,**/*LuceneResourcesWikiPage.html"/>
-  <property name="rat.additional-includes" value="src/tools/**"/>
-
-  <import file="../analysis-module-build.xml"/>
-
-  <property name="unicode-props-file" location="src/java/org/apache/lucene/analysis/util/UnicodeProps.java"/>
-
-  <!-- Because of a bug in JFlex's ant task, HTMLStripCharFilter has to be generated last.   -->
-  <!-- Otherwise the "%apiprivate" option used in its specification will leak into following -->
-  <!-- ant task invocations.                                                                 -->
-  <target name="jflex" depends="init,clean-jflex,-jflex-wiki-tokenizer,-jflex-ClassicAnalyzer,
-                                -jflex-UAX29URLEmailTokenizer,-jflex-HTMLStripCharFilter"/>
-
-  <target name="-jflex-HTMLStripCharFilter" depends="-install-jflex,generate-jflex-html-char-entities">
-    <run-jflex dir="src/java/org/apache/lucene/analysis/charfilter" name="HTMLStripCharFilter"/>
-    <fixcrlf  file="src/java/org/apache/lucene/analysis/charfilter/HTMLStripCharFilter.java" encoding="UTF-8" eol="lf"/>
-  </target>
-
-  <target name="generate-jflex-html-char-entities">
-    <exec dir="src/java/org/apache/lucene/analysis/charfilter"
-          output="src/java/org/apache/lucene/analysis/charfilter/HTMLCharacterEntities.jflex"
-          executable="${python3.exe}" failonerror="true" logerror="true">
-      <!-- Tell Python not to write any bytecode cache into the filesystem: -->
-      <arg value="-B"/>
-      <arg value="htmlentity.py"/>
-    </exec>
-    <fixcrlf file="src/java/org/apache/lucene/analysis/charfilter/HTMLCharacterEntities.jflex" encoding="UTF-8" eol="lf"/>
-  </target>
-
-  <target name="-jflex-wiki-tokenizer" depends="-install-jflex">
-    <run-jflex dir="src/java/org/apache/lucene/analysis/wikipedia" name="WikipediaTokenizerImpl"/>
-  </target>
-
-  <target name="-jflex-ClassicAnalyzer" depends="-install-jflex">
-    <run-jflex dir="src/java/org/apache/lucene/analysis/standard" name="ClassicTokenizerImpl"/>
-  </target>
-
-  <target name="-jflex-UAX29URLEmailTokenizer" depends="-install-jflex">
-    <run-jflex-and-disable-buffer-expansion
-        dir="src/java/org/apache/lucene/analysis/standard" name="UAX29URLEmailTokenizerImpl"/>
-  </target>
-
-  <target name="clean-jflex">
-    <delete>
-      <fileset dir="src/java/org/apache/lucene/analysis/charfilter" includes="*.java">
-        <containsregexp expression="generated.*by.*JFlex"/>
-      </fileset>
-      <fileset dir="src/java/org/apache/lucene/analysis/wikipedia" includes="*.java">
-        <containsregexp expression="generated.*by.*JFlex"/>
-      </fileset>
-      <fileset dir="src/java/org/apache/lucene/analysis/standard" includes="**/*.java">
-        <containsregexp expression="generated.*by.*JFlex"/>
-      </fileset>
-    </delete>
-  </target>
-
-  <target xmlns:ivy="antlib:org.apache.ivy.ant" name="-resolve-icu4j" unless="icu4j.resolved" depends="ivy-availability-check,ivy-configure">
-    <loadproperties prefix="ivyversions" srcFile="${common.dir}/ivy-versions.properties"/>
-    <ivy:cachepath organisation="com.ibm.icu" module="icu4j" revision="${ivyversions./com.ibm.icu/icu4j}"
-      inline="true" conf="default" transitive="true" pathid="icu4j.classpath"/>
-    <property name="icu4j.resolved" value="true"/>
-  </target>
-
-  <target name="unicode-data" depends="-resolve-icu4j,resolve-groovy">
-    <groovy classpathref="icu4j.classpath" src="src/tools/groovy/generate-unicode-data.groovy"/>
-    <fixcrlf file="${unicode-props-file}" encoding="UTF-8"/>
-  </target>
-
-  <property name="tld.zones" value="https://www.internic.net/zones/root.zone"/>
-  <property name="tld.output" location="src/java/org/apache/lucene/analysis/standard/ASCIITLD.jflex-macro"/>
-
-  <target name="gen-tlds" depends="compile-tools">
-    <java
-      classname="org.apache.lucene.analysis.standard.GenerateJflexTLDMacros"
-      dir="."
-      fork="true"
-      failonerror="true">
-      <classpath>
-        <pathelement location="${build.dir}/classes/tools"/>
-      </classpath>
-      <arg value="${tld.zones}"/>
-      <arg value="${tld.output}"/>
-      <redirector alwayslog="true"/> <!-- stupid trick to get java class's stdout to ant's log -->
-    </java>
-  </target>
-
-  <target name="compile-tools" depends="common.compile-tools">
-    <compile
-      srcdir="src/tools/java"
-      destdir="${build.dir}/classes/tools">
-      <classpath refid="classpath"/>
-    </compile>
-  </target>
-
-  <target name="javadocs" depends="module-build.javadocs"/>
-
-  <target name="regenerate" depends="jflex,unicode-data"/>
-
-</project>
diff --git a/lucene/analysis/common/ivy.xml b/lucene/analysis/common/ivy.xml
deleted file mode 100644
index 9c5376b..0000000
--- a/lucene/analysis/common/ivy.xml
+++ /dev/null
@@ -1,21 +0,0 @@
-<!--
-   Licensed to the Apache Software Foundation (ASF) under one
-   or more contributor license agreements.  See the NOTICE file
-   distributed with this work for additional information
-   regarding copyright ownership.  The ASF licenses this file
-   to you under the Apache License, Version 2.0 (the
-   "License"); you may not use this file except in compliance
-   with the License.  You may obtain a copy of the License at
-
-     http://www.apache.org/licenses/LICENSE-2.0
-
-   Unless required by applicable law or agreed to in writing,
-   software distributed under the License is distributed on an
-   "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-   KIND, either express or implied.  See the License for the
-   specific language governing permissions and limitations
-   under the License.    
--->
-<ivy-module version="2.0">
-  <info organisation="org.apache.lucene" module="analyzers-common"/>
-</ivy-module>
diff --git a/lucene/analysis/common/src/java/org/apache/lucene/analysis/sr/SerbianAnalyzer.java b/lucene/analysis/common/src/java/org/apache/lucene/analysis/sr/SerbianAnalyzer.java
new file mode 100644
index 0000000..c672725
--- /dev/null
+++ b/lucene/analysis/common/src/java/org/apache/lucene/analysis/sr/SerbianAnalyzer.java
@@ -0,0 +1,129 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.lucene.analysis.sr;
+
+import org.apache.lucene.analysis.Analyzer;
+import org.apache.lucene.analysis.CharArraySet;
+import org.apache.lucene.analysis.LowerCaseFilter;
+import org.apache.lucene.analysis.StopFilter;
+import org.apache.lucene.analysis.StopwordAnalyzerBase;
+import org.apache.lucene.analysis.TokenStream;
+import org.apache.lucene.analysis.Tokenizer;
+import org.apache.lucene.analysis.miscellaneous.SetKeywordMarkerFilter;
+import org.apache.lucene.analysis.snowball.SnowballFilter;
+import org.apache.lucene.analysis.standard.StandardTokenizer;
+import org.tartarus.snowball.ext.SerbianStemmer;
+
+import java.io.IOException;
+import java.io.Reader;
+
+/**
+ * {@link Analyzer} for Serbian.
+ *
+ * @since 8.6
+ */
+public class SerbianAnalyzer extends StopwordAnalyzerBase {
+  private final CharArraySet stemExclusionSet;
+
+  /** File containing default Serbian stopwords. */
+  public final static String DEFAULT_STOPWORD_FILE = "stopwords.txt";
+
+  /**
+   * The comment character in the stopwords file.
+   * All lines prefixed with this will be ignored.
+   */
+  private static final String STOPWORDS_COMMENT = "#";
+
+  /**
+   * Returns an unmodifiable instance of the default stop words set.
+   * @return default stop words set.
+   */
+  public static CharArraySet getDefaultStopSet() {
+    return SerbianAnalyzer.DefaultSetHolder.DEFAULT_STOP_SET;
+  }
+
+  /**
+   * Atomically loads the DEFAULT_STOP_SET in a lazy fashion once the outer class
+   * accesses the static final set the first time.
+   */
+  private static class DefaultSetHolder {
+    static final CharArraySet DEFAULT_STOP_SET;
+
+    static {
+      try {
+        DEFAULT_STOP_SET = loadStopwordSet(false, SerbianAnalyzer.class,
+                                           DEFAULT_STOPWORD_FILE, STOPWORDS_COMMENT);
+      } catch (IOException ex) {
+        // default set should always be present as it is part of the
+        // distribution (JAR)
+        throw new RuntimeException("Unable to load default stopword set", ex);
+      }
+    }
+  }
+
+  /**
+   * Builds an analyzer with the default stop words: {@link #DEFAULT_STOPWORD_FILE}.
+   */
+  public SerbianAnalyzer() {
+    this(SerbianAnalyzer.DefaultSetHolder.DEFAULT_STOP_SET);
+  }
+
+  /**
+   * Builds an analyzer with the given stop words.
+   *
+   * @param stopwords a stopword set
+   */
+  public SerbianAnalyzer(CharArraySet stopwords) {
+    this(stopwords, CharArraySet.EMPTY_SET);
+  }
+
+  /**
+   * Builds an analyzer with the given stop words. If a non-empty stem exclusion set is
+   * provided this analyzer will add a {@link SetKeywordMarkerFilter} before
+   * stemming.
+   *
+   * @param stopwords a stopword set
+   * @param stemExclusionSet a set of terms not to be stemmed
+   */
+  public SerbianAnalyzer(CharArraySet stopwords, CharArraySet stemExclusionSet) {
+    super(stopwords);
+    this.stemExclusionSet = CharArraySet.unmodifiableSet(CharArraySet.copy(stemExclusionSet));
+  }
+
+  /**
+   * Creates a
+   * {@link org.apache.lucene.analysis.Analyzer.TokenStreamComponents}
+   * which tokenizes all the text in the provided {@link Reader}.
+   *
+   * @return A
+   *         {@link org.apache.lucene.analysis.Analyzer.TokenStreamComponents}
+   *         built from a {@link StandardTokenizer} filtered with
+   *         {@link LowerCaseFilter}, {@link StopFilter},
+   *         {@link SetKeywordMarkerFilter} if a stem exclusion set is
+   *         provided, {@link SnowballFilter} (using the {@link SerbianStemmer},
+   *         see <a href="https://snowballstem.org/algorithms/serbian/stemmer.html">the
+   *         Snowball Serbian stemmer</a>), and {@link SerbianNormalizationFilter}.
+   */
+  @Override
+  protected TokenStreamComponents createComponents(String fieldName) {
+    final Tokenizer source = new StandardTokenizer();
+    TokenStream result = new LowerCaseFilter(source);
+    result = new StopFilter(result, stopwords);
+    if (!stemExclusionSet.isEmpty()) {
+      result = new SetKeywordMarkerFilter(result, stemExclusionSet);
+    }
+    result = new SnowballFilter(result, new SerbianStemmer());
+    result = new SerbianNormalizationFilter(result);
+    return new TokenStreamComponents(source, result);
+  }
+
+  @Override
+  protected TokenStream normalize(String fieldName, TokenStream in) {
+    return new LowerCaseFilter(in);
+  }
+}
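
A short usage sketch of the new analyzer; the expected output token is taken
from TestSerbianAnalyzer below (lowercased, stemmed, and Latin-normalized), and
the class name is illustrative:

    import org.apache.lucene.analysis.Analyzer;
    import org.apache.lucene.analysis.TokenStream;
    import org.apache.lucene.analysis.sr.SerbianAnalyzer;
    import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;

    public class SerbianAnalyzerSketch {
      public static void main(String[] args) throws Exception {
        try (Analyzer analyzer = new SerbianAnalyzer();
             TokenStream ts = analyzer.tokenStream("body", "đubrište")) {
          CharTermAttribute term = ts.addAttribute(CharTermAttribute.class);
          ts.reset();
          while (ts.incrementToken()) {
            System.out.println(term); // djubrist
          }
          ts.end();
        }
      }
    }
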
diff --git a/lucene/analysis/common/src/java/org/apache/lucene/collation/package-info.java b/lucene/analysis/common/src/java/org/apache/lucene/collation/package-info.java
index 5b83ea5..c79b58c 100644
--- a/lucene/analysis/common/src/java/org/apache/lucene/collation/package-info.java
+++ b/lucene/analysis/common/src/java/org/apache/lucene/collation/package-info.java
@@ -145,7 +145,7 @@
  *   </li>
  * </ol> 
  * <p>
- *   <code>ICUCollationKeyAnalyzer</code>, available in the <a href="{@docRoot}/../analyzers-icu/overview-summary.html">icu analysis module</a>,
+ *   <code>ICUCollationKeyAnalyzer</code>, available in the <a href="{@docRoot}/../icu/overview-summary.html">icu analysis module</a>,
  *   uses ICU4J's <code>Collator</code>, which 
  *   makes its version available, thus allowing collation to be versioned
  *   independently from the JVM.  <code>ICUCollationKeyAnalyzer</code> is also 
diff --git a/lucene/analysis/common/src/resources/org/apache/lucene/analysis/sr/stopwords.txt b/lucene/analysis/common/src/resources/org/apache/lucene/analysis/sr/stopwords.txt
new file mode 100644
index 0000000..17cea56
--- /dev/null
+++ b/lucene/analysis/common/src/resources/org/apache/lucene/analysis/sr/stopwords.txt
@@ -0,0 +1,156 @@
+i
+ili
+a
+ali
+pa
+biti
+ne
+jesam
+sam
+jesi
+si
+je
+jesmo
+smo
+jeste
+ste
+jesu
+su
+nijesam
+nisam
+nijesi
+nisi
+nije
+nijesmo
+nismo
+nijeste
+niste
+nijesu
+nisu
+budem
+budeš
+bude
+budemo
+budete
+budu
+budes
+bih
+bi
+bismo
+biste
+biše
+bise
+bio
+bili
+budimo
+budite
+bila
+bilo
+bile
+ću
+ćeš
+će
+ćemo
+ćete
+neću
+nećeš
+neće
+nećemo
+nećete
+cu
+ces
+ce
+cemo
+cete
+necu
+neces
+nece
+necemo
+necete
+mogu
+možeš
+može
+možemo
+možete
+mozes
+moze
+mozemo
+mozete
+и
+или
+а
+али
+па
+бити
+не
+јесам
+сам
+јеси
+си
+је
+јесмо
+смо
+јесте
+сте
+јесу
+су
+нијесам
+нисам
+нијеси
+ниси
+није
+нијесмо
+нисмо
+нијесте
+нисте
+нијесу
+нису
+будем
+будеш
+буде
+будемо
+будете
+буду
+будес
+бих
+би
+бисмо
+бисте
+бише
+бисе
+био
+били
+будимо
+будите
+била
+било
+биле
+ћу
+ћеш
+ће
+ћемо
+ћете
+нећу
+нећеш
+неће
+нећемо
+нећете
+цу
+цес
+це
+цемо
+цете
+нецу
+нецес
+неце
+нецемо
+нецете
+могу
+можеш
+може
+можемо
+можете
+мозес
+мозе
+моземо
+мозете
diff --git a/lucene/analysis/common/src/test/org/apache/lucene/analysis/sr/TestSerbianAnalyzer.java b/lucene/analysis/common/src/test/org/apache/lucene/analysis/sr/TestSerbianAnalyzer.java
new file mode 100644
index 0000000..c649baf
--- /dev/null
+++ b/lucene/analysis/common/src/test/org/apache/lucene/analysis/sr/TestSerbianAnalyzer.java
@@ -0,0 +1,65 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.lucene.analysis.sr;
+
+import org.apache.lucene.analysis.Analyzer;
+import org.apache.lucene.analysis.BaseTokenStreamTestCase;
+import org.apache.lucene.analysis.CharArraySet;
+
+import java.io.IOException;
+
+/**
+ * Tests for {@link SerbianAnalyzer}.
+ */
+public class TestSerbianAnalyzer extends BaseTokenStreamTestCase {
+  /** This test fails with an NPE when the
+   * stopwords file is missing from the classpath. */
+  public void testResourcesAvailable() {
+    new SerbianAnalyzer().close();
+  }
+
+  /** test stopwords and stemming */
+  public void testBasics() throws IOException {
+    Analyzer a = new SerbianAnalyzer();
+    // stemming
+    checkOneTerm(a, "abdiciraće", "abdicirac");
+    checkOneTerm(a, "decimalnim", "decimaln");
+    checkOneTerm(a, "đubrište", "djubrist");
+
+    // stopword
+    assertAnalyzesTo(a, "ili", new String[] {});
+    a.close();
+  }
+
+  /** test use of exclusion set */
+  public void testExclude() throws IOException {
+    CharArraySet exclusionSet = new CharArraySet(asSet("decimalnim"), false);
+    Analyzer a = new SerbianAnalyzer(SerbianAnalyzer.getDefaultStopSet(), exclusionSet);
+    checkOneTerm(a, "decimalnim", "decimalnim");
+    checkOneTerm(a, "decimalni", "decimaln");
+    a.close();
+  }
+
+  /** blast some random strings through the analyzer */
+  public void testRandomStrings() throws Exception {
+    Analyzer analyzer = new SerbianAnalyzer();
+    checkRandomData(random(), analyzer, 200 * RANDOM_MULTIPLIER);
+    analyzer.close();
+  }
+}
diff --git a/lucene/analysis/icu/build.xml b/lucene/analysis/icu/build.xml
deleted file mode 100644
index 32ab34d..0000000
--- a/lucene/analysis/icu/build.xml
+++ /dev/null
@@ -1,118 +0,0 @@
-<?xml version="1.0"?>
-
-<!--
-    Licensed to the Apache Software Foundation (ASF) under one or more
-    contributor license agreements.  See the NOTICE file distributed with
-    this work for additional information regarding copyright ownership.
-    The ASF licenses this file to You under the Apache License, Version 2.0
-    the "License"); you may not use this file except in compliance with
-    the License.  You may obtain a copy of the License at
- 
-        http://www.apache.org/licenses/LICENSE-2.0
- 
-    Unless required by applicable law or agreed to in writing, software
-    distributed under the License is distributed on an "AS IS" BASIS,
-    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-    See the License for the specific language governing permissions and
-    limitations under the License.
- -->
-
-<project name="analyzers-icu" default="default">
-
-  <description>
-   Analysis integration with ICU (International Components for Unicode).
-  </description>
-
-  <property name="rat.additional-includes" value="src/tools/**"/>
-
-  <import file="../analysis-module-build.xml"/>
-
-  <path id="icujar">
-     <fileset dir="lib"/>
-  </path>
-
-  <path id="classpath">
-    <pathelement path="${analyzers-common.jar}"/>
-    <path refid="icujar"/>
-    <path refid="base.classpath"/>
-  </path>
-
-  <path id="test.classpath">
-    <path refid="test.base.classpath" />
-    <pathelement path="src/test-files" />
-  </path>
-
-  <target name="compile-core" depends="jar-analyzers-common, common.compile-core" />
-
-  <property name="utr30.data.dir" location="src/data/utr30"/>
-  <target name="gen-utr30-data-files" depends="compile-tools">
-    <java
-        classname="org.apache.lucene.analysis.icu.GenerateUTR30DataFiles"
-        dir="${utr30.data.dir}"
-        fork="true"
-        failonerror="true">
-      <classpath>
-        <path refid="icujar"/>
-        <pathelement location="${build.dir}/classes/tools"/>
-      </classpath>
-    </java>
-  </target>
-
-  <property name="gennorm2.src.files"
-    value="nfc.txt nfkc.txt nfkc_cf.txt BasicFoldings.txt DiacriticFolding.txt DingbatFolding.txt HanRadicalFolding.txt NativeDigitFolding.txt"/>
-  <property name="gennorm2.tmp" value="${build.dir}/gennorm2/utr30.tmp"/>
-  <property name="gennorm2.dst" value="${resources.dir}/org/apache/lucene/analysis/icu/utr30.nrm"/>
-  <target name="gennorm2" depends="gen-utr30-data-files">
-    <echo>Note that the gennorm2 and icupkg tools must be on your PATH. These tools
-are part of the ICU4C package. See http://site.icu-project.org/ </echo>
-    <mkdir dir="${build.dir}/gennorm2"/>
-    <exec executable="gennorm2" failonerror="true">
-      <arg value="-v"/>
-      <arg value="-s"/>
-      <arg value="${utr30.data.dir}"/>
-      <arg line="${gennorm2.src.files}"/>
-      <arg value="-o"/>
-      <arg value="${gennorm2.tmp}"/>
-    </exec>
-    <!-- now convert binary file to big-endian -->
-    <exec executable="icupkg" failonerror="true">
-      <arg value="-tb"/>
-      <arg value="${gennorm2.tmp}"/>
-      <arg value="${gennorm2.dst}"/>
-    </exec>
-    <delete file="${gennorm2.tmp}"/>
-  </target>
-  
-  <property name="rbbi.src.dir" location="src/data/uax29"/>
-  <property name="rbbi.dst.dir" location="${resources.dir}/org/apache/lucene/analysis/icu/segmentation"/>
-  
-  <target name="genrbbi" depends="compile-tools">
-    <mkdir dir="${rbbi.dst.dir}"/>
-    <java
-      classname="org.apache.lucene.analysis.icu.RBBIRuleCompiler"
-      dir="."
-      fork="true"
-      failonerror="true">
-      <classpath>
-        <path refid="icujar"/>
-        <pathelement location="${build.dir}/classes/tools"/>
-      </classpath>
-      <assertions>
-        <enable package="org.apache.lucene"/>
-      </assertions>
-      <arg value="${rbbi.src.dir}"/>
-      <arg value="${rbbi.dst.dir}"/>
-    </java>
-  </target>
-
-  <target name="compile-tools" depends="init,common.compile-tools">
-    <compile
-      srcdir="src/tools/java"
-      destdir="${build.dir}/classes/tools">
-      <classpath refid="classpath"/>
-    </compile>
-  </target>
-
-  <target name="regenerate" depends="gen-utr30-data-files,gennorm2,genrbbi"/>
-
-</project>
diff --git a/lucene/analysis/icu/ivy.xml b/lucene/analysis/icu/ivy.xml
deleted file mode 100644
index 97b58f3..0000000
--- a/lucene/analysis/icu/ivy.xml
+++ /dev/null
@@ -1,28 +0,0 @@
-<!--
-   Licensed to the Apache Software Foundation (ASF) under one
-   or more contributor license agreements.  See the NOTICE file
-   distributed with this work for additional information
-   regarding copyright ownership.  The ASF licenses this file
-   to you under the Apache License, Version 2.0 (the
-   "License"); you may not use this file except in compliance
-   with the License.  You may obtain a copy of the License at
-
-     http://www.apache.org/licenses/LICENSE-2.0
-
-   Unless required by applicable law or agreed to in writing,
-   software distributed under the License is distributed on an
-   "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-   KIND, either express or implied.  See the License for the
-   specific language governing permissions and limitations
-   under the License.    
--->
-<ivy-module version="2.0">
-  <info organisation="org.apache.lucene" module="analyzers-icu"/>
-  <configurations defaultconfmapping="compile->master">
-    <conf name="compile" transitive="false"/>
-  </configurations>
-  <dependencies>
-    <dependency org="com.ibm.icu" name="icu4j" rev="${/com.ibm.icu/icu4j}" conf="compile"/>
-    <exclude org="*" ext="*" matcher="regexp" type="${ivy.exclude.types}"/> 
-  </dependencies>
-</ivy-module>
diff --git a/lucene/analysis/kuromoji/build.xml b/lucene/analysis/kuromoji/build.xml
deleted file mode 100644
index 7afa31d..0000000
--- a/lucene/analysis/kuromoji/build.xml
+++ /dev/null
@@ -1,98 +0,0 @@
-<?xml version="1.0"?>
-
-<!--
-    Licensed to the Apache Software Foundation (ASF) under one or more
-    contributor license agreements.  See the NOTICE file distributed with
-    this work for additional information regarding copyright ownership.
-    The ASF licenses this file to You under the Apache License, Version 2.0
-    the "License"); you may not use this file except in compliance with
-    the License.  You may obtain a copy of the License at
- 
-        http://www.apache.org/licenses/LICENSE-2.0
- 
-    Unless required by applicable law or agreed to in writing, software
-    distributed under the License is distributed on an "AS IS" BASIS,
-    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-    See the License for the specific language governing permissions and
-    limitations under the License.
- -->
-
-<project name="analyzers-kuromoji" default="default" xmlns:ivy="antlib:org.apache.ivy.ant">
-
-  <description>
-    Japanese Morphological Analyzer
-  </description>
-
-  <!-- currently whether rat detects this as binary or not
-       is platform dependent?! -->
-  <property name="rat.excludes" value="**/*.txt,**/bocchan.utf-8"/>
-
-  <!-- we don't want to pull in ipadic/naist etc -->
-  <property name="ivy.default.configuration" value="default"/>
-  <import file="../analysis-module-build.xml"/> 
-
-  <!-- default configuration: uses mecab-ipadic -->
-  <property name="ipadic.type" value="ipadic"/>
-  <property name="ipadic.version" value="mecab-ipadic-2.7.0-20070801" />
-
-  <!-- alternative configuration: uses mecab-naist-jdic
-  <property name="ipadic.type" value="naist"/>
-  <property name="ipadic.version" value="mecab-naist-jdic-0.6.3b-20111013" />
-  -->
-  
-  <property name="dict.src.file" value="${ipadic.version}.tar.gz" />
-  <property name="dict.src.dir" value="${build.dir}/${ipadic.version}" />
-  <property name="dict.encoding" value="euc-jp"/>
-  <property name="dict.format" value="ipadic"/>
-  <property name="dict.normalize" value="false"/>
-  <property name="dict.target.dir" location="${resources.dir}"/>
-
-
-  <available type="dir" file="${build.dir}/${ipadic.version}" property="dict.available"/>
-
-  <path id="classpath">
-    <dirset dir="${build.dir}">
-      <include name="classes/java"/>
-    </dirset>
-    <pathelement path="${analyzers-common.jar}"/>
-    <path refid="base.classpath"/>
-  </path>
-
-  <target name="compile-core" depends="jar-analyzers-common, common.compile-core" />
-  <target name="download-dict" depends="ivy-availability-check,ivy-fail,ivy-configure" unless="dict.available">
-     <ivy:retrieve pattern="${build.dir}/${dict.src.file}" conf="${ipadic.type}" symlink="${ivy.symlink}"/>
-     <!-- TODO: we should checksum too -->
-     <gunzip src="${build.dir}/${dict.src.file}"/>
-     <untar src="${build.dir}/${ipadic.version}.tar" dest="${build.dir}"/>
-  </target>
-
-  <target name="patch-dict" depends="download-dict">
-    <patch patchfile="src/tools/patches/Noun.proper.csv.patch"
-           originalfile="${dict.src.dir}/Noun.proper.csv"/>
-  </target>
-
-  <target name="build-dict" depends="compile, patch-dict">
-    <sequential>
-      <delete verbose="true">
-        <fileset dir="${resources.dir}/org/apache/lucene/analysis/ja/dict" includes="**/*"/>
-      </delete>
-      <!-- TODO: optimize the dictionary construction a bit so that you don't need 1G -->
-      <java fork="true" failonerror="true" maxmemory="1g" classname="org.apache.lucene.analysis.ja.util.DictionaryBuilder">
-        <classpath refid="classpath"/>
-        <assertions>
-          <enable package="org.apache.lucene"/>
-        </assertions>
-        <arg value="${dict.format}"/>
-        <arg value="${dict.src.dir}"/>
-        <arg value="${dict.target.dir}"/>
-        <arg value="${dict.encoding}"/>
-        <arg value="${dict.normalize}"/>
-      </java>
-    </sequential>
-  </target>
-
-  <target name="compile-test" depends="module-build.compile-test"/>
-
-  <target name="regenerate" depends="build-dict"/>
-
-</project>
diff --git a/lucene/analysis/kuromoji/ivy.xml b/lucene/analysis/kuromoji/ivy.xml
deleted file mode 100644
index ee256b9..0000000
--- a/lucene/analysis/kuromoji/ivy.xml
+++ /dev/null
@@ -1,37 +0,0 @@
-<!--
-   Licensed to the Apache Software Foundation (ASF) under one
-   or more contributor license agreements.  See the NOTICE file
-   distributed with this work for additional information
-   regarding copyright ownership.  The ASF licenses this file
-   to you under the Apache License, Version 2.0 (the
-   "License"); you may not use this file except in compliance
-   with the License.  You may obtain a copy of the License at
-
-     http://www.apache.org/licenses/LICENSE-2.0
-
-   Unless required by applicable law or agreed to in writing,
-   software distributed under the License is distributed on an
-   "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-   KIND, either express or implied.  See the License for the
-   specific language governing permissions and limitations
-   under the License.    
--->
-<ivy-module version="2.0">
-  <info organisation="org.apache.lucene" module="analyzers-kuromoji"/>
-  
-  <configurations defaultconfmapping="ipadic->default;naist->default"> <!-- 'master' conf not available to map to -->
-    <conf name="default" description="explicitly declare this configuration in order to not download dictionaries unless explicitly called for"/>
-    <conf name="ipadic" description="ipadic dictionary" transitive="false"/>
-    <conf name="naist" description="naist-jdic dictionary" transitive="false"/>
-  </configurations>
-
-  <dependencies>
-    <dependency org="mecab" name="mecab-ipadic" rev="${/mecab/mecab-ipadic}" conf="ipadic"> 
-      <artifact name="ipadic" type=".tar.gz" url="https://jaist.dl.sourceforge.net/project/mecab/mecab-ipadic/2.7.0-20070801/mecab-ipadic-2.7.0-20070801.tar.gz"/>
-    </dependency>
-    <dependency org="mecab" name="mecab-naist-jdic" rev="${/mecab/mecab-naist-jdic}" conf="naist">
-      <artifact name="mecab-naist-jdic" type=".tar.gz" url=" https://rwthaachen.dl.osdn.jp/naist-jdic/53500/mecab-naist-jdic-0.6.3b-20111013.tar.gz"/>
-    </dependency>
-    <exclude org="*" ext="*" matcher="regexp" type="${ivy.exclude.types}"/>
-  </dependencies>
-</ivy-module>
diff --git a/lucene/analysis/morfologik/build.xml b/lucene/analysis/morfologik/build.xml
deleted file mode 100644
index fca0622..0000000
--- a/lucene/analysis/morfologik/build.xml
+++ /dev/null
@@ -1,40 +0,0 @@
-<?xml version="1.0"?>
-
-<!--
-    Licensed to the Apache Software Foundation (ASF) under one or more
-    contributor license agreements.  See the NOTICE file distributed with
-    this work for additional information regarding copyright ownership.
-    The ASF licenses this file to You under the Apache License, Version 2.0
-    the "License"); you may not use this file except in compliance with
-    the License.  You may obtain a copy of the License at
- 
-        http://www.apache.org/licenses/LICENSE-2.0
- 
-    Unless required by applicable law or agreed to in writing, software
-    distributed under the License is distributed on an "AS IS" BASIS,
-    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-    See the License for the specific language governing permissions and
-    limitations under the License.
- -->
-
-<project name="analyzers-morfologik" default="default">
-  <description>
-    Analyzer for dictionary stemming, built-in Polish dictionary
-  </description>
-
-  <import file="../analysis-module-build.xml"/>
-
-  <path id="classpath">
-    <pathelement path="${analyzers-common.jar}"/>
-    <fileset dir="lib"/>
-    <path refid="base.classpath"/>
-  </path>
-  
-  
-  <path id="test.classpath">
-    <path refid="test.base.classpath" />
-    <pathelement path="src/test-files" />
-  </path>
-
-  <target name="compile-core" depends="jar-analyzers-common, common.compile-core" />
-</project>
diff --git a/lucene/analysis/morfologik/ivy.xml b/lucene/analysis/morfologik/ivy.xml
deleted file mode 100644
index f0cc234..0000000
--- a/lucene/analysis/morfologik/ivy.xml
+++ /dev/null
@@ -1,31 +0,0 @@
-<!--
-   Licensed to the Apache Software Foundation (ASF) under one
-   or more contributor license agreements.  See the NOTICE file
-   distributed with this work for additional information
-   regarding copyright ownership.  The ASF licenses this file
-   to you under the Apache License, Version 2.0 (the
-   "License"); you may not use this file except in compliance
-   with the License.  You may obtain a copy of the License at
-
-     http://www.apache.org/licenses/LICENSE-2.0
-
-   Unless required by applicable law or agreed to in writing,
-   software distributed under the License is distributed on an
-   "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-   KIND, either express or implied.  See the License for the
-   specific language governing permissions and limitations
-   under the License.    
--->
-<ivy-module version="2.0">
-  <info organisation="org.apache.lucene" module="analyzers-morfologik"/>
-  <configurations defaultconfmapping="compile->master">
-    <conf name="compile" transitive="false"/>
-  </configurations>
-  <dependencies>
-    <dependency org="org.carrot2" name="morfologik-polish" rev="${/org.carrot2/morfologik-polish}" conf="compile"/>
-    <dependency org="org.carrot2" name="morfologik-fsa" rev="${/org.carrot2/morfologik-fsa}" conf="compile"/>
-    <dependency org="org.carrot2" name="morfologik-stemming" rev="${/org.carrot2/morfologik-stemming}" conf="compile"/>
-    <dependency org="ua.net.nlp" name="morfologik-ukrainian-search" rev="${/ua.net.nlp/morfologik-ukrainian-search}" conf="compile"/>
-    <exclude org="*" ext="*" matcher="regexp" type="${ivy.exclude.types}"/> 
-  </dependencies>
-</ivy-module>
diff --git a/lucene/analysis/nori/build.xml b/lucene/analysis/nori/build.xml
deleted file mode 100644
index 7d5b0b9..0000000
--- a/lucene/analysis/nori/build.xml
+++ /dev/null
@@ -1,84 +0,0 @@
-<?xml version="1.0"?>
-
-<!--
-    Licensed to the Apache Software Foundation (ASF) under one or more
-    contributor license agreements.  See the NOTICE file distributed with
-    this work for additional information regarding copyright ownership.
-    The ASF licenses this file to You under the Apache License, Version 2.0
-    the "License"); you may not use this file except in compliance with
-    the License.  You may obtain a copy of the License at
-
-        http://www.apache.org/licenses/LICENSE-2.0
-
-    Unless required by applicable law or agreed to in writing, software
-    distributed under the License is distributed on an "AS IS" BASIS,
-    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-    See the License for the specific language governing permissions and
-    limitations under the License.
- -->
-
-<project name="analyzers-nori" default="default" xmlns:ivy="antlib:org.apache.ivy.ant">
-
-  <description>
-    Korean Morphological Analyzer
-  </description>
-
-  <!-- currently whether rat detects this as binary or not
-       is platform dependent?! -->
-  <property name="rat.excludes" value="**/*.txt,**/bocchan.utf-8"/>
-
-  <!-- we don't want to pull in ipadic/naist etc -->
-  <property name="ivy.default.configuration" value="default"/>
-  <import file="../analysis-module-build.xml"/>
-
-  <!-- default configuration for Korean: uses mecab-ko-dic -->
-  <property name="dict.type" value="mecab-ko-dic"/>
-  <property name="dict.version" value="mecab-ko-dic-2.0.3-20170922" />
-
-  <property name="dict.src.file" value="${dict.version}.tar.gz" />
-  <property name="dict.src.dir" value="${build.dir}/${dict.version}" />
-  <property name="dict.encoding" value="utf-8"/>
-  <property name="dict.normalize" value="false"/>
-  <property name="dict.target.dir" location="${resources.dir}"/>
-
-  <available type="dir" file="${build.dir}/${dict.version}" property="mecab-ko.dict.available"/>
-
-  <path id="classpath">
-    <dirset dir="${build.dir}">
-      <include name="classes/java"/>
-    </dirset>
-    <pathelement path="${analyzers-common.jar}"/>
-    <path refid="base.classpath"/>
-  </path>
-
-  <target name="compile-core" depends="jar-analyzers-common, common.compile-core" />
-  <target name="download-dict" depends="ivy-availability-check,ivy-fail,ivy-configure" unless="mecab-ko.dict.available">
-    <ivy:retrieve pattern="${build.dir}/${dict.src.file}" conf="${dict.type}" symlink="${ivy.symlink}"/>
-    <!-- TODO: we should checksum too -->
-    <gunzip src="${build.dir}/${dict.src.file}"/>
-    <untar src="${build.dir}/${dict.version}.tar" dest="${build.dir}"/>
-  </target>
-
-  <target name="build-dict" depends="compile, download-dict">
-    <sequential>
-      <delete verbose="true">
-        <fileset dir="${resources.dir}/org/apache/lucene/analysis/ko/dict" includes="**/*"/>
-      </delete>
-      <!-- TODO: optimize the dictionary construction a bit so that you don't need 1G -->
-      <java fork="true" failonerror="true" maxmemory="1g" classname="org.apache.lucene.analysis.ko.util.DictionaryBuilder">
-        <classpath refid="classpath"/>
-        <assertions>
-          <enable package="org.apache.lucene"/>
-        </assertions>
-        <arg value="${dict.src.dir}"/>
-        <arg value="${dict.target.dir}"/>
-        <arg value="${dict.encoding}"/>
-        <arg value="${dict.normalize}"/>
-      </java>
-    </sequential>
-  </target>
-
-  <target name="compile-test" depends="module-build.compile-test"/>
-
-  <target name="regenerate" depends="build-dict"/>
-</project>
diff --git a/lucene/analysis/nori/ivy.xml b/lucene/analysis/nori/ivy.xml
deleted file mode 100644
index 8d32937..0000000
--- a/lucene/analysis/nori/ivy.xml
+++ /dev/null
@@ -1,33 +0,0 @@
-<!--
-   Licensed to the Apache Software Foundation (ASF) under one
-   or more contributor license agreements.  See the NOTICE file
-   distributed with this work for additional information
-   regarding copyright ownership.  The ASF licenses this file
-   to you under the Apache License, Version 2.0 (the
-   "License"); you may not use this file except in compliance
-   with the License.  You may obtain a copy of the License at
-
-     http://www.apache.org/licenses/LICENSE-2.0
-
-   Unless required by applicable law or agreed to in writing,
-   software distributed under the License is distributed on an
-   "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-   KIND, either express or implied.  See the License for the
-   specific language governing permissions and limitations
-   under the License.
--->
-<ivy-module version="2.0">
-  <info organisation="org.apache.lucene" module="analyzers-nori"/>
-
-  <configurations defaultconfmapping="mecab-ko-dic->default"> <!-- 'master' conf not available to map to -->
-    <conf name="default" description="explicitly declare this configuration in order to not download dictionaries unless explicitly called for"/>
-    <conf name="mecab-ko-dic" description="mecab-ko dictionary for Korean" transitive="false"/>
-  </configurations>
-
-  <dependencies>
-    <dependency org="mecab" name="mecab-ko-dic" rev="${/mecab/mecab-ko-dic}" conf="mecab-ko-dic">
-      <artifact name="mecab-ko-dic" type=".tar.gz" url="https://bitbucket.org/eunjeon/mecab-ko-dic/downloads/mecab-ko-dic-2.0.3-20170922.tar.gz" />
-    </dependency>
-    <exclude org="*" ext="*" matcher="regexp" type="${ivy.exclude.types}"/>
-  </dependencies>
-</ivy-module>
diff --git a/lucene/analysis/opennlp/build.xml b/lucene/analysis/opennlp/build.xml
deleted file mode 100644
index 04f7eac..0000000
--- a/lucene/analysis/opennlp/build.xml
+++ /dev/null
@@ -1,118 +0,0 @@
-<?xml version="1.0"?>
-
-<!--
-    Licensed to the Apache Software Foundation (ASF) under one or more
-    contributor license agreements.  See the NOTICE file distributed with
-    this work for additional information regarding copyright ownership.
-    The ASF licenses this file to You under the Apache License, Version 2.0
-    the "License"); you may not use this file except in compliance with
-    the License.  You may obtain a copy of the License at
-
-        http://www.apache.org/licenses/LICENSE-2.0
-
-    Unless required by applicable law or agreed to in writing, software
-    distributed under the License is distributed on an "AS IS" BASIS,
-    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-    See the License for the specific language governing permissions and
-    limitations under the License.
- -->
-
-<project name="analyzers-opennlp" default="default">
-
-  <description>
-    OpenNLP Library Integration
-  </description>
-
-  <path id="opennlpjars">
-    <fileset dir="lib"/>
-  </path>
-
-  <property name="test.model.data.dir" location="src/tools/test-model-data"/>
-  <property name="tests.userdir" location="src/test-files"/>
-  <property name="test.model.dir" location="${tests.userdir}/org/apache/lucene/analysis/opennlp"/>
-
-  <import file="../analysis-module-build.xml"/>
-
-  <property name="analysis-extras.conf.dir"
-            location="${common.dir}/../solr/contrib/analysis-extras/src/test-files/analysis-extras/solr/collection1/conf"/>
-
-  <path id="classpath">
-    <pathelement path="${analyzers-common.jar}"/>
-    <path refid="opennlpjars"/>
-    <path refid="base.classpath"/>
-  </path>
-
-  <path id="test.classpath">
-    <path refid="test.base.classpath"/>
-    <pathelement path="${tests.userdir}"/>
-  </path>
-
-  <target name="compile-core" depends="jar-analyzers-common, common.compile-core" />
-
-  <!--
-    This does not create real NLP models, just small unencumbered ones for the unit tests.
-    All text taken from reuters corpus.
-    Tags applied with online demos at CCG Urbana-Champaign.
-    -->
-  <target name="train-test-models" description="Train all small test models for unit tests" depends="resolve">
-    <mkdir dir="${test.model.dir}"/>
-    <!-- https://opennlp.apache.org/docs/1.9.0/manual/opennlp.html#tools.sentdetect.training -->
-    <trainModel command="SentenceDetectorTrainer" lang="en" data="sentences.txt" model="en-test-sent.bin"/>
-    <copy file="${test.model.dir}/en-test-sent.bin" todir="${analysis-extras.conf.dir}"/>
-
-    <!-- https://opennlp.apache.org/docs/1.9.0/manual/opennlp.html#tools.tokenizer.training -->
-    <trainModel command="TokenizerTrainer" lang="en" data="tokenizer.txt" model="en-test-tokenizer.bin"/>
-    <copy file="${test.model.dir}/en-test-tokenizer.bin" todir="${analysis-extras.conf.dir}"/>
-
-    <!-- https://opennlp.apache.org/docs/1.9.0/manual/opennlp.html#tools.postagger.training -->
-    <trainModel command="POSTaggerTrainer" lang="en" data="pos.txt" model="en-test-pos-maxent.bin"/>
-
-    <!-- https://opennlp.apache.org/docs/1.9.0/manual/opennlp.html#tools.chunker.training -->
-    <trainModel command="ChunkerTrainerME" lang="en" data="chunks.txt" model="en-test-chunker.bin"/>
-
-    <!-- https://opennlp.apache.org/docs/1.9.0/manual/opennlp.html#tools.namefind.training -->
-    <trainModel command="TokenNameFinderTrainer" lang="en" data="ner.txt" model="en-test-ner.bin">
-      <extra-args>
-        <arg value="-params"/>
-        <arg value="ner_TrainerParams.txt"/>
-      </extra-args>
-    </trainModel>
-    <copy file="${test.model.dir}/en-test-ner.bin" todir="${analysis-extras.conf.dir}"/>
-
-    <!-- https://opennlp.apache.org/docs/1.9.0/manual/opennlp.html#tools.lemmatizer.training -->
-    <trainModel command="LemmatizerTrainerME" lang="en" data="lemmas.txt" model="en-test-lemmatizer.bin"/>
-  </target>
-
-  <macrodef name="trainModel">
-    <attribute name="command"/>
-    <attribute name="lang"/>
-    <attribute name="data"/>
-    <attribute name="model"/>
-    <element name="extra-args" optional="true"/>
-    <sequential>
-      <java classname="opennlp.tools.cmdline.CLI"
-            dir="${test.model.data.dir}"
-            fork="true"
-            failonerror="true">
-        <classpath>
-          <path refid="opennlpjars"/>
-        </classpath>
-
-        <arg value="@{command}"/>
-
-        <arg value="-lang"/>
-        <arg value="@{lang}"/>
-
-        <arg value="-data"/>
-        <arg value="@{data}"/>
-
-        <arg value="-model"/>
-        <arg value="${test.model.dir}/@{model}"/>
-
-        <extra-args/>
-      </java>
-    </sequential>
-  </macrodef>
-
-  <target name="regenerate" depends="train-test-models"/>
-</project>
diff --git a/lucene/analysis/opennlp/ivy.xml b/lucene/analysis/opennlp/ivy.xml
deleted file mode 100644
index cbbae64..0000000
--- a/lucene/analysis/opennlp/ivy.xml
+++ /dev/null
@@ -1,28 +0,0 @@
-<!--
-   Licensed to the Apache Software Foundation (ASF) under one
-   or more contributor license agreements.  See the NOTICE file
-   distributed with this work for additional information
-   regarding copyright ownership.  The ASF licenses this file
-   to you under the Apache License, Version 2.0 (the
-   "License"); you may not use this file except in compliance
-   with the License.  You may obtain a copy of the License at
-
-     http://www.apache.org/licenses/LICENSE-2.0
-
-   Unless required by applicable law or agreed to in writing,
-   software distributed under the License is distributed on an
-   "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-   KIND, either express or implied.  See the License for the
-   specific language governing permissions and limitations
-   under the License.
--->
-<ivy-module version="2.0">
-  <info organisation="org.apache.lucene" module="analyzers-opennlp" />
-  <configurations defaultconfmapping="compile->master">
-    <conf name="compile" transitive="false"/>
-  </configurations>
-  <dependencies>
-    <dependency org="org.apache.opennlp" name="opennlp-tools" rev="${/org.apache.opennlp/opennlp-tools}" transitive="false" conf="compile" />
-    <exclude org="*" ext="*" matcher="regexp" type="${ivy.exclude.types}" />
-  </dependencies>
-</ivy-module>
diff --git a/lucene/analysis/phonetic/build.xml b/lucene/analysis/phonetic/build.xml
deleted file mode 100644
index 49d5726..0000000
--- a/lucene/analysis/phonetic/build.xml
+++ /dev/null
@@ -1,35 +0,0 @@
-<?xml version="1.0"?>
-
-<!--
-    Licensed to the Apache Software Foundation (ASF) under one or more
-    contributor license agreements.  See the NOTICE file distributed with
-    this work for additional information regarding copyright ownership.
-    The ASF licenses this file to You under the Apache License, Version 2.0
-    the "License"); you may not use this file except in compliance with
-    the License.  You may obtain a copy of the License at
- 
-        http://www.apache.org/licenses/LICENSE-2.0
- 
-    Unless required by applicable law or agreed to in writing, software
-    distributed under the License is distributed on an "AS IS" BASIS,
-    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-    See the License for the specific language governing permissions and
-    limitations under the License.
- -->
-
-<project name="analyzers-phonetic" default="default">
-
-  <description>
-    Analyzer for indexing phonetic signatures (for sounds-alike search)
-  </description>
-
-  <import file="../analysis-module-build.xml"/>
-
-  <path id="classpath">
-    <pathelement path="${analyzers-common.jar}"/>
-    <fileset dir="lib"/>
-    <path refid="base.classpath"/>
-  </path>
-
-  <target name="compile-core" depends="jar-analyzers-common, common.compile-core" />
-</project>
diff --git a/lucene/analysis/phonetic/ivy.xml b/lucene/analysis/phonetic/ivy.xml
deleted file mode 100644
index df7d800..0000000
--- a/lucene/analysis/phonetic/ivy.xml
+++ /dev/null
@@ -1,28 +0,0 @@
-<!--
-   Licensed to the Apache Software Foundation (ASF) under one
-   or more contributor license agreements.  See the NOTICE file
-   distributed with this work for additional information
-   regarding copyright ownership.  The ASF licenses this file
-   to you under the Apache License, Version 2.0 (the
-   "License"); you may not use this file except in compliance
-   with the License.  You may obtain a copy of the License at
-
-     http://www.apache.org/licenses/LICENSE-2.0
-
-   Unless required by applicable law or agreed to in writing,
-   software distributed under the License is distributed on an
-   "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-   KIND, either express or implied.  See the License for the
-   specific language governing permissions and limitations
-   under the License.    
--->
-<ivy-module version="2.0">
-  <info organisation="org.apache.lucene" module="analyzers-phonetic"/>
-  <configurations defaultconfmapping="compile->master">
-    <conf name="compile" transitive="false"/>
-  </configurations>
-  <dependencies>
-    <dependency org="commons-codec" name="commons-codec" rev="${/commons-codec/commons-codec}" conf="compile"/>
-    <exclude org="*" ext="*" matcher="regexp" type="${ivy.exclude.types}"/> 
-  </dependencies>
-</ivy-module>
diff --git a/lucene/analysis/smartcn/build.xml b/lucene/analysis/smartcn/build.xml
deleted file mode 100644
index 01e0682..0000000
--- a/lucene/analysis/smartcn/build.xml
+++ /dev/null
@@ -1,34 +0,0 @@
-<?xml version="1.0"?>
-
-<!--
-    Licensed to the Apache Software Foundation (ASF) under one or more
-    contributor license agreements.  See the NOTICE file distributed with
-    this work for additional information regarding copyright ownership.
-    The ASF licenses this file to You under the Apache License, Version 2.0
-    the "License"); you may not use this file except in compliance with
-    the License.  You may obtain a copy of the License at
- 
-        http://www.apache.org/licenses/LICENSE-2.0
- 
-    Unless required by applicable law or agreed to in writing, software
-    distributed under the License is distributed on an "AS IS" BASIS,
-    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-    See the License for the specific language governing permissions and
-    limitations under the License.
- -->
-
-<project name="analyzers-smartcn" default="default">
-
-  <description>
-    Analyzer for indexing Chinese
-  </description>
-
-  <import file="../analysis-module-build.xml"/>
-
-  <path id="classpath">
-    <pathelement path="${analyzers-common.jar}"/>
-    <path refid="base.classpath"/>
-  </path>
-
-  <target name="compile-core" depends="jar-analyzers-common, common.compile-core" />
-</project>
diff --git a/lucene/analysis/smartcn/ivy.xml b/lucene/analysis/smartcn/ivy.xml
deleted file mode 100644
index 842688d..0000000
--- a/lucene/analysis/smartcn/ivy.xml
+++ /dev/null
@@ -1,21 +0,0 @@
-<!--
-   Licensed to the Apache Software Foundation (ASF) under one
-   or more contributor license agreements.  See the NOTICE file
-   distributed with this work for additional information
-   regarding copyright ownership.  The ASF licenses this file
-   to you under the Apache License, Version 2.0 (the
-   "License"); you may not use this file except in compliance
-   with the License.  You may obtain a copy of the License at
-
-     http://www.apache.org/licenses/LICENSE-2.0
-
-   Unless required by applicable law or agreed to in writing,
-   software distributed under the License is distributed on an
-   "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-   KIND, either express or implied.  See the License for the
-   specific language governing permissions and limitations
-   under the License.    
--->
-<ivy-module version="2.0">
-  <info organisation="org.apache.lucene" module="analyzers-smartcn"/>
-</ivy-module>
diff --git a/lucene/analysis/stempel/build.xml b/lucene/analysis/stempel/build.xml
deleted file mode 100644
index 64a823a..0000000
--- a/lucene/analysis/stempel/build.xml
+++ /dev/null
@@ -1,34 +0,0 @@
-<?xml version="1.0"?>
-
-<!--
-    Licensed to the Apache Software Foundation (ASF) under one or more
-    contributor license agreements.  See the NOTICE file distributed with
-    this work for additional information regarding copyright ownership.
-    The ASF licenses this file to You under the Apache License, Version 2.0
-    the "License"); you may not use this file except in compliance with
-    the License.  You may obtain a copy of the License at
- 
-        http://www.apache.org/licenses/LICENSE-2.0
- 
-    Unless required by applicable law or agreed to in writing, software
-    distributed under the License is distributed on an "AS IS" BASIS,
-    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-    See the License for the specific language governing permissions and
-    limitations under the License.
- -->
-
-<project name="analyzers-stempel" default="default">
-
-  <description>
-    Analyzer for indexing Polish
-  </description>
-
-  <import file="../analysis-module-build.xml"/>
-
-  <path id="classpath">
-    <pathelement path="${analyzers-common.jar}"/>
-    <path refid="base.classpath"/>
-  </path>
-  
-  <target name="compile-core" depends="jar-analyzers-common, common.compile-core"/>
-</project>
diff --git a/lucene/analysis/stempel/ivy.xml b/lucene/analysis/stempel/ivy.xml
deleted file mode 100644
index afccee3..0000000
--- a/lucene/analysis/stempel/ivy.xml
+++ /dev/null
@@ -1,21 +0,0 @@
-<!--
-   Licensed to the Apache Software Foundation (ASF) under one
-   or more contributor license agreements.  See the NOTICE file
-   distributed with this work for additional information
-   regarding copyright ownership.  The ASF licenses this file
-   to you under the Apache License, Version 2.0 (the
-   "License"); you may not use this file except in compliance
-   with the License.  You may obtain a copy of the License at
-
-     http://www.apache.org/licenses/LICENSE-2.0
-
-   Unless required by applicable law or agreed to in writing,
-   software distributed under the License is distributed on an
-   "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-   KIND, either express or implied.  See the License for the
-   specific language governing permissions and limitations
-   under the License.    
--->
-<ivy-module version="2.0">
-  <info organisation="org.apache.lucene" module="analyzers-stempel"/>
-</ivy-module>
diff --git a/lucene/backward-codecs/build.xml b/lucene/backward-codecs/build.xml
deleted file mode 100644
index 3de2979..0000000
--- a/lucene/backward-codecs/build.xml
+++ /dev/null
@@ -1,26 +0,0 @@
-<!--
-  Licensed to the Apache Software Foundation (ASF) under one or more
-  contributor license agreements.  See the NOTICE file distributed with
-  this work for additional information regarding copyright ownership.
-  The ASF licenses this file to You under the Apache License, Version 2.0
-  (the "License"); you may not use this file except in compliance with
-  the License.  You may obtain a copy of the License at
-
-      http://www.apache.org/licenses/LICENSE-2.0
-
-  Unless required by applicable law or agreed to in writing, software
-  distributed under the License is distributed on an "AS IS" BASIS,
-  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-  See the License for the specific language governing permissions and
-  limitations under the License.
-  -->
-
-<project name="backward-codecs" default="default">
-
-  <description>
-    Codecs for older versions of Lucene.
-  </description>
-
-  <import file="../module-build.xml"/>
-
-</project>
diff --git a/lucene/backward-codecs/ivy.xml b/lucene/backward-codecs/ivy.xml
deleted file mode 100644
index fe933f0..0000000
--- a/lucene/backward-codecs/ivy.xml
+++ /dev/null
@@ -1,21 +0,0 @@
-<!--
-   Licensed to the Apache Software Foundation (ASF) under one
-   or more contributor license agreements.  See the NOTICE file
-   distributed with this work for additional information
-   regarding copyright ownership.  The ASF licenses this file
-   to you under the Apache License, Version 2.0 (the
-   "License"); you may not use this file except in compliance
-   with the License.  You may obtain a copy of the License at
-
-     http://www.apache.org/licenses/LICENSE-2.0
-
-   Unless required by applicable law or agreed to in writing,
-   software distributed under the License is distributed on an
-   "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-   KIND, either express or implied.  See the License for the
-   specific language governing permissions and limitations
-   under the License.    
--->
-<ivy-module version="2.0">
-  <info organisation="org.apache.lucene" module="backward-codecs"/>
-</ivy-module>
diff --git a/lucene/backward-codecs/src/java/org/apache/lucene/codecs/lucene50/Lucene50StoredFieldsFormat.java b/lucene/backward-codecs/src/java/org/apache/lucene/codecs/lucene50/Lucene50StoredFieldsFormat.java
new file mode 100644
index 0000000..6f3b162
--- /dev/null
+++ b/lucene/backward-codecs/src/java/org/apache/lucene/codecs/lucene50/Lucene50StoredFieldsFormat.java
@@ -0,0 +1,152 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.lucene.codecs.lucene50;
+
+
+import java.io.IOException;
+import java.util.Objects;
+
+import org.apache.lucene.codecs.StoredFieldsFormat;
+import org.apache.lucene.codecs.StoredFieldsReader;
+import org.apache.lucene.codecs.StoredFieldsWriter;
+import org.apache.lucene.codecs.compressing.CompressingStoredFieldsFormat;
+import org.apache.lucene.codecs.compressing.CompressionMode;
+import org.apache.lucene.index.FieldInfos;
+import org.apache.lucene.index.SegmentInfo;
+import org.apache.lucene.index.StoredFieldVisitor;
+import org.apache.lucene.store.Directory;
+import org.apache.lucene.store.IOContext;
+import org.apache.lucene.util.packed.DirectMonotonicWriter;
+
+/**
+ * Lucene 5.0 stored fields format.
+ *
+ * <p><b>Principle</b>
+ * <p>This {@link StoredFieldsFormat} compresses blocks of documents in
+ * order to improve the compression ratio compared to document-level
+ * compression. It uses the <a href="http://code.google.com/p/lz4/">LZ4</a>
+ * compression algorithm by default in 16KB blocks, which compresses quickly
+ * and decompresses very quickly. Although the default compression method
+ * ({@link Mode#BEST_SPEED BEST_SPEED}) focuses more on speed than on
+ * compression ratio, it should provide good compression ratios
+ * for redundant inputs (such as log files, HTML or plain text). For higher
+ * compression, you can choose ({@link Mode#BEST_COMPRESSION BEST_COMPRESSION}), which uses 
+ * the <a href="http://en.wikipedia.org/wiki/DEFLATE">DEFLATE</a> algorithm with 60KB blocks 
+ * for a better ratio at the expense of slower performance. 
+ * These two options can be configured like this:
+ * <pre class="prettyprint">
+ *   // the default: for high performance
+ *   indexWriterConfig.setCodec(new Lucene54Codec(Mode.BEST_SPEED));
+ *   // instead for higher performance (but slower):
+ *   // indexWriterConfig.setCodec(new Lucene54Codec(Mode.BEST_COMPRESSION));
+ * </pre>
+ * <p><b>File formats</b>
+ * <p>Stored fields are represented by three files:
+ * <ol>
+ * <li><a id="field_data"></a>
+ * <p>A fields data file (extension <code>.fdt</code>). This file stores a compact
+ * representation of documents in compressed blocks of 16KB or more. When
+ * writing a segment, documents are appended to an in-memory <code>byte[]</code>
+ * buffer. When its size reaches 16KB or more, some metadata about the documents
+ * is flushed to disk, immediately followed by a compressed representation of
+ * the buffer using the
+ * <a href="https://github.com/lz4/lz4">LZ4</a>
+ * <a href="http://fastcompression.blogspot.fr/2011/05/lz4-explained.html">compression format</a>.</p>
+ * <p>Notes
+ * <ul>
+ * <li>If documents are larger than 16KB then chunks will likely contain only
+ * one document. However, documents can never spread across several chunks (all
+ * fields of a single document are in the same chunk).</li>
+ * <li>When at least one document in a chunk is large enough so that the chunk
+ * is larger than 32KB, the chunk will actually be compressed in several LZ4
+ * blocks of 16KB. This allows {@link StoredFieldVisitor}s which are only
+ * interested in the first fields of a document to not have to decompress 10MB
+ * of data if the document is 10MB, but only 16KB.</li>
+ * <li>Given that the original lengths are written in the metadata of the chunk,
+ * the decompressor can leverage this information to stop decoding as soon as
+ * enough data has been decompressed.</li>
+ * <li>In case documents are incompressible, the overhead of the compression format
+ * is less than 0.5%.</li>
+ * </ul>
+ * </li>
+ * <li><a id="field_index"></a>
+ * <p>A fields index file (extension <code>.fdx</code>). This file stores two
+ * {@link DirectMonotonicWriter monotonic arrays}, one for the first doc IDs of
+ * each block of compressed documents, and another one for the corresponding
+ * offsets on disk. At search time, the array containing doc IDs is
+ * binary-searched in order to find the block that contains the expected doc ID,
+ * and the associated offset on disk is retrieved from the second array.</p></li>
+ * <li><a id="field_meta"></a>
+ * <p>A fields meta file (extension <code>.fdm</code>). This file stores metadata
+ * about the monotonic arrays stored in the index file.</p>
+ * </li>
+ * </ol>
+ * <p><b>Known limitations</b>
+ * <p>This {@link StoredFieldsFormat} does not support individual documents
+ * larger than (<code>2<sup>31</sup> - 2<sup>14</sup></code>) bytes.
+ * @lucene.experimental
+ */
+public class Lucene50StoredFieldsFormat extends StoredFieldsFormat {
+  
+  /** Configuration option for stored fields. */
+  public static enum Mode {
+    /** Trade compression ratio for retrieval speed. */
+    BEST_SPEED,
+    /** Trade retrieval speed for compression ratio. */
+    BEST_COMPRESSION
+  }
+  
+  /** Attribute key for compression mode. */
+  public static final String MODE_KEY = Lucene50StoredFieldsFormat.class.getSimpleName() + ".mode";
+  
+  final Mode mode;
+  
+  /** Stored fields format with default options */
+  public Lucene50StoredFieldsFormat() {
+    this(Mode.BEST_SPEED);
+  }
+  
+  /** Stored fields format with specified mode */
+  public Lucene50StoredFieldsFormat(Mode mode) {
+    this.mode = Objects.requireNonNull(mode);
+  }
+
+  @Override
+  public final StoredFieldsReader fieldsReader(Directory directory, SegmentInfo si, FieldInfos fn, IOContext context) throws IOException {
+    String value = si.getAttribute(MODE_KEY);
+    if (value == null) {
+      throw new IllegalStateException("missing value for " + MODE_KEY + " for segment: " + si.name);
+    }
+    Mode mode = Mode.valueOf(value);
+    return impl(mode).fieldsReader(directory, si, fn, context);
+  }
+
+  @Override
+  public StoredFieldsWriter fieldsWriter(Directory directory, SegmentInfo si, IOContext context) throws IOException {
+    throw new UnsupportedOperationException("Old codecs may only be used for reading");
+  }
+  
+  StoredFieldsFormat impl(Mode mode) {
+    switch (mode) {
+      case BEST_SPEED: 
+        return new CompressingStoredFieldsFormat("Lucene50StoredFieldsFastData", CompressionMode.FAST, 1 << 14, 128, 10);
+      case BEST_COMPRESSION: 
+        return new CompressingStoredFieldsFormat("Lucene50StoredFieldsHighData", CompressionMode.HIGH_COMPRESSION, 61440, 512, 10);
+      default: throw new AssertionError();
+    }
+  }
+}
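Since this format is read-only here in backward-codecs, it is normally exercised implicitly: opening an index written by an older 8.x codec resolves it by name from the segment metadata. A minimal sketch, assuming a hypothetical index path:

    import java.io.IOException;
    import java.nio.file.Paths;
    import org.apache.lucene.index.DirectoryReader;
    import org.apache.lucene.store.FSDirectory;

    public class ReadOldIndexSketch {
      public static void main(String[] args) throws IOException {
        // "/path/to/old-8x-index" is illustrative; the codec (and this stored
        // fields format) is resolved via SPI from the segment metadata, so no
        // explicit configuration is needed on the read path.
        try (DirectoryReader reader = DirectoryReader.open(
            FSDirectory.open(Paths.get("/path/to/old-8x-index")))) {
          System.out.println("numDocs=" + reader.numDocs());
        }
      }
    }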
diff --git a/lucene/backward-codecs/src/java/org/apache/lucene/codecs/lucene84/Lucene84Codec.java b/lucene/backward-codecs/src/java/org/apache/lucene/codecs/lucene84/Lucene84Codec.java
index bef5633..90918c1 100644
--- a/lucene/backward-codecs/src/java/org/apache/lucene/codecs/lucene84/Lucene84Codec.java
+++ b/lucene/backward-codecs/src/java/org/apache/lucene/codecs/lucene84/Lucene84Codec.java
@@ -97,7 +97,7 @@
   }
 
   @Override
-  public final StoredFieldsFormat storedFieldsFormat() {
+  public StoredFieldsFormat storedFieldsFormat() {
     return storedFieldsFormat;
   }
 
diff --git a/lucene/backward-codecs/src/java/org/apache/lucene/codecs/lucene86/Lucene86Codec.java b/lucene/backward-codecs/src/java/org/apache/lucene/codecs/lucene86/Lucene86Codec.java
new file mode 100644
index 0000000..e297465
--- /dev/null
+++ b/lucene/backward-codecs/src/java/org/apache/lucene/codecs/lucene86/Lucene86Codec.java
@@ -0,0 +1,178 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.lucene.codecs.lucene86;
+
+import java.util.Objects;
+
+import org.apache.lucene.codecs.Codec;
+import org.apache.lucene.codecs.CompoundFormat;
+import org.apache.lucene.codecs.DocValuesFormat;
+import org.apache.lucene.codecs.FieldInfosFormat;
+import org.apache.lucene.codecs.FilterCodec;
+import org.apache.lucene.codecs.LiveDocsFormat;
+import org.apache.lucene.codecs.NormsFormat;
+import org.apache.lucene.codecs.PointsFormat;
+import org.apache.lucene.codecs.PostingsFormat;
+import org.apache.lucene.codecs.SegmentInfoFormat;
+import org.apache.lucene.codecs.StoredFieldsFormat;
+import org.apache.lucene.codecs.TermVectorsFormat;
+import org.apache.lucene.codecs.lucene50.Lucene50CompoundFormat;
+import org.apache.lucene.codecs.lucene50.Lucene50LiveDocsFormat;
+import org.apache.lucene.codecs.lucene50.Lucene50StoredFieldsFormat;
+import org.apache.lucene.codecs.lucene50.Lucene50TermVectorsFormat;
+import org.apache.lucene.codecs.lucene60.Lucene60FieldInfosFormat;
+import org.apache.lucene.codecs.lucene80.Lucene80NormsFormat;
+import org.apache.lucene.codecs.lucene84.Lucene84PostingsFormat;
+import org.apache.lucene.codecs.perfield.PerFieldDocValuesFormat;
+import org.apache.lucene.codecs.perfield.PerFieldPostingsFormat;
+
+/**
+ * Implements the Lucene 8.6 index format, with configurable per-field postings
+ * and docvalues formats.
+ * <p>
+ * If you want to reuse functionality of this codec in another codec, extend
+ * {@link FilterCodec}.
+ *
+ * @see org.apache.lucene.codecs.lucene86 package documentation for file format details.
+ *
+ * @lucene.experimental
+ */
+public class Lucene86Codec extends Codec {
+  private final TermVectorsFormat vectorsFormat = new Lucene50TermVectorsFormat();
+  private final FieldInfosFormat fieldInfosFormat = new Lucene60FieldInfosFormat();
+  private final SegmentInfoFormat segmentInfosFormat = new Lucene86SegmentInfoFormat();
+  private final LiveDocsFormat liveDocsFormat = new Lucene50LiveDocsFormat();
+  private final CompoundFormat compoundFormat = new Lucene50CompoundFormat();
+  private final PointsFormat pointsFormat = new Lucene86PointsFormat();
+  private final PostingsFormat defaultFormat;
+
+  private final PostingsFormat postingsFormat = new PerFieldPostingsFormat() {
+    @Override
+    public PostingsFormat getPostingsFormatForField(String field) {
+      return Lucene86Codec.this.getPostingsFormatForField(field);
+    }
+  };
+
+  private final DocValuesFormat docValuesFormat = new PerFieldDocValuesFormat() {
+    @Override
+    public DocValuesFormat getDocValuesFormatForField(String field) {
+      return Lucene86Codec.this.getDocValuesFormatForField(field);
+    }
+  };
+
+  private final StoredFieldsFormat storedFieldsFormat;
+
+  /**
+   * Instantiates a new codec.
+   */
+  public Lucene86Codec() {
+    this(Lucene50StoredFieldsFormat.Mode.BEST_SPEED);
+  }
+
+  /**
+   * Instantiates a new codec, specifying the stored fields compression
+   * mode to use.
+   * @param mode stored fields compression mode to use for newly
+   *             flushed/merged segments.
+   */
+  public Lucene86Codec(Lucene50StoredFieldsFormat.Mode mode) {
+    super("Lucene86");
+    this.storedFieldsFormat = new Lucene50StoredFieldsFormat(Objects.requireNonNull(mode));
+    this.defaultFormat = new Lucene84PostingsFormat();
+  }
+
+  @Override
+  public StoredFieldsFormat storedFieldsFormat() {
+    return storedFieldsFormat;
+  }
+
+  @Override
+  public final TermVectorsFormat termVectorsFormat() {
+    return vectorsFormat;
+  }
+
+  @Override
+  public final PostingsFormat postingsFormat() {
+    return postingsFormat;
+  }
+
+  @Override
+  public final FieldInfosFormat fieldInfosFormat() {
+    return fieldInfosFormat;
+  }
+
+  @Override
+  public final SegmentInfoFormat segmentInfoFormat() {
+    return segmentInfosFormat;
+  }
+
+  @Override
+  public final LiveDocsFormat liveDocsFormat() {
+    return liveDocsFormat;
+  }
+
+  @Override
+  public final CompoundFormat compoundFormat() {
+    return compoundFormat;
+  }
+
+  @Override
+  public final PointsFormat pointsFormat() {
+    return pointsFormat;
+  }
+
+  /** Returns the postings format that should be used for writing
+   *  new segments of <code>field</code>.
+   *
+   *  The default implementation always returns "Lucene84".
+   *  <p>
+   *  <b>WARNING:</b> if you subclass, you are responsible for index
+   *  backwards compatibility: future versions of Lucene are only
+   *  guaranteed to be able to read the default implementation.
+   */
+  public PostingsFormat getPostingsFormatForField(String field) {
+    return defaultFormat;
+  }
+
+  /** Returns the docvalues format that should be used for writing
+   *  new segments of <code>field</code>.
+   *
+   *  The default implementation always returns "Lucene80".
+   *  <p>
+   *  <b>WARNING:</b> if you subclass, you are responsible for index
+   *  backwards compatibility: future versions of Lucene are only
+   *  guaranteed to be able to read the default implementation.
+   */
+  public DocValuesFormat getDocValuesFormatForField(String field) {
+    return defaultDVFormat;
+  }
+
+  @Override
+  public final DocValuesFormat docValuesFormat() {
+    return docValuesFormat;
+  }
+
+  private final DocValuesFormat defaultDVFormat = DocValuesFormat.forName("Lucene80");
+
+  private final NormsFormat normsFormat = new Lucene80NormsFormat();
+
+  @Override
+  public final NormsFormat normsFormat() {
+    return normsFormat;
+  }
+}
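The per-field getters above are the documented extension point. A minimal sketch of overriding one, with an illustrative field name; note that the stored-fields write path of this back-compat codec throws, so new segments should be written with the current default codec (Lucene87Codec, as the CreateIndexTask change below shows):

    import org.apache.lucene.codecs.Codec;
    import org.apache.lucene.codecs.PostingsFormat;
    import org.apache.lucene.codecs.lucene86.Lucene86Codec;

    public class PerFieldPostingsSketch {
      public static Codec withCustomIdPostings() {
        return new Lucene86Codec() {
          @Override
          public PostingsFormat getPostingsFormatForField(String field) {
            // "id" is a hypothetical field; all other fields keep the codec default.
            return "id".equals(field)
                ? PostingsFormat.forName("Lucene84")
                : super.getPostingsFormatForField(field);
          }
        };
      }
    }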
diff --git a/lucene/backward-codecs/src/java/org/apache/lucene/codecs/lucene86/package.html b/lucene/backward-codecs/src/java/org/apache/lucene/codecs/lucene86/package.html
new file mode 100644
index 0000000..10560c6
--- /dev/null
+++ b/lucene/backward-codecs/src/java/org/apache/lucene/codecs/lucene86/package.html
@@ -0,0 +1,25 @@
+<!doctype html public "-//w3c//dtd html 4.0 transitional//en">
+<!--
+ Licensed to the Apache Software Foundation (ASF) under one or more
+ contributor license agreements.  See the NOTICE file distributed with
+ this work for additional information regarding copyright ownership.
+ The ASF licenses this file to You under the Apache License, Version 2.0
+ (the "License"); you may not use this file except in compliance with
+ the License.  You may obtain a copy of the License at
+
+     http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+-->
+<html>
+<head>
+   <meta http-equiv="Content-Type" content="text/html; charset=iso-8859-1">
+</head>
+<body>
+Lucene 8.6 file format.
+</body>
+</html>
diff --git a/lucene/backward-codecs/src/resources/META-INF/services/org.apache.lucene.codecs.Codec b/lucene/backward-codecs/src/resources/META-INF/services/org.apache.lucene.codecs.Codec
index cf7a945..d673233 100644
--- a/lucene/backward-codecs/src/resources/META-INF/services/org.apache.lucene.codecs.Codec
+++ b/lucene/backward-codecs/src/resources/META-INF/services/org.apache.lucene.codecs.Codec
@@ -15,3 +15,4 @@
 
 org.apache.lucene.codecs.lucene80.Lucene80Codec
 org.apache.lucene.codecs.lucene84.Lucene84Codec
+org.apache.lucene.codecs.lucene86.Lucene86Codec
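With the service entry above, the codec becomes discoverable by name, which is how segments recorded as "Lucene86" map back to this implementation at read time. A minimal sketch:

    import org.apache.lucene.codecs.Codec;

    public class CodecLookupSketch {
      public static void main(String[] args) {
        // Resolved through META-INF/services/org.apache.lucene.codecs.Codec.
        Codec codec = Codec.forName("Lucene86");
        System.out.println(codec.getName()); // prints "Lucene86"
      }
    }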
diff --git a/lucene/backward-codecs/src/test/org/apache/lucene/codecs/lucene50/Lucene50RWStoredFieldsFormat.java b/lucene/backward-codecs/src/test/org/apache/lucene/codecs/lucene50/Lucene50RWStoredFieldsFormat.java
new file mode 100644
index 0000000..82d1c96
--- /dev/null
+++ b/lucene/backward-codecs/src/test/org/apache/lucene/codecs/lucene50/Lucene50RWStoredFieldsFormat.java
@@ -0,0 +1,51 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.lucene.codecs.lucene50;
+
+import java.io.IOException;
+
+import org.apache.lucene.codecs.StoredFieldsWriter;
+import org.apache.lucene.index.SegmentInfo;
+import org.apache.lucene.store.Directory;
+import org.apache.lucene.store.IOContext;
+
+/**
+ * RW impersonation of {@link Lucene50StoredFieldsFormat}.
+ */
+public final class Lucene50RWStoredFieldsFormat extends Lucene50StoredFieldsFormat {
+
+  /** No-argument constructor. */
+  public Lucene50RWStoredFieldsFormat() {
+    super();
+  }
+
+  /** Constructor that takes a mode. */
+  public Lucene50RWStoredFieldsFormat(Lucene50StoredFieldsFormat.Mode mode) {
+    super(mode);
+  }
+
+  @Override
+  public StoredFieldsWriter fieldsWriter(Directory directory, SegmentInfo si, IOContext context) throws IOException {
+    String previous = si.putAttribute(MODE_KEY, mode.name());
+    if (previous != null && previous.equals(mode.name()) == false) {
+      throw new IllegalStateException("found existing value for " + MODE_KEY + " for segment: " + si.name +
+          "old=" + previous + ", new=" + mode.name());
+    }
+    return impl(mode).fieldsWriter(directory, si, context);
+  }
+
+}
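The attribute stamping above is what makes the mode recoverable per segment: fieldsWriter records MODE_KEY on the SegmentInfo, and Lucene50StoredFieldsFormat.fieldsReader (added earlier in this change) reads it back. A minimal write-side sketch using the Lucene86RWCodec added further down:

    import org.apache.lucene.codecs.lucene50.Lucene50StoredFieldsFormat;
    import org.apache.lucene.codecs.lucene86.Lucene86RWCodec;
    import org.apache.lucene.document.Document;
    import org.apache.lucene.document.StoredField;
    import org.apache.lucene.index.IndexWriter;
    import org.apache.lucene.index.IndexWriterConfig;
    import org.apache.lucene.store.ByteBuffersDirectory;

    public class RWModeStampSketch {
      public static void main(String[] args) throws Exception {
        IndexWriterConfig iwc = new IndexWriterConfig();
        // Each segment flushed by this writer gets MODE_KEY=BEST_COMPRESSION
        // stamped on its SegmentInfo by the RW stored fields format.
        iwc.setCodec(new Lucene86RWCodec(Lucene50StoredFieldsFormat.Mode.BEST_COMPRESSION));
        try (IndexWriter writer = new IndexWriter(new ByteBuffersDirectory(), iwc)) {
          Document doc = new Document();
          doc.add(new StoredField("field", "value"));
          writer.addDocument(doc);
        }
      }
    }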
diff --git a/lucene/backward-codecs/src/test/org/apache/lucene/codecs/lucene50/TestLucene50StoredFieldsFormat.java b/lucene/backward-codecs/src/test/org/apache/lucene/codecs/lucene50/TestLucene50StoredFieldsFormat.java
new file mode 100644
index 0000000..fec9e43
--- /dev/null
+++ b/lucene/backward-codecs/src/test/org/apache/lucene/codecs/lucene50/TestLucene50StoredFieldsFormat.java
@@ -0,0 +1,29 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.lucene.codecs.lucene50;
+
+
+import org.apache.lucene.codecs.Codec;
+import org.apache.lucene.codecs.lucene86.Lucene86RWCodec;
+import org.apache.lucene.index.BaseStoredFieldsFormatTestCase;
+
+public class TestLucene50StoredFieldsFormat extends BaseStoredFieldsFormatTestCase {
+  @Override
+  protected Codec getCodec() {
+    return new Lucene86RWCodec();
+  }
+}
diff --git a/lucene/backward-codecs/src/test/org/apache/lucene/codecs/lucene50/TestLucene50StoredFieldsFormatHighCompression.java b/lucene/backward-codecs/src/test/org/apache/lucene/codecs/lucene50/TestLucene50StoredFieldsFormatHighCompression.java
new file mode 100644
index 0000000..41b4b84
--- /dev/null
+++ b/lucene/backward-codecs/src/test/org/apache/lucene/codecs/lucene50/TestLucene50StoredFieldsFormatHighCompression.java
@@ -0,0 +1,80 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.lucene.codecs.lucene50;
+
+
+import com.carrotsearch.randomizedtesting.generators.RandomPicks;
+import org.apache.lucene.codecs.Codec;
+import org.apache.lucene.codecs.lucene50.Lucene50StoredFieldsFormat.Mode;
+import org.apache.lucene.codecs.lucene86.Lucene86RWCodec;
+import org.apache.lucene.document.Document;
+import org.apache.lucene.document.StoredField;
+import org.apache.lucene.index.BaseStoredFieldsFormatTestCase;
+import org.apache.lucene.index.DirectoryReader;
+import org.apache.lucene.index.IndexWriter;
+import org.apache.lucene.index.IndexWriterConfig;
+import org.apache.lucene.store.Directory;
+
+public class TestLucene50StoredFieldsFormatHighCompression extends BaseStoredFieldsFormatTestCase {
+  @Override
+  protected Codec getCodec() {
+    return new Lucene86RWCodec(Mode.BEST_COMPRESSION);
+  }
+  
+  /**
+   * Changes compression params (leaving them the same for old segments)
+   * and verifies that nothing breaks.
+   */
+  public void testMixedCompressions() throws Exception {
+    Directory dir = newDirectory();
+    for (int i = 0; i < 10; i++) {
+      IndexWriterConfig iwc = newIndexWriterConfig();
+      iwc.setCodec(new Lucene86RWCodec(RandomPicks.randomFrom(random(), Mode.values())));
+      IndexWriter iw = new IndexWriter(dir, iwc);
+      Document doc = new Document();
+      doc.add(new StoredField("field1", "value1"));
+      doc.add(new StoredField("field2", "value2"));
+      iw.addDocument(doc);
+      if (random().nextInt(4) == 0) {
+        iw.forceMerge(1);
+      }
+      iw.commit();
+      iw.close();
+    }
+    
+    DirectoryReader ir = DirectoryReader.open(dir);
+    assertEquals(10, ir.numDocs());
+    for (int i = 0; i < 10; i++) {
+      Document doc = ir.document(i);
+      assertEquals("value1", doc.get("field1"));
+      assertEquals("value2", doc.get("field2"));
+    }
+    ir.close();
+    // CheckIndex runs automatically when the mock directory from newDirectory() is closed
+    dir.close();
+  }
+  
+  public void testInvalidOptions() {
+    expectThrows(NullPointerException.class, () -> {
+      new Lucene86RWCodec(null);
+    });
+
+    expectThrows(NullPointerException.class, () -> {
+      new Lucene50StoredFieldsFormat(null);
+    });
+  }
+}
diff --git a/lucene/core/src/test/org/apache/lucene/codecs/lucene50/TestLucene50StoredFieldsFormatMergeInstance.java b/lucene/backward-codecs/src/test/org/apache/lucene/codecs/lucene50/TestLucene50StoredFieldsFormatMergeInstance.java
similarity index 100%
rename from lucene/core/src/test/org/apache/lucene/codecs/lucene50/TestLucene50StoredFieldsFormatMergeInstance.java
rename to lucene/backward-codecs/src/test/org/apache/lucene/codecs/lucene50/TestLucene50StoredFieldsFormatMergeInstance.java
diff --git a/lucene/backward-codecs/src/test/org/apache/lucene/codecs/lucene70/TestLucene70SegmentInfoFormat.java b/lucene/backward-codecs/src/test/org/apache/lucene/codecs/lucene70/TestLucene70SegmentInfoFormat.java
index ac516a1..d9dd019 100644
--- a/lucene/backward-codecs/src/test/org/apache/lucene/codecs/lucene70/TestLucene70SegmentInfoFormat.java
+++ b/lucene/backward-codecs/src/test/org/apache/lucene/codecs/lucene70/TestLucene70SegmentInfoFormat.java
@@ -18,8 +18,7 @@
 package org.apache.lucene.codecs.lucene70;
 
 import org.apache.lucene.codecs.Codec;
-import org.apache.lucene.codecs.FilterCodec;
-import org.apache.lucene.codecs.SegmentInfoFormat;
+import org.apache.lucene.codecs.lucene84.Lucene84RWCodec;
 import org.apache.lucene.index.BaseSegmentInfoFormatTestCase;
 import org.apache.lucene.util.Version;
 
@@ -32,11 +31,6 @@
 
   @Override
   protected Codec getCodec() {
-    return new FilterCodec("Lucene84", Codec.forName("Lucene84")) {
-      @Override
-      public SegmentInfoFormat segmentInfoFormat() {
-        return new Lucene70RWSegmentInfoFormat();
-      }
-    };
+    return new Lucene84RWCodec();
   }
 }
diff --git a/lucene/backward-codecs/src/test/org/apache/lucene/codecs/lucene84/Lucene84RWCodec.java b/lucene/backward-codecs/src/test/org/apache/lucene/codecs/lucene84/Lucene84RWCodec.java
index c1fd467..0f74e79 100644
--- a/lucene/backward-codecs/src/test/org/apache/lucene/codecs/lucene84/Lucene84RWCodec.java
+++ b/lucene/backward-codecs/src/test/org/apache/lucene/codecs/lucene84/Lucene84RWCodec.java
@@ -18,6 +18,8 @@
 
 import org.apache.lucene.codecs.PointsFormat;
 import org.apache.lucene.codecs.SegmentInfoFormat;
+import org.apache.lucene.codecs.StoredFieldsFormat;
+import org.apache.lucene.codecs.lucene50.Lucene50RWStoredFieldsFormat;
 import org.apache.lucene.codecs.lucene60.Lucene60RWPointsFormat;
 import org.apache.lucene.codecs.lucene70.Lucene70RWSegmentInfoFormat;
 
@@ -36,4 +38,9 @@
     return new Lucene70RWSegmentInfoFormat();
   }
 
+  @Override
+  public StoredFieldsFormat storedFieldsFormat() {
+    return new Lucene50RWStoredFieldsFormat();
+  }
+
 }
diff --git a/lucene/backward-codecs/src/test/org/apache/lucene/codecs/lucene86/Lucene86RWCodec.java b/lucene/backward-codecs/src/test/org/apache/lucene/codecs/lucene86/Lucene86RWCodec.java
new file mode 100644
index 0000000..72e2bee
--- /dev/null
+++ b/lucene/backward-codecs/src/test/org/apache/lucene/codecs/lucene86/Lucene86RWCodec.java
@@ -0,0 +1,45 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.lucene.codecs.lucene86;
+
+import org.apache.lucene.codecs.StoredFieldsFormat;
+import org.apache.lucene.codecs.lucene50.Lucene50RWStoredFieldsFormat;
+import org.apache.lucene.codecs.lucene50.Lucene50StoredFieldsFormat;
+
+/**
+ * RW impersonation of {@link Lucene86Codec}.
+ */
+public class Lucene86RWCodec extends Lucene86Codec {
+
+  private final StoredFieldsFormat storedFieldsFormat;
+
+  /** No-argument constructor. */
+  public Lucene86RWCodec() {
+    storedFieldsFormat = new Lucene50RWStoredFieldsFormat();
+  }
+
+  /** Constructor that takes a mode. */
+  public Lucene86RWCodec(Lucene50StoredFieldsFormat.Mode mode) {
+    storedFieldsFormat = new Lucene50RWStoredFieldsFormat(mode);
+  }
+
+  @Override
+  public StoredFieldsFormat storedFieldsFormat() {
+    return storedFieldsFormat;
+  }
+
+}
diff --git a/lucene/backward-codecs/src/test/org/apache/lucene/index/TestBackwardsCompatibility.java b/lucene/backward-codecs/src/test/org/apache/lucene/index/TestBackwardsCompatibility.java
index fe2db06..43d5be1 100644
--- a/lucene/backward-codecs/src/test/org/apache/lucene/index/TestBackwardsCompatibility.java
+++ b/lucene/backward-codecs/src/test/org/apache/lucene/index/TestBackwardsCompatibility.java
@@ -309,7 +309,11 @@
     "8.5.2-cfs",
     "8.5.2-nocfs",
     "8.6.0-cfs",
-    "8.6.0-nocfs"
+    "8.6.0-nocfs",
+    "8.6.1-cfs",
+    "8.6.1-nocfs",
+    "8.6.2-cfs",
+    "8.6.2-nocfs"
   };
 
   public static String[] getOldNames() {
@@ -328,7 +332,9 @@
     "sorted.8.5.0",
     "sorted.8.5.1",
     "sorted.8.5.2",
-    "sorted.8.6.0"
+    "sorted.8.6.0",
+    "sorted.8.6.1",
+    "sorted.8.6.2"
   };
 
   public static String[] getOldSortedNames() {
diff --git a/lucene/backward-codecs/src/test/org/apache/lucene/index/index.8.6.1-cfs.zip b/lucene/backward-codecs/src/test/org/apache/lucene/index/index.8.6.1-cfs.zip
new file mode 100644
index 0000000..476a094
--- /dev/null
+++ b/lucene/backward-codecs/src/test/org/apache/lucene/index/index.8.6.1-cfs.zip
Binary files differ
diff --git a/lucene/backward-codecs/src/test/org/apache/lucene/index/index.8.6.1-nocfs.zip b/lucene/backward-codecs/src/test/org/apache/lucene/index/index.8.6.1-nocfs.zip
new file mode 100644
index 0000000..5b92d54
--- /dev/null
+++ b/lucene/backward-codecs/src/test/org/apache/lucene/index/index.8.6.1-nocfs.zip
Binary files differ
diff --git a/lucene/backward-codecs/src/test/org/apache/lucene/index/index.8.6.2-cfs.zip b/lucene/backward-codecs/src/test/org/apache/lucene/index/index.8.6.2-cfs.zip
new file mode 100644
index 0000000..2ed7139
--- /dev/null
+++ b/lucene/backward-codecs/src/test/org/apache/lucene/index/index.8.6.2-cfs.zip
Binary files differ
diff --git a/lucene/backward-codecs/src/test/org/apache/lucene/index/index.8.6.2-nocfs.zip b/lucene/backward-codecs/src/test/org/apache/lucene/index/index.8.6.2-nocfs.zip
new file mode 100644
index 0000000..66577df
--- /dev/null
+++ b/lucene/backward-codecs/src/test/org/apache/lucene/index/index.8.6.2-nocfs.zip
Binary files differ
diff --git a/lucene/backward-codecs/src/test/org/apache/lucene/index/sorted.8.6.1.zip b/lucene/backward-codecs/src/test/org/apache/lucene/index/sorted.8.6.1.zip
new file mode 100644
index 0000000..c0c1f90
--- /dev/null
+++ b/lucene/backward-codecs/src/test/org/apache/lucene/index/sorted.8.6.1.zip
Binary files differ
diff --git a/lucene/backward-codecs/src/test/org/apache/lucene/index/sorted.8.6.2.zip b/lucene/backward-codecs/src/test/org/apache/lucene/index/sorted.8.6.2.zip
new file mode 100644
index 0000000..b42607b
--- /dev/null
+++ b/lucene/backward-codecs/src/test/org/apache/lucene/index/sorted.8.6.2.zip
Binary files differ
diff --git a/lucene/benchmark/build.xml b/lucene/benchmark/build.xml
deleted file mode 100644
index 2f53ff4..0000000
--- a/lucene/benchmark/build.xml
+++ /dev/null
@@ -1,289 +0,0 @@
-<?xml version="1.0"?>
-
-<!--
-    Licensed to the Apache Software Foundation (ASF) under one or more
-    contributor license agreements.  See the NOTICE file distributed with
-    this work for additional information regarding copyright ownership.
-    The ASF licenses this file to You under the Apache License, Version 2.0
-    the "License"); you may not use this file except in compliance with
-    the License.  You may obtain a copy of the License at
-
-        http://www.apache.org/licenses/LICENSE-2.0
-
-    Unless required by applicable law or agreed to in writing, software
-    distributed under the License is distributed on an "AS IS" BASIS,
-    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-    See the License for the specific language governing permissions and
-    limitations under the License.
- -->
-
-<project name="benchmark" default="default">
-
-    <description>
-      System for benchmarking Lucene
-    </description>
-
-    <import file="../module-build.xml"/>
-    <property name="working.dir" location="work"/>
-
-    <!-- benchmark creates lots of sysout stuff, so dont run forbidden! -->
-    <target name="-check-forbidden-sysout"/>
-
-    <target name="check-files">
-        <available file="temp/news20.tar.gz" property="news20.exists"/>
-
-        <available file="${working.dir}/20_newsgroup" property="news20.expanded"/>
-
-        <available file="temp/reuters21578.tar.gz" property="reuters.exists"/>
-        <available file="${working.dir}/reuters" property="reuters.expanded"/>
-        <available file="${working.dir}/reuters-out" property="reuters.extracted"/>
-        <available file="temp/20news-18828.tar.gz" property="20news-18828.exists"/>
-        <available file="${working.dir}/20news-18828" property="20news-18828.expanded"/>
-        <available file="${working.dir}/mini_newsgroups" property="mini.expanded"/>
-        <available file="temp/allCountries.txt.bz2" property="geonames.exists"/>
-        <available file="${working.dir}/geonames" property="geonames.expanded"/>
-        
-        <available file="temp/enwiki-20070527-pages-articles.xml.bz2" property="enwiki.exists"/>
-        <available file="temp/enwiki-20070527-pages-articles.xml" property="enwiki.expanded"/>
-        <available file="${working.dir}/enwiki.txt" property="enwiki.extracted"/>
-      <available file="temp/${top.100k.words.archive.filename}"
-                   property="top.100k.words.archive.present"/>
-      <available file="${working.dir}/top100k-out" 
-                   property="top.100k.word.files.expanded"/>
-    </target>
-
-    <target name="enwiki-files" depends="check-files">
-        <mkdir dir="temp"/>
-        <antcall target="get-enwiki"/>
-        <antcall target="expand-enwiki"/>
-    </target>
-
-    <target name="get-enwiki" unless="enwiki.exists">
-        <get src="https://home.apache.org/~dsmiley/data/enwiki-20070527-pages-articles.xml.bz2"
-             dest="temp/enwiki-20070527-pages-articles.xml.bz2"/>
-    </target>
-
-    <target name="expand-enwiki"  unless="enwiki.expanded">
-        <bunzip2 src="temp/enwiki-20070527-pages-articles.xml.bz2" dest="temp"/>
-    </target>
-
-    <target name="geonames-files" depends="check-files" description="Get data for spatial.alg">
-        <mkdir dir="temp"/>
-        <antcall target="get-geonames"/>
-        <antcall target="expand-geonames"/>
-    </target>
-
-    <target name="get-geonames" unless="geonames.exists">
-        <!-- note: latest data is at: https://download.geonames.org/export/dump/allCountries.zip
-         and then randomize with: gsort -R -S 1500M file.txt > file_random.txt
-         and then compress with: bzip2 -9 -k file_random.txt -->
-      <get src="https://home.apache.org/~dsmiley/data/geonames_20130921_randomOrder_allCountries.txt.bz2"
-             dest="temp/allCountries.txt.bz2"/>
-    </target>
-
-    <target name="expand-geonames" unless="geonames.expanded">
-        <mkdir dir="${working.dir}/geonames"/>
-        <bunzip2 src="temp/allCountries.txt.bz2" dest="${working.dir}/geonames"/>
-    </target>
-
-    <target name="get-news-20" unless="20news-18828.exists">
-        <get src="https://www-2.cs.cmu.edu/afs/cs.cmu.edu/project/theo-20/www/data/news20.tar.gz"
-             dest="temp/news20.tar.gz"/>
-
-    </target>
-    <target name="get-reuters" unless="reuters.exists">
-        <!-- Please note: there is no HTTPS url. As this is only test data, we don't care: -->
-        <get src="http://www.daviddlewis.com/resources/testcollections/reuters21578/reuters21578.tar.gz"
-            dest="temp/reuters21578.tar.gz"/>
-    </target>
-
-    <target name="expand-news-20"  unless="news20.expanded">
-        <gunzip src="temp/news20.tar.gz" dest="temp"/>
-        <untar src="temp/news20.tar" dest="${working.dir}"/>
-    </target>
-    <target name="expand-reuters" unless="reuters.expanded">
-        <gunzip src="temp/reuters21578.tar.gz" dest="temp"/>
-        <mkdir dir="${working.dir}/reuters"/>
-        <untar src="temp/reuters21578.tar" dest="${working.dir}/reuters"/>
-        <delete >
-            <fileset dir="${working.dir}/reuters">
-                <include name="*.txt"/>
-            </fileset>
-        </delete>
-
-    </target>
-    <target name="extract-reuters" depends="check-files" unless="reuters.extracted">
-        <java classname="org.apache.lucene.benchmark.utils.ExtractReuters" maxmemory="1024M" fork="true">
-            <classpath refid="run.classpath"/>
-            <arg file="${working.dir}/reuters"/>
-            <arg file="${working.dir}/reuters-out"/>
-        </java>
-    </target>
-    <target name="get-20news-18828" unless="20news-18828.exists">
-        <!-- TODO: URL no longer works (404 Not found): -->
-        <get src="https://people.csail.mit.edu/u/j/jrennie/public_html/20Newsgroups/20news-18828.tar.gz"
-             dest="temp/20news-18828.tar.gz"/>
-
-    </target>
-    <target name="expand-20news-18828" unless="20news-18828.expanded">
-        <gunzip src="temp/20news-18828.tar.gz" dest="temp"/>
-        <untar src="temp/20news-18828.tar" dest="${working.dir}"/>
-    </target>
-    <target name="get-mini-news" unless="mini.exists">
-        <get src="https://kdd.ics.uci.edu/databases/20newsgroups/mini_newsgroups.tar.gz"
-             dest="temp/mini_newsgroups.tar.gz"/>
-    </target>
-    <target name="expand-mini-news" unless="mini.expanded">
-        <gunzip src="temp/mini_newsgroups.tar.gz" dest="temp"/>
-        <untar src="temp/mini_newsgroups.tar" dest="${working.dir}"/>
-    </target>
-
-  <property name="top.100k.words.archive.filename" 
-            value="top.100k.words.de.en.fr.uk.wikipedia.2009-11.tar.bz2"/>
-  <property name="top.100k.words.archive.base.url"
-            value="https://home.apache.org/~rmuir/wikipedia"/>
-  <target name="get-top-100k-words-archive" unless="top.100k.words.archive.present">
-    <mkdir dir="temp"/>
-      <get src="${top.100k.words.archive.base.url}/${top.100k.words.archive.filename}"
-           dest="temp/${top.100k.words.archive.filename}"/>
-  </target>
-  <target name="expand-top-100k-word-files" unless="top.100k.word.files.expanded">
-    <mkdir dir="${working.dir}/top100k-out"/>
-      <untar src="temp/${top.100k.words.archive.filename}"
-             overwrite="true" compression="bzip2" dest="${working.dir}/top100k-out"/>
-  </target>
-  
-  <target name="top-100k-wiki-word-files" depends="check-files">
-    <mkdir dir="${working.dir}"/>
-    <antcall target="get-top-100k-words-archive"/>
-    <antcall target="expand-top-100k-word-files"/>
-  </target>
-  
-    <target name="get-files" depends="check-files">
-        <mkdir dir="temp"/>
-        <antcall target="get-reuters"/>
-        <antcall target="expand-reuters"/>
-        <antcall target="extract-reuters"/>
-    </target>
-
-    <path id="classpath">
-      <pathelement path="${memory.jar}"/>
-      <pathelement path="${highlighter.jar}"/>
-      <pathelement path="${analyzers-common.jar}"/>
-      <pathelement path="${queryparser.jar}"/>
-      <pathelement path="${facet.jar}"/>
-      <pathelement path="${spatial-extras.jar}"/>
-      <pathelement path="${queries.jar}"/>
-      <pathelement path="${codecs.jar}"/>
-      <pathelement path="${join.jar}"/>
-      <path refid="base.classpath"/>
-      <fileset dir="lib"/>
-    </path>
-    <path id="run.classpath">
-        <path refid="classpath"/>
-        <pathelement location="${build.dir}/classes/java"/>
-        <pathelement path="${benchmark.ext.classpath}"/>
-    </path>
-
-    <target name="javadocs" depends="javadocs-memory,javadocs-highlighter,javadocs-analyzers-common,
-      javadocs-queryparser,javadocs-facet,javadocs-spatial-extras,compile-core,check-javadocs-uptodate"
-            unless="javadocs-uptodate-${name}">
-    <invoke-module-javadoc>
-      <links>
-        <link href="../memory"/>
-        <link href="../highlighter"/>
-        <link href="../analyzers-common"/>
-        <link href="../queryparser"/>
-        <link href="../facet"/>
-        <link href="../spatial-extras"/>
-      </links>
-    </invoke-module-javadoc>
-    </target>
-
-    <property name="task.alg" location="conf/micro-standard.alg"/>
-    <property name="task.mem" value="140M"/>
-
-    <target name="run-task" depends="compile,check-files,get-files" 
-     description="Run compound penalty perf test (optional: -Dtask.alg=your-algorithm-file -Dtask.mem=java-max-mem)">
-        <echo>Working Directory: ${working.dir}</echo>
-        <java classname="org.apache.lucene.benchmark.byTask.Benchmark" maxmemory="${task.mem}" fork="true">
-            <classpath refid="run.classpath"/>
-            <arg file="${task.alg}"/>
-        </java>
-    </target>
-
-    <target name="enwiki" depends="compile,check-files,enwiki-files">
-        <echo>Working Directory: ${working.dir}</echo>
-        <java classname="org.apache.lucene.benchmark.byTask.Benchmark" maxmemory="1024M" fork="true">
-            <assertions>
-              <enable/>
-            </assertions>
-            <classpath refid="run.classpath"/>
-            <arg file="conf/extractWikipedia.alg"/>
-        </java>
-    </target>
-
-  <property name="collation.alg.file" location="conf/collation.alg"/>
-  <property name="collation.output.file" 
-            value="${working.dir}/collation.benchmark.output.txt"/>
-  <property name="collation.jira.output.file" 
-            value="${working.dir}/collation.bm2jira.output.txt"/>
-  
-  <path id="collation.runtime.classpath">
-    <path refid="run.classpath"/>
-      <pathelement path="${analyzers-icu.jar}"/>
-  </path>
-  
-  <target name="collation" depends="compile,jar-analyzers-icu,top-100k-wiki-word-files">
-      <echo>Running benchmark with alg file: ${collation.alg.file}</echo>
-      <java fork="true" classname="org.apache.lucene.benchmark.byTask.Benchmark" 
-            maxmemory="${task.mem}" output="${collation.output.file}">
-        <classpath refid="collation.runtime.classpath"/>
-        <arg file="${collation.alg.file}"/>
-      </java>
-      <echo>Benchmark output is in file: ${collation.output.file}</echo>
-      <echo>Converting to JIRA table format...</echo>
-      <exec executable="${perl.exe}" output="${collation.jira.output.file}" failonerror="true">
-        <arg value="-CSD"/>
-        <arg value="scripts/collation.bm2jira.pl"/>
-        <arg value="${collation.output.file}"/>
-      </exec>
-      <echo>Benchmark output in JIRA table format is in file: ${collation.jira.output.file}</echo>
-  </target>
-  
-    <property name="shingle.alg.file" location="conf/shingle.alg"/>
-    <property name="shingle.output.file" 
-              value="${working.dir}/shingle.benchmark.output.txt"/>
-    <property name="shingle.jira.output.file" 
-              value="${working.dir}/shingle.bm2jira.output.txt"/>
-  
-    <path id="shingle.runtime.classpath">
-      <path refid="run.classpath"/>
-    </path>
-  
-    <target name="shingle" depends="compile,get-files">
-      <echo>Running benchmark with alg file: ${shingle.alg.file}</echo>
-      <java fork="true" classname="org.apache.lucene.benchmark.byTask.Benchmark" 
-            maxmemory="${task.mem}" output="${shingle.output.file}">
-        <classpath refid="run.classpath"/>
-        <arg file="${shingle.alg.file}"/>
-      </java>
-      <echo>Benchmark output is in file: ${shingle.output.file}</echo>
-      <echo>Converting to JIRA table format...</echo>
-      <exec executable="${perl.exe}" output="${shingle.jira.output.file}" failonerror="true">
-        <arg value="-CSD"/>
-        <arg value="scripts/shingle.bm2jira.pl"/>
-        <arg value="${shingle.output.file}"/>
-      </exec>
-      <echo>Benchmark output in JIRA table format is in file: ${shingle.jira.output.file}</echo>
-    </target>
-
-    <target name="init" depends="module-build.init,jar-memory,jar-highlighter,jar-analyzers-common,jar-queryparser,jar-facet,jar-spatial-extras,jar-codecs,jar-join"/>
-  
-    <target name="compile-test" depends="copy-alg-files-for-testing,module-build.compile-test"/>
-    <target name="copy-alg-files-for-testing" description="copy .alg files as resources for testing">
-      <copy todir="${build.dir}/classes/test/conf">
-        <fileset dir="conf"/>
-      </copy>
-    </target>
-</project>
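With these Ant targets removed, the benchmark entry point moves to the Gradle build tracked by LUCENE-9077. Assuming the module exposes a `run` task with a `taskAlg` property (not shown in this diff), the `run-task` equivalent would be roughly:

    ./gradlew -p lucene/benchmark run -PtaskAlg=conf/micro-standard.alg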
diff --git a/lucene/benchmark/ivy.xml b/lucene/benchmark/ivy.xml
deleted file mode 100644
index 23c208c..0000000
--- a/lucene/benchmark/ivy.xml
+++ /dev/null
@@ -1,32 +0,0 @@
-<!--
-   Licensed to the Apache Software Foundation (ASF) under one
-   or more contributor license agreements.  See the NOTICE file
-   distributed with this work for additional information
-   regarding copyright ownership.  The ASF licenses this file
-   to you under the Apache License, Version 2.0 (the
-   "License"); you may not use this file except in compliance
-   with the License.  You may obtain a copy of the License at
-
-     http://www.apache.org/licenses/LICENSE-2.0
-
-   Unless required by applicable law or agreed to in writing,
-   software distributed under the License is distributed on an
-   "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-   KIND, either express or implied.  See the License for the
-   specific language governing permissions and limitations
-   under the License.    
--->
-<ivy-module version="2.0">
-  <info organisation="org.apache.lucene" module="benchmark"/>
-  <configurations defaultconfmapping="compile->master">
-    <conf name="compile" transitive="false"/>
-  </configurations>
-  <dependencies>
-    <dependency org="org.apache.commons" name="commons-compress" rev="${/org.apache.commons/commons-compress}" conf="compile"/>
-    <dependency org="xerces" name="xercesImpl" rev="${/xerces/xercesImpl}" conf="compile"/>
-    <dependency org="net.sourceforge.nekohtml" name="nekohtml" rev="${/net.sourceforge.nekohtml/nekohtml}" conf="compile"/>
-    <dependency org="com.ibm.icu" name="icu4j" rev="${/com.ibm.icu/icu4j}" conf="compile"/>
-    <dependency org="org.locationtech.spatial4j" name="spatial4j" rev="${/org.locationtech.spatial4j/spatial4j}" conf="compile"/>
-    <exclude org="*" ext="*" matcher="regexp" type="${ivy.exclude.types}"/> 
-  </dependencies>
-</ivy-module>
diff --git a/lucene/benchmark/src/java/org/apache/lucene/benchmark/byTask/tasks/CreateIndexTask.java b/lucene/benchmark/src/java/org/apache/lucene/benchmark/byTask/tasks/CreateIndexTask.java
index db64781..e44b046 100644
--- a/lucene/benchmark/src/java/org/apache/lucene/benchmark/byTask/tasks/CreateIndexTask.java
+++ b/lucene/benchmark/src/java/org/apache/lucene/benchmark/byTask/tasks/CreateIndexTask.java
@@ -29,7 +29,7 @@
 import org.apache.lucene.benchmark.byTask.utils.Config;
 import org.apache.lucene.codecs.Codec;
 import org.apache.lucene.codecs.PostingsFormat;
-import org.apache.lucene.codecs.lucene86.Lucene86Codec;
+import org.apache.lucene.codecs.lucene87.Lucene87Codec;
 import org.apache.lucene.index.ConcurrentMergeScheduler;
 import org.apache.lucene.index.IndexCommit;
 import org.apache.lucene.index.IndexDeletionPolicy;
@@ -138,7 +138,7 @@
     if (defaultCodec == null && postingsFormat != null) {
       try {
         final PostingsFormat postingsFormatChosen = PostingsFormat.forName(postingsFormat);
-        iwConf.setCodec(new Lucene86Codec() {
+        iwConf.setCodec(new Lucene87Codec() {
           @Override
           public PostingsFormat getPostingsFormatForField(String field) {
             return postingsFormatChosen;
diff --git a/lucene/build.xml b/lucene/build.xml
deleted file mode 100644
index a5e17ad..0000000
--- a/lucene/build.xml
+++ /dev/null
@@ -1,586 +0,0 @@
-<?xml version="1.0"?>
-
-<!--
-    Licensed to the Apache Software Foundation (ASF) under one or more
-    contributor license agreements.  See the NOTICE file distributed with
-    this work for additional information regarding copyright ownership.
-    The ASF licenses this file to You under the Apache License, Version 2.0
-    the "License"); you may not use this file except in compliance with
-    the License.  You may obtain a copy of the License at
- 
-        http://www.apache.org/licenses/LICENSE-2.0
- 
-    Unless required by applicable law or agreed to in writing, software
-    distributed under the License is distributed on an "AS IS" BASIS,
-    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-    See the License for the specific language governing permissions and
-    limitations under the License.
- -->
-
-<project name="lucene" default="default" basedir="."
-         xmlns:jacoco="antlib:org.jacoco.ant"
-         xmlns:artifact="antlib:org.apache.maven.artifact.ant">
-
-  <import file="common-build.xml"/>
-
-  <path id="classpath">
-    <pathelement location="${common.dir}/build/core/classes/java"/>
-  </path>
-
-  <patternset id="binary.build.dist.patterns"
-              includes="docs/,**/*.jar,**/*.war"
-              excludes="poms/**,**/*-src.jar,**/*-javadoc.jar"
-  />
-  <patternset id="binary.root.dist.patterns"
-              includes="LICENSE.txt,NOTICE.txt,README.md,
-                        MIGRATE.md,JRE_VERSION_MIGRATION.md,
-                        SYSTEM_REQUIREMENTS.md,
-                        CHANGES.txt,
-                        **/lib/*.jar,
-                        licenses/**,
-                        */docs/,**/README*"
-              excludes="build/**,site/**,tools/**,**/lib/*servlet-api*.jar,dev-docs/**"
-  />
-
-  <!-- ================================================================== -->
-  <!-- Prepares the build directory                                       -->
-  <!-- ================================================================== -->
-
-  <target name="test-core" description="Runs unit tests for the core Lucene code">
-    <ant dir="${common.dir}/core" target="test" inheritAll="false">
-      <propertyset refid="uptodate.and.compiled.properties"/>
-    </ant>
-  </target>
-
-  <!-- "-clover.load" is *not* a useless dependency. do not remove -->
-  <target name="test" depends="-clover.load, -init-totals, test-core, test-test-framework, test-modules, -check-totals"
-          description="Runs all unit tests (core, modules and back-compat)"
-  />
-  <target name="test-nocompile">
-    <fail message="Target 'test-nocompile' will not run recursively.  First change directory to the module you want to test."/>
-  </target>
-
-
-  <target name="pitest" depends="pitest-modules"
-          description="Runs pitests (core, modules and back-compat)"
-  />
-
-  <target name="beast">
-    <fail message="The Beast only works inside of individual modules"/>
-  </target>
-
-  <target name="compile-core" depends="compile-lucene-core"/>
-
-  <!-- lucene/test-framework is excluded from compilation -->
-  <target name="compile" depends="init,compile-lucene-core,compile-codecs"
-          description="Compiles core, codecs, and all modules">
-    <modules-crawl target="compile-core"/>
-  </target>
-
-  <!-- Validation (license/notice/api checks). -->
-  <target name="validate" depends="check-licenses,rat-sources,check-forbidden-apis" description="Validate stuff." />
-
-  <!-- Validation here depends on compile-tools: but we want to compile modules' tools too -->
-  <target name="compile-tools" depends="common.compile-tools">
-    <modules-crawl target="compile-tools"/>
-  </target>
-
-  <target name="check-licenses" depends="compile-tools,resolve,load-custom-tasks" description="Validate license stuff.">
-    <property name="skipRegexChecksum" value=""/>
-    <license-check-macro dir="${basedir}" licensedir="${common.dir}/licenses">
-      <additional-filters>
-        <replaceregex pattern="^jetty([^/]+)$" replace="jetty" flags="gi" />
-        <replaceregex pattern="slf4j-([^/]+)$" replace="slf4j" flags="gi" />
-        <replaceregex pattern="javax\.servlet([^/]+)$" replace="javax.servlet" flags="gi" />
-        <replaceregex pattern="(bcmail|bcprov)-([^/]+)$" replace="\1" flags="gi" />
-      </additional-filters>
-    </license-check-macro>
-  </target>
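-  <!-- The filters above collapse versioned jar names to a family name, e.g. (hypothetical
-       jars) "jetty-server-9.4.24" and "jetty-http-9.4.24" both map to "jetty", so a single
-       license file in ${common.dir}/licenses covers the whole family. -->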
-
-  <target name="check-lib-versions" depends="compile-tools,resolve,load-custom-tasks"
-          description="Verify that the '/org/name' keys in ivy-versions.properties are sorted lexically and are neither duplicates nor orphans, and that all dependencies in all ivy.xml files use rev=&quot;$${/org/name}&quot; format.">
-    <lib-versions-check-macro dir="${common.dir}/.."
-                              centralized.versions.file="${common.dir}/ivy-versions.properties"
-                              top.level.ivy.settings.file="${common.dir}/top-level-ivy-settings.xml"
-                              ivy.resolution-cache.dir="${ivy.resolution-cache.dir}"
-                              ivy.lock-strategy="${ivy.lock-strategy}"
-                              common.build.dir="${common.build.dir}"
-                              ignore.conflicts.file="${common.dir}/ivy-ignore-conflicts.properties"/>
-  </target>
-
-  <!-- -install-forbidden-apis is *not* a useless dependency. do not remove -->
-  <target name="check-forbidden-apis" depends="-install-forbidden-apis" description="Check forbidden API calls in compiled class files">
-    <subant target="check-forbidden-apis" failonerror="true" inheritall="false">
-      <propertyset refid="uptodate.and.compiled.properties"/>
-      <fileset dir="core" includes="build.xml"/>
-      <fileset dir="test-framework" includes="build.xml"/>
-      <fileset dir="tools" includes="build.xml"/>
-    </subant>
-    <modules-crawl target="check-forbidden-apis"/>
-  </target>
-
-  <target name="resolve">
-    <sequential>
-      <ant dir="test-framework" target="resolve" inheritall="false">
-         <propertyset refid="uptodate.and.compiled.properties"/>
-      </ant>
-      <ant dir="${common.dir}/tools" target="resolve" inheritAll="false">
-         <propertyset refid="uptodate.and.compiled.properties"/>
-      </ant>
-      <modules-crawl target="resolve"/>
-    </sequential>
-  </target>
-
-  <target name="documentation" description="Generate all documentation"
-    depends="javadocs,changes-to-html,process-webpages"/>
-  <target name="javadoc" depends="javadocs"/>
-  <target name="javadocs" description="Generate javadoc" depends="javadocs-lucene-core, javadocs-modules, javadocs-test-framework"/>
-
-  <target name="documentation-lint" depends="-ecj-javadoc-lint,-documentation-lint-unsupported" if="documentation-lint.supported"
-          description="Validates the generated documentation (HTML errors, broken links,...)">
-    <!-- we use antcall here, otherwise ANT will run all dependent targets: -->
-    <antcall target="-documentation-lint"/>
-  </target>
-
-  <!-- we check for broken links across all documentation -->
-  <target name="-documentation-lint" depends="documentation">
-    <echo message="Checking for broken links..."/>
-    <check-broken-links dir="build/docs"/>
-    <echo message="Checking for missing docs..."/>
-    <!-- TODO: change this level=method -->
-    <check-missing-javadocs dir="build/docs" level="class"/>
-    <!-- there are too many classes to fix overall to enable
-         level="method" above right now, but we can prevent
-         the modules that don't have problems from getting
-         any worse -->
-    <!-- analyzers-common: problems -->
-    <check-missing-javadocs dir="build/docs/analyzers-icu" level="method"/>
-    <!-- analyzers-kuromoji: problems -->
-    <check-missing-javadocs dir="build/docs/analyzers-morfologik" level="method"/>
-    <check-missing-javadocs dir="build/docs/analyzers-phonetic" level="method"/>
-    <!-- analyzers-smartcn: problems -->
-    <check-missing-javadocs dir="build/docs/analyzers-stempel" level="method"/>
-    <!-- benchmark: problems -->
-    <check-missing-javadocs dir="build/docs/classification" level="method"/>
-    <!-- codecs: problems -->
-    <!-- core: problems -->
-    <check-missing-javadocs dir="build/docs/demo" level="method"/>
-    <check-missing-javadocs dir="build/docs/expressions" level="method"/>
-    <check-missing-javadocs dir="build/docs/facet" level="method"/>
-    <!-- grouping: problems -->
-    <!-- highlighter: problems -->
-    <check-missing-javadocs dir="build/docs/join" level="method"/>
-    <check-missing-javadocs dir="build/docs/memory" level="method"/>
-    <!-- misc: problems -->
-    <!-- queries: problems -->
-    <!-- queryparser: problems -->
-    <!-- sandbox: problems -->
-    <!-- spatial-extras: problems -->
-    <check-missing-javadocs dir="build/docs/suggest" level="method"/>
-    <!-- test-framework: problems -->
-
-    <!-- too much to fix core/ for now, but enforce full javadocs for key packages -->
-    <check-missing-javadocs dir="build/docs/core/org/apache/lucene/util/automaton" level="method"/>
-    <check-missing-javadocs dir="build/docs/core/org/apache/lucene/analysis" level="method"/>
-    <check-missing-javadocs dir="build/docs/core/org/apache/lucene/document" level="method"/>
-    <check-missing-javadocs dir="build/docs/core/org/apache/lucene/search/similarities" level="method"/>
-    <check-missing-javadocs dir="build/docs/core/org/apache/lucene/index" level="method"/>
-    <check-missing-javadocs dir="build/docs/core/org/apache/lucene/codecs" level="method"/>
-
-    <check-missing-javadocs dir="build/docs/spatial3d" level="method"/>
-  </target>
-  
-  <target name="-ecj-javadoc-lint" depends="compile,compile-test,-ecj-javadoc-lint-unsupported,-ecj-resolve" if="ecj-javadoc-lint.supported">
-    <subant target="-ecj-javadoc-lint" failonerror="true" inheritall="false">
-      <propertyset refid="uptodate.and.compiled.properties"/>
-      <fileset dir="core" includes="build.xml"/>
-      <fileset dir="test-framework" includes="build.xml"/>
-    </subant>
-    <modules-crawl target="-ecj-javadoc-lint"/>
-  </target>
-
-  <target name="process-webpages" depends="resolve-markdown">
-    <makeurl property="process-webpages.buildfiles" separator="|">
-      <fileset dir="." includes="**/build.xml" excludes="build.xml,analysis/*,build/**,tools/**,site/**"/>
-    </makeurl>
-    <property name="Codec.java" location="core/src/java/org/apache/lucene/codecs/Codec.java"/>
-    <loadfile srcfile="${Codec.java}" property="defaultCodec" encoding="UTF-8">
-      <filterchain>
-        <!--  private static Codec defaultCodec   =   LOADER    .   lookup    (   "LuceneXXX"                 )   ; -->
-        <containsregex pattern="^.*defaultCodec\s*=\s*LOADER\s*\.\s*lookup\s*\(\s*&quot;([^&quot;]+)&quot;\s*\)\s*;.*$" replace="\1"/>
-        <fixcrlf eol="unix" eof="remove" />
-        <deletecharacters chars="\n"/>
-      </filterchain>
-    </loadfile>
-
-    <!--
-      The XSL input file is ignored completely, but XSL expects one to be given,
-      so we pass our own build file (${ant.file}) here. The list of module build.xmls
-      is given via a string parameter that must be split by the XSL at '|'.
-    -->
-    <xslt in="${ant.file}" out="${javadoc.dir}/index.html" style="site/xsl/index.xsl" force="true">
-      <outputproperty name="method" value="html"/>
-      <outputproperty name="version" value="4.0"/>
-      <outputproperty name="encoding" value="UTF-8"/>
-      <outputproperty name="indent" value="yes"/>
-      <param name="buildfiles" expression="${process-webpages.buildfiles}"/>
-      <param name="version" expression="${version}"/>
-      <param name="defaultCodec" expression="${defaultCodec}"/>
-    </xslt>
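-    <!-- For illustration only: the buildfiles parameter arrives as one '|'-separated string,
-         e.g. "file:/.../core/build.xml|file:/.../facet/build.xml" (paths hypothetical),
-         which the XSL then tokenizes on '|'. -->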
-    
-    <markdown todir="${javadoc.dir}">
-      <fileset dir="." includes="MIGRATE.md,JRE_VERSION_MIGRATION.md,SYSTEM_REQUIREMENTS.md"/>
-      <globmapper from="*.md" to="*.html"/>
-    </markdown>
-
-    <copy todir="${javadoc.dir}">
-      <fileset dir="site/html"/>
-    </copy>
-  </target>
-  
-  <target name="javadocs-modules" description="Generate javadoc for modules classes">
-    <modules-crawl target="javadocs"/>
-  </target>
-
-  <!-- rat-sources-typedef is *not* a useless dependency. do not remove -->
-  <target name="rat-sources" depends="rat-sources-typedef,common.rat-sources">
-    <subant target="rat-sources" failonerror="true" inheritall="false">
-      <propertyset refid="uptodate.and.compiled.properties"/>
-      <fileset dir="core" includes="build.xml"/>
-      <fileset dir="test-framework" includes="build.xml"/>
-      <fileset dir="tools" includes="build.xml"/>
-    </subant>
-    <modules-crawl target="rat-sources"/>
-  </target>
-
-  <!-- ================================================================== -->
-  <!-- D I S T R I B U T I O N                                            -->
-  <!-- ================================================================== -->
-  <!--                                                                    -->
-  <!-- ================================================================== -->
-  <target name="package" depends="jar-core, jar-test-framework, build-modules, init-dist, documentation"/>
-
-  <target name="nightly" depends="test, package-tgz">
-  </target>
-
-  <!-- ================================================================== -->
-  <!-- Packages the distribution with zip                                 -->
-  <!-- ================================================================== -->
-  <!--                                                                    -->
-  <!-- ================================================================== -->
-  <target name="package-zip" depends="package"
-    description="--> Generates the Lucene distribution as .zip">
-
-    <delete file="${dist.dir}/lucene-${version}.zip"/>
-    <zip destfile="${dist.dir}/lucene-${version}.zip">
-      <zipfileset prefix="lucene-${version}" dir=".">
-        <patternset refid="binary.root.dist.patterns"/>
-      </zipfileset>
-      <zipfileset prefix="lucene-${version}" dir="${build.dir}">
-        <patternset refid="binary.build.dist.patterns"/>
-      </zipfileset>
-      <zipfileset prefix="lucene-${version}" dir="${build.dir}" includes="**/*.sh,**/*.bat" filemode="755"/>
-    </zip>
-    <make-checksums file="${dist.dir}/lucene-${version}.zip"/>
-  </target>
-
-  <!-- ================================================================== -->
-  <!-- packages the distribution with tar-gzip                            -->
-  <!-- ================================================================== -->
-  <!--                                                                    -->
-  <!-- ================================================================== -->
-
-  <!-- TODO: fix this stuff to not be a duplicate of the zip logic above! -->
-  <target name="package-tgz" depends="package"
-    description="--> Generates the lucene distribution as .tgz">
-
-    <delete file="${dist.dir}/lucene-${version}.tgz"/>
-    <tar tarfile="${dist.dir}/lucene-${version}.tgz"
-      longfile="gnu" compression="gzip">
-      <tarfileset prefix="lucene-${version}" dir=".">
-        <patternset refid="binary.root.dist.patterns"/>
-      </tarfileset>
-      <tarfileset prefix="lucene-${version}" dir="${build.dir}">
-        <patternset refid="binary.build.dist.patterns"/>
-      </tarfileset>
-      <tarfileset prefix="lucene-${version}" dir="${build.dir}" includes="**/*.sh,**/*.bat" filemode="755"/>
-    </tar>
-    <make-checksums file="${dist.dir}/lucene-${version}.tgz"/>
-  </target>
-
-  <!-- ================================================================== -->
-  <!-- packages the distribution with zip and tar-gzip                    -->
-  <!-- ================================================================== -->
-  <!--                                                                    -->
-  <!-- ================================================================== -->
-  <target name="package-all-binary" depends="package-zip, package-tgz"
-    description="--> Generates the .tgz and .zip distributions"/>
-
-  <!-- ================================================================== -->
-  <!-- same as package-all-binary. it is just here for compatibility.     -->
-  <!-- ================================================================== -->
-  <!--                                                                    -->
-  <!-- ================================================================== -->
-  <target name="dist" depends="package-all-binary"/>
-
-  <!-- ================================================================== -->
-  <!-- S O U R C E  D I S T R I B U T I O N                               -->
-  <!-- ================================================================== -->
-    <target name="init-dist" >
-
-        <!-- Package is not called first if packaging src standalone, so the dist.dir may not exist -->
-        <mkdir dir="${build.dir}"/>
-        <mkdir dir="${dist.dir}"/>
-        <mkdir dir="${maven.dist.dir}"/>
-    </target>
-
-  <!-- ================================================================== -->
-  <!-- Packages the sources with tar-gzip                                -->
-  <!-- ================================================================== -->
-  <target name="package-tgz-src" depends="init-dist"
-          description="--> Generates the Lucene source distribution as .tgz">
-    <property name="source.package.file"
-              location="${dist.dir}/lucene-${version}-src.tgz"/>
-    <delete file="${source.package.file}"/>
-    <export-source source.dir="."/>
-
-    <!-- Exclude javadoc package-list files under licenses incompatible with the ASL -->
-    <delete dir="${src.export.dir}/tools/javadoc/java8"/>
-
-    <!-- because we only package the "lucene/" folder, we have to adjust the dir we work on: -->
-    <property name="local.src.export.dir" location="${src.export.dir}/lucene"/>
-    
-    <build-changes changes.src.file="${local.src.export.dir}/CHANGES.txt"
-                   changes.target.dir="${local.src.export.dir}/docs/changes"
-                   changes.product="lucene"/>
-    <tar tarfile="${source.package.file}" compression="gzip" longfile="gnu">
-      <tarfileset prefix="lucene-${version}" dir="${local.src.export.dir}"/>
-    </tar>
-    <make-checksums file="${source.package.file}"/>
-  </target>
-
-
-  <!-- ================================================================== -->
-  <!-- Packages the sources from the local working copy with tar-gzip    -->
-  <!-- ================================================================== -->
-  <target name="package-local-src-tgz" depends="init-dist"
-    description="--> Packages the Lucene source from the local working copy">
-    <mkdir dir="${common.dir}/build"/>
-    <property name="source.package.file"
-              value="${common.dir}/build/lucene-${version}-src.tgz"/>
-    <delete file="${source.package.file}"/>
-    <tar tarfile="${source.package.file}" compression="gzip" longfile="gnu">
-      <tarfileset prefix="lucene-${version}" dir=".">
-        <patternset refid="lucene.local.src.package.patterns"/>
-      </tarfileset>
-    </tar>
-  </target>
-
-  <!-- ================================================================== -->
-  <!-- same as package-tgz-src. it is just here for compatibility.        -->
-  <!-- ================================================================== -->
-  <target name="dist-src" depends="package-tgz-src"/>
-
-  <target name="dist-all" depends="dist, dist-src, -dist-changes"/>
-
-  <!-- copy changes/ to the release folder -->
-  <target name="-dist-changes">
-   <copy todir="${dist.dir}/changes">
-     <fileset dir="${build.dir}/docs/changes"/>
-   </copy>
-  </target>
-
-  <target name="prepare-release-no-sign" depends="clean, dist-all, generate-maven-artifacts"/>
-  <target name="prepare-release" depends="prepare-release-no-sign, sign-artifacts"/>
-
-  <target name="-dist-maven" depends="install-maven-tasks">
-    <sequential>
-      <m2-deploy pom.xml="${filtered.pom.templates.dir}/pom.xml"/>        <!-- Lucene/Solr grandparent POM -->
-      <m2-deploy pom.xml="${filtered.pom.templates.dir}/lucene/pom.xml"/> <!-- Lucene parent POM -->
-      <subant target="-dist-maven" failonerror="true" inheritall="false">
-        <propertyset refid="uptodate.and.compiled.properties"/>
-        <fileset dir="${common.dir}/core" includes="build.xml"/>
-        <fileset dir="${common.dir}/test-framework" includes="build.xml"/>
-      </subant>
-      <modules-crawl target="-dist-maven"/>
-    </sequential>
-  </target>
-
-  <target name="-install-to-maven-local-repo" depends="install-maven-tasks">
-    <sequential>
-      <m2-install pom.xml="${filtered.pom.templates.dir}/pom.xml"/>        <!-- Lucene/Solr grandparent POM -->
-      <m2-install pom.xml="${filtered.pom.templates.dir}/lucene/pom.xml"/> <!-- Lucene parent POM -->
-      <subant target="-install-to-maven-local-repo" failonerror="true" inheritall="false">
-        <propertyset refid="uptodate.and.compiled.properties"/>
-        <fileset dir="${common.dir}/core" includes="build.xml"/>
-        <fileset dir="${common.dir}/test-framework" includes="build.xml"/>
-      </subant>
-      <modules-crawl target="-install-to-maven-local-repo"/>
-    </sequential>
-  </target>
-
-  <target name="generate-maven-artifacts" depends="-unpack-lucene-tgz">
-    <ant dir=".." target="resolve" inheritall="false"/>
-    <antcall target="-filter-pom-templates" inheritall="false"/>
-    <antcall target="-dist-maven" inheritall="false">
-      <propertyset refid="uptodate.and.compiled.properties"/>
-    </antcall>
-  </target>
-  
-  <target name="-validate-maven-dependencies" depends="compile-tools, install-maven-tasks, load-custom-tasks">
-    <sequential>
-      <subant target="-validate-maven-dependencies" failonerror="true" inheritall="false">
-        <propertyset refid="uptodate.and.compiled.properties"/>
-        <fileset dir="${common.dir}/core" includes="build.xml"/>
-        <fileset dir="${common.dir}/test-framework" includes="build.xml"/>
-      </subant>
-      
-      <modules-crawl target="-validate-maven-dependencies"/>
-    </sequential>
-  </target>
-  
-  <!-- ================================================================== -->
-  <!-- support for signing the artifacts using gpg                        -->
-  <!-- ================================================================== -->
-  <target name="sign-artifacts">
-    <sign-artifacts-macro artifacts.dir="${dist.dir}"/>
-  </target>
-
-  <target name="build-modules" depends="compile-test"
-          description="Builds all additional modules and their tests">
-    <modules-crawl target="build-artifacts-and-tests"/>
-  </target>
-
-  <target name="compile-test" description="Builds core, codecs, test-framework, and modules tests">
-    <sequential>
-      <ant dir="core" target="compile-test" inheritall="false">
-        <propertyset refid="uptodate.and.compiled.properties"/>
-      </ant>
-      <ant dir="test-framework" target="compile-test" inheritall="false">
-        <propertyset refid="uptodate.and.compiled.properties"/>
-      </ant>
-      <modules-crawl target="compile-test"/>
-    </sequential>
-  </target>
-
-  <target name="test-test-framework">
-      <ant dir="test-framework" target="test" inheritall="false">
-        <propertyset refid="uptodate.and.compiled.properties"/>
-      </ant>
-  </target>
-  
-  <target name="test-modules">
-    <modules-crawl target="test"/>
-  </target>
-
-  <target name="changes-to-html">
-    <build-changes changes.product="lucene"/>
-  </target>
-
-  <target name="pitest-modules" depends="compile-test">
-    <modules-crawl target="pitest" failonerror="false"/>
-  </target>
-
-  <target name="jacoco" description="Generates JaCoCo code coverage reports" depends="-jacoco-install">
-    <!-- run jacoco for each module -->
-    <ant dir="${common.dir}/core" target="jacoco" inheritAll="false">
-      <propertyset refid="uptodate.and.compiled.properties"/>
-    </ant>
-    <ant dir="${common.dir}/test-framework" target="jacoco" inheritAll="false">
-      <propertyset refid="uptodate.and.compiled.properties"/>
-    </ant>
-    <modules-crawl target="jacoco"/>
-
-    <!-- produce aggregate report -->
-    <property name="jacoco.output.dir" location="${jacoco.report.dir}/lucene-all"/>
-    <!-- try to clean output dir to prevent any confusion -->
-    <delete dir="${jacoco.output.dir}" failonerror="false"/>
-    <mkdir dir="${jacoco.output.dir}"/>
-
-    <jacoco:report>
-      <executiondata>
-        <fileset dir="${common.dir}/build" includes="**/jacoco.db"/>
-      </executiondata>
-      <structure name="${Name} aggregate JaCoCo coverage report">
-        <classfiles>
-          <fileset dir="${common.dir}/build">
-             <include name="**/classes/java/**/*.class"/>
-             <exclude name="tools/**"/>
-          </fileset>
-        </classfiles>
-        <!-- TODO: trying to specify source files could maybe work, but would
-             double the size of the reports -->
-      </structure>
-      <html destdir="${jacoco.output.dir}" footer="Copyright ${year} Apache Software Foundation.  All Rights Reserved."/>
-    </jacoco:report>
-  </target>
-
-  <!--
-   Committer helpers
-   -->
-
-  <property name="patch.file" value="${basedir}/../patches/${patch.name}"/>
-  <!-- Apply a patch.  Assumes the patch can be applied in the basedir.
-  -Dpatch.name assumes the patch is located at ${basedir}/../patches/${patch.name}
-  -Dpatch.file means the patch can be located anywhere on the file system
-  -->
-  <target name="apply-patch" depends="clean" description="Apply a patch file.  Set -Dpatch.file, or -Dpatch.name when the patch is in the directory ../patches/">
-    <patch patchfile="${patch.file}" strip="0"/>
-  </target>
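-  <!-- Illustrative invocations (the issue key below is hypothetical):
-         ant apply-patch -Dpatch.name=LUCENE-1234.patch   (reads ../patches/LUCENE-1234.patch)
-         ant apply-patch -Dpatch.file=/tmp/LUCENE-1234.patch
-  -->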
-
-  <target name="jar-test-framework">
-    <ant dir="${common.dir}/test-framework" target="jar-core" inheritAll="false">
-      <propertyset refid="uptodate.and.compiled.properties"/>
-    </ant>
-  </target>
-
-  <!-- Override common-build.xml definition to check for the jar already being up-to-date -->
-  <target name="jar-core" depends="check-lucene-core-uptodate,compile-lucene-core" unless="lucene-core.uptodate">
-    <ant dir="${common.dir}/core" target="jar-core" inheritAll="false">
-      <propertyset refid="uptodate.and.compiled.properties"/>
-    </ant>
-    <property name="core.compiled" value="true"/>
-    <property name="lucene-core.uptodate" value="true"/>
-  </target>
-
-  <target name="jar" depends="jar-core,jar-test-framework"
-          description="Jars core, codecs, test-framework, and all modules">
-    <modules-crawl target="jar-core"/>
-  </target>
-
-  <target name="jar-src" description="create source jars for all modules">
-    <ant dir="${common.dir}/core" target="jar-src" inheritAll="false" />
-    <ant dir="${common.dir}/test-framework" target="jar-src" inheritAll="false" />
-    <modules-crawl target="jar-src"/>
-  </target>
-
-  <target name="get-jenkins-line-docs" unless="enwiki.exists">
-    <sequential>
-      <!-- TODO: can get .lzma instead (it's ~17% smaller) but there's no builtin ant support...? -->
-      <get src="http://home.apache.org/~mikemccand/enwiki.random.lines.txt.bz2"
-           dest="enwiki.random.lines.txt.bz2"/>
-      <bunzip2 src="enwiki.random.lines.txt.bz2" dest="enwiki.random.lines.txt"/>
-    </sequential>
-  </target>
-
-  <target name="jar-checksums" depends="resolve">
-    <jar-checksum-macro srcdir="${common.dir}" dstdir="${common.dir}/licenses"/>
-  </target>
-
-  <target name="regenerate">
-    <subant target="regenerate" failonerror="true" inheritall="false">
-      <propertyset refid="uptodate.and.compiled.properties"/>
-      <fileset dir="core" includes="build.xml"/>
-      <fileset dir="test-framework" includes="build.xml"/>
-    </subant>
-    <modules-crawl target="regenerate"/>
-  </target>
-
-  <target name="-append-module-dependencies-properties">
-    <sequential>
-      <ant dir="core" target="common.-append-module-dependencies-properties" inheritall="false"/>
-      <ant dir="test-framework" target="common.-append-module-dependencies-properties" inheritall="false"/>
-      <modules-crawl target="-append-module-dependencies-properties"/>
-    </sequential>
-  </target>
-</project>
diff --git a/lucene/classification/build.xml b/lucene/classification/build.xml
deleted file mode 100644
index 43bcb4b..0000000
--- a/lucene/classification/build.xml
+++ /dev/null
@@ -1,55 +0,0 @@
-<?xml version="1.0"?>
-
-<!--
-    Licensed to the Apache Software Foundation (ASF) under one or more
-    contributor license agreements.  See the NOTICE file distributed with
-    this work for additional information regarding copyright ownership.
-    The ASF licenses this file to You under the Apache License, Version 2.0
-    the "License"); you may not use this file except in compliance with
-    the License.  You may obtain a copy of the License at
- 
-        http://www.apache.org/licenses/LICENSE-2.0
- 
-    Unless required by applicable law or agreed to in writing, software
-    distributed under the License is distributed on an "AS IS" BASIS,
-    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-    See the License for the specific language governing permissions and
-    limitations under the License.
- -->
-
-<project name="classification" default="default">
-  <description>
-    Classification module for Lucene
-  </description>
-
-  <import file="../module-build.xml"/>
-
-  <path id="classpath">
-    <path refid="base.classpath"/>
-    <pathelement path="${queries.jar}"/>
-    <pathelement path="${grouping.jar}"/>
-    <pathelement path="${analyzers-common.jar}"/>
-  </path>
-
-  <path id="test.classpath">
-    <pathelement path="${analyzers-common.jar}"/>
-    <pathelement location="${codecs.jar}"/>
-    <path refid="test.base.classpath"/>
-  </path>
-
-  <target name="compile-core" depends="jar-grouping,jar-queries,jar-analyzers-common,common.compile-core" />
-
-  <target name="jar-core" depends="common.jar-core" />
-
-  <target name="javadocs" depends="javadocs-grouping,compile-core,check-javadocs-uptodate"
-          unless="javadocs-uptodate-${name}">
-    <invoke-module-javadoc>
-      <links>
-        <link href="../queries"/>
-        <link href="../analyzers-common"/>
-        <link href="../grouping"/>
-      </links>
-    </invoke-module-javadoc>
-  </target>
-
-</project>
diff --git a/lucene/classification/ivy.xml b/lucene/classification/ivy.xml
deleted file mode 100644
index 036c217..0000000
--- a/lucene/classification/ivy.xml
+++ /dev/null
@@ -1,21 +0,0 @@
-<!--
-   Licensed to the Apache Software Foundation (ASF) under one
-   or more contributor license agreements.  See the NOTICE file
-   distributed with this work for additional information
-   regarding copyright ownership.  The ASF licenses this file
-   to you under the Apache License, Version 2.0 (the
-   "License"); you may not use this file except in compliance
-   with the License.  You may obtain a copy of the License at
-
-     http://www.apache.org/licenses/LICENSE-2.0
-
-   Unless required by applicable law or agreed to in writing,
-   software distributed under the License is distributed on an
-   "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-   KIND, either express or implied.  See the License for the
-   specific language governing permissions and limitations
-   under the License.    
--->
-<ivy-module version="2.0">
-  <info organisation="org.apache.lucene" module="classification"/>
-</ivy-module>
diff --git a/lucene/codecs/build.xml b/lucene/codecs/build.xml
deleted file mode 100644
index c50af6c..0000000
--- a/lucene/codecs/build.xml
+++ /dev/null
@@ -1,28 +0,0 @@
-<!--
-  Licensed to the Apache Software Foundation (ASF) under one or more
-  contributor license agreements.  See the NOTICE file distributed with
-  this work for additional information regarding copyright ownership.
-  The ASF licenses this file to You under the Apache License, Version 2.0
-  (the "License"); you may not use this file except in compliance with
-  the License.  You may obtain a copy of the License at
-
-      http://www.apache.org/licenses/LICENSE-2.0
-
-  Unless required by applicable law or agreed to in writing, software
-  distributed under the License is distributed on an "AS IS" BASIS,
-  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-  See the License for the specific language governing permissions and
-  limitations under the License.
-  -->
-
-<project name="codecs" default="default">
-  <description>
-    Lucene codecs and postings formats.
-  </description>
-
-  <import file="../module-build.xml"/>
-
-  <target name="-dist-maven" depends="-dist-maven-src-java"/>
-
-  <target name="-install-to-maven-local-repo" depends="-install-src-java-to-maven-local-repo"/>
-</project>
diff --git a/lucene/codecs/ivy.xml b/lucene/codecs/ivy.xml
deleted file mode 100644
index f966fd5..0000000
--- a/lucene/codecs/ivy.xml
+++ /dev/null
@@ -1,21 +0,0 @@
-<!--
-   Licensed to the Apache Software Foundation (ASF) under one
-   or more contributor license agreements.  See the NOTICE file
-   distributed with this work for additional information
-   regarding copyright ownership.  The ASF licenses this file
-   to you under the Apache License, Version 2.0 (the
-   "License"); you may not use this file except in compliance
-   with the License.  You may obtain a copy of the License at
-
-     http://www.apache.org/licenses/LICENSE-2.0
-
-   Unless required by applicable law or agreed to in writing,
-   software distributed under the License is distributed on an
-   "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-   KIND, either express or implied.  See the License for the
-   specific language governing permissions and limitations
-   under the License.    
--->
-<ivy-module version="2.0">
-  <info organisation="org.apache.lucene" module="codecs"/>
-</ivy-module>
diff --git a/lucene/common-build.xml b/lucene/common-build.xml
deleted file mode 100644
index 7bb6e55..0000000
--- a/lucene/common-build.xml
+++ /dev/null
@@ -1,2605 +0,0 @@
-<?xml version="1.0"?>
-
-<!--
-    Licensed to the Apache Software Foundation (ASF) under one or more
-    contributor license agreements.  See the NOTICE file distributed with
-    this work for additional information regarding copyright ownership.
-    The ASF licenses this file to You under the Apache License, Version 2.0
-    the "License"); you may not use this file except in compliance with
-    the License.  You may obtain a copy of the License at
- 
-        http://www.apache.org/licenses/LICENSE-2.0
- 
-    Unless required by applicable law or agreed to in writing, software
-    distributed under the License is distributed on an "AS IS" BASIS,
-    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-    See the License for the specific language governing permissions and
-    limitations under the License.
- -->
-
-<project name="common" xmlns:artifact="antlib:org.apache.maven.artifact.ant" 
-                       xmlns:ivy="antlib:org.apache.ivy.ant"
-                       xmlns:junit4="antlib:com.carrotsearch.junit4"
-                       xmlns:jacoco="antlib:org.jacoco.ant"
-                       xmlns:rsel="antlib:org.apache.tools.ant.types.resources.selectors">
-  <description>
-    This file is designed for importing into a main build file, and not intended
-    for standalone use.
-  </description>
-
-  <dirname file="${ant.file.common}" property="common.dir"/>
-  
-  <!-- Give the user a chance to override without editing this file
-      (and without typing -D each time it compiles). -->
-  <property file="${user.home}/lucene.build.properties"/>
-  <property file="${user.home}/build.properties"/>
-  <property file="${common.dir}/build.properties"/>
-
-  <property name="dev-tools.dir" location="${common.dir}/../dev-tools"/>
-  <property name="prettify.dir" location="${common.dir}/tools/prettify"/>
-  <property name="license.dir" location="${common.dir}/licenses"/>
-  <property name="ivysettings.xml" location="${common.dir}/default-nested-ivy-settings.xml"/>
-
-  <tstamp>
-    <format property="current.year" pattern="yyyy"/>
-    <format property="DSTAMP" pattern="yyyy-MM-dd"/>
-    <format property="TSTAMP" pattern="HH:mm:ss"/>
-    <!-- datetime format that is safe to treat as part of a dotted version -->
-    <format property="dateversion" pattern="yyyy.MM.dd.HH.mm.ss" />
-  </tstamp>
-
-  <property name="Name" value="Lucene"/>
-  <property name="name" value="${ant.project.name}"/>
-  
-  <!-- include version number from property file (includes "version.*" properties) -->
-  <loadproperties srcFile="${common.dir}/version.properties"/>
-  
-  <fail message="'version.base' property must be 'x.y.z' (major, minor, bugfix) or 'x.y.z.1/2' (+ prerelease) and numeric only: ${version.base}">
-    <condition>
-      <not><matches pattern="^\d+\.\d+\.\d+(|\.1|\.2)$" casesensitive="true" string="${version.base}"/></not>
-    </condition>
-  </fail>
-
-  <fail message="If you pass -Dversion=... to set a release version, it must match &quot;${version.base}&quot;, optionally followed by a suffix (e.g., &quot;-SNAPSHOT&quot;).">
-    <condition>
-      <not><matches pattern="^\Q${version.base}\E(|\-.*)$" casesensitive="true" string="${version}"/></not>
-    </condition>
-  </fail>
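-  <!-- Worked examples (hypothetical versions): version.base="9.0.0" and "9.0.0.1" pass the
-       first check; "9.0" and "9.0.0-beta" fail it. -Dversion=9.0.0-SNAPSHOT satisfies the
-       second check for version.base="9.0.0". -->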
-  
-  <fail message="Your ~/.ant/lib folder or the main classpath of Ant contains some version of ASM. Please remove it, otherwise this build can't run correctly.">
-    <condition>
-      <available classname="org.objectweb.asm.ClassReader"/>
-    </condition>
-  </fail>
-
-  <property name="year" value="2000-${current.year}"/>
-  
-  <!-- Lucene modules unfortunately don't have the "lucene-" prefix, so we add it if no prefix is given in $name: -->
-  <condition property="final.name" value="${name}-${version}" else="lucene-${name}-${version}">
-    <matches pattern="^(lucene|solr)\b" string="${name}"/>
-  </condition>
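-  <!-- Example (values hypothetical): name="core" and version="9.0.0" yield
-       final.name="lucene-core-9.0.0", while name="lucene-core" already carries
-       the prefix and stays "lucene-core-9.0.0". -->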
-
-  <!-- we exclude ext/*.jar because we don't want example/lib/ext logging jars on the cp -->
-  <property name="common.classpath.excludes" value="**/*.txt,**/*.template,**/*.sha1,**/*.sha512,ext/*.jar" />
-
-  <property name="build.dir" location="build"/>
-  <!-- Needed in case a module needs the original build, also for compile-tools to be called from a module -->
-  <property name="common.build.dir" location="${common.dir}/build"/>
-
-  <property name="ivy.bootstrap.version" value="2.4.0" /> <!-- UPGRADE NOTE: update disallowed_ivy_jars_regex below -->
-  <property name="disallowed_ivy_jars_regex" value="ivy-2\.[0123].*\.jar"/>
-
-  <property name="ivy.default.configuration" value="*"/>
-
-  <!-- Running Ant targets in parallel may require setting this to false, because ivy:retrieve tasks may race with resolve -->
-  <property name="ivy.sync" value="true"/>
-  <property name="ivy.resolution-cache.dir" location="${common.build.dir}/ivy-resolution-cache"/>
-  <property name="ivy.lock-strategy" value="artifact-lock-nio"/>
-
-  <property name="local.caches" location="${common.dir}/../.caches" />
-  <property name="tests.cachedir"  location="${local.caches}/test-stats" />
-  <property name="tests.cachefile" location="${common.dir}/tools/junit4/cached-timehints.txt" />
-  <property name="tests.cachefilehistory" value="10" />
-
-  <path id="junit-path">
-    <fileset dir="${common.dir}/test-framework/lib"/>
-  </path>
-
-  <!-- default arguments to pass to JVM executing tests -->
-  <property name="args" value="-XX:TieredStopAtLevel=1"/>
-
-  <property name="tests.seed" value="" />
-
-  <!-- This is a hack to be able to override the JVM count for special modules that don't like parallel tests: -->
-  <property name="tests.jvms" value="auto" />
-  <property name="tests.jvms.override" value="${tests.jvms}" />
-
-  <property name="tests.multiplier" value="1" />
-  <property name="tests.codec" value="random" />
-  <property name="tests.postingsformat" value="random" />
-  <property name="tests.docvaluesformat" value="random" />
-  <property name="tests.locale" value="random" />
-  <property name="tests.timezone" value="random" />
-  <property name="tests.directory" value="random" />
-  <property name="tests.linedocsfile" value="europarl.lines.txt.gz" />
-  <property name="tests.loggingfile" location="${common.dir}/tools/junit4/logging.properties"/>
-  <property name="tests.nightly" value="false" />
-  <property name="tests.weekly" value="false" />
-  <property name="tests.monster" value="false" />
-  <property name="tests.slow" value="true" />
-  <property name="tests.cleanthreads.sysprop" value="perMethod"/>
-  <property name="tests.verbose" value="false"/>
-  <property name="tests.infostream" value="${tests.verbose}"/>
-  <property name="tests.filterstacks" value="true"/>
-  <property name="tests.luceneMatchVersion" value="${version.base}"/>
-  <property name="tests.asserts" value="true" />
-  <property name="tests.policy" location="${common.dir}/tools/junit4/tests.policy"/>
-
-  <condition property="tests.asserts.bug.jdk8205399" value="-da:java.util.HashMap" else="">
-    <!-- LUCENE-8991 / JDK-8205399: HashMap assertion bug until java12-->
-    <and>
-      <or>
-        <contains string="${java.vm.name}" substring="hotspot" casesensitive="false"/>
-        <contains string="${java.vm.name}" substring="openjdk" casesensitive="false"/>
-        <contains string="${java.vm.name}" substring="jrockit" casesensitive="false"/>
-      </or>
-      <or>
-        <equals arg1="${java.specification.version}" arg2="1.8"/>
-        <equals arg1="${java.specification.version}" arg2="9"/>
-        <equals arg1="${java.specification.version}" arg2="10"/>
-        <equals arg1="${java.specification.version}" arg2="11"/>
-      </or>
-      <isfalse value="${tests.asserts.hashmap}" />
-    </and>
-  </condition>
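-  <!-- Net effect of the condition above (a sketch): on Hotspot/OpenJDK/JRockit VMs at spec
-       versions 1.8 through 11, and subject to the tests.asserts.hashmap toggle, forked test
-       JVMs receive "-da:java.util.HashMap" so the buggy HashMap assertions stay disabled;
-       otherwise the property stays empty. -->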
-  
-  <condition property="tests.asserts.args" value="-ea -esa ${tests.asserts.bug.jdk8205399}" else="">
-    <istrue value="${tests.asserts}"/>
-  </condition>
-
-  <condition property="tests.heapsize" value="1024M" else="512M">
-    <isset property="run.clover"/>
-  </condition>
-  
-  <condition property="tests.clover.args" value="-XX:ReservedCodeCacheSize=192m -Dclover.pertest.coverage=off" else="">
-    <isset property="run.clover"/>
-  </condition>
-
-  <!-- Override these in your local properties to your desire. -->
-  <!-- Show simple class names (no package) in test suites. -->
-  <property name="tests.useSimpleNames" value="false" />
-  <!-- Max width for class name truncation.  -->
-  <property name="tests.maxClassNameColumns" value="10000" />
-  <!-- Show suite summaries for tests. -->
-  <property name="tests.showSuiteSummary" value="true" />
-  <!-- Show timestamps in console test reports. -->
-  <property name="tests.timestamps" value="false" />
-  <!-- Heartbeat in seconds for reporting long-running tests or hung forked JVMs. -->
-  <property name="tests.heartbeat" value="60" />
-
-  <!-- Configure test emission to console for each type of status -->
-  <property name="tests.showError" value="true" />
-  <property name="tests.showFailure" value="true" />
-  <property name="tests.showIgnored" value="true" />
-
-  <!-- Display at most this many failures as a summary at the end of junit4 run. -->
-  <property name="tests.showNumFailures" value="10" />
-
-  <property name="javac.deprecation" value="off"/>
-  <property name="javac.debug" value="on"/>
-  <property name="javac.release" value="11"/>
-  <property name="javac.args" value="-Xlint -Xlint:-deprecation -Xlint:-serial"/>
-  <property name="javadoc.link" value="https://docs.oracle.com/en/java/javase/11/docs/api/"/>
-  <property name="javadoc.link.junit" value="https://junit.org/junit4/javadoc/4.12/"/>
-  <property name="javadoc.packagelist.dir" location="${common.dir}/tools/javadoc"/>
-  <available file="${javadoc.packagelist.dir}/java11/package-list" property="javadoc.java11.packagelist.exists"/>
-  <property name="javadoc.access" value="protected"/>
-  <property name="javadoc.charset" value="utf-8"/>
-  <property name="javadoc.dir" location="${common.dir}/build/docs"/>
-  <property name="javadoc.maxmemory" value="512m" />
-  
-  <!-- We must have the index disabled for releases, as Java 11 includes a javascript search engine with GPL license: -->
-  <property name="javadoc.noindex" value="true"/>
-
-  <property name="javadoc.doclint.args" value="-Xdoclint:all,-missing"/>
-  <!-- -proc:none was added because of LOG4J2-1925 / JDK-8186647 -->
-  <property name="javac.doclint.args" value="-Xdoclint:all/protected -Xdoclint:-missing -proc:none"/>
-
-  <condition property="javadoc.nomodule.args" value="--no-module-directories" else="">
-    <or>
-      <equals arg1="${java.specification.version}" arg2="11"/>
-      <equals arg1="${java.specification.version}" arg2="12"/>
-    </or>
-  </condition>
-  
-  <!-- Javadoc classpath -->
-  <path id="javadoc.classpath">
-    <path refid="classpath"/>
-    <pathelement location="${ant.home}/lib/ant.jar"/>
-    <fileset dir=".">
-      <exclude name="build/**/*.jar"/>
-      <include name="**/lib/*.jar"/>
-    </fileset>
-  </path>
-  
-  <property name="changes.src.dir" location="${common.dir}/site/changes"/>
-  <property name="changes.target.dir" location="${common.dir}/build/docs/changes"/>
-
-  <property name="project.name" value="site"/> <!-- todo: is this used by anakia or something else? -->
-  <property name="build.encoding" value="utf-8"/>
-
-  <property name="src.dir" location="src/java"/>
-  <property name="resources.dir" location="${src.dir}/../resources"/>
-  <property name="tests.src.dir" location="src/test"/>
-  <available property="module.has.tests" type="dir" file="${tests.src.dir}"/>
-  <property name="dist.dir" location="${common.dir}/dist"/>
-  <property name="maven.dist.dir" location="${dist.dir}/maven"/>
-  <makeurl file="${maven.dist.dir}" property="m2.repository.url" validate="false"/>
-  <property name="m2.repository.private.key" value="${user.home}/.ssh/id_dsa"/>
-  <property name="m2.repository.id" value="local"/>
-  <property name="m2.credentials.prompt" value="true"/>
-  <property name="maven.repository.id" value="remote"/>
-
-  <property name="tests.workDir" location="${build.dir}/test"/>
-  <property name="junit.output.dir" location="${build.dir}/test"/>
-  <property name="junit.reports" location="${build.dir}/test/reports"/>
-
-  <property name="manifest.file" location="${build.dir}/MANIFEST.MF"/>
-
-  <property name="git.exe" value="git" />
-  <property name="perl.exe" value="perl" />
-
-  <property name="python3.exe" value="python3" />
-
-  <property name="gpg.exe" value="gpg" />
-  <property name="gpg.key" value="CODE SIGNING KEY" />
-
-  <property name="filtered.pom.templates.dir" location="${common.dir}/build/poms"/>
-
-  <property name="clover.db.dir" location="${common.dir}/build/clover/db"/>
-  <property name="clover.report.dir" location="${common.dir}/build/clover/reports"/>
-
-  <property name="jacoco.report.dir" location="${common.dir}/build/jacoco"/>
-
-  <property name="pitest.report.dir" location="${common.dir}/build/pitest/${name}/reports"/>
-  <property name="pitest.distance" value="0" />
-  <property name="pitest.threads" value="2" />
-  <property name="pitest.testCases" value="org.apache.*" />
-  <property name="pitest.maxMutations" value="0" />
-  <property name="pitest.timeoutFactor" value="1.25" />
-  <property name="pitest.timeoutConst" value="3000" />
-  <property name="pitest.targetClasses" value="org.apache.*" />
-
-  <!-- a reasonable default exclusion set, can be overridden for special cases -->
-  <property name="rat.excludes" value="**/TODO,**/*.txt,**/*.iml,**/*.gradle"/>
-  
-  <!-- These patterns can be defined to add additional files for checks, relative to module's home dir -->
-  <property name="rat.additional-includes" value=""/>
-  <property name="rat.additional-excludes" value=""/>
-
-  <propertyset id="uptodate.and.compiled.properties" dynamic="true">
-    <propertyref regex=".*\.uptodate$$"/>
-    <propertyref regex=".*\.compiled$$"/>
-    <propertyref regex=".*\.loaded$$"/>
-    <propertyref name="lucene.javadoc.url"/><!-- for Solr -->
-    <propertyref name="tests.totals.tmpfile" />
-    <propertyref name="git-autoclean.disabled"/>
-  </propertyset>
-
-  <patternset id="lucene.local.src.package.patterns"
-              excludes="**/pom.xml,**/*.iml,**/*.jar,build/**,dist/**,benchmark/work/**,benchmark/temp/**,tools/javadoc/java11/**"
-  />
-
-  <!-- By default, exclude sources and javadoc jars from the Ivy fetch to save time and bandwidth -->
-  <condition property="ivy.exclude.types" 
-      value=""
-      else="source|javadoc">
-    <isset property="fetch.sources.javadocs"/>
-  </condition>
-
-  <target name="install-ant-contrib" unless="ant-contrib.uptodate" depends="ivy-availability-check,ivy-fail,ivy-configure">
-    <property name="ant-contrib.uptodate" value="true"/>
-     <ivy:cachepath organisation="ant-contrib" module="ant-contrib" revision="1.0b3"
-       inline="true" conf="master" type="jar" pathid="ant-contrib.classpath"/>
-    <taskdef resource="ant-contrib.tasks" classpathref="ant-contrib.classpath"/>
-  </target>
-
-  <!-- Check for minimum supported ANT version. -->
-  <fail message="Minimum supported ANT version is 1.8.2. Yours: ${ant.version}">
-    <condition>
-      <not><antversion atleast="1.8.2" /></not>
-    </condition>
-  </fail>
-
-  <fail message="Cannot run with ANT version 1.10.2 - see https://issues.apache.org/jira/browse/LUCENE-8189">
-    <condition>
-      <antversion exactly="1.10.2"/>
-    </condition>
-  </fail>
-
-  <fail message="Minimum supported Java version is 11.">
-    <condition>
-      <not><hasmethod classname="java.lang.String" method="repeat"/></not>
-    </condition>
-  </fail>
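-  <!-- The probe above relies on java.lang.String#repeat(int), which first appeared in
-       Java 11, so the method's absence reliably indicates an older runtime. -->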
-
-  <condition property="documentation-lint.supported">
-    <and>
-      <or>
-        <contains string="${java.vm.name}" substring="hotspot" casesensitive="false"/>
-        <contains string="${java.vm.name}" substring="openjdk" casesensitive="false"/>
-        <contains string="${java.vm.name}" substring="jrockit" casesensitive="false"/>
-      </or>
-      <or>
-        <equals arg1="${java.specification.version}" arg2="11"/>
-        <equals arg1="${java.specification.version}" arg2="12"/>
-        <equals arg1="${java.specification.version}" arg2="13"/>
-      </or>
-    </and>
-  </condition>
-
-  <!-- workaround for https://issues.apache.org/bugzilla/show_bug.cgi?id=53347 -->
-  <condition property="build.compiler" value="javac1.7">
-    <or>
-      <antversion exactly="1.8.3" />
-      <antversion exactly="1.8.4" />
-    </or>
-  </condition>
-
-  <target name="-documentation-lint-unsupported" unless="documentation-lint.supported">
-    <fail message="Linting documentation HTML is not supported on this Java version (JVM (${java.vm.name}).">
-      <condition>
-        <not><isset property="is.jenkins.build"/></not>
-      </condition>
-    </fail>
-    <echo level="warning" message="WARN: Linting documentation HTML is not supported on this Java version (JVM (${java.vm.name}). NOTHING DONE!"/>
-  </target>
-
-  <!-- Import custom ANT tasks. -->
-  <import file="${common.dir}/tools/custom-tasks.xml" />
-
-  <target name="clean"
-    description="Removes contents of build and dist directories">
-    <delete dir="${build.dir}"/>
-    <delete dir="${dist.dir}"/>
-    <delete file="velocity.log"/>
-  </target>
-
-  <target name="init" depends="git-autoclean,resolve">
-    <!-- currently empty -->
-  </target>
-  
-  <!-- Keep track of GIT branch and do "ant clean" on root folder when changed, to prevent bad builds... -->
-  
-  <property name="gitHeadFile" location="${common.dir}/../.git/HEAD"/>
-  <property name="gitHeadLocal" location="${common.dir}/build/git-HEAD"/>
-  <available file="${gitHeadFile}" property="isGitCheckout"/>
-
-  <target name="git-autoclean" depends="-check-git-state,-git-cleanroot,-copy-git-state"/>
-  
-  <target name="-check-git-state" if="isGitCheckout" unless="git-autoclean.disabled">
-    <condition property="gitHeadChanged">
-      <and>
-        <available file="${gitHeadLocal}"/>
-        <not><filesmatch file1="${gitHeadFile}" file2="${gitHeadLocal}"/></not>
-      </and>
-    </condition>
-  </target>
-
-  <target name="-git-cleanroot" depends="-check-git-state" if="gitHeadChanged" unless="git-autoclean.disabled">
-    <echo message="Git branch changed, cleaning up for sane build..."/>
-    <ant dir="${common.dir}/.." target="clean" inheritall="false">
-      <propertyset refid="uptodate.and.compiled.properties"/>
-    </ant>
-  </target>
-  
-  <target name="-copy-git-state" if="isGitCheckout" unless="git-autoclean.disabled">
-    <mkdir dir="${common.dir}/build"/>
-    <copy file="${gitHeadFile}" tofile="${gitHeadLocal}"/>
-    <property name="git-autoclean.disabled" value="true"/>
-  </target>
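-  <!-- Flow of the three helper targets above: -check-git-state compares .git/HEAD with the
-       copy kept under build/git-HEAD; if they differ, -git-cleanroot triggers a full clean,
-       and -copy-git-state then saves the new HEAD so later builds skip the clean. -->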
-
-  <!-- IVY stuff -->
-
-  <target name="ivy-configure">
-     <!-- [DW] ivy loses its configuration for some reason. cannot explain this. if
-          you have an idea, fix it.
-          unless="ivy.settings.uptodate" -->
-    <!-- override: just for safety, should be unnecessary -->
-    <ivy:configure file="${common.dir}/top-level-ivy-settings.xml" override="true"/>
-    <!-- <property name="ivy.settings.uptodate" value="true"/> -->
-  </target>
-
-  <condition property="ivy.symlink">
-    <os family="unix"/>
-  </condition>
-
-  <target name="resolve" depends="ivy-availability-check,ivy-configure">
-    <!-- todo, make this a property or something. 
-         only special cases need bundles -->
-    <ivy:retrieve type="jar,bundle,test,test-jar,tests" log="download-only" symlink="${ivy.symlink}"
-                  conf="${ivy.default.configuration}" sync="${ivy.sync}"/>
-  </target>
-
-  <property name="ivy_install_path" location="${user.home}/.ant/lib" />
-  <property name="ivy_bootstrap_url1" value="https://repo1.maven.org/maven2"/>
-  <property name="ivy_bootstrap_url2" value="https://repo2.maven.org/maven2"/>
-  <property name="ivy_checksum_sha1" value="5abe4c24bbe992a9ac07ca563d5bd3e8d569e9ed"/>
-
-  <target name="ivy-availability-check" unless="ivy.available">
-    <path id="disallowed.ivy.jars">
-      <fileset dir="${ivy_install_path}">
-        <filename regex="${disallowed_ivy_jars_regex}"/>
-      </fileset>
-    </path>
-    <loadresource property="disallowed.ivy.jars.list">
-      <string value="${toString:disallowed.ivy.jars}"/>
-      <filterchain><tokenfilter><replacestring from="jar:" to="jar, "/></tokenfilter></filterchain>
-    </loadresource>
-    <condition property="disallowed.ivy.jar.found">
-      <resourcecount when="greater" count="0">
-        <path refid="disallowed.ivy.jars"/>
-      </resourcecount>
-    </condition>
-    <antcall target="-ivy-fail-disallowed-ivy-version"/>
-
-    <condition property="ivy.available">
-      <typefound uri="antlib:org.apache.ivy.ant" name="configure" />
-    </condition>
-    <antcall target="ivy-fail" />
-  </target>
-
-  <target name="-ivy-fail-disallowed-ivy-version" if="disallowed.ivy.jar.found">
-    <sequential>
-      <echo message="Please delete the following disallowed Ivy jar(s): ${disallowed.ivy.jars.list}"/>
-      <fail>Found disallowed Ivy jar(s): ${disallowed.ivy.jars.list}</fail>
-    </sequential>
-  </target>
-
-  <target name="ivy-fail" unless="ivy.available">
-   <echo>
-     This build requires Ivy, and Ivy could not be found on your Ant classpath.
-
-     (Due to classpath issues and the recursive nature of the Lucene/Solr
-     build system, a local copy of Ivy cannot be used and loaded dynamically
-     by the build.xml.)
-
-     You can either manually install a copy of Ivy ${ivy.bootstrap.version} in your ant classpath:
-       http://ant.apache.org/manual/install.html#optionalTasks
-
-     Or this build file can do it for you by running the Ivy Bootstrap target:
-       ant ivy-bootstrap     
-     
-     Either way you will only have to install Ivy one time.
-
-     'ant ivy-bootstrap' will install a copy of Ivy into your Ant User Library:
-       ${user.home}/.ant/lib
-     
-     If you would prefer, you can have it installed into an alternative 
-     directory using the "-Divy_install_path=/some/path/you/choose" option, 
-     but you will have to specify this path every time you build Lucene/Solr 
-     in the future...
-       ant ivy-bootstrap -Divy_install_path=/some/path/you/choose
-       ...
-       ant -lib /some/path/you/choose clean compile
-       ...
-       ant -lib /some/path/you/choose clean compile
-
-     If you have already run ivy-bootstrap, and still get this message, please 
-     try using the "--noconfig" option when running ant, or editing your global
-     ant config to allow the user lib to be loaded.  See the wiki for more details:
-       http://wiki.apache.org/lucene-java/DeveloperTips#Problems_with_Ivy.3F
-    </echo>
-    <fail>Ivy is not available</fail>
-  </target>
-
-  <target name="ivy-bootstrap" description="Download and install Ivy in the users ant lib dir" 
-          depends="-ivy-bootstrap1,-ivy-bootstrap2,-ivy-checksum,-ivy-remove-old-versions"/>
-
-  <!-- try to download from repo1.maven.org -->
-  <target name="-ivy-bootstrap1">
-    <ivy-download src="${ivy_bootstrap_url1}" dest="${ivy_install_path}"/>
-    <available file="${ivy_install_path}/ivy-${ivy.bootstrap.version}.jar" property="ivy.bootstrap1.success" />
-  </target> 
-
-  <target name="-ivy-bootstrap2" unless="ivy.bootstrap1.success">
-    <ivy-download src="${ivy_bootstrap_url2}" dest="${ivy_install_path}"/>
-  </target>
-
-  <target name="-ivy-checksum">
-    <checksum file="${ivy_install_path}/ivy-${ivy.bootstrap.version}.jar"
-              property="${ivy_checksum_sha1}"
-              algorithm="SHA"
-              verifyproperty="ivy.checksum.success"/>
-    <fail message="Checksum mismatch for ivy-${ivy.bootstrap.version}.jar. Please download this file manually">
-      <condition>
-        <isfalse value="${ivy.checksum.success}"/>
-      </condition>
-    </fail>
-  </target>
-  
-  <target name="-ivy-remove-old-versions">
-    <delete verbose="true" failonerror="true">
-      <fileset dir="${ivy_install_path}">
-        <filename regex="${disallowed_ivy_jars_regex}"/>
-      </fileset>
-    </delete>
-  </target>
-   
-  <macrodef name="ivy-download">
-      <attribute name="src"/>
-      <attribute name="dest"/>
-    <sequential>
-      <mkdir dir="@{dest}"/>
-      <echo message="installing ivy ${ivy.bootstrap.version} to ${ivy_install_path}"/>
-      <get src="@{src}/org/apache/ivy/ivy/${ivy.bootstrap.version}/ivy-${ivy.bootstrap.version}.jar"
-           dest="@{dest}/ivy-${ivy.bootstrap.version}.jar" usetimestamp="true" ignoreerrors="true"/>
-    </sequential>
-  </macrodef>
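-  <!-- Typical call, as the bootstrap targets above use it:
-         <ivy-download src="${ivy_bootstrap_url1}" dest="${ivy_install_path}"/> -->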
-
-  <target name="compile-core" depends="init, clover"
-          description="Compiles core classes">
-    <compile
-      srcdir="${src.dir}"
-      destdir="${build.dir}/classes/java">
-      <classpath refid="classpath"/>
-    </compile>
-
-    <!-- Copy the resources folder (if existent) -->
-    <copy todir="${build.dir}/classes/java">
-      <fileset dir="${resources.dir}" erroronmissingdir="no"/>
-    </copy>
-  </target>
-
-  <target name="compile" depends="compile-core">
-    <!-- convenience target to compile core -->
-  </target>
-
-  <target name="jar-core" depends="compile-core">
-    <jarify/>
-  </target>
-
-  <property name="lucene.tgz.file" location="${common.dir}/dist/lucene-${version}.tgz"/>
-  <available file="${lucene.tgz.file}" property="lucene.tgz.exists"/>
-  <property name="lucene.tgz.unpack.dir" location="${common.build.dir}/lucene.tgz.unpacked"/>
-  <patternset id="patternset.lucene.solr.jars">
-    <include name="**/lucene-*.jar"/>
-    <include name="**/solr-*.jar"/>
-  </patternset>
-  <available type="dir" file="${lucene.tgz.unpack.dir}" property="lucene.tgz.unpack.dir.exists"/>
-  <target name="-ensure-lucene-tgz-exists" unless="lucene.tgz.exists">
-    <ant dir="${common.dir}" target="package-tgz" inheritall="false">
-      <propertyset refid="uptodate.and.compiled.properties"/>
-    </ant>
-  </target>
-  <target name="-unpack-lucene-tgz" unless="lucene.tgz.unpack.dir.exists">
-    <antcall target="-ensure-lucene-tgz-exists" inheritall="false">
-      <propertyset refid="uptodate.and.compiled.properties"/>
-    </antcall>
-    <mkdir dir="${lucene.tgz.unpack.dir}"/>
-    <untar compression="gzip" src="${lucene.tgz.file}" dest="${lucene.tgz.unpack.dir}">
-      <patternset refid="patternset.lucene.solr.jars"/>
-    </untar>
-  </target>
-  <property name="dist.jar.dir.prefix" value="${lucene.tgz.unpack.dir}/lucene"/>
-  <pathconvert property="dist.jar.dir.suffix">
-    <mapper>
-      <chainedmapper>
-        <globmapper from="${common.dir}*" to="*"/>
-        <globmapper from="*build.xml" to="*"/>
-      </chainedmapper>
-    </mapper>
-    <path location="${ant.file}"/>
-  </pathconvert>
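-  <!-- Illustrative only (paths hypothetical): for a module file
-       ${common.dir}/analysis/common/build.xml, the two chained globmappers strip the
-       ${common.dir} prefix and the trailing build.xml, leaving /analysis/common/ as
-       dist.jar.dir.suffix, i.e. the module's directory relative to ${common.dir}. -->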
-
-  <macrodef name="m2-deploy" description="Builds a Maven artifact">
-    <element name="artifact-attachments" optional="yes"/>
-    <element name="parent-poms" optional="yes"/>
-    <element name="credentials" optional="yes"/>
-    <attribute name="pom.xml"/>
-    <attribute name="jar.file" default="${dist.jar.dir.prefix}-${version}/${dist.jar.dir.suffix}/${final.name}.jar"/>
-    <sequential>
-      <artifact:install-provider artifactId="wagon-ssh" version="1.0-beta-7">
-        <remoteRepository id="${maven.repository.id}" url="${ivy_bootstrap_url1}" />
-      </artifact:install-provider>
-      <parent-poms/>
-      <artifact:pom id="maven.project" file="@{pom.xml}">
-        <remoteRepository id="${maven.repository.id}" url="${ivy_bootstrap_url1}" />
-      </artifact:pom>
-      <artifact:deploy file="@{jar.file}">
-        <artifact-attachments/>
-        <remoteRepository id="${m2.repository.id}" url="${m2.repository.url}">
-          <credentials/>
-        </remoteRepository>
-        <pom refid="maven.project"/>
-      </artifact:deploy>
-      <artifact:install file="@{jar.file}">
-        <artifact-attachments/>
-        <pom refid="maven.project"/>
-      </artifact:install>
-    </sequential>
-  </macrodef>
-
-  <macrodef name="m2-install" description="Installs a Maven artifact into the local repository">
-    <element name="parent-poms" optional="yes"/>
-    <attribute name="pom.xml"/>
-    <attribute name="jar.file" default="${dist.jar.dir.prefix}-${version}/${dist.jar.dir.suffix}/${final.name}.jar"/>
-    <sequential>
-      <parent-poms/>
-      <artifact:pom id="maven.project" file="@{pom.xml}"/>
-      <artifact:install file="@{jar.file}">
-        <pom refid="maven.project"/>
-      </artifact:install>
-    </sequential>
-  </macrodef>
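-  <!-- Example usage (the pom path is hypothetical): installs a module jar into the
-       local repository, with jar.file left at its default location:
-       <m2-install pom.xml="${filtered.pom.templates.dir}/lucene/core/pom.xml"/> -->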
-
-  <!-- validate maven dependencies -->
-  <macrodef name="m2-validate-dependencies">
-      <attribute name="pom.xml"/>
-      <attribute name="licenseDirectory"/>
-      <element name="excludes" optional="true"/>
-      <element name="additional-filters" optional="true"/>
-    <sequential>
-      <artifact:dependencies filesetId="maven.fileset" useScope="test" type="jar">
-        <artifact:pom file="@{pom.xml}"/>
-        <!-- disable completely, so this has no chance to download any updates from anywhere: -->
-        <remoteRepository id="apache.snapshots" url="foobar://disabled/">
-          <snapshots enabled="false"/>
-          <releases enabled="false"/>
-        </remoteRepository>
-      </artifact:dependencies>
-      <licenses licenseDirectory="@{licenseDirectory}">
-        <restrict>
-          <fileset refid="maven.fileset"/>
-          <rsel:not>
-            <excludes/>
-          </rsel:not>
-        </restrict>
-        <licenseMapper>
-          <chainedmapper>
-            <filtermapper refid="license-mapper-defaults"/>
-            <filtermapper>
-              <additional-filters/>
-            </filtermapper>
-          </chainedmapper>
-        </licenseMapper>
-      </licenses>
-    </sequential>
-  </macrodef>
-
-  <macrodef name="build-manifest" description="Builds a manifest file">
-    <attribute name="title"/>
-    <attribute name="implementation.title"/>
-    <attribute name="manifest.file" default="${manifest.file}"/>
-    <element name="additional-manifest-attributes" optional="true"/>
-    <sequential>
-      <local name="-checkoutid"/>
-      <local name="-giterr"/>
-      <local name="checkoutid"/>
-      
-      <!-- If possible, include the GIT hash into manifest: -->
-      <exec dir="." executable="${git.exe}" outputproperty="-checkoutid" errorproperty="-giterr" failifexecutionfails="false">
-        <arg value="log"/>
-        <arg value="--format=%H"/>
-        <arg value="-n"/>
-        <arg value="1"/>
-      </exec>
-      <condition property="checkoutid" value="${-checkoutid}" else="unknown">
-        <matches pattern="^[0-9a-z]+$" string="${-checkoutid}" casesensitive="false" multiline="true"/>
-      </condition>
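-      <!-- Illustrative: on success, -checkoutid holds the full 40-character hex hash of
-           the current commit (e.g. "3f2e..." truncated here); if git is unavailable or
-           the output does not match the pattern, checkoutid falls back to "unknown". -->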
-
-      <!-- create manifest: -->
-      <manifest file="@{manifest.file}">
-        <!--
-        http://java.sun.com/j2se/1.5.0/docs/guide/jar/jar.html#JAR%20Manifest
-        http://java.sun.com/j2se/1.5.0/docs/guide/versioning/spec/versioning2.html
-        http://java.sun.com/j2se/1.5.0/docs/api/java/lang/Package.html
-        http://java.sun.com/j2se/1.5.0/docs/api/java/util/jar/package-summary.html
-        http://java.sun.com/developer/Books/javaprogramming/JAR/basics/manifest.html
-        -->
-        <!-- Don't set 'Manifest-Version'; it identifies the version of the
-             manifest file format, and should always be 1.0 (the default).
-
-             Don't set the 'Created-By' attribute; its purpose is
-             to identify the version of java used to build the jar,
-             which ant will do by default.
-
-             Ant will happily override these with bogus strings if you
-             tell it to, so don't.
-
-             NOTE: we don't use section info because all of our manifest data
-             applies to the entire jar/war ... no package specific info.
-        -->
-        <attribute name="Extension-Name" value="@{implementation.title}"/>
-        <attribute name="Specification-Title" value="@{title}"/>
-        <!-- spec version must match "digit+{.digit+}*" -->
-        <attribute name="Specification-Version" value="${spec.version}"/>
-        <attribute name="Specification-Vendor"
-                   value="The Apache Software Foundation"/>
-        <attribute name="Implementation-Title" value="@{implementation.title}"/>
-        <!-- impl version can be any string -->
-        <attribute name="Implementation-Version"
-                   value="${version} ${checkoutid} - ${user.name} - ${DSTAMP} ${TSTAMP}"/>
-        <attribute name="Implementation-Vendor"
-                   value="The Apache Software Foundation"/>
-        <attribute name="X-Compile-Source-JDK" value="${javac.release}"/>
-        <attribute name="X-Compile-Target-JDK" value="${javac.release}"/>
-        <additional-manifest-attributes />
-      </manifest>
-    </sequential>
-  </macrodef>
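-  <!-- For illustration only (all values hypothetical), a manifest written by
-       build-manifest looks roughly like:
-         Extension-Name: org.apache.lucene
-         Specification-Title: Lucene Search Engine: core
-         Specification-Version: 9.0.0
-         Implementation-Version: 9.0.0 0123abcd (hash) - jenkins - 20200601 1200
-       plus the vendor and X-Compile-Source/Target-JDK attributes set above. -->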
-  
-  <macrodef name="jarify" description="Builds a JAR file">
-    <attribute name="basedir" default="${build.dir}/classes/java"/>
-    <attribute name="destfile" default="${build.dir}/${final.name}.jar"/>
-    <attribute name="title" default="Lucene Search Engine: ${ant.project.name}"/>
-    <attribute name="excludes" default="**/pom.xml,**/*.iml"/>
-    <attribute name="metainf.source.dir" default="${common.dir}"/>
-    <attribute name="implementation.title" default="org.apache.lucene"/>
-    <attribute name="manifest.file" default="${manifest.file}"/>
-    <element name="filesets" optional="true"/>
-    <element name="jarify-additional-manifest-attributes" optional="true"/>
-    <sequential>
-      <build-manifest title="@{title}"
-                      implementation.title="@{implementation.title}"
-                      manifest.file="@{manifest.file}">
-        <additional-manifest-attributes>
-          <jarify-additional-manifest-attributes />
-        </additional-manifest-attributes>
-      </build-manifest>
-      
-      <jar destfile="@{destfile}"
-           basedir="@{basedir}"
-           manifest="@{manifest.file}"
-           excludes="@{excludes}">
-        <metainf dir="@{metainf.source.dir}" includes="LICENSE.txt,NOTICE.txt"/>
-        <filesets />
-      </jar>
-    </sequential>
-  </macrodef>
-
-  <macrodef name="module-uptodate">
-    <attribute name="name"/>
-    <attribute name="property"/>
-    <attribute name="jarfile"/>
-    <attribute name="module-src-name" default="@{name}"/>
-    <sequential>
-      <uptodate property="@{property}" targetfile="@{jarfile}">
-        <srcfiles dir="${common.dir}/@{module-src-name}/src/java" includes="**/*.java"/>
-      </uptodate>
-    </sequential>
-  </macrodef>
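-  <!-- Example usage (module name and property hypothetical): sets queries.uptodate when
-       the queries module jar is newer than all of its java sources:
-       <module-uptodate name="queries" property="queries.uptodate"
-                        jarfile="${common.dir}/build/queries/lucene-queries-${version}.jar"/> -->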
-
-  <property name="lucene-core.jar" value="${common.dir}/build/core/lucene-core-${version}.jar"/>
-  <target name="check-lucene-core-uptodate" unless="lucene-core.uptodate">
-    <uptodate property="lucene-core.uptodate" targetfile="${lucene-core.jar}">
-       <srcfiles dir="${common.dir}/core/src/java" includes="**/*.java"/>
-    </uptodate>
-  </target>
-  <target name="jar-lucene-core" unless="lucene-core.uptodate" depends="check-lucene-core-uptodate">
-    <ant dir="${common.dir}/core" target="jar-core" inheritAll="false">
-      <propertyset refid="uptodate.and.compiled.properties"/>
-    </ant>
-    <property name="lucene-core.uptodate" value="true"/>
-  </target>
-  
-  <target name="compile-lucene-core" unless="core.compiled">
-    <ant dir="${common.dir}/core" target="compile-core" inheritAll="false">
-      <propertyset refid="uptodate.and.compiled.properties"/>
-    </ant>
-    <property name="core.compiled" value="true"/>
-  </target>
-
-  <target name="check-lucene-core-javadocs-uptodate" unless="core-javadocs.uptodate">
-    <uptodate property="core-javadocs.uptodate" targetfile="${common.dir}/build/core/lucene-core-${version}-javadoc.jar">
-       <srcfiles dir="${common.dir}/core/src/java" includes="**/*.java"/>
-    </uptodate>
-  </target>
-
-  <target name="check-lucene-codecs-javadocs-uptodate" unless="codecs-javadocs.uptodate">
-    <uptodate property="codecs-javadocs.uptodate" targetfile="${common.dir}/build/codecs/lucene-codecs-${version}-javadoc.jar">
-       <srcfiles dir="${common.dir}/codecs/src/java" includes="**/*.java"/>
-    </uptodate>
-  </target>
-
-  <target name="javadocs-lucene-core" depends="check-lucene-core-javadocs-uptodate" unless="core-javadocs.uptodate">
-    <ant dir="${common.dir}/core" target="javadocs" inheritAll="false">
-      <propertyset refid="uptodate.and.compiled.properties"/>
-    </ant>
-    <property name="core-javadocs.uptodate" value="true"/>
-  </target>
-
-  <target name="compile-codecs" unless="codecs.compiled">
-    <ant dir="${common.dir}/codecs" target="compile-core" inheritAll="false">
-      <propertyset refid="uptodate.and.compiled.properties"/>
-    </ant>
-    <property name="codecs.compiled" value="true"/>
-  </target>
-
-  <target name="javadocs-lucene-codecs" depends="check-lucene-codecs-javadocs-uptodate" unless="codecs-javadocs.uptodate">
-    <ant dir="${common.dir}/codecs" target="javadocs" inheritAll="false">
-      <propertyset refid="uptodate.and.compiled.properties"/>
-    </ant>
-    <property name="codecs-javadocs.uptodate" value="true"/>
-  </target>
-
-  <target name="compile-test-framework" unless="lucene.test.framework.compiled">
-    <ant dir="${common.dir}/test-framework" target="compile-core" inheritAll="false">
-      <propertyset refid="uptodate.and.compiled.properties"/>
-    </ant>
-    <property name="lucene.test.framework.compiled" value="true"/>
-  </target>
-
-  <target name="check-lucene-test-framework-javadocs-uptodate" 
-          unless="lucene.test.framework-javadocs.uptodate">
-    <uptodate property="lucene.test.framework-javadocs.uptodate" 
-         targetfile="${common.dir}/build/test-framework/lucene-test-framework-${version}-javadoc.jar">
-       <srcfiles dir="${common.dir}/test-framework/src/java" includes="**/*.java"/>
-    </uptodate>
-  </target>
-
-  <target name="javadocs-test-framework" 
-          depends="check-lucene-test-framework-javadocs-uptodate"
-          unless="lucene.test.framework-javadocs.uptodate">
-    <ant dir="${common.dir}/test-framework" target="javadocs" inheritAll="false">
-      <propertyset refid="uptodate.and.compiled.properties"/>
-    </ant>
-    <property name="lucene.test.framework-javadocs.uptodate" value="true"/>
-  </target>
-
-  <target name="compile-tools">
-    <ant dir="${common.dir}/tools" target="compile-core" inheritAll="false"/>
-  </target>
-
-  <target name="compile-test" depends="compile-core,compile-test-framework">
-    <compile-test-macro srcdir="${tests.src.dir}" destdir="${build.dir}/classes/test" test.classpath="test.classpath"/>
-  </target>
-
-  <macrodef name="compile-test-macro" description="Compiles junit tests.">
-    <attribute name="srcdir"/>
-    <attribute name="destdir"/>
-    <attribute name="test.classpath"/>
-    <attribute name="javac.release" default="${javac.release}"/>
-    <sequential>
-      <compile
-        srcdir="@{srcdir}" 
-        destdir="@{destdir}"
-        javac.release="@{javac.release}">
-        <classpath refid="@{test.classpath}"/>
-      </compile>
-
-      <!-- Copy any data files present to the classpath -->
-      <copy todir="@{destdir}">
-        <fileset dir="@{srcdir}" excludes="**/*.java"/>
-      </copy>
-    </sequential>
-  </macrodef>
-
-  <target name="test-updatecache" description="Overwrite tests' timings cache for balancing." depends="install-junit4-taskdef">
-    <touch file="${tests.cachefile}" mkdirs="true" verbose="false" />
-    <junit4:mergehints file="${tests.cachefile}" historyLength="${tests.cachefilehistory}">
-      <resources>
-        <!-- The order is important. Include previous stats first, then append new stats. -->
-        <file file="${tests.cachefile}" />
-        <fileset dir="${tests.cachedir}">
-          <include name="**/*.txt" />
-        </fileset>
-      </resources>
-    </junit4:mergehints>
-  </target>
-
-  <!-- Aliases for test filters -->
-  <condition property="tests.class" value="*.${testcase}">
-    <isset property="testcase" />
-  </condition>
-  <condition property="tests.method" value="${testmethod}*">
-    <isset property="testmethod" />
-  </condition>
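-  <!-- Illustrative example: -Dtestcase=TestDemo expands to tests.class=*.TestDemo, and
-       -Dtestmethod=testSearch expands to tests.method=testSearch* -->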
-
-  <condition property="tests.showSuccess" value="true" else="false">
-    <or>
-      <isset property="tests.class" />
-      <isset property="tests.method" />
-    </or>
-  </condition>
-
-  <condition property="tests.showOutput" value="always" else="onerror">
-    <or>
-      <isset property="tests.class" />
-      <isset property="tests.method" />
-      <istrue value="${tests.showSuccess}"/>
-    </or>
-  </condition>
-
-  <!-- Test macro using junit4. -->
-  <macrodef name="test-macro" description="Executes junit tests.">
-    <attribute name="junit.output.dir" default="${junit.output.dir}"/>
-    <attribute name="junit.classpath" default="junit.classpath"/>
-    <attribute name="testsDir" default="${build.dir}/classes/test"/>
-    <attribute name="workDir" default="${tests.workDir}"/>
-    <attribute name="threadNum" default="1"/>
-    <attribute name="tests.nightly" default="${tests.nightly}"/>
-    <attribute name="tests.weekly" default="${tests.weekly}"/>
-    <attribute name="tests.monster" default="${tests.monster}"/>
-    <attribute name="tests.slow" default="${tests.slow}"/>
-    <attribute name="tests.multiplier" default="${tests.multiplier}"/>
-    <attribute name="additional.vm.args" default=""/>
-    <!-- note this enables keeping junit4 files only (not test temp files) -->
-    <attribute name="runner.leaveTemporary" default="false"/>
-      
-    <sequential>
-        <!-- Warn if somebody uses removed properties. -->
-        <fail message="This property has been removed: tests.iter, use -Dtests.iters=N.">
-          <condition>
-            <isset property="tests.iter" />
-          </condition>
-        </fail>
-        <!-- this combo makes no sense LUCENE-4146 -->
-        <fail message="You are attempting to use 'tests.iters' in combination with a 'tests.method' value with does not end in a '*' -- This combination makes no sense, because the 'tests.method' filter will be unable to match the synthetic test names generated by the multiple iterations.">
-          <condition>
-            <and>
-              <isset property="tests.iters" />
-              <isset property="tests.method" />
-              <not>
-                <matches pattern="\*$" string="${tests.method}" />
-              </not>
-            </and>
-          </condition>
-        </fail>
-
-        <!-- Defaults. -->
-        <property name="tests.class"  value="" />
-        <property name="tests.method" value="" />
-        <property name="tests.dynamicAssignmentRatio" value="0.50" /> <!-- 50% of suites -->
-        <property name="tests.haltonfailure" value="true" />
-        <property name="tests.leaveTemporary" value="false" />
-        <!-- 
-           keep junit4 runner files or not (independent of keeping test output files)
-         -->
-        <condition property="junit4.leaveTemporary">
-          <or>
-            <istrue value="${tests.leaveTemporary}"/> 
-            <istrue value="@{runner.leaveTemporary}"/> 
-          </or>
-        </condition>
-        <property name="tests.iters" value="" />
-        <property name="tests.dups"  value="1" />
-        <property name="tests.useSecurityManager"  value="true" />
-
-        <property name="tests.heapdump.args" value=""/>
-
-        <!-- turn on security manager? -->
-        <condition property="java.security.manager" value="org.apache.lucene.util.TestSecurityManager">
-          <istrue value="${tests.useSecurityManager}"/>
-        </condition>
-        
-        <!-- additional arguments -->
-        <property name="tests.runtimespecific.args" value="--illegal-access=deny"/>
-
-        <!-- create a fileset pattern that matches ${tests.class}. -->
-        <loadresource property="tests.explicitclass" quiet="true">
-          <propertyresource name="tests.class" />
-          <filterchain>
-            <tokenfilter>
-              <filetokenizer/>
-              <replacestring from="." to="/"/>
-              <replacestring from="*" to="**"/>
-              <replaceregex pattern="$" replace=".class" />
-            </tokenfilter>
-          </filterchain>
-        </loadresource>
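-        <!-- Illustrative example: tests.class=*.TestDemo becomes the fileset pattern
-             **/TestDemo.class ("." -> "/", "*" -> "**", then ".class" appended). -->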
-
-        <!-- Pick the random seed now (unless already set). -->
-        <junit4:pickseed property="tests.seed" />
-
-        <!-- Pick file.encoding based on the random seed. -->
-        <junit4:pickfromlist property="tests.file.encoding" allowundefined="false" seed="${tests.seed}">
-            <!-- Guaranteed support on any JVM. -->
-            <value>US-ASCII</value>   <!-- single byte length -->
-            <value>ISO-8859-1</value> <!-- single byte length -->
-            <value>UTF-8</value>      <!-- variable byte length -->
-            <value><!-- empty/ default encoding. --></value>
-
-            <!--
-            Disabled because of Java 1.8 bug on Linux/ Unix:
-            http://bugs.sun.com/bugdatabase/view_bug.do?bug_id=7181721
-
-            <value>UTF-16</value>     
-            <value>UTF-16LE</value>   
-            <value>UTF-16BE</value>
-            -->
-        </junit4:pickfromlist>
-
-        <!-- junit4 does not create this directory. TODO: is this a bug / inconsistency with dir="..."? -->
-        <mkdir dir="@{workDir}/temp" />
-
-        <!-- Local test execution statistics from defaults or local caches, if present. -->
-        <local name="tests.caches" />
-        <available file="${tests.cachedir}/${name}" type="dir" property="tests.caches" value="${tests.cachedir}/${name}" />
-        <property name="tests.caches" location="${common.dir}/tools/junit4" /> <!-- defaults -->
-        <mkdir dir="${tests.cachedir}/${name}" />
-
-        <local name="junit4.stats.nonIgnored" />
-
-        <junit4:junit4
-            taskName="junit4"
-            dir="@{workDir}"
-            tempdir="@{workDir}/temp"
-            maxmemory="${tests.heapsize}"
-
-            statsPropertyPrefix="junit4.stats"
-
-            parallelism="@{threadNum}"
-
-            printSummary="true"
-            haltonfailure="${tests.haltonfailure}" 
-            failureProperty="tests.failed"
-
-            dynamicAssignmentRatio="${tests.dynamicAssignmentRatio}"
-            shuffleOnSlave="true"
-            leaveTemporary="${junit4.leaveTemporary}"
-            seed="${tests.seed}"
-            onNonEmptyWorkDirectory="wipe"
-
-            heartbeat="${tests.heartbeat}"
-            uniqueSuiteNames="false"
-            
-            debugstream="false"
-        >
-            <!-- Classpaths. -->
-            <classpath refid="@{junit.classpath}"/>
-            <classpath refid="clover.classpath" />
-
-            <!-- JVM arguments and system properties. -->
-            <jvmarg line="${args}"/>
-            <jvmarg line="${tests.heapdump.args}"/>
-            <jvmarg line="${tests.clover.args}"/>
-            <jvmarg line="@{additional.vm.args}"/>
-            <jvmarg line="${tests.asserts.args}"/>
-            <jvmarg line="${tests.runtimespecific.args}"/>
-
-            <!-- set the number of times tests should run -->
-            <sysproperty key="tests.iters" value="${tests.iters}"/>
-            <!-- allow tests to control debug prints -->
-            <sysproperty key="tests.verbose" value="${tests.verbose}"/>
-            <!-- even more debugging -->
-            <sysproperty key="tests.infostream" value="${tests.infostream}"/>
-            <!-- set the codec tests should run with -->
-            <sysproperty key="tests.codec" value="${tests.codec}"/>
-            <!-- set the postingsformat tests should run with -->
-            <sysproperty key="tests.postingsformat" value="${tests.postingsformat}"/>
-            <!-- set the docvaluesformat tests should run with -->
-            <sysproperty key="tests.docvaluesformat" value="${tests.docvaluesformat}"/>
-            <!-- set the locale tests should run with -->
-            <sysproperty key="tests.locale" value="${tests.locale}"/>
-            <!-- set the timezone tests should run with -->
-            <sysproperty key="tests.timezone" value="${tests.timezone}"/>
-            <!-- set the directory tests should run with -->
-            <sysproperty key="tests.directory" value="${tests.directory}"/>
-            <!-- set the line file source for oal.util.LineFileDocs -->
-            <sysproperty key="tests.linedocsfile" value="${tests.linedocsfile}"/>
-            <!-- set the Version that tests should run against -->
-            <sysproperty key="tests.luceneMatchVersion" value="${tests.luceneMatchVersion}"/>
-            <!-- for lucene we can be strict, and we don't want false fails even across methods -->
-            <sysproperty key="tests.cleanthreads" value="${tests.cleanthreads.sysprop}"/>
-            <!-- logging config file -->
-            <sysproperty key="java.util.logging.config.file" value="${tests.loggingfile}"/>
-            <!-- set whether or not nightly tests should run -->
-            <sysproperty key="tests.nightly" value="@{tests.nightly}"/>
-              <!-- set whether or not weekly tests should run -->
-            <sysproperty key="tests.weekly" value="@{tests.weekly}"/>
-            <!-- set whether or not monster tests should run -->
-            <sysproperty key="tests.monster" value="@{tests.monster}"/>
-              <!-- set whether or not slow tests should run -->
-            <sysproperty key="tests.slow" value="@{tests.slow}"/>
-              
-            <!-- set whether tests framework should not require java assertions enabled -->
-            <sysproperty key="tests.asserts" value="${tests.asserts}"/>
-
-            <!-- TODO: create propertyset for test properties, so each project can have its own set -->
-            <sysproperty key="tests.multiplier" value="@{tests.multiplier}"/>
-            
-            <!-- Temporary directory a subdir of the cwd. -->
-            <sysproperty key="tempDir" value="./temp" />
-            <sysproperty key="java.io.tmpdir" value="./temp" />
-
-            <!-- Restrict access to certain Java features and install security manager: -->
-            <sysproperty key="common.dir" file="${common.dir}" />
-            <sysproperty key="clover.db.dir" file="${clover.db.dir}" />
-            <syspropertyset>
-                <propertyref prefix="java.security.manager"/>
-            </syspropertyset>
-            <sysproperty key="java.security.policy" file="${tests.policy}" />
-
-            <sysproperty key="tests.LUCENE_VERSION" value="${version.base}"/>
-
-            <sysproperty key="jetty.testMode" value="1"/>
-            <sysproperty key="jetty.insecurerandom" value="1"/>
-            <sysproperty key="solr.directoryFactory" value="org.apache.solr.core.MockDirectoryFactory"/>
-            
-            <!-- disable AWT while running tests -->
-            <sysproperty key="java.awt.headless" value="true"/>
-
-            <!-- turn jenkins blood red for hashmap bugs, even on jdk7 -->
-            <sysproperty key="jdk.map.althashing.threshold" value="0"/>
-
-            <sysproperty key="tests.src.home" value="${user.dir}" />
-
-            <!-- replaces default random source to the nonblocking variant -->
-            <sysproperty key="java.security.egd" value="file:/dev/./urandom"/>
-
-            <!-- Only pass these to the test JVMs if defined in ANT. -->
-            <syspropertyset>
-                <propertyref prefix="tests.maxfailures" />
-                <propertyref prefix="tests.failfast" />
-                <propertyref prefix="tests.badapples" />
-                <propertyref prefix="tests.bwcdir" />
-                <propertyref prefix="tests.timeoutSuite" />
-                <propertyref prefix="tests.disableHdfs" />
-                <propertyref prefix="tests.filter" />
-                <propertyref prefix="tests.awaitsfix" />
-                <propertyref prefix="tests.leavetmpdir" />
-                <propertyref prefix="tests.leaveTemporary" />
-                <propertyref prefix="tests.leavetemporary" />
-                <propertyref prefix="solr.test.leavetmpdir" />
-                <propertyref prefix="solr.tests.use.numeric.points" />
-            </syspropertyset>
-
-            <!-- Pass randomized settings to the forked JVM. -->
-            <syspropertyset ignoreEmpty="true">
-                <propertyref prefix="tests.file.encoding" />
-                <mapper type="glob" from="tests.*" to="*" />
-            </syspropertyset>
-
-            <!-- Use static cached test balancing statistics. -->
-            <balancers>
-                <junit4:execution-times>
-                    <fileset dir="${tests.caches}"  includes="**/*.txt" />
-                </junit4:execution-times>
-            </balancers>            
-
-            <!-- Reporting listeners. -->
-            <listeners>
-                <!-- A simplified console output (maven-like). -->
-                <junit4:report-text
-                    showThrowable="true" 
-                    showStackTraces="true" 
-                    showOutput="${tests.showOutput}" 
-
-                    showStatusOk="${tests.showSuccess}"
-                    showStatusError="${tests.showError}"
-                    showStatusFailure="${tests.showFailure}"
-                    showStatusIgnored="${tests.showIgnored}"
-
-                    showSuiteSummary="${tests.showSuiteSummary}"
-
-                    useSimpleNames="${tests.useSimpleNames}"
-                    maxClassNameColumns="${tests.maxClassNameColumns}"
-                    
-                    timestamps="${tests.timestamps}"
-                    showNumFailures="${tests.showNumFailures}">
-
-                  <!-- Filter stack traces. The default set of filters is similar to Ant's (reflection, assertions, junit's own stuff). -->
-                  <junit4:filtertrace defaults="true" enabled="${tests.filterstacks}">
-                    <!-- Lucene-specific stack frames (test rules mostly). -->
-                    <containsstring contains="at com.carrotsearch.randomizedtesting.RandomizedRunner" />
-                    <containsstring contains="at org.apache.lucene.util.AbstractBeforeAfterRule" />
-                    <containsstring contains="at com.carrotsearch.randomizedtesting.rules." />
-                    <containsstring contains="at org.apache.lucene.util.TestRule" />
-                    <containsstring contains="at com.carrotsearch.randomizedtesting.rules.StatementAdapter" />
-                    <containsstring contains="at com.carrotsearch.randomizedtesting.ThreadLeakControl" />
-
-                    <!-- Add custom filters if you like. Lines that match these will be removed. -->
-                    <!--
-                    <containsstring contains=".." /> 
-                    <containsregex pattern="^(\s+at )(org\.junit\.)" /> 
-                    -->
-                  </junit4:filtertrace>                    
-                </junit4:report-text>
-
-                <!-- Emits full status for all tests, their relative order on forked JVMs. -->
-                <junit4:report-text
-                    file="@{junit.output.dir}/tests-report.txt"
-                    showThrowable="true" 
-                    showStackTraces="true" 
-                    showOutput="always"
-
-                    showStatusOk="true"
-                    showStatusError="true"
-                    showStatusFailure="true"
-                    showStatusIgnored="true"
-
-                    showSuiteSummary="true"
-                    timestamps="true"
-                />
-
-                <!-- Emits status on errors and failures only. -->
-                <junit4:report-text
-                    file="@{junit.output.dir}/tests-failures.txt"
-                    showThrowable="true" 
-                    showStackTraces="true" 
-                    showOutput="onerror" 
-
-                    showStatusOk="false"
-                    showStatusError="true"
-                    showStatusFailure="true"
-                    showStatusIgnored="false"
-
-                    showSuiteSummary="false"
-                    timestamps="true"
-                />
-
-                <!-- Emit information about test timings (could be used to determine
-                     the slowest tests or for reuse in balancing). -->
-                <junit4:report-execution-times file="${tests.cachedir}/${name}/timehints.txt" historyLength="20" />
-
-                <!-- ANT-compatible XMLs for jenkins records etc. -->
-                <junit4:report-ant-xml dir="@{junit.output.dir}" outputStreams="no" ignoreDuplicateSuites="true"/>
-
-                <!-- <junit4:report-json file="@{junit.output.dir}/tests-report-${ant.project.name}/index.html" outputStreams="no" /> -->
-
-            </listeners>
-
-            <!-- Input test classes. -->
-            <junit4:duplicate times="${tests.dups}">
-              <fileset dir="@{testsDir}">
-                <include name="**/Test*.class" />
-                <include name="**/*Test.class" />
-                <include name="${tests.explicitclass}" if="tests.explicitclass" />
-                <exclude name="**/*$*" />
-              </fileset>
-            </junit4:duplicate>
-        </junit4:junit4>
-
-        <!-- Append the number of non-ignored (actually executed) tests. -->
-        <echo file="${tests.totals.tmpfile}" append="true" encoding="UTF-8"># module: ${ant.project.name}&#x000a;${junit4.stats.nonIgnored}&#x000a;</echo>
-        
-        <fail message="Beasting executed no tests (a typo in the filter pattern maybe?)">
-          <condition>
-            <and>
-              <isset property="tests.isbeasting"/>
-              <equals arg1="${junit4.stats.nonIgnored}" arg2="0"/>
-            </and>
-          </condition>
-        </fail>
-
-        <!-- Report the 5 slowest tests from this run to the console. -->
-        <echo level="info">5 slowest tests:</echo>
-        <junit4:tophints max="5">
-          <file file="${tests.cachedir}/${name}/timehints.txt" />
-        </junit4:tophints>
-    </sequential>
-  </macrodef>
-
-  <target name="test-times" description="Show the slowest tests (averages)." depends="install-junit4-taskdef">
-    <property name="max" value="10" />
-    <echo>Showing ${max} slowest tests according to local stats. (change with -Dmax=...).</echo>
-    <junit4:tophints max="${max}">
-      <fileset dir="${tests.cachedir}" includes="**/*.txt" />
-    </junit4:tophints>
-
-    <echo>Showing ${max} slowest tests in cached stats. (change with -Dmax=...).</echo>
-    <junit4:tophints max="${max}">
-        <fileset dir="${common.dir}/tools/junit4">
-          <include name="*.txt" />
-        </fileset>
-    </junit4:tophints>
-  </target>
-
-  <target name="test-help" description="Help on 'ant test' syntax.">
-      <echo taskname="help">
-#
-# Test case filtering. --------------------------------------------
-#
-# - 'tests.class' is a class-filtering shell-like glob pattern,
-#   'testcase' is an alias of "tests.class=*.${testcase}"
-# - 'tests.method' is a method-filtering glob pattern.
-#   'testmethod' is an alias of "tests.method=${testmethod}*"
-#
-
-# Run a single test case (variants)
-ant test -Dtests.class=org.apache.lucene.package.ClassName
-ant test "-Dtests.class=*.ClassName"
-ant test -Dtestcase=ClassName
-
-# Run all tests in a package and sub-packages
-ant test "-Dtests.class=org.apache.lucene.package.Test*|org.apache.lucene.package.*Test"
-
-# Run any test methods that contain 'esi' (like: ...r*esi*ze...).
-ant test "-Dtests.method=*esi*"
-
-#
-# Seed and repetitions. -------------------------------------------
-#
-
-# Run with a given seed (seed is a hex-encoded long).
-ant test -Dtests.seed=DEADBEEF
-
-# Repeats _all_ tests of ClassName N times. Every test repetition
-# will have a different seed. NOTE: state is not reinitialized
-# between repetitions, so use only for idempotent tests.
-ant test -Dtests.iters=N -Dtestcase=ClassName
-
-# Repeats _all_ tests of ClassName N times. Every test repetition
-# will have exactly the same master (dead) and method-level (beef)
-# seed.
-ant test -Dtests.iters=N -Dtestcase=ClassName -Dtests.seed=dead:beef
-
-# Repeats a given test N times (note the filters - individual test
-# repetitions are given suffixes, i.e. testFoo[0], testFoo[1], etc.,
-# so using testmethod or tests.method ending in a glob is necessary
-# to ensure iterations are run).
-ant test -Dtests.iters=N -Dtestcase=ClassName -Dtestmethod=mytest
-ant test -Dtests.iters=N -Dtestcase=ClassName -Dtests.method=mytest*
-
-# Repeats N times but skips any tests after the first failure or M
-# initial failures.
-ant test -Dtests.iters=N -Dtests.failfast=yes -Dtestcase=...
-ant test -Dtests.iters=N -Dtests.maxfailures=M -Dtestcase=...
-
-# Repeats every suite (class) and any tests inside N times;
-# can be combined with -Dtestcase or -Dtests.iters, etc.
-# Can be used for running a single class on multiple JVMs
-# in parallel.
-ant test -Dtests.dups=N ...
-
-# Test beasting: Repeats every suite with same seed per class
-# (N times in parallel) and each test inside (M times). The whole
-# run is repeated (beasting) P times in a loop, with a different
-# master seed. You can combine beasting with any other parameter,
-# just replace "test" with "beast" and give -Dbeast.iters=P
-# (P >> 1).
-ant beast -Dtests.dups=N -Dtests.iters=M -Dbeast.iters=P \
-  -Dtestcase=ClassName
-
-#
-# Test groups. ----------------------------------------------------
-#
-# Test groups can be enabled or disabled (true/false). Default
-# values are shown below in [brackets].
-
-ant -Dtests.nightly=[false]   - nightly test group (@Nightly)
-ant -Dtests.weekly=[false]    - weekly tests (@Weekly)
-ant -Dtests.awaitsfix=[false] - known issue (@AwaitsFix)
-ant -Dtests.badapples=[true]  - flakey tests (@BadApple)
-ant -Dtests.slow=[true]       - slow tests (@Slow)
-
-# An alternative way to select just one (or more) groups of tests
-# is to use the -Dtests.filter property:
-
--Dtests.filter="@slow"
-
-# would run only slow tests. 'tests.filter' supports Boolean operators
-# 'and, or, not' and grouping, for example:
-
-ant -Dtests.filter="@nightly and not(@awaitsfix or @slow)"
-
-# would run nightly tests but not those also marked as awaiting a fix
-# or slow. Note that tests.filter, if present, takes priority over any
-# individual tests.* properties.
-
-
-#
-# Load balancing and caches. --------------------------------------
-#
-
-# Run sequentially (one slave JVM).
-ant -Dtests.jvms=1 test
-
-# Run with more slave JVMs than the default.
-# Don't count hypercores for CPU-intensive tests.
-# Make sure there is enough RAM to handle child JVMs.
-ant -Dtests.jvms=8 test
-
-# Use repeatable suite order on slave JVMs (disables job stealing).
-ant -Dtests.dynamicAssignmentRatio=0 test
-
-# Update global (versioned!) execution times cache (top level).
-ant clean test
-ant -f lucene/build.xml test-updatecache
-
-#
-# Miscellaneous. --------------------------------------------------
-#
-
-# Only run test(s), non-recursively. Faster than "ant test".
-# WARNING: Skips jar download and compilation. Clover not supported.
-ant test-nocompile
-ant -Dtestcase=... test-nocompile
-
-# Run all tests without stopping on errors (inspect log files!).
-ant -Dtests.haltonfailure=false test
-
-# Run more verbose output (slave JVM parameters, etc.).
-ant -verbose test
-
-# Include additional information like what is printed to 
-# sysout/syserr, even if the test passes.
-# Enabled automatically when running for a single test case.
-ant -Dtests.showSuccess=true test
-
-# Change the default suite timeout to 5 seconds.
-ant -Dtests.timeoutSuite=5000! ...
-
-# Display local averaged stats, if any (30 slowest tests).
-ant test-times -Dmax=30
-
-# Display a timestamp alongside each suite/ test.
-ant -Dtests.timestamps=on ...
-
-# Override forked JVM file.encoding
-ant -Dtests.file.encoding=XXX ...
-
-# Don't remove any temporary files under slave directories, even if
-# the test passes (any of the following props):
-ant -Dtests.leaveTemporary=true
-ant -Dtests.leavetmpdir=true
-ant -Dsolr.test.leavetmpdir=true
-
-# Do *not* filter stack traces emitted to the console.
-ant -Dtests.filterstacks=false
-
-# Skip the check that each module executed at least one test.
-ant -Dtests.ifNoTests=ignore ...
-
-# Output test files and reports.
-${tests-output}/tests-report.txt    - full ASCII tests report
-${tests-output}/tests-failures.txt  - failures only (if any)
-${tests-output}/tests-report-*      - HTML5 report with results
-${tests-output}/junit4-*.suites     - per-JVM executed suites
-                                      (important if job stealing).
-      </echo>
-  </target>
-
-  <target name="install-junit4-taskdef" depends="ivy-configure">
-    <!-- JUnit4 taskdef. -->
-    <ivy:cachepath organisation="com.carrotsearch.randomizedtesting" module="junit4-ant" revision="${/com.carrotsearch.randomizedtesting/junit4-ant}"
-                   type="jar" inline="true" log="download-only" pathid="path.junit4" />
-
-    <taskdef uri="antlib:com.carrotsearch.junit4">
-      <classpath refid="path.junit4" />
-    </taskdef>
-  </target>
-
-  <target name="test" depends="clover,compile-test,install-junit4-taskdef,validate,-init-totals,-test,-check-totals" description="Runs unit tests"/>
-  <target name="beast" depends="install-ant-contrib,clover,compile-test,install-junit4-taskdef,validate,-init-totals,-beast,-check-totals" description="Runs unit tests in a loop (-Dbeast.iters=n)"/>
-
-  <target name="test-nocompile" depends="-clover.disable,install-junit4-taskdef,-init-totals,-test,-check-totals"
-          description="Only runs unit tests.  Jars are not downloaded; compilation is not updated; and Clover is not enabled."/>
-
-  <target name="-jacoco-install">
-    <!-- download jacoco from ivy if needed -->
-    <ivy:cachepath organisation="org.jacoco" module="org.jacoco.ant" type="jar" inline="true" revision="0.8.3"
-                   log="download-only" pathid="jacoco.classpath" />
-
-    <!-- install jacoco ant tasks -->
-    <taskdef uri="antlib:org.jacoco.ant" resource="org/jacoco/ant/antlib.xml">
-        <classpath refid="jacoco.classpath"/>
-    </taskdef>
-  </target>
-
-  <target name="-jacoco-test" depends="clover,compile-test,install-junit4-taskdef,validate,-init-totals">
-    <!-- hack: the ant task computes an absolute path, but we need a relative one, so it's resolved per test runner -->
-    <jacoco:agent property="agentvmparam.raw"/>
-    <property name="agentvmparam" value="${agentvmparam.raw}destfile=jacoco.db,append=false"/>
-  
-    <!-- create output dir if needed -->
-    <mkdir dir="${junit.output.dir}"/>
-
-    <!-- run tests, with agent vm args, and keep runner files around -->
-    <test-macro threadNum="${tests.jvms.override}" additional.vm.args="${agentvmparam}" runner.leaveTemporary="true"/>
-  </target>
-
-  <target name="-jacoco-report" depends="-check-totals">
-    <property name="jacoco.output.dir" location="${jacoco.report.dir}/${name}"/>
-    <!-- try to clean output dir to prevent any confusion -->
-    <delete dir="${jacoco.output.dir}" failonerror="false"/>
-    <mkdir dir="${jacoco.output.dir}"/>
-
-    <!-- print jacoco reports -->
-    <jacoco:report>
-      <executiondata>
-        <fileset dir="${junit.output.dir}" includes="**/jacoco.db"/>
-      </executiondata>
-      <structure name="${final.name} JaCoCo coverage report">
-        <classfiles>
-          <fileset dir="${build.dir}/classes/java"/>
-        </classfiles>
-        <sourcefiles>
-          <fileset dir="${src.dir}"/>
-        </sourcefiles>
-      </structure>
-      <html destdir="${jacoco.output.dir}" footer="Copyright ${year} Apache Software Foundation.  All Rights Reserved."/>
-    </jacoco:report>
-  </target>
-
-  <target name="jacoco" depends="-jacoco-install,-jacoco-test,-jacoco-report" description="Generates JaCoCo coverage report"/>
-
-  <!-- Run the actual tests (must be wrapped with -init-totals, -check-totals) -->
-  <target name="-test">
-    <mkdir dir="${junit.output.dir}"/>
-    <test-macro threadNum="${tests.jvms.override}" />
-  </target>
-
-  <!-- Beast the actual tests (must be wrapped with -init-totals, -check-totals) -->
-  <target name="-beast" depends="resolve-groovy,install-ant-contrib">
-    <fail message="The Beast only works inside of individual modules (where 'junit.classpath' is defined)">
-      <condition>
-        <not><isreference refid="junit.classpath"/></not>
-      </condition>
-    </fail>
-    <taskdef name="antcallback" classname="net.sf.antcontrib.logic.AntCallBack" classpathref="ant-contrib.classpath"/>
-    <groovy classpathref="ant-contrib.classpath" taskname="beaster" src="${common.dir}/tools/src/groovy/run-beaster.groovy"/>
-    <fail message="Fail baby fail" if="groovy.error"/>
-  </target>
-
-  <target name="-check-totals" if="tests.totals.toplevel" depends="resolve-groovy">
-    <!-- We are concluding a test pass at the outermost level. Sum up all executed tests. -->
-    <groovy><![CDATA[
-      import org.apache.tools.ant.BuildException;
-      
-      total = 0;
-      statsFile = new File(properties["tests.totals.tmpfile"]);
-      statsFile.eachLine("UTF-8", { line ->
-        if (line ==~ /^[0-9]+/) {
-          total += Integer.valueOf(line);
-        }
-      });
-      statsFile.delete();
-
-      if (total == 0 && !"ignore".equals(project.getProperty("tests.ifNoTests"))) {
-        throw new BuildException("Not even a single test was executed (a typo in the filter pattern maybe?).");
-      }
-
-      // Interesting but let's keep the build output quiet.
-      // task.log("Grand total of all executed tests (including sub-modules): " + total);
-    ]]></groovy>
-  </target>
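-  <!-- Illustrative (module name hypothetical): test-macro appends entries such as
-         # module: lucene-core
-         1234
-       to the totals tmpfile; the groovy snippet above sums the purely numeric lines. -->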
-
-  <!-- The groovy dependency is wanted: this is done early, before any test or any other submodule is run, to prevent permgen errors! -->
-  <target name="-init-totals" unless="tests.totals.tmpfile" depends="resolve-groovy">
-    <mkdir dir="${build.dir}" />
-    <tempfile property="tests.totals.tmpfile"
-              destdir="${build.dir}"
-              prefix=".test-totals-"
-              suffix=".tmp"
-              deleteonexit="true"
-              createfile="true" />
-    <property name="tests.totals.toplevel" value="true" />
-  </target>
-
-  <!--
-   See http://issues.apache.org/jira/browse/LUCENE-721
-   -->
-  <target name="clover" depends="-clover.disable,-clover.load,-clover.classpath,-clover.setup"/>
-  
-  <target name="-clover.load" depends="ivy-availability-check,ivy-configure" if="run.clover" unless="clover.loaded">
-    <fail>TODO: Code coverage with OpenClover does not yet work with Java 11 - see https://issues.apache.org/jira/browse/LUCENE-8763</fail>
-    <echo>Code coverage with OpenClover enabled.</echo>
-    <ivy:cachepath organisation="org.openclover" module="clover" revision="4.2.1"
-      inline="true" conf="master" pathid="clover.classpath"/>
-    <taskdef resource="cloverlib.xml" classpathref="clover.classpath" />
-    <mkdir dir="${clover.db.dir}"/>
-    <!-- This is a hack: instead of setting "clover.loaded" to "true", we set it
-     to the stringified classpath, so it can be passed down to subants
-     and reloaded by the "-clover.classpath" task (see below): -->
-    <pathconvert property="clover.loaded" refid="clover.classpath"/>
-  </target>
-  
-  <target name="-clover.classpath" if="run.clover">
-    <!-- redefine the clover classpath refid for tests by using the hack above: -->
-    <path id="clover.classpath" path="${clover.loaded}"/>
-  </target>
-
-  <target name="-clover.setup" if="run.clover">
-    <clover-setup initString="${clover.db.dir}/coverage.db" encoding="${build.encoding}">
-      <fileset dir="${src.dir}" erroronmissingdir="no"/>
-      <testsources dir="${tests.src.dir}" erroronmissingdir="no">
-        <exclude name="**/TestOpenNLP*Factory.java"/><!-- https://bitbucket.org/openclover/clover/issues/59 -->
-      </testsources>
-    </clover-setup>
-  </target>
-
-  <target name="-clover.disable" unless="run.clover">
-    <!-- define dummy clover path used by junit -->
-    <path id="clover.classpath"/>
-  </target>
-
-  <target name="pitest" if="run.pitest" depends="compile-test,install-junit4-taskdef,clover,validate"
-      description="Run Unit tests using pitest mutation testing. To use, specify -Drun.pitest=true on the command line.">
-    <echo>Code coverage with pitest enabled.</echo>
-    <ivy:cachepath
-        organisation="org.pitest" module="pitest-ant"
-        inline="true"
-        pathid="pitest.framework.classpath" />
-    <pitest-macro />
-  </target>
-
-  <target name="generate-test-reports" description="Generates test reports">
-    <mkdir dir="${junit.reports}"/>
-    <junitreport todir="${junit.output.dir}">
-      <!-- this fileset lets the task work for individual modules,
-           as well as the project as a whole
-       -->
-      <fileset dir="${build.dir}">
-        <include name="**/test/TEST-*.xml"/>
-      </fileset>
-      <report format="frames" todir="${junit.reports}"/>
-    </junitreport>
-  </target>
-
-  <target name="jar" depends="jar-core">
-    <!-- convenience target to package core JAR -->
-  </target>
-
-  <target name="jar-src">
-    <sequential>
-      <mkdir dir="${build.dir}" />
-      <jarify basedir="${src.dir}" destfile="${build.dir}/${final.name}-src.jar">
-        <filesets>
-          <fileset dir="${resources.dir}" erroronmissingdir="no"/>
-        </filesets>
-      </jarify>
-    </sequential>
-  </target>
-
-  <target name="default" depends="jar-core"/>
-
-  <available type="file" file="pom.xml" property="pom.xml.present"/>
-
-  <!-- TODO: it is really unintuitive that we depend on a target which does not exist -->
-  <target name="javadocs">
-    <fail message="You must redefine the javadocs task to do something!!!!!"/>
-  </target>
-
-  <target name="install-maven-tasks" unless="maven-tasks.uptodate" depends="ivy-availability-check,ivy-configure">
-    <property name="maven-tasks.uptodate" value="true"/>
-    <ivy:cachepath organisation="org.apache.maven" module="maven-ant-tasks" revision="2.1.3"
-             inline="true" conf="master" type="jar" pathid="maven-ant-tasks.classpath"/>
-    <taskdef resource="org/apache/maven/artifact/ant/antlib.xml" 
-             uri="antlib:org.apache.maven.artifact.ant" 
-             classpathref="maven-ant-tasks.classpath"/>
-  </target>
-
-  <target name="-dist-maven" depends="install-maven-tasks, jar-src, javadocs">
-    <sequential>
-      <property name="top.level.dir" location="${common.dir}/.."/>
-      <pathconvert property="pom.xml">
-        <mapper>
-          <chainedmapper>
-            <globmapper from="${top.level.dir}*" to="${filtered.pom.templates.dir}*"/>
-            <globmapper from="*build.xml" to="*pom.xml"/>
-          </chainedmapper>
-        </mapper>
-        <path location="${ant.file}"/>
-      </pathconvert>
-      <m2-deploy pom.xml="${pom.xml}">
-        <artifact-attachments>
-          <attach file="${build.dir}/${final.name}-src.jar"
-                  classifier="sources"/>
-          <attach file="${build.dir}/${final.name}-javadoc.jar"
-                  classifier="javadoc"/>
-        </artifact-attachments>
-      </m2-deploy>
-    </sequential>
-  </target>
-
-  <target name="-install-to-maven-local-repo" depends="install-maven-tasks">
-    <sequential>
-      <property name="top.level.dir" location="${common.dir}/.."/>
-      <pathconvert property="pom.xml">
-        <mapper>
-          <chainedmapper>
-            <globmapper from="${top.level.dir}*" to="${filtered.pom.templates.dir}*"/>
-            <globmapper from="*build.xml" to="*pom.xml"/>
-          </chainedmapper>
-        </mapper>
-        <path location="${ant.file}"/>
-      </pathconvert>
-      <artifact:pom id="maven.project" file="${pom.xml}"/>
-      <artifact:install file="${dist.jar.dir.prefix}-${version}/${dist.jar.dir.suffix}/${final.name}.jar">
-        <pom refid="maven.project"/>
-      </artifact:install>
-    </sequential>
-  </target>
-
-  <target name="-install-src-java-to-maven-local-repo" depends="install-maven-tasks">
-    <sequential>
-      <property name="top.level.dir" location="${common.dir}/.."/>
-      <pathconvert property="pom.xml">
-        <mapper>
-          <chainedmapper>
-            <globmapper from="${top.level.dir}*" to="${filtered.pom.templates.dir}*"/>
-            <globmapper from="*build.xml" to="*/src/java/pom.xml"/>
-          </chainedmapper>
-        </mapper>
-        <path location="${ant.file}"/>
-      </pathconvert>
-      <artifact:pom id="maven.project" file="${pom.xml}"/>
-      <artifact:install file="${dist.jar.dir.prefix}-${version}/${dist.jar.dir.suffix}/${final.name}.jar">
-        <pom refid="maven.project"/>
-      </artifact:install>
-    </sequential>
-  </target>
-
-  <target name="-dist-maven-src-java" depends="install-maven-tasks, jar-src, javadocs">
-    <sequential>
-      <property name="top.level.dir" location="${common.dir}/.."/>
-      <pathconvert property="pom.xml">
-        <mapper>
-          <chainedmapper>
-            <globmapper from="${top.level.dir}*" to="${filtered.pom.templates.dir}*"/>
-            <globmapper from="*build.xml" to="*/src/java/pom.xml"/>
-          </chainedmapper>
-        </mapper>
-        <path location="${ant.file}"/>
-      </pathconvert>
-      <m2-deploy pom.xml="${pom.xml}">
-        <artifact-attachments>
-          <attach file="${build.dir}/${final.name}-src.jar"
-                  classifier="sources"/>
-          <attach file="${build.dir}/${final.name}-javadoc.jar"
-                  classifier="javadoc"/>
-        </artifact-attachments>
-      </m2-deploy>
-    </sequential>
-  </target>
-  
-  <target name="-validate-maven-dependencies.init">
-    <!-- finds the correct pom.xml path and assigns it to the property maven.pom.xml -->
-    <property name="top.level.dir" location="${common.dir}/.."/>
-    <pathconvert property="maven.pom.xml">
-      <mapper>
-        <chainedmapper>
-          <globmapper from="${top.level.dir}*" to="${filtered.pom.templates.dir}*"/>
-          <globmapper from="*build.xml" to="*pom.xml"/>
-        </chainedmapper>
-      </mapper>
-      <path location="${ant.file}"/>
-    </pathconvert>
-    
-    <!-- convert ${version} to be a glob pattern, so snapshot versions are allowed: -->
-    <loadresource property="maven.version.glob">
-      <propertyresource name="version"/>
-      <filterchain>
-        <tokenfilter>
-          <filetokenizer/>
-          <replacestring from="-SNAPSHOT" to="-*"/>
-        </tokenfilter>
-      </filterchain>
-    </loadresource>
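-    <!-- Illustrative example (version value hypothetical): version=9.0.0-SNAPSHOT yields
-         maven.version.glob=9.0.0-*, so timestamped snapshot jars also match the excludes
-         pattern below. -->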
-  </target>
-  
-  <target name="-validate-maven-dependencies" depends="-validate-maven-dependencies.init">
-    <m2-validate-dependencies pom.xml="${maven.pom.xml}" licenseDirectory="${license.dir}">
-      <additional-filters>
-        <replaceregex pattern="jetty([^/]+)$" replace="jetty" flags="gi" />
-        <replaceregex pattern="slf4j-([^/]+)$" replace="slf4j" flags="gi" />
-        <replaceregex pattern="javax\.servlet([^/]+)$" replace="javax.servlet" flags="gi" />
-      </additional-filters>
-      <excludes>
-        <rsel:name name="**/lucene-*-${maven.version.glob}.jar" handledirsep="true"/>
-      </excludes>
-    </m2-validate-dependencies>
-  </target>
-  
-  <property name="module.dependencies.properties.file" location="${common.build.dir}/module.dependencies.properties"/>
-
-  <target name="-append-module-dependencies-properties">
-    <sequential>
-      <property name="top.level.dir" location="${common.dir}/.."/>
-      <pathconvert property="classpath.list" pathsep="," dirsep="/" setonempty="true">
-        <path refid="classpath"/>
-        <globmapper from="${top.level.dir}/*" to="*" handledirsep="true"/>
-      </pathconvert>
-      <pathconvert property="test.classpath.list" pathsep="," dirsep="/" setonempty="true">
-        <path refid="test.classpath"/>
-        <globmapper from="${top.level.dir}/*" to="*" handledirsep="true"/>
-      </pathconvert>
-      <echo append="true" file="${module.dependencies.properties.file}">
-${ant.project.name}.dependencies=${classpath.list}
-${ant.project.name}.test.dependencies=${test.classpath.list}
-      </echo>
-    </sequential>
-  </target>
-
-  <property name="maven.dependencies.filters.file" location="${common.build.dir}/maven.dependencies.filters.properties"/>
-
-  <target name="-get-maven-dependencies" depends="compile-tools,load-custom-tasks">
-    <ant dir="${common.dir}/.." target="-append-all-modules-dependencies-properties" inheritall="false"/>
-    <get-maven-dependencies-macro
-        dir="${common.dir}/.."
-        centralized.versions.file="${common.dir}/ivy-versions.properties"
-        module.dependencies.properties.file="${module.dependencies.properties.file}"
-        maven.dependencies.filters.file="${maven.dependencies.filters.file}"/>
-  </target>
-
-  <target name="-get-maven-poms" depends="-get-maven-dependencies">
-    <property name="maven-build-dir" location="${common.dir}/../maven-build"/>
-    <copy todir="${maven-build-dir}" overwrite="true" encoding="UTF-8">
-      <fileset dir="${common.dir}/../dev-tools/maven"/>
-      <filterset begintoken="@" endtoken="@">
-        <filter token="version" value="${version}"/>
-        <filter token="version.base" value="${version.base}"/>
-        <filter token="spec.version" value="${spec.version}"/>
-      </filterset>
-      <filterset>
-        <filtersfile file="${maven.dependencies.filters.file}"/>
-      </filterset>
-      <globmapper from="*.template" to="*"/>
-    </copy>
-  </target>
-
-  <target name="-filter-pom-templates" depends="-get-maven-dependencies">
-    <mkdir dir="${filtered.pom.templates.dir}"/>
-    <copy todir="${common.dir}/build/poms" overwrite="true" encoding="UTF-8" filtering="on">
-      <fileset dir="${common.dir}/../dev-tools/maven"/>
-      <filterset begintoken="@" endtoken="@">
-        <filter token="version" value="${version}"/>
-      </filterset>
-      <filterset>
-        <filtersfile file="${maven.dependencies.filters.file}"/>
-      </filterset>
-      <globmapper from="*.template" to="*"/>
-    </copy>
-  </target>
-
-  <target name="stage-maven-artifacts">
-    <sequential>
-      <property name="output.build.xml" location="${build.dir}/stage_maven_build.xml"/>
-      <property name="dev-tools.scripts.dir" value="${common.dir}/../dev-tools/scripts"/>
-      <fail message="maven.dist.dir '${maven.dist.dir}' does not exist!">
-        <condition>
-          <not>
-            <available file="${maven.dist.dir}" type="dir"/>
-          </not>
-        </condition>
-      </fail>
-      <exec dir="." executable="${perl.exe}" failonerror="false" outputproperty="stage.maven.script.output"
-        resultproperty="stage.maven.script.success">
-        <arg value="-CSD"/>
-        <arg value="${dev-tools.scripts.dir}/write.stage.maven.build.xml.pl"/>
-        <arg value="${maven.dist.dir}"/>              <!-- Maven distribution artifacts directory -->
-        <arg value="${output.build.xml}"/>            <!-- Ant build file to be written -->
-        <arg value="${common.dir}/common-build.xml"/> <!-- Imported from the ant file to be written -->
-        <arg value="${m2.credentials.prompt}"/>
-        <arg value="${m2.repository.id}"/>
-      </exec>
-      <echo message="${stage.maven.script.output}"/>
-      <fail message="maven stage script failed!">
-        <condition>
-          <not>
-            <equals arg1="${stage.maven.script.success}" arg2="0"/>
-          </not>
-        </condition>
-      </fail>
-    </sequential>
-    <echo>Invoking target stage-maven in ${output.build.xml} now...</echo>
-    <ant target="stage-maven" antfile="${output.build.xml}" inheritall="false">
-      <property name="m2.repository.id" value="${m2.repository.id}"/>
-      <property name="m2.repository.url" value="${m2.repository.url}"/>
-    </ant>
-  </target>
-
-  <target name="rat-sources-typedef" unless="rat.loaded" depends="ivy-availability-check,ivy-configure">
-    <ivy:cachepath organisation="org.apache.rat" module="apache-rat" revision="0.11" transitive="false" inline="true" conf="master" type="jar" pathid="rat.classpath"/>
-    <typedef resource="org/apache/rat/anttasks/antlib.xml" uri="antlib:org.apache.rat.anttasks" classpathref="rat.classpath"/>
-    <property name="rat.loaded" value="true"/>
-  </target>
-
-  <target name="rat-sources" depends="rat-sources-typedef"
-    description="runs the tasks over source and test files">
-    <!-- create a temp file for the log to go to -->
-    <tempfile property="rat.sources.logfile"
-              prefix="rat"
-              destdir="${java.io.tmpdir}"/>
-    <!-- run rat, going to the file -->
-    <rat:report xmlns:rat="antlib:org.apache.rat.anttasks" 
-                reportFile="${rat.sources.logfile}" addDefaultLicenseMatchers="true">
-      <fileset dir="." includes="*.xml ${rat.additional-includes}" excludes="${rat.additional-excludes}"/>
-      <fileset dir="${src.dir}" excludes="${rat.excludes}" erroronmissingdir="false"/>
-      <fileset dir="${tests.src.dir}" excludes="${rat.excludes}" erroronmissingdir="false"/>
-
-      <!-- TODO: Check all resource files. Currently not all stopword and similar files have no header! -->
-      <fileset dir="${resources.dir}" includes="META-INF/**" erroronmissingdir="false"/>
-      
-      <!-- BSD 4-clause stuff (is disallowed below) -->
-      <rat:substringMatcher licenseFamilyCategory="BSD4 "
-             licenseFamilyName="Original BSD License (with advertising clause)">
-        <pattern substring="All advertising materials"/>
-      </rat:substringMatcher>
-
-      <!-- BSD-like stuff -->
-      <rat:substringMatcher licenseFamilyCategory="BSD  "
-             licenseFamilyName="Modified BSD License">
-      <!-- brics automaton -->
-        <pattern substring="Copyright (c) 2001-2009 Anders Moeller"/>
-      <!-- snowball -->
-        <pattern substring="Copyright (c) 2001, Dr Martin Porter"/>
-      <!-- UMASS kstem -->
-        <pattern substring="THIS SOFTWARE IS PROVIDED BY UNIVERSITY OF MASSACHUSETTS AND OTHER CONTRIBUTORS"/>
-      <!-- Egothor -->
-        <pattern substring="Egothor Software License version 1.00"/>
-      <!-- JaSpell -->
-        <pattern substring="Copyright (c) 2005 Bruno Martins"/>
-      <!-- d3.js -->
-        <pattern substring="THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS"/>
-      <!-- highlight.js -->
-        <pattern substring="THIS SOFTWARE IS PROVIDED BY THE REGENTS AND CONTRIBUTORS"/>
-      </rat:substringMatcher>
-
-      <!-- MIT-like -->
-      <rat:substringMatcher licenseFamilyCategory="MIT  "
-             licenseFamilyName="The MIT License">
-      <!-- ICU license -->
-        <pattern substring="Permission is hereby granted, free of charge, to any person obtaining a copy"/>
-      </rat:substringMatcher>
-
-      <!-- Apache -->
-      <rat:substringMatcher licenseFamilyCategory="AL   "
-             licenseFamilyName="Apache">
-        <pattern substring="Licensed to the Apache Software Foundation (ASF) under"/>
-        <!-- this is the old-school one under some files -->
-        <pattern substring="Licensed under the Apache License, Version 2.0 (the &quot;License&quot;)"/>
-      </rat:substringMatcher>
-
-      <rat:substringMatcher licenseFamilyCategory="GEN  "
-             licenseFamilyName="Generated">
-      <!-- svg files generated by gnuplot -->
-        <pattern substring="Produced by GNUPLOT"/>
-      <!-- snowball stemmers generated by snowball compiler -->
-        <pattern substring="Generated by Snowball"/>
-      <!-- parsers generated by antlr -->
-        <pattern substring="ANTLR GENERATED CODE"/>
-      </rat:substringMatcher>
-
-      <!-- built in approved licenses -->
-      <rat:approvedLicense familyName="Apache"/>
-      <rat:approvedLicense familyName="The MIT License"/>
-      <rat:approvedLicense familyName="Modified BSD License"/>
-      <rat:approvedLicense familyName="Generated"/>
-    </rat:report>
-    <!-- now print the output, for review -->
-    <loadfile property="rat.output" srcFile="${rat.sources.logfile}"/>
-    <echo taskname="rat">${rat.output}</echo>
-    <delete>
-      <fileset file="${rat.sources.logfile}">
-        <and>
-          <containsregexp expression="^0 Unknown Licenses"/>
-          <not>
-            <containsregexp expression="^\s+!"/>
-          </not>
-        </and>
-      </fileset>
-    </delete>
-    <!-- fail if we didnt find the pattern -->
-    <fail message="Rat problems were found!">
-      <condition>
-        <available file="${rat.sources.logfile}"/>
-      </condition>
-    </fail>
-  </target>
-
-  <!--+
-      | M A C R O S
-      +-->
-  <macrodef name="compile">
-    <attribute name="srcdir"/>
-    <attribute name="destdir"/>
-    <attribute name="javac.release" default="${javac.release}"/>
-    <attribute name="includeantruntime" default="${javac.includeAntRuntime}" />
-
-    <element name="nested" implicit="yes" optional="yes"/>
-
-    <sequential>
-      <mkdir dir="@{destdir}"/>
-      <javac
-        includeAntRuntime="@{includeantruntime}"
-        encoding="${build.encoding}"
-        srcdir="@{srcdir}"
-        destdir="@{destdir}"
-        deprecation="${javac.deprecation}"
-        debug="${javac.debug}">
-        <nested/>
-        <!-- <compilerarg line="-Xmaxwarns 10000000"/>
-        <compilerarg line="-Xmaxerrs 10000000"/> -->
-        <compilerarg line="${javac.args}"/>
-        <compilerarg line="--release @{javac.release}"/>
-        <compilerarg line="${javac.doclint.args}"/>
-      </javac>
-    </sequential>
-  </macrodef>
-
-  <!-- ECJ Javadoc linting: -->
-  
-  <condition property="ecj-javadoc-lint.supported">
-    <or>
-      <equals arg1="${java.specification.version}" arg2="11"/>
-      <equals arg1="${java.specification.version}" arg2="12"/>
-      <equals arg1="${java.specification.version}" arg2="13"/>
-    </or>
-  </condition>
-
-  <condition property="ecj-javadoc-lint-tests.supported">
-    <and>
-      <isset property="ecj-javadoc-lint.supported"/>
-      <isset property="module.has.tests"/>
-    </and>
-  </condition>
-
-  <target name="-ecj-javadoc-lint-unsupported" unless="ecj-javadoc-lint.supported">
-    <fail message="Linting documentation with ECJ is not supported on this Java version (${java.specification.version}).">
-      <condition>
-        <not><isset property="is.jenkins.build"/></not>
-      </condition>
-    </fail>
-    <echo level="warning" message="WARN: Linting documentation with ECJ is not supported on this Java version (${java.specification.version}). NOTHING DONE!"/>
-  </target>
-
-  <target name="-ecj-javadoc-lint" depends="-ecj-javadoc-lint-unsupported,-ecj-javadoc-lint-src,-ecj-javadoc-lint-tests"/>
-
-  <target name="-ecj-javadoc-lint-src" depends="-ecj-resolve" if="ecj-javadoc-lint.supported">
-    <ecj-macro srcdir="${src.dir}" configuration="${common.dir}/tools/javadoc/ecj.javadocs.prefs">
-      <classpath refid="classpath"/>
-    </ecj-macro>
-  </target>
-
-  <target name="-ecj-javadoc-lint-tests" depends="-ecj-resolve" if="ecj-javadoc-lint-tests.supported">
-    <ecj-macro srcdir="${tests.src.dir}" configuration="${common.dir}/tools/javadoc/ecj.javadocs.prefs">
-      <classpath refid="test.classpath"/>
-    </ecj-macro>
-  </target>
-  
-  <target name="-ecj-resolve" unless="ecj.loaded" depends="ivy-availability-check,ivy-configure" if="ecj-javadoc-lint.supported">
-    <ivy:cachepath organisation="org.eclipse.jdt" module="ecj" revision="3.19.0"
-     inline="true" conf="master" type="jar" pathid="ecj.classpath" />
-    <componentdef classname="org.eclipse.jdt.core.JDTCompilerAdapter"
-     classpathref="ecj.classpath" name="ecj-component"/>
-    <property name="ecj.loaded" value="true"/>
-  </target>
-
-  <macrodef name="ecj-macro">
-    <attribute name="srcdir"/>
-    <attribute name="javac.release" default="${javac.release}"/>
-    <attribute name="includeantruntime" default="${javac.includeAntRuntime}" />
-    <attribute name="configuration"/>
-
-    <element name="nested" implicit="yes" optional="yes"/>
-
-    <sequential>
-      <!-- hack: we can tell ECJ not to create classfiles, but it still creates
-           package-info.class files. so redirect output to a tempdir -->
-      <tempfile property="ecj.trash.out" destdir="${java.io.tmpdir}" prefix="ecj"/>
-      <mkdir dir="${ecj.trash.out}"/>
-      <javac
-        includeAntRuntime="@{includeantruntime}"
-        encoding="${build.encoding}"
-        srcdir="@{srcdir}"
-        destdir="${ecj.trash.out}"
-        source="@{javac.release}"
-        target="@{javac.release}"
-        taskname="ecj-lint">
-        <ecj-component/>
-        <nested/>
-      <!-- hack: we can't disable classfile creation right now, because we need
-           to specify a destination for buggy package-info.class files
-        <compilerarg value="-d"/>
-        <compilerarg value="none"/> -->
-        <compilerarg value="-enableJavadoc"/>
-        <compilerarg value="-properties"/>
-        <compilerarg value="@{configuration}"/>
-      </javac>
-      <delete dir="${ecj.trash.out}"/>
-    </sequential>
-  </macrodef>
-
-  <property name="failonjavadocwarning" value="true"/>
-  <macrodef name="invoke-javadoc">
-    <element name="sources" optional="yes"/>
-    <attribute name="destdir"/>
-    <attribute name="title" default="${Name} ${version} API"/>
-    <attribute name="overview" default="${src.dir}/overview.html"/>
-    <attribute name="linksource" default="no"/>
-    <sequential>
-      <antcall target="download-java11-javadoc-packagelist"/>
-      <delete file="@{destdir}/stylesheet.css" failonerror="false"/>
-      <delete file="@{destdir}/script.js" failonerror="false"/>
-      <record name="@{destdir}/log_javadoc.txt" action="start" append="no"/>
-      <javadoc
-          overview="@{overview}"
-          packagenames="org.apache.lucene.*,org.apache.solr.*"
-          destdir="@{destdir}"
-          access="${javadoc.access}"
-          encoding="${build.encoding}"
-          charset="${javadoc.charset}"
-          docencoding="${javadoc.charset}"
-          noindex="${javadoc.noindex}"
-          includenosourcepackages="true"
-          author="true"
-          version="true"
-          linksource="@{linksource}"
-          use="true"
-          failonerror="true"
-          locale="en_US"
-          windowtitle="${Name} ${version} API"
-          doctitle="@{title}"
-          maxmemory="${javadoc.maxmemory}">
-        <tag name="lucene.experimental" 
-          description="WARNING: This API is experimental and might change in incompatible ways in the next release."/>
-        <tag name="lucene.internal"
-        description="NOTE: This API is for internal purposes only and might change in incompatible ways in the next release."/>
-        <tag name="lucene.spi"
-        description="SPI Name (Note: This is case-insensitive. e.g., if the name is 'htmlStrip', 'htmlstrip' can be used when looking up the service):" scope="types"/>
-        <link offline="true" packagelistLoc="${javadoc.dir}"/>
-        <link offline="true" href="${javadoc.link}" packagelistLoc="${javadoc.packagelist.dir}/java11"/>
-        <bottom><![CDATA[
-          <i>Copyright &copy; ${year} Apache Software Foundation.  All Rights Reserved.</i>
-        ]]></bottom>
-        
-        <sources />
-                
-        <classpath refid="javadoc.classpath"/>
-        <arg line="${javadoc.nomodule.args} --release ${javac.release}"/>
-        <arg line="${javadoc.doclint.args}"/>
-        <!-- force locale to be "en_US", as Javadoc tool ignores locale parameter (in some cases) -->
-        <arg line="-J-Duser.language=en -J-Duser.country=US"/>
-      </javadoc>
-      <record name="@{destdir}/log_javadoc.txt" action="stop"/>
-      
-      <!-- append some special table css -->
-      <concat destfile="@{destdir}/stylesheet.css" append="true" fixlastline="true" encoding="UTF-8">
-        <filelist dir="${common.dir}/tools/javadoc" files="table_padding.css"/>
-      </concat>
-      <!-- append prettify to scripts and css -->
-      <concat destfile="@{destdir}/stylesheet.css" append="true" fixlastline="true" encoding="UTF-8">
-        <filelist dir="${prettify.dir}" files="prettify.css"/>
-      </concat>
-      <concat destfile="@{destdir}/script.js" append="true" fixlastline="true" encoding="UTF-8">
-        <filelist dir="${prettify.dir}" files="prettify.js inject-javadocs.js"/>
-      </concat>
-      <fixcrlf srcdir="@{destdir}" includes="stylesheet.css script.js" eol="lf" fixlast="true" encoding="UTF-8" />
-
-      <delete>
-        <fileset file="@{destdir}/log_javadoc.txt">
-          <not>
-            <containsregexp expression="\[javadoc\]\s*[1-9][0-9]*\s*warning"/>
-          </not>
-        </fileset>
-      </delete>
-
-      <fail message="Javadocs warnings were found!">
-        <condition>
-          <and>
-            <available file="@{destdir}/log_javadoc.txt"/>
-            <istrue value="${failonjavadocwarning}"/>
-          </and>
-        </condition>
-      </fail>
-   </sequential>
-  </macrodef>
-
-  <target name="check-javadocs-uptodate">
-    <uptodate property="javadocs-uptodate-${name}" targetfile="${build.dir}/${final.name}-javadoc.jar">
-      <srcfiles dir="${src.dir}">
-        <include name="**/*.java"/>
-        <include name="**/*.html"/>
-      </srcfiles>
-    </uptodate>
-  </target>
-
-  <macrodef name="modules-crawl">
-    <attribute name="target" default=""/>
-    <attribute name="failonerror" default="true"/>
-    <sequential>
-      <subant target="@{target}" failonerror="@{failonerror}" inheritall="false">
-        <propertyset refid="uptodate.and.compiled.properties"/>
-        <fileset dir="." includes="*/build.xml" excludes="build/**,core/**,test-framework/**,tools/**"/>
-      </subant>
-    </sequential>
-  </macrodef>
-
-  <target name="download-java11-javadoc-packagelist" unless="javadoc.java11.packagelist.exists">
-    <mkdir dir="${javadoc.packagelist.dir}/java11"/>
-    <get src="${javadoc.link}/element-list"
-         dest="${javadoc.packagelist.dir}/java11/package-list" ignoreerrors="true"/>
-  </target>
-
-  <!-- VALIDATION work -->
-
-  <!-- Generic placeholder target for if we add other validation tasks -->
-  <target name="validate">
-  </target>
-
-  <property name="src.export.dir" location="${build.dir}/src-export"/>
-  <macrodef name="export-source"
-            description="Exports the source to src.export.dir.">
-    <attribute name="source.dir"/>
-    <sequential>
-      <delete dir="${src.export.dir}" includeemptydirs="true" failonerror="false"/>
-      <exec dir="@{source.dir}" executable="${git.exe}" failonerror="true">
-        <arg value="checkout-index"/>
-        <arg value="-a"/>
-        <arg value="-f"/>
-        <arg value="--prefix=${src.export.dir}/"/>
-      </exec>
-    </sequential>
-  </macrodef>
-
-  <macrodef name="make-checksums" description="Macro for building checksum files">
-    <attribute name="file"/>
-    <sequential>
-      <echo>Building checksums for '@{file}'</echo>
-      <checksum file="@{file}" algorithm="SHA-512" fileext=".sha512" format="MD5SUM" forceoverwrite="yes" readbuffersize="65536"/>
-    </sequential>
-  </macrodef>
-
-  <macrodef name="jar-checksum-macro">
-      <attribute name="srcdir"/>
-      <attribute name="dstdir"/>
-    <sequential>
-      <delete>
-        <fileset dir="@{dstdir}">
-          <include name="**/*.jar.sha1"/>
-
-          <!--
-          Don't delete jetty-start-* because this isn't regerated by ant (but is generated and validated by
-          the gradle build).
-          -->
-          <exclude name="**/jetty-start-*" />
-        </fileset>
-      </delete>
-
-      <!-- checksum task does not have a flatten=true -->
-      <tempfile property="jar-checksum.temp.dir"/>
-      <mkdir dir="${jar-checksum.temp.dir}"/>
-      <copy todir="${jar-checksum.temp.dir}" flatten="true">
-        <fileset dir="@{srcdir}">
-          <include name="**/*.jar"/>
-          <!-- todo make this something passed into the macro and not some hardcoded set -->
-          <exclude name="build/**"/>
-          <exclude name="dist/**"/>
-          <exclude name="package/**"/>
-          <exclude name="example/exampledocs/**"/>
-        </fileset>
-      </copy>
-
-      <checksum algorithm="SHA1" fileext=".sha1" todir="@{dstdir}">
-        <fileset dir="${jar-checksum.temp.dir}"/>
-      </checksum>
-
-      <delete dir="${jar-checksum.temp.dir}"/>
-
-      <fixcrlf 
-        srcdir="@{dstdir}"
-        includes="**/*.jar.sha1"
-        eol="lf" fixlast="true" encoding="US-ASCII" />
-    </sequential>
-  </macrodef>
-
-  <macrodef name="sign-artifacts-macro">
-    <attribute name="artifacts.dir"/>
-    <sequential>
-      <delete failonerror="false">
-        <fileset dir="@{artifacts.dir}">
-          <include name="**/*.asc"/>
-        </fileset>
-      </delete>
-
-      <available property="gpg.input.handler" classname="org.apache.tools.ant.input.SecureInputHandler"
-                 value="org.apache.tools.ant.input.SecureInputHandler"/>
-      <!--else:--><property name="gpg.input.handler" value="org.apache.tools.ant.input.DefaultInputHandler"/>
-      <echo>WARNING: ON SOME PLATFORMS YOUR PASSPHRASE WILL BE ECHOED BACK!!!!!</echo>
-      <input message="Enter GPG keystore password: >" addproperty="gpg.passphrase">
-        <handler classname="${gpg.input.handler}" />
-      </input>
-
-      <apply executable="${gpg.exe}" inputstring="${gpg.passphrase}"
-             dest="@{artifacts.dir}" type="file" maxparallel="1" verbose="yes">
-        <arg value="--passphrase-fd"/>
-        <arg value="0"/>
-        <arg value="--batch"/>
-        <arg value="--armor"/>
-        <arg value="--default-key"/>
-        <arg value="${gpg.key}"/>
-        <arg value="--output"/>
-        <targetfile/>
-        <arg value="--detach-sig"/>
-        <srcfile/>
-
-        <fileset dir="@{artifacts.dir}">
-          <include name="**/*.jar"/>
-          <include name="**/*.war"/>
-          <include name="**/*.zip"/>
-          <include name="**/*.tgz"/>
-          <include name="**/*.pom"/>
-        </fileset>
-        <globmapper from="*" to="*.asc"/>
-      </apply>
-    </sequential>
-  </macrodef>
-
-  <!-- JFlex task -->
-  <target name="-install-jflex" unless="jflex.loaded" depends="ivy-availability-check,ivy-configure">
-    <ivy:cachepath organisation="de.jflex" module="jflex" revision="1.7.0"
-                   inline="true" conf="default" transitive="true" pathid="jflex.classpath"/>
-    <taskdef name="jflex" classname="jflex.anttask.JFlexTask" classpathref="jflex.classpath"/>
-    <property name="jflex.loaded" value="true"/>
-  </target>
-
-  <!-- GROOVY scripting engine for ANT tasks -->
-  <target name="resolve-groovy" unless="groovy.loaded" depends="ivy-availability-check,ivy-configure">
-    <ivy:cachepath organisation="org.codehaus.groovy" module="groovy-all" revision="2.4.17"
-      inline="true" conf="default" type="jar" transitive="true" pathid="groovy.classpath"/>
-    <taskdef name="groovy"
-      classname="org.codehaus.groovy.ant.Groovy"
-      classpathref="groovy.classpath"/>
-    <property name="groovy.loaded" value="true"/>
-  </target>
-  
-  <!-- Forbidden API Task -->
-  <property name="forbidden-base-excludes" value=""/>
-  <property name="forbidden-tests-excludes" value=""/>
-  <property name="forbidden-sysout-excludes" value=""/>
-  
-  <target name="-install-forbidden-apis" unless="forbidden-apis.loaded" depends="ivy-availability-check,ivy-configure">
-    <ivy:cachepath organisation="de.thetaphi" module="forbiddenapis" revision="3.0.1"
-      inline="true" conf="default" transitive="true" pathid="forbidden-apis.classpath"/>
-    <taskdef name="forbidden-apis" classname="de.thetaphi.forbiddenapis.ant.AntTask" classpathref="forbidden-apis.classpath"/>
-    <property name="forbidden-apis.loaded" value="true"/>
-  </target>  
-
-  <target name="-init-forbidden-apis" depends="-install-forbidden-apis">
-    <path id="forbidden-apis.allclasses.classpath">
-      <path refid="classpath"/>
-      <path refid="test.classpath"/>
-      <path refid="junit-path"/>
-      <!-- include the output directories, too (so we can still resolve excluded classes: -->
-      <pathelement path="${build.dir}/classes/java"/>
-      <pathelement path="${build.dir}/classes/test"/>
-    </path>
-  </target>  
-
-  <condition property="forbidden-isLucene">
-    <not>
-      <or>
-        <matches pattern="^(solr)\b" string="${name}"/>
-        <matches pattern="tools" string="${name}"/>
-      </or>
-    </not>
-  </condition>
-
-  <target name="check-forbidden-apis" depends="-check-forbidden-all,-check-forbidden-core,-check-forbidden-tests" description="Check forbidden API calls in compiled class files"/>
-  
-  <!-- applies to both source and test code -->
-  <target name="-check-forbidden-all" depends="-init-forbidden-apis,compile-core,compile-test">
-    <forbidden-apis suppressAnnotation="**.SuppressForbidden" classpathref="forbidden-apis.allclasses.classpath" targetVersion="${javac.release}">
-      <signatures>
-        <bundled name="jdk-unsafe"/>
-        <bundled name="jdk-deprecated"/>
-        <bundled name="jdk-non-portable"/>
-        <bundled name="jdk-reflection"/>
-        <fileset dir="${common.dir}/tools/forbiddenApis">
-          <include name="base.txt"/>
-          <include name="lucene.txt" if="forbidden-isLucene"/>
-        </fileset>
-      </signatures>
-      <fileset dir="${build.dir}/classes/java" excludes="${forbidden-base-excludes}"/>
-      <fileset dir="${build.dir}/classes/test" excludes="${forbidden-tests-excludes}" erroronmissingdir="false"/>
-    </forbidden-apis>
-  </target>
-
-  <!-- applies to only test code -->
-  <target name="-check-forbidden-tests" depends="-init-forbidden-apis,compile-test">
-    <forbidden-apis signaturesFile="${common.dir}/tools/forbiddenApis/tests.txt" suppressAnnotation="**.SuppressForbidden" classpathref="forbidden-apis.allclasses.classpath" targetVersion="${javac.release}"> 
-      <fileset dir="${build.dir}/classes/test" excludes="${forbidden-tests-excludes}"/>
-    </forbidden-apis>
-  </target>
- 
-  <!-- applies to only source code -->
-  <target name="-check-forbidden-core" depends="-init-forbidden-apis,compile-core,-check-forbidden-sysout" />
-
-  <target name="-check-forbidden-sysout" depends="-init-forbidden-apis,compile-core">
-    <forbidden-apis bundledSignatures="jdk-system-out" suppressAnnotation="**.SuppressForbidden" classpathref="forbidden-apis.allclasses.classpath" targetVersion="${javac.release}">
-      <fileset dir="${build.dir}/classes/java" excludes="${forbidden-sysout-excludes}"/>
-    </forbidden-apis>
-  </target>
-
-  <target name="resolve-markdown" unless="markdown.loaded" depends="resolve-groovy">
-    <property name="flexmark.version" value="0.42.6"/>
-    <ivy:cachepath transitive="true" resolveId="flexmark" pathid="markdown.classpath">
-      <ivy:dependency org="com.vladsch.flexmark" name="flexmark" rev="${flexmark.version}" conf="default" />
-      <ivy:dependency org="com.vladsch.flexmark" name="flexmark-ext-autolink" rev="${flexmark.version}" conf="default" />
-      <ivy:dependency org="com.vladsch.flexmark" name="flexmark-ext-abbreviation" rev="${flexmark.version}" conf="default" />
-    </ivy:cachepath>
-    <groovy classpathref="markdown.classpath" src="${common.dir}/tools/src/groovy/install-markdown-filter.groovy"/>
-    <property name="markdown.loaded" value="true"/>
-  </target>
-  
-  <!-- markdown macro: Before using depend on the target "resolve-markdown" -->
-  
-  <macrodef name="markdown">
-    <attribute name="todir"/>
-    <attribute name="flatten" default="false"/>
-    <attribute name="overwrite" default="false"/>
-    <element name="nested" optional="false" implicit="true"/>
-    <sequential>
-      <copy todir="@{todir}" flatten="@{flatten}" overwrite="@{overwrite}" verbose="true"
-        preservelastmodified="false" encoding="UTF-8" taskname="markdown"
-      >
-        <filterchain>
-          <tokenfilter>
-            <filetokenizer/>
-            <replaceregex pattern="\b(LUCENE|SOLR)\-\d+\b" replace="[\0](https://issues.apache.org/jira/browse/\0)" flags="gs"/>
-            <markdownfilter/>
-          </tokenfilter>
-        </filterchain>
-        <nested/>
-      </copy>
-    </sequential>
-  </macrodef>
-
-  <target name="regenerate"/>
-  
-  <macrodef name="check-broken-links">
-       <attribute name="dir"/>
-     <sequential>
-       <exec dir="." executable="${python3.exe}" failonerror="true">
-         <!-- Tell Python not to write any bytecode cache into the filesystem: -->
-         <arg value="-B"/>
-         <arg value="${dev-tools.dir}/scripts/checkJavadocLinks.py"/>
-         <arg value="@{dir}"/>
-       </exec>
-     </sequential>
-  </macrodef>
-
-  <macrodef name="check-missing-javadocs">
-       <attribute name="dir"/>
-       <attribute name="level" default="class"/>
-     <sequential>
-       <exec dir="." executable="${python3.exe}" failonerror="true">
-         <!-- Tell Python not to write any bytecode cache into the filesystem: -->
-         <arg value="-B"/>
-         <arg value="${dev-tools.dir}/scripts/checkJavaDocs.py"/>
-         <arg value="@{dir}"/>
-         <arg value="@{level}"/>
-       </exec>
-     </sequential>
-  </macrodef>
-
-  <!--
-   compile changes.txt into an html file
-   -->
-  <macrodef name="build-changes">
-    <attribute name="changes.product"/>
-    <attribute name="doap.property.prefix" default="doap.@{changes.product}"/>
-    <attribute name="changes.src.file" default="CHANGES.txt"/>
-    <attribute name="changes.src.doap" default="${dev-tools.dir}/doap/@{changes.product}.rdf"/>
-    <attribute name="changes.version.dates" default="build/@{doap.property.prefix}.version.dates.csv"/>
-    <attribute name="changes.target.dir" default="${changes.target.dir}"/>
-    <attribute name="lucene.javadoc.url" default="${lucene.javadoc.url}"/>
-    <sequential>
-      <mkdir dir="@{changes.target.dir}"/>
-      <xmlproperty keeproot="false" file="@{changes.src.doap}" collapseAttributes="false" prefix="@{doap.property.prefix}"/>
-      <echo file="@{changes.version.dates}" append="false">${@{doap.property.prefix}.Project.release.Version.revision}&#xA;</echo>
-      <echo file="@{changes.version.dates}" append="true">${@{doap.property.prefix}.Project.release.Version.created}&#xA;</echo>
-      <exec executable="${perl.exe}" input="@{changes.src.file}" output="@{changes.target.dir}/Changes.html"
-            failonerror="true" logError="true">
-        <arg value="-CSD"/>
-        <arg value="${changes.src.dir}/changes2html.pl"/>
-        <arg value="@{changes.product}"/>
-        <arg value="@{changes.version.dates}"/>
-        <arg value="@{lucene.javadoc.url}"/>
-      </exec>
-      <delete file="@{changes.version.dates}"/>
-      <copy todir="@{changes.target.dir}">
-        <fileset dir="${changes.src.dir}" includes="*.css"/>
-      </copy>
-    </sequential>
-  </macrodef>
-
-  <macrodef name="pitest-macro" description="Executes junit tests.">
-    <attribute name="pitest.report.dir" default="${pitest.report.dir}"/>
-    <attribute name="pitest.framework.classpath" default="pitest.framework.classpath"/>
-    <attribute name="pitest.distance" default="${pitest.distance}" />
-    <attribute name="pitest.sysprops" default="${pitest.sysprops}" />
-    <attribute name="pitest.threads" default="${pitest.threads}" />
-    <attribute name="pitest.testCases" default="${pitest.testCases}" />
-    <attribute name="pitest.maxMutations" default="${pitest.maxMutations}" />
-    <attribute name="pitest.timeoutFactor" default="${pitest.timeoutFactor}" />
-    <attribute name="pitest.timeoutConst" default="${pitest.timeoutConst}" />
-    <attribute name="pitest.targetClasses" default="${pitest.targetClasses}" />
-
-    <attribute name="junit.classpath" default="junit.classpath"/>
-
-    <attribute name="src.dir" default="${src.dir}"/>
-    <attribute name="build.dir" default="${build.dir}"/>
-
-    <sequential>
-
-        <echo>
-PiTest mutation coverage can take a *long* time, even on large hardware.
-(A 32-core EC2 Sandy Bridge instance takes at least 12 hours to run PiTest over the Lucene test cases.)
-
-The following arguments can be provided to ant to alter its behaviour and target specific tests:
-
--Dpitest.report.dir (@{pitest.report.dir}) - Change where PiTest writes output reports
-
--Dpitest.distance (@{pitest.distance}) - How far away from the test class code may be mutated
-   0 being immediate callees only
-
--Dpitest.threads (@{pitest.threads}) - How many threads to use in PiTest 
-   (note this is independent of junit threads)
-
--Dpitest.testCases (@{pitest.testCases}) - Glob of testcases to run
-
--Dpitest.maxMutations (@{pitest.maxMutations}) - Maximum number of mutations per class under test
-    0 being unlimited
-
--Dpitest.timeoutFactor (@{pitest.timeoutFactor}) - Tunable factor used to determine whether
-    a test has potentially been mutated into an infinite loop or O(n!) (or similar)
-
--Dpitest.timeoutConst (@{pitest.timeoutConst}) - Base constant used for working out timeouts
-
--Dpitest.targetClasses (@{pitest.targetClasses}) - Classes to consider for mutation
-        </echo>
-
-        <taskdef name="pitest" classname="org.pitest.ant.PitestTask"
-            classpathref="pitest.framework.classpath" />
-
-        <path id="pitest.classpath">
-            <path refid="junit.classpath"/>
-            <path refid="pitest.framework.classpath"/>
-        </path>
-
-        <junit4:pickseed property="pitest.seed" />
-
-        <property name="pitest.sysprops" value="-Dversion=${version},-Dtest.seed=${pitest.seed},-Djava.security.manager=org.apache.lucene.util.TestSecurityManager,-Djava.security.policy=${tests.policy},-Djava.io.tmpdir=${tests.workDir},-Djunit4.childvm.cwd=${tests.workDir}" />
-
-        <pitest
-            classPath="pitest.classpath"
-            targetClasses="@{pitest.targetClasses}"
-            targetTests="@{pitest.testCases}"
-            reportDir="@{pitest.report.dir}"
-            sourceDir="@{src.dir}"
-            threads="@{pitest.threads}"
-            maxMutationsPerClass="@{pitest.maxMutations}"
-            timeoutFactor="@{pitest.timeoutFactor}"
-            timeoutConst="@{pitest.timeoutConst}"
-            verbose="false"
-            dependencyDistance="@{pitest.distance}"
-            mutableCodePaths="@{build.dir}/classes/java"
-            jvmArgs="-ea,@{pitest.sysprops}" />
-    </sequential>
-  </macrodef>
-
-  <macrodef name="run-jflex">
-    <attribute name="dir"/>
-    <attribute name="name"/>
-    <sequential>
-      <!-- The default skeleton is specified here to work around a JFlex ant task bug:    -->
-      <!-- invocations with a non-default skeleton will cause following invocations to    -->
-      <!-- use the same skeleton, though not specified, unless the default is configured. -->
-      <delete file="@{dir}/@{name}.java" />
-      <jflex file="@{dir}/@{name}.jflex" outdir="@{dir}" nobak="on"
-             encoding="UTF-8"
-             skeleton="${common.dir}/core/src/data/jflex/skeleton.default"/>
-    </sequential>
-  </macrodef>
-
-  <macrodef name="run-jflex-and-disable-buffer-expansion">
-    <attribute name="dir"/>
-    <attribute name="name"/>
-    <sequential>
-      <!-- LUCENE-5897: Disallow scanner buffer expansion -->
-      <jflex file="@{dir}/@{name}.jflex" outdir="@{dir}" nobak="on"
-             encoding="UTF-8"
-             skeleton="${common.dir}/core/src/data/jflex/skeleton.disable.buffer.expansion.txt"/>
-      <!-- Since the ZZ_BUFFERSIZE declaration is generated rather than in the skeleton, we have to transform it here. -->
-      <replaceregexp file="@{dir}/@{name}.java"
-                     match="private static final int ZZ_BUFFERSIZE ="
-                     replace="private int ZZ_BUFFERSIZE ="/>
-    </sequential>
-  </macrodef>
-
-
-</project>
diff --git a/lucene/core/build.xml b/lucene/core/build.xml
deleted file mode 100644
index 5fa7512..0000000
--- a/lucene/core/build.xml
+++ /dev/null
@@ -1,235 +0,0 @@
-<?xml version="1.0"?>
-
-<!--
-    Licensed to the Apache Software Foundation (ASF) under one or more
-    contributor license agreements.  See the NOTICE file distributed with
-    this work for additional information regarding copyright ownership.
-    The ASF licenses this file to You under the Apache License, Version 2.0
-    the "License"); you may not use this file except in compliance with
-    the License.  You may obtain a copy of the License at
- 
-        http://www.apache.org/licenses/LICENSE-2.0
- 
-    Unless required by applicable law or agreed to in writing, software
-    distributed under the License is distributed on an "AS IS" BASIS,
-    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-    See the License for the specific language governing permissions and
-    limitations under the License.
- -->
-
-<project name="core" default="default">
-  <description>Lucene core library</description>
-
-  <property name="build.dir" location="../build/core"/>
-
-  <!-- lucene core can use the minimal JDK profile -->
-  <property name="javac.profile.args" value="-profile compact1"/>
-  <import file="../common-build.xml"/>
-
-  <property name="moman.commit-hash" value="497c90e34e412b6494db6dabf0d95db8034bd325" />
-  <property name="moman.url" value="https://github.com/jpbarrette/moman/archive/${moman.commit-hash}.zip" />
-
-  <path id="classpath"/>
-  
-  <path id="test.classpath">
-    <pathelement location="${common.dir}/build/codecs/classes/java"/>
-    <pathelement location="${common.dir}/build/test-framework/classes/java"/>
-    <path refid="junit-path"/>
-    <pathelement location="${build.dir}/classes/java"/>
-    <pathelement location="${build.dir}/classes/test"/>
-  </path>
-
-  <path id="junit.classpath">
-    <path refid="test.classpath"/>
-  </path>
-
-  <target name="test-core" depends="common.test"/>
-
-  <target name="javadocs-core" depends="javadocs"/>
-  <target name="javadocs" description="Generate javadoc for core classes" 
-          depends="check-javadocs-uptodate" unless="javadocs-uptodate-${name}">
-     <sequential>
-      <mkdir dir="${javadoc.dir}/core"/>
-      <invoke-javadoc destdir="${javadoc.dir}/core" title="${Name} ${version} core API">
-        <sources>
-          <packageset dir="${src.dir}"/>
-          <link href=""/>
-        </sources>
-      </invoke-javadoc>
-      <mkdir dir="${build.dir}"/>
-     <jarify basedir="${javadoc.dir}/core" destfile="${build.dir}/${final.name}-javadoc.jar"/>
-    </sequential>
-  </target>
-
-  <target name="-dist-maven" depends="-dist-maven-src-java"/>
-
-  <target name="-install-to-maven-local-repo" depends="-install-src-java-to-maven-local-repo"/>
-
-  <macrodef name="createLevAutomaton">
-      <attribute name="n"/>
-      <sequential>
-      <exec dir="src/java/org/apache/lucene/util/automaton"
-            executable="${python3.exe}" failonerror="true">
-        <!-- Tell Python not to write any bytecode cache into the filesystem: -->
-        <arg value="-B"/>
-        <arg value="createLevAutomata.py"/>
-        <arg value="@{n}"/>
-        <arg value="True"/>
-        <!-- Hack while transitioning to Gradle build in 9.0 -->
-        <arg value="../../../../../../../../build/core/moman/finenight/python"/>
-      </exec>
-      <exec dir="src/java/org/apache/lucene/util/automaton"
-            executable="${python3.exe}" failonerror="true">
-        <!-- Tell Python not to write any bytecode cache into the filesystem: -->
-        <arg value="-B"/>
-        <arg value="createLevAutomata.py"/>
-        <arg value="@{n}"/>
-        <arg value="False"/>
-        <!-- Hack while transitioning to Gradle build in 9.0 -->
-        <arg value="../../../../../../../../build/core/moman/finenight/python"/>
-      </exec>
-      <fixcrlf srcdir="src/java/org/apache/lucene/util/automaton" includes="*ParametricDescription.java" encoding="UTF-8"/>
-    </sequential>
-  </macrodef>
-
-  <target name="createPackedIntSources">
-    <exec dir="src/java/org/apache/lucene/util/packed"
-          executable="${python3.exe}" failonerror="true">
-      <!-- Tell Python not to write any bytecode cache into the filesystem: -->
-      <arg value="-B"/>
-      <arg value="gen_BulkOperation.py"/>
-    </exec>
-    <exec dir="src/java/org/apache/lucene/util/packed"
-          executable="${python3.exe}" failonerror="true">
-      <!-- Tell Python not to write any bytecode cache into the filesystem: -->
-      <arg value="-B"/>
-      <arg value="gen_Packed64SingleBlock.py"/>
-    </exec>
-    <fixcrlf srcdir="src/java/org/apache/lucene/util/packed" includes="BulkOperation*.java,Packed64SingleBlock.java" encoding="UTF-8"/>
-  </target>
-
-  <target name="createLevAutomata" depends="check-moman,download-moman">
-    <createLevAutomaton n="1"/>
-    <createLevAutomaton n="2"/>
-  </target>
-  
-  <target name="check-moman">
-    <available file="${build.dir}/moman" property="moman.downloaded"/>
-  </target>
-
-  <target name="download-moman" unless="moman.downloaded">
-    <mkdir dir="${build.dir}/moman"/>
-    <get src="${moman.url}" dest="${build.dir}/moman.zip"/>
-    <unzip dest="${build.dir}/moman" src="${build.dir}/moman.zip">
-      <cutdirsmapper dirs="1"/>
-    </unzip>
-    <delete file="${build.dir}/moman.zip"/>
-  </target>
-
-  <target name="regenerate" depends="createLevAutomata,createPackedIntSources,jflex"/>
-  
-  <macrodef name="startLockStressTestClient">
-    <attribute name="clientId"/>
-    <attribute name="lockFactoryImpl"/>
-    <attribute name="lockFactoryDir"/>
-    <sequential>
-      <local name="lockverifyserver.port"/>
-      <groovy><![CDATA[
-        String port;
-        while ((port = System.getProperty("lockverifyserver.port")) == null) {
-          Thread.sleep(10L);
-        }
-        properties["lockverifyserver.port"] = port;
-      ]]></groovy>
-      <java taskname="lockStressTest@{clientId}" fork="true" classpathref="test-lock.classpath" classname="org.apache.lucene.store.LockStressTest" failOnError="true"> 
-        <arg value="@{clientId}"/>
-        <arg value="${lockverifyserver.host}"/>
-        <arg value="${lockverifyserver.port}"/>
-        <arg value="@{lockFactoryImpl}"/>
-        <arg value="@{lockFactoryDir}"/>
-        <arg value="${lockverify.delay}"/>
-        <arg value="${lockverify.count}"/>
-      </java>
-    </sequential>
-  </macrodef>
-  
-  <macrodef name="testLockFactory">
-    <attribute name="lockFactoryImpl"/>
-    <attribute name="lockFactoryDir"/>
-    <sequential>
-      <echo taskname="testLockFactory" message="Testing @{lockFactoryImpl}..."/>
-      <mkdir dir="@{lockFactoryDir}"/>
-      <parallel threadCount="3" failonany="false">
-        <sequential>
-          <!-- the server runs in-process, so we can wait for the sysproperty -->
-          <java taskname="lockVerifyServer" fork="false" classpathref="test-lock.classpath" classname="org.apache.lucene.store.LockVerifyServer" failOnError="true">
-            <arg value="${lockverifyserver.host}"/>
-            <arg value="2"/>
-          </java>
-        </sequential>
-        <sequential>
-          <startLockStressTestClient clientId="1" lockFactoryImpl="@{lockFactoryImpl}" lockFactoryDir="@{lockFactoryDir}" />
-        </sequential>
-        <sequential>
-          <startLockStressTestClient clientId="2" lockFactoryImpl="@{lockFactoryImpl}" lockFactoryDir="@{lockFactoryDir}" />
-        </sequential>
-      </parallel>
-    </sequential>
-  </macrodef>
-  
-  <condition property="-ignore-test-lock-factory">
-    <or>
-      <!-- We ignore our ant-based lock factory test, if user applies test filtering: -->
-      <isset property="tests.class" />
-      <isset property="tests.method" />
-      <!-- Clover seems to deadlock if running instrumented code inside the Ant JVM: -->
-      <isset property="run.clover" />
-    </or>
-  </condition>
-  
-  <target name="test-lock-factory" depends="resolve-groovy,compile-core" unless="-ignore-test-lock-factory"
-    description="Run LockStressTest with multiple JVMs">
-    <property name="lockverifyserver.host" value="127.0.0.1"/>
-    <property name="lockverify.delay" value="1"/>
-    <groovy taskname="lockVerifySetup"><![CDATA[
-      System.clearProperty("lockverifyserver.port"); // make sure it is undefined
-      
-      if (!properties["lockverify.count"]) {
-        int count = Boolean.parseBoolean(properties["tests.nightly"]) ?
-          30000 : 500;
-        count *= Integer.parseInt(properties["tests.multiplier"]);
-        properties["lockverify.count"] = count;
-      }
-      
-      task.log("Configuration properties:");
-      ["lockverify.delay", "lockverify.count"].each {
-        k -> task.log(" " + k + "=" + properties[k]);
-      }
-    ]]></groovy>
-    <path id="test-lock.classpath">
-      <path refid="classpath"/>
-      <pathelement location="${build.dir}/classes/java"/>
-    </path>
-    <testLockFactory lockFactoryImpl="org.apache.lucene.store.NativeFSLockFactory" lockFactoryDir="${build.dir}/lockfactorytest/native" />
-    <testLockFactory lockFactoryImpl="org.apache.lucene.store.SimpleFSLockFactory" lockFactoryDir="${build.dir}/lockfactorytest/simple" />
-  </target>
-  
-  <target name="test" depends="common.test, test-lock-factory"/>
-
-  <target name="clean-jflex">
-    <delete>
-      <fileset dir="src/java/org/apache/lucene/analysis/standard" includes="**/*.java">
-        <containsregexp expression="generated.*by.*JFlex"/>
-      </fileset>
-    </delete>
-  </target>
-
-  <target name="jflex" depends="-install-jflex,clean-jflex,-jflex-StandardAnalyzer"/>
-  
-  <target name="-jflex-StandardAnalyzer" depends="init,-install-jflex">
-    <run-jflex-and-disable-buffer-expansion 
-        dir="src/java/org/apache/lucene/analysis/standard" name="StandardTokenizerImpl"/>
-  </target>
-
-
-</project>
diff --git a/lucene/core/ivy.xml b/lucene/core/ivy.xml
deleted file mode 100644
index bbca9f9..0000000
--- a/lucene/core/ivy.xml
+++ /dev/null
@@ -1,21 +0,0 @@
-<!--
-   Licensed to the Apache Software Foundation (ASF) under one
-   or more contributor license agreements.  See the NOTICE file
-   distributed with this work for additional information
-   regarding copyright ownership.  The ASF licenses this file
-   to you under the Apache License, Version 2.0 (the
-   "License"); you may not use this file except in compliance
-   with the License.  You may obtain a copy of the License at
-
-     http://www.apache.org/licenses/LICENSE-2.0
-
-   Unless required by applicable law or agreed to in writing,
-   software distributed under the License is distributed on an
-   "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-   KIND, either express or implied.  See the License for the
-   specific language governing permissions and limitations
-   under the License.    
--->
-<ivy-module version="2.0">
-  <info organisation="org.apache.lucene" module="core"/>
-</ivy-module>
diff --git a/lucene/core/src/java/org/apache/lucene/analysis/Analyzer.java b/lucene/core/src/java/org/apache/lucene/analysis/Analyzer.java
index b9a798b..2749c1c 100644
--- a/lucene/core/src/java/org/apache/lucene/analysis/Analyzer.java
+++ b/lucene/core/src/java/org/apache/lucene/analysis/Analyzer.java
@@ -64,19 +64,19 @@
  * <p>
  * For some concrete implementations bundled with Lucene, look in the analysis modules:
  * <ul>
- *   <li><a href="{@docRoot}/../analyzers-common/overview-summary.html">Common</a>:
+ *   <li><a href="{@docRoot}/../analysis/common/overview-summary.html">Common</a>:
  *       Analyzers for indexing content in different languages and domains.
- *   <li><a href="{@docRoot}/../analyzers-icu/overview-summary.html">ICU</a>:
+ *   <li><a href="{@docRoot}/../analysis/icu/overview-summary.html">ICU</a>:
  *       Exposes functionality from ICU to Apache Lucene. 
- *   <li><a href="{@docRoot}/../analyzers-kuromoji/overview-summary.html">Kuromoji</a>:
+ *   <li><a href="{@docRoot}/../analysis/kuromoji/overview-summary.html">Kuromoji</a>:
  *       Morphological analyzer for Japanese text.
- *   <li><a href="{@docRoot}/../analyzers-morfologik/overview-summary.html">Morfologik</a>:
+ *   <li><a href="{@docRoot}/../analysis/morfologik/overview-summary.html">Morfologik</a>:
  *       Dictionary-driven lemmatization for the Polish language.
- *   <li><a href="{@docRoot}/../analyzers-phonetic/overview-summary.html">Phonetic</a>:
+ *   <li><a href="{@docRoot}/../analysis/phonetic/overview-summary.html">Phonetic</a>:
  *       Analysis for indexing phonetic signatures (for sounds-alike search).
- *   <li><a href="{@docRoot}/../analyzers-smartcn/overview-summary.html">Smart Chinese</a>:
+ *   <li><a href="{@docRoot}/../analysis/smartcn/overview-summary.html">Smart Chinese</a>:
  *       Analyzer for Simplified Chinese, which indexes words.
- *   <li><a href="{@docRoot}/../analyzers-stempel/overview-summary.html">Stempel</a>:
+ *   <li><a href="{@docRoot}/../analysis/stempel/overview-summary.html">Stempel</a>:
  *       Algorithmic Stemmer for the Polish Language.
  * </ul>
  *
@@ -103,7 +103,7 @@
    * <p>
    * NOTE: if you just want to reuse on a per-field basis, it's easier to
    * use a subclass of {@link AnalyzerWrapper} such as 
-   * <a href="{@docRoot}/../analyzers-common/org/apache/lucene/analysis/miscellaneous/PerFieldAnalyzerWrapper.html">
+   * <a href="{@docRoot}/../analysis/common/org/apache/lucene/analysis/miscellaneous/PerFieldAnalyzerWrapper.html">
   * PerFieldAnalyzerWrapper</a> instead.
    */
   public Analyzer(ReuseStrategy reuseStrategy) {
diff --git a/lucene/core/src/java/org/apache/lucene/analysis/package-info.java b/lucene/core/src/java/org/apache/lucene/analysis/package-info.java
index b7e752c..83736ae 100644
--- a/lucene/core/src/java/org/apache/lucene/analysis/package-info.java
+++ b/lucene/core/src/java/org/apache/lucene/analysis/package-info.java
@@ -158,10 +158,10 @@
  *   supplies a large family of <code>Analyzer</code> classes that deliver useful
  *   analysis chains. The most common of these is the <a href="{@docRoot}/org/apache/lucene/analysis/standard/StandardAnalyzer.html">StandardAnalyzer</a>.
  *   Many applications will have a long and industrious life with nothing more
- *   than the <code>StandardAnalyzer</code>. The <a href="{@docRoot}/../analyzers-common/overview-summary.html">analyzers-common</a>
+ *   than the <code>StandardAnalyzer</code>. The <a href="{@docRoot}/../analysis/common/overview-summary.html">analyzers-common</a>
  *   library provides many pre-existing analyzers for various languages.
 *   The analysis-common library also allows you to configure a custom Analyzer without subclassing, using the
- *   <a href="{@docRoot}/../analyzers-common/org/apache/lucene/analysis/custom/CustomAnalyzer.html">CustomAnalyzer</a>
+ *   <a href="{@docRoot}/../analysis/common/org/apache/lucene/analysis/custom/CustomAnalyzer.html">CustomAnalyzer</a>
  *   class.
  * </p>
  * <p>
@@ -170,7 +170,7 @@
  *   all under the 'analysis' directory of the distribution. Some of
  *   these support particular languages, others integrate external
  *   components. The 'common' subdirectory has some noteworthy
- *  general-purpose analyzers, including the <a href="{@docRoot}/../analyzers-common/org/apache/lucene/analysis/miscellaneous/PerFieldAnalyzerWrapper.html">PerFieldAnalyzerWrapper</a>. Most <code>Analyzer</code>s perform the same operation on all
+ *  general-purpose analyzers, including the <a href="{@docRoot}/../analysis/common/org/apache/lucene/analysis/miscellaneous/PerFieldAnalyzerWrapper.html">PerFieldAnalyzerWrapper</a>. Most <code>Analyzer</code>s perform the same operation on all
  *  {@link org.apache.lucene.document.Field}s.  The PerFieldAnalyzerWrapper can be used to associate a different <code>Analyzer</code> with different
  *  {@link org.apache.lucene.document.Field}s. There is a great deal of
 *  functionality in the analysis area; you should study it carefully to
@@ -253,7 +253,7 @@
  *   Tokenizer, and TokenFilter(s) <i>(optional)</i> &mdash; or components you
  *   create, or a combination of existing and newly created components.  Before
  *   pursuing this approach, you may find it worthwhile to explore the
- *   <a href="{@docRoot}/../analyzers-common/overview-summary.html">analyzers-common</a> library and/or ask on the 
+ *   <a href="{@docRoot}/../analysis/common/overview-summary.html">analyzers-common</a> library and/or ask on the
  *   <a href="http://lucene.apache.org/core/discussion.html">java-user@lucene.apache.org mailing list</a> first to see if what you
  *   need already exists. If you are still committed to creating your own
  *   Analyzer, have a look at the source code of any one of the many samples
diff --git a/lucene/core/src/java/org/apache/lucene/codecs/Codec.java b/lucene/core/src/java/org/apache/lucene/codecs/Codec.java
index 8b5ca14..14fa793 100644
--- a/lucene/core/src/java/org/apache/lucene/codecs/Codec.java
+++ b/lucene/core/src/java/org/apache/lucene/codecs/Codec.java
@@ -57,7 +57,7 @@
     }
     
     // TODO: should we use this, or maybe a system property is better?
-    static Codec defaultCodec = LOADER.lookup("Lucene86");
+    static Codec defaultCodec = LOADER.lookup("Lucene87");
   }
 
   private final String name;
diff --git a/lucene/core/src/java/org/apache/lucene/codecs/compressing/CompressingStoredFieldsReader.java b/lucene/core/src/java/org/apache/lucene/codecs/compressing/CompressingStoredFieldsReader.java
index bea89eb..91916dd 100644
--- a/lucene/core/src/java/org/apache/lucene/codecs/compressing/CompressingStoredFieldsReader.java
+++ b/lucene/core/src/java/org/apache/lucene/codecs/compressing/CompressingStoredFieldsReader.java
@@ -24,7 +24,9 @@
 import static org.apache.lucene.codecs.compressing.CompressingStoredFieldsWriter.HOUR;
 import static org.apache.lucene.codecs.compressing.CompressingStoredFieldsWriter.HOUR_ENCODING;
 import static org.apache.lucene.codecs.compressing.CompressingStoredFieldsWriter.INDEX_CODEC_NAME;
-import static org.apache.lucene.codecs.compressing.CompressingStoredFieldsWriter.INDEX_EXTENSION_PREFIX;
+import static org.apache.lucene.codecs.compressing.CompressingStoredFieldsWriter.INDEX_EXTENSION;
+import static org.apache.lucene.codecs.compressing.CompressingStoredFieldsWriter.META_EXTENSION;
+import static org.apache.lucene.codecs.compressing.CompressingStoredFieldsWriter.META_VERSION_START;
 import static org.apache.lucene.codecs.compressing.CompressingStoredFieldsWriter.NUMERIC_DOUBLE;
 import static org.apache.lucene.codecs.compressing.CompressingStoredFieldsWriter.NUMERIC_FLOAT;
 import static org.apache.lucene.codecs.compressing.CompressingStoredFieldsWriter.NUMERIC_INT;
@@ -35,6 +37,7 @@
 import static org.apache.lucene.codecs.compressing.CompressingStoredFieldsWriter.TYPE_BITS;
 import static org.apache.lucene.codecs.compressing.CompressingStoredFieldsWriter.TYPE_MASK;
 import static org.apache.lucene.codecs.compressing.CompressingStoredFieldsWriter.VERSION_CURRENT;
+import static org.apache.lucene.codecs.compressing.CompressingStoredFieldsWriter.VERSION_META;
 import static org.apache.lucene.codecs.compressing.CompressingStoredFieldsWriter.VERSION_OFFHEAP_INDEX;
 import static org.apache.lucene.codecs.compressing.CompressingStoredFieldsWriter.VERSION_START;
 
@@ -120,14 +123,26 @@
     numDocs = si.maxDoc();
 
     final String fieldsStreamFN = IndexFileNames.segmentFileName(segment, segmentSuffix, FIELDS_EXTENSION);
+    ChecksumIndexInput metaIn = null;
     try {
-      // Open the data file and read metadata
+      // Open the data file
       fieldsStream = d.openInput(fieldsStreamFN, context);
       version = CodecUtil.checkIndexHeader(fieldsStream, formatName, VERSION_START, VERSION_CURRENT, si.getId(), segmentSuffix);
       assert CodecUtil.indexHeaderLength(formatName, segmentSuffix) == fieldsStream.getFilePointer();
 
-      chunkSize = fieldsStream.readVInt();
-      packedIntsVersion = fieldsStream.readVInt();
+      if (version >= VERSION_OFFHEAP_INDEX) {
+        final String metaStreamFN = IndexFileNames.segmentFileName(segment, segmentSuffix, META_EXTENSION);
+        metaIn = d.openChecksumInput(metaStreamFN, IOContext.READONCE);
+        CodecUtil.checkIndexHeader(metaIn, INDEX_CODEC_NAME + "Meta", META_VERSION_START, version, si.getId(), segmentSuffix);
+      }
+      if (version >= VERSION_META) {
+        chunkSize = metaIn.readVInt();
+        packedIntsVersion = metaIn.readVInt();
+      } else {
+        chunkSize = fieldsStream.readVInt();
+        packedIntsVersion = fieldsStream.readVInt();
+      }
+
       decompressor = compressionMode.newDecompressor();
       this.merging = false;
       this.state = new BlockState();
@@ -163,7 +178,7 @@
           }
         }
       } else {
-        FieldsIndexReader fieldsIndexReader = new FieldsIndexReader(d, si.name, segmentSuffix, INDEX_EXTENSION_PREFIX, INDEX_CODEC_NAME, si.getId());
+        FieldsIndexReader fieldsIndexReader = new FieldsIndexReader(d, si.name, segmentSuffix, INDEX_EXTENSION, INDEX_CODEC_NAME, si.getId(), metaIn);
         indexReader = fieldsIndexReader;
         maxPointer = fieldsIndexReader.getMaxPointer();
       }
@@ -171,17 +186,34 @@
       this.maxPointer = maxPointer;
       this.indexReader = indexReader;
 
-      fieldsStream.seek(maxPointer);
-      numChunks = fieldsStream.readVLong();
-      numDirtyChunks = fieldsStream.readVLong();
+      if (version >= VERSION_META) {
+        numChunks = metaIn.readVLong();
+        numDirtyChunks = metaIn.readVLong();
+      } else {
+        fieldsStream.seek(maxPointer);
+        numChunks = fieldsStream.readVLong();
+        numDirtyChunks = fieldsStream.readVLong();
+      }
       if (numDirtyChunks > numChunks) {
         throw new CorruptIndexException("invalid chunk counts: dirty=" + numDirtyChunks + ", total=" + numChunks, fieldsStream);
       }
 
+      if (metaIn != null) {
+        CodecUtil.checkFooter(metaIn, null);
+        metaIn.close();
+      }
+
       success = true;
+    } catch (Throwable t) {
+      if (metaIn != null) {
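+        // checkFooter with a prior exception validates the checksum and always
+        // rethrows t (possibly with corruption hints attached), so the
+        // AssertionError below is unreachable.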
+        CodecUtil.checkFooter(metaIn, t);
+        throw new AssertionError("unreachable");
+      } else {
+        throw t;
+      }
     } finally {
       if (!success) {
-        IOUtils.closeWhileHandlingException(this);
+        IOUtils.closeWhileHandlingException(this, metaIn);
       }
     }
   }
diff --git a/lucene/core/src/java/org/apache/lucene/codecs/compressing/CompressingStoredFieldsWriter.java b/lucene/core/src/java/org/apache/lucene/codecs/compressing/CompressingStoredFieldsWriter.java
index 421bda3..c0ec878 100644
--- a/lucene/core/src/java/org/apache/lucene/codecs/compressing/CompressingStoredFieldsWriter.java
+++ b/lucene/core/src/java/org/apache/lucene/codecs/compressing/CompressingStoredFieldsWriter.java
@@ -57,7 +57,9 @@
   /** Extension of stored fields file */
   public static final String FIELDS_EXTENSION = "fdt";
   /** Extension of stored fields index */
-  public static final String INDEX_EXTENSION_PREFIX = "fd";
+  public static final String INDEX_EXTENSION = "fdx";
+  /** Extension of stored fields meta */
+  public static final String META_EXTENSION = "fdm";
   /** Codec name for the index. */
   public static final String INDEX_CODEC_NAME = "Lucene85FieldsIndex";
 
@@ -73,11 +75,14 @@
 
   static final int VERSION_START = 1;
   static final int VERSION_OFFHEAP_INDEX = 2;
-  static final int VERSION_CURRENT = VERSION_OFFHEAP_INDEX;
+  /** Version where all metadata was moved to the meta file. */
+  static final int VERSION_META = 3;
+  static final int VERSION_CURRENT = VERSION_META;
+  static final int META_VERSION_START = 0;
 
   private final String segment;
   private FieldsIndexWriter indexWriter;
-  private IndexOutput fieldsStream;
+  private IndexOutput metaStream, fieldsStream;
 
   private Compressor compressor;
   private final CompressionMode compressionMode;
@@ -110,19 +115,23 @@
 
     boolean success = false;
     try {
+      metaStream = directory.createOutput(IndexFileNames.segmentFileName(segment, segmentSuffix, META_EXTENSION), context);
+      CodecUtil.writeIndexHeader(metaStream, INDEX_CODEC_NAME + "Meta", VERSION_CURRENT, si.getId(), segmentSuffix);
+      assert CodecUtil.indexHeaderLength(INDEX_CODEC_NAME + "Meta", segmentSuffix) == metaStream.getFilePointer();
+
       fieldsStream = directory.createOutput(IndexFileNames.segmentFileName(segment, segmentSuffix, FIELDS_EXTENSION), context);
       CodecUtil.writeIndexHeader(fieldsStream, formatName, VERSION_CURRENT, si.getId(), segmentSuffix);
       assert CodecUtil.indexHeaderLength(formatName, segmentSuffix) == fieldsStream.getFilePointer();
 
-      indexWriter = new FieldsIndexWriter(directory, segment, segmentSuffix, INDEX_EXTENSION_PREFIX, INDEX_CODEC_NAME, si.getId(), blockShift, context);
+      indexWriter = new FieldsIndexWriter(directory, segment, segmentSuffix, INDEX_EXTENSION, INDEX_CODEC_NAME, si.getId(), blockShift, context);
 
-      fieldsStream.writeVInt(chunkSize);
-      fieldsStream.writeVInt(PackedInts.VERSION_CURRENT);
+      metaStream.writeVInt(chunkSize);
+      metaStream.writeVInt(PackedInts.VERSION_CURRENT);
 
       success = true;
     } finally {
       if (!success) {
-        IOUtils.closeWhileHandlingException(fieldsStream, indexWriter);
+        IOUtils.closeWhileHandlingException(metaStream, fieldsStream, indexWriter);
       }
     }
   }
@@ -130,8 +139,9 @@
   @Override
   public void close() throws IOException {
     try {
-      IOUtils.close(fieldsStream, indexWriter, compressor);
+      IOUtils.close(metaStream, fieldsStream, indexWriter, compressor);
     } finally {
+      metaStream = null;
       fieldsStream = null;
       indexWriter = null;
       compressor = null;
@@ -466,9 +476,10 @@
     if (docBase != numDocs) {
       throw new RuntimeException("Wrote " + docBase + " docs, finish called with numDocs=" + numDocs);
     }
-    indexWriter.finish(numDocs, fieldsStream.getFilePointer());
-    fieldsStream.writeVLong(numChunks);
-    fieldsStream.writeVLong(numDirtyChunks);
+    indexWriter.finish(numDocs, fieldsStream.getFilePointer(), metaStream);
+    metaStream.writeVLong(numChunks);
+    metaStream.writeVLong(numDirtyChunks);
+    CodecUtil.writeFooter(metaStream);
     CodecUtil.writeFooter(fieldsStream);
     assert bufferedDocs.size() == 0;
   }
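
On the write side, both hunks follow the framing every Lucene index file uses: codec header when the output is created, payload, checksum footer in finish() or close(). Sketched on its own, with an invented file name and codec string:

    // Hedged sketch of the header/payload/footer framing; "example.fdm" and
    // "ExampleMeta" are illustrative, not names from this patch.
    try (IndexOutput out = dir.createOutput("example.fdm", IOContext.DEFAULT)) {
      CodecUtil.writeIndexHeader(out, "ExampleMeta", 0, segmentId, "");
      out.writeVInt(42);            // payload
      CodecUtil.writeFooter(out);   // checksum + footer magic, checked on read
    }
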
diff --git a/lucene/core/src/java/org/apache/lucene/codecs/compressing/CompressingTermVectorsReader.java b/lucene/core/src/java/org/apache/lucene/codecs/compressing/CompressingTermVectorsReader.java
index 6b66e24..d3bdc06 100644
--- a/lucene/core/src/java/org/apache/lucene/codecs/compressing/CompressingTermVectorsReader.java
+++ b/lucene/core/src/java/org/apache/lucene/codecs/compressing/CompressingTermVectorsReader.java
@@ -55,14 +55,17 @@
 
 import static org.apache.lucene.codecs.compressing.CompressingTermVectorsWriter.VERSION_OFFHEAP_INDEX;
 import static org.apache.lucene.codecs.compressing.CompressingTermVectorsWriter.FLAGS_BITS;
+import static org.apache.lucene.codecs.compressing.CompressingTermVectorsWriter.META_VERSION_START;
 import static org.apache.lucene.codecs.compressing.CompressingTermVectorsWriter.OFFSETS;
 import static org.apache.lucene.codecs.compressing.CompressingTermVectorsWriter.PACKED_BLOCK_SIZE;
 import static org.apache.lucene.codecs.compressing.CompressingTermVectorsWriter.PAYLOADS;
 import static org.apache.lucene.codecs.compressing.CompressingTermVectorsWriter.POSITIONS;
 import static org.apache.lucene.codecs.compressing.CompressingTermVectorsWriter.VECTORS_EXTENSION;
 import static org.apache.lucene.codecs.compressing.CompressingTermVectorsWriter.VECTORS_INDEX_CODEC_NAME;
-import static org.apache.lucene.codecs.compressing.CompressingTermVectorsWriter.VECTORS_INDEX_EXTENSION_PREFIX;
+import static org.apache.lucene.codecs.compressing.CompressingTermVectorsWriter.VECTORS_INDEX_EXTENSION;
+import static org.apache.lucene.codecs.compressing.CompressingTermVectorsWriter.VECTORS_META_EXTENSION;
 import static org.apache.lucene.codecs.compressing.CompressingTermVectorsWriter.VERSION_CURRENT;
+import static org.apache.lucene.codecs.compressing.CompressingTermVectorsWriter.VERSION_META;
 import static org.apache.lucene.codecs.compressing.CompressingTermVectorsWriter.VERSION_START;
 
 /**
@@ -113,13 +116,34 @@
     fieldInfos = fn;
     numDocs = si.maxDoc();
 
+    ChecksumIndexInput metaIn = null;
     try {
-      // Open the data file and read metadata
+      // Open the data file
       final String vectorsStreamFN = IndexFileNames.segmentFileName(segment, segmentSuffix, VECTORS_EXTENSION);
       vectorsStream = d.openInput(vectorsStreamFN, context);
       version = CodecUtil.checkIndexHeader(vectorsStream, formatName, VERSION_START, VERSION_CURRENT, si.getId(), segmentSuffix);
       assert CodecUtil.indexHeaderLength(formatName, segmentSuffix) == vectorsStream.getFilePointer();
 
+      if (version >= VERSION_OFFHEAP_INDEX) {
+        final String metaStreamFN = IndexFileNames.segmentFileName(segment, segmentSuffix, VECTORS_META_EXTENSION);
+        metaIn = d.openChecksumInput(metaStreamFN, IOContext.READONCE);
+        CodecUtil.checkIndexHeader(metaIn, VECTORS_INDEX_CODEC_NAME + "Meta", META_VERSION_START, version, si.getId(), segmentSuffix);
+      }
+
+      if (version >= VERSION_META) {
+        packedIntsVersion = metaIn.readVInt();
+        chunkSize = metaIn.readVInt();
+      } else {
+        packedIntsVersion = vectorsStream.readVInt();
+        chunkSize = vectorsStream.readVInt();
+      }
+
+      // NOTE: data file is too costly to verify checksum against all the bytes on open,
+      // but for now we at least verify proper structure of the checksum footer, which looks
+      // for FOOTER_MAGIC + algorithmID. This is cheap and can detect some forms of corruption
+      // such as file truncation.
+      CodecUtil.retrieveChecksum(vectorsStream);
+
       FieldsIndex indexReader = null;
       long maxPointer = -1;
 
@@ -145,7 +169,7 @@
           }
         }
       } else {
-        FieldsIndexReader fieldsIndexReader = new FieldsIndexReader(d, si.name, segmentSuffix, VECTORS_INDEX_EXTENSION_PREFIX, VECTORS_INDEX_CODEC_NAME, si.getId());
+        FieldsIndexReader fieldsIndexReader = new FieldsIndexReader(d, si.name, segmentSuffix, VECTORS_INDEX_EXTENSION, VECTORS_INDEX_CODEC_NAME, si.getId(), metaIn);
         indexReader = fieldsIndexReader;
         maxPointer = fieldsIndexReader.getMaxPointer();
       }
@@ -153,30 +177,37 @@
       this.indexReader = indexReader;
       this.maxPointer = maxPointer;
 
-      long pos = vectorsStream.getFilePointer();
-      vectorsStream.seek(maxPointer);
-      numChunks = vectorsStream.readVLong();
-      numDirtyChunks = vectorsStream.readVLong();
+      if (version >= VERSION_META) {
+        numChunks = metaIn.readVLong();
+        numDirtyChunks = metaIn.readVLong();
+      } else {
+        vectorsStream.seek(maxPointer);
+        numChunks = vectorsStream.readVLong();
+        numDirtyChunks = vectorsStream.readVLong();
+      }
       if (numDirtyChunks > numChunks) {
         throw new CorruptIndexException("invalid chunk counts: dirty=" + numDirtyChunks + ", total=" + numChunks, vectorsStream);
       }
 
-      // NOTE: data file is too costly to verify checksum against all the bytes on open,
-      // but for now we at least verify proper structure of the checksum footer: which looks
-      // for FOOTER_MAGIC + algorithmID. This is cheap and can detect some forms of corruption
-      // such as file truncation.
-      CodecUtil.retrieveChecksum(vectorsStream);
-      vectorsStream.seek(pos);
-
-      packedIntsVersion = vectorsStream.readVInt();
-      chunkSize = vectorsStream.readVInt();
       decompressor = compressionMode.newDecompressor();
       this.reader = new BlockPackedReaderIterator(vectorsStream, packedIntsVersion, PACKED_BLOCK_SIZE, 0);
 
+      if (metaIn != null) {
+        CodecUtil.checkFooter(metaIn, null);
+        metaIn.close();
+      }
+
       success = true;
+    } catch (Throwable t) {
+      if (metaIn != null) {
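+        // checkFooter with a non-null throwable never returns normally: it
+        // rethrows t, attaching any detected meta-file corruption as suppressed.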
+        CodecUtil.checkFooter(metaIn, t);
+        throw new AssertionError("unreachable");
+      } else {
+        throw t;
+      }
     } finally {
       if (!success) {
-        IOUtils.closeWhileHandlingException(this);
+        IOUtils.closeWhileHandlingException(this, metaIn);
       }
     }
   }
diff --git a/lucene/core/src/java/org/apache/lucene/codecs/compressing/CompressingTermVectorsWriter.java b/lucene/core/src/java/org/apache/lucene/codecs/compressing/CompressingTermVectorsWriter.java
index 34f9edb..73f4ecb 100644
--- a/lucene/core/src/java/org/apache/lucene/codecs/compressing/CompressingTermVectorsWriter.java
+++ b/lucene/core/src/java/org/apache/lucene/codecs/compressing/CompressingTermVectorsWriter.java
@@ -59,12 +59,16 @@
   static final int MAX_DOCUMENTS_PER_CHUNK = 128;
 
   static final String VECTORS_EXTENSION = "tvd";
-  static final String VECTORS_INDEX_EXTENSION_PREFIX = "tv";
+  static final String VECTORS_INDEX_EXTENSION = "tvx";
+  static final String VECTORS_META_EXTENSION = "tvm";
   static final String VECTORS_INDEX_CODEC_NAME = "Lucene85TermVectorsIndex";
 
   static final int VERSION_START = 1;
   static final int VERSION_OFFHEAP_INDEX = 2;
-  static final int VERSION_CURRENT = VERSION_OFFHEAP_INDEX;
+  /** Version where all metadata was moved to the meta file. */
+  static final int VERSION_META = 3;
+  static final int VERSION_CURRENT = VERSION_META;
+  static final int META_VERSION_START = 0;
 
   static final int PACKED_BLOCK_SIZE = 64;
 
@@ -75,7 +79,7 @@
 
   private final String segment;
   private FieldsIndexWriter indexWriter;
-  private IndexOutput vectorsStream;
+  private IndexOutput metaStream, vectorsStream;
 
   private final CompressionMode compressionMode;
   private final Compressor compressor;
@@ -218,15 +222,19 @@
 
     boolean success = false;
     try {
+      metaStream = directory.createOutput(IndexFileNames.segmentFileName(segment, segmentSuffix, VECTORS_META_EXTENSION), context);
+      CodecUtil.writeIndexHeader(metaStream, VECTORS_INDEX_CODEC_NAME + "Meta", VERSION_CURRENT, si.getId(), segmentSuffix);
+      assert CodecUtil.indexHeaderLength(VECTORS_INDEX_CODEC_NAME + "Meta", segmentSuffix) == metaStream.getFilePointer();
+
       vectorsStream = directory.createOutput(IndexFileNames.segmentFileName(segment, segmentSuffix, VECTORS_EXTENSION),
                                                      context);
       CodecUtil.writeIndexHeader(vectorsStream, formatName, VERSION_CURRENT, si.getId(), segmentSuffix);
       assert CodecUtil.indexHeaderLength(formatName, segmentSuffix) == vectorsStream.getFilePointer();
 
-      indexWriter = new FieldsIndexWriter(directory, segment, segmentSuffix, VECTORS_INDEX_EXTENSION_PREFIX, VECTORS_INDEX_CODEC_NAME, si.getId(), blockShift, context);
+      indexWriter = new FieldsIndexWriter(directory, segment, segmentSuffix, VECTORS_INDEX_EXTENSION, VECTORS_INDEX_CODEC_NAME, si.getId(), blockShift, context);
 
-      vectorsStream.writeVInt(PackedInts.VERSION_CURRENT);
-      vectorsStream.writeVInt(chunkSize);
+      metaStream.writeVInt(PackedInts.VERSION_CURRENT);
+      metaStream.writeVInt(chunkSize);
       writer = new BlockPackedWriter(vectorsStream, PACKED_BLOCK_SIZE);
 
       positionsBuf = new int[1024];
@@ -237,7 +245,7 @@
       success = true;
     } finally {
       if (!success) {
-        IOUtils.closeWhileHandlingException(vectorsStream, indexWriter, indexWriter);
+        IOUtils.closeWhileHandlingException(metaStream, vectorsStream, indexWriter);
       }
     }
   }
@@ -245,8 +253,9 @@
   @Override
   public void close() throws IOException {
     try {
-      IOUtils.close(vectorsStream, indexWriter);
+      IOUtils.close(metaStream, vectorsStream, indexWriter);
     } finally {
+      metaStream = null;
       vectorsStream = null;
       indexWriter = null;
     }
@@ -644,9 +653,10 @@
     if (numDocs != this.numDocs) {
       throw new RuntimeException("Wrote " + this.numDocs + " docs, finish called with numDocs=" + numDocs);
     }
-    indexWriter.finish(numDocs, vectorsStream.getFilePointer());
-    vectorsStream.writeVLong(numChunks);
-    vectorsStream.writeVLong(numDirtyChunks);
+    indexWriter.finish(numDocs, vectorsStream.getFilePointer(), metaStream);
+    metaStream.writeVLong(numChunks);
+    metaStream.writeVLong(numDirtyChunks);
+    CodecUtil.writeFooter(metaStream);
     CodecUtil.writeFooter(vectorsStream);
   }
 
diff --git a/lucene/core/src/java/org/apache/lucene/codecs/compressing/FieldsIndexReader.java b/lucene/core/src/java/org/apache/lucene/codecs/compressing/FieldsIndexReader.java
index 082fe09..a5b9817 100644
--- a/lucene/core/src/java/org/apache/lucene/codecs/compressing/FieldsIndexReader.java
+++ b/lucene/core/src/java/org/apache/lucene/codecs/compressing/FieldsIndexReader.java
@@ -16,8 +16,6 @@
  */
 package org.apache.lucene.codecs.compressing;
 
-import static org.apache.lucene.codecs.compressing.FieldsIndexWriter.FIELDS_INDEX_EXTENSION_SUFFIX;
-import static org.apache.lucene.codecs.compressing.FieldsIndexWriter.FIELDS_META_EXTENSION_SUFFIX;
 import static org.apache.lucene.codecs.compressing.FieldsIndexWriter.VERSION_CURRENT;
 import static org.apache.lucene.codecs.compressing.FieldsIndexWriter.VERSION_START;
 
@@ -27,7 +25,6 @@
 
 import org.apache.lucene.codecs.CodecUtil;
 import org.apache.lucene.index.IndexFileNames;
-import org.apache.lucene.store.ChecksumIndexInput;
 import org.apache.lucene.store.Directory;
 import org.apache.lucene.store.IOContext;
 import org.apache.lucene.store.IndexInput;
@@ -49,26 +46,18 @@
   private final DirectMonotonicReader docs, startPointers;
   private final long maxPointer;
 
-  FieldsIndexReader(Directory dir, String name, String suffix, String extensionPrefix, String codecName, byte[] id) throws IOException {
-    try (ChecksumIndexInput metaIn = dir.openChecksumInput(IndexFileNames.segmentFileName(name, suffix, extensionPrefix + FIELDS_META_EXTENSION_SUFFIX), IOContext.READONCE)) {
-      Throwable priorE = null;
-      try {
-        CodecUtil.checkIndexHeader(metaIn, codecName + "Meta", VERSION_START, VERSION_CURRENT, id, suffix);
-        maxDoc = metaIn.readInt();
-        blockShift = metaIn.readInt();
-        numChunks = metaIn.readInt();
-        docsStartPointer = metaIn.readLong();
-        docsMeta = DirectMonotonicReader.loadMeta(metaIn, numChunks, blockShift);
-        docsEndPointer = startPointersStartPointer = metaIn.readLong();
-        startPointersMeta = DirectMonotonicReader.loadMeta(metaIn, numChunks, blockShift);
-        startPointersEndPointer = metaIn.readLong();
-        maxPointer = metaIn.readLong();
-      } finally {
-        CodecUtil.checkFooter(metaIn, priorE);
-      }
-    }
+  FieldsIndexReader(Directory dir, String name, String suffix, String extension, String codecName, byte[] id, IndexInput metaIn) throws IOException {
+    maxDoc = metaIn.readInt();
+    blockShift = metaIn.readInt();
+    numChunks = metaIn.readInt();
+    docsStartPointer = metaIn.readLong();
+    docsMeta = DirectMonotonicReader.loadMeta(metaIn, numChunks, blockShift);
+    docsEndPointer = startPointersStartPointer = metaIn.readLong();
+    startPointersMeta = DirectMonotonicReader.loadMeta(metaIn, numChunks, blockShift);
+    startPointersEndPointer = metaIn.readLong();
+    maxPointer = metaIn.readLong();
 
-    indexInput = dir.openInput(IndexFileNames.segmentFileName(name, suffix, extensionPrefix + FIELDS_INDEX_EXTENSION_SUFFIX), IOContext.READ);
+    indexInput = dir.openInput(IndexFileNames.segmentFileName(name, suffix, extension), IOContext.READ);
     boolean success = false;
     try {
       CodecUtil.checkIndexHeader(indexInput, codecName + "Idx", VERSION_START, VERSION_CURRENT, id, suffix);
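
The two loadMeta calls above consume blobs produced by DirectMonotonicWriter on the write side. A self-contained round trip of that API pair, under assumed outputs/inputs and an arbitrary blockShift, looks like this:

    // Hedged sketch: metaOut/dataOut and metaIn/dataIn are assumed to be
    // matching IndexOutput/IndexInput pairs; blockShift=10 is arbitrary.
    long numValues = 128;
    int blockShift = 10;
    DirectMonotonicWriter w = DirectMonotonicWriter.getInstance(metaOut, dataOut, numValues, blockShift);
    for (long i = 0; i < numValues; i++) {
      w.add(3 * i);                 // values must be non-decreasing
    }
    w.finish();
    // ... close the outputs, then reopen them as metaIn/dataIn ...
    DirectMonotonicReader.Meta meta = DirectMonotonicReader.loadMeta(metaIn, numValues, blockShift);
    LongValues values = DirectMonotonicReader.getInstance(
        meta, dataIn.randomAccessSlice(0, dataIn.length()));
    long v = values.get(64);        // == 192
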
diff --git a/lucene/core/src/java/org/apache/lucene/codecs/compressing/FieldsIndexWriter.java b/lucene/core/src/java/org/apache/lucene/codecs/compressing/FieldsIndexWriter.java
index 9feda32..48ca1ff 100644
--- a/lucene/core/src/java/org/apache/lucene/codecs/compressing/FieldsIndexWriter.java
+++ b/lucene/core/src/java/org/apache/lucene/codecs/compressing/FieldsIndexWriter.java
@@ -46,12 +46,6 @@
  */
 public final class FieldsIndexWriter implements Closeable {
 
-  /** Extension of stored fields index file. */
-  public static final String FIELDS_INDEX_EXTENSION_SUFFIX = "x";
-
-  /** Extension of stored fields meta file. */
-  public static final String FIELDS_META_EXTENSION_SUFFIX = "m";
-
   static final int VERSION_START = 0;
   static final int VERSION_CURRENT = 0;
 
@@ -102,7 +96,7 @@
     totalChunks++;
   }
 
-  void finish(int numDocs, long maxPointer) throws IOException {
+  void finish(int numDocs, long maxPointer, IndexOutput metaOut) throws IOException {
     if (numDocs != totalDocs) {
       throw new IllegalStateException("Expected " + numDocs + " docs, but got " + totalDocs);
     }
@@ -110,10 +104,7 @@
     CodecUtil.writeFooter(filePointersOut);
     IOUtils.close(docsOut, filePointersOut);
 
-    try (IndexOutput metaOut = dir.createOutput(IndexFileNames.segmentFileName(name, suffix, extension + FIELDS_META_EXTENSION_SUFFIX), ioContext);
-        IndexOutput dataOut = dir.createOutput(IndexFileNames.segmentFileName(name, suffix, extension + FIELDS_INDEX_EXTENSION_SUFFIX), ioContext)) {
-
-      CodecUtil.writeIndexHeader(metaOut, codecName + "Meta", VERSION_CURRENT, id, suffix);
+    try (IndexOutput dataOut = dir.createOutput(IndexFileNames.segmentFileName(name, suffix, extension), ioContext)) {
       CodecUtil.writeIndexHeader(dataOut, codecName + "Idx", VERSION_CURRENT, id, suffix);
 
       metaOut.writeInt(numDocs);
@@ -173,7 +164,6 @@
       metaOut.writeLong(dataOut.getFilePointer());
       metaOut.writeLong(maxPointer);
 
-      CodecUtil.writeFooter(metaOut);
       CodecUtil.writeFooter(dataOut);
     }
   }
diff --git a/lucene/core/src/java/org/apache/lucene/codecs/lucene50/Lucene50StoredFieldsFormat.java b/lucene/core/src/java/org/apache/lucene/codecs/lucene50/Lucene50StoredFieldsFormat.java
deleted file mode 100644
index 035fbd9..0000000
--- a/lucene/core/src/java/org/apache/lucene/codecs/lucene50/Lucene50StoredFieldsFormat.java
+++ /dev/null
@@ -1,157 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.lucene.codecs.lucene50;
-
-
-import java.io.IOException;
-import java.util.Objects;
-
-import org.apache.lucene.codecs.StoredFieldsFormat;
-import org.apache.lucene.codecs.StoredFieldsReader;
-import org.apache.lucene.codecs.StoredFieldsWriter;
-import org.apache.lucene.codecs.compressing.CompressingStoredFieldsFormat;
-import org.apache.lucene.codecs.compressing.CompressionMode;
-import org.apache.lucene.index.FieldInfos;
-import org.apache.lucene.index.SegmentInfo;
-import org.apache.lucene.index.StoredFieldVisitor;
-import org.apache.lucene.store.Directory;
-import org.apache.lucene.store.IOContext;
-import org.apache.lucene.util.packed.DirectMonotonicWriter;
-
-/**
- * Lucene 5.0 stored fields format.
- *
- * <p><b>Principle</b>
- * <p>This {@link StoredFieldsFormat} compresses blocks of documents in
- * order to improve the compression ratio compared to document-level
- * compression. It uses the <a href="http://code.google.com/p/lz4/">LZ4</a>
- * compression algorithm by default in 16KB blocks, which is fast to compress 
- * and very fast to decompress data. Although the default compression method 
- * that is used ({@link Mode#BEST_SPEED BEST_SPEED}) focuses more on speed than on 
- * compression ratio, it should provide interesting compression ratios
- * for redundant inputs (such as log files, HTML or plain text). For higher
- * compression, you can choose ({@link Mode#BEST_COMPRESSION BEST_COMPRESSION}), which uses 
- * the <a href="http://en.wikipedia.org/wiki/DEFLATE">DEFLATE</a> algorithm with 60KB blocks 
- * for a better ratio at the expense of slower performance. 
- * These two options can be configured like this:
- * <pre class="prettyprint">
- *   // the default: for high performance
- *   indexWriterConfig.setCodec(new Lucene54Codec(Mode.BEST_SPEED));
- *   // instead for higher performance (but slower):
- *   // indexWriterConfig.setCodec(new Lucene54Codec(Mode.BEST_COMPRESSION));
- * </pre>
- * <p><b>File formats</b>
- * <p>Stored fields are represented by three files:
- * <ol>
- * <li><a id="field_data"></a>
- * <p>A fields data file (extension <code>.fdt</code>). This file stores a compact
- * representation of documents in compressed blocks of 16KB or more. When
- * writing a segment, documents are appended to an in-memory <code>byte[]</code>
- * buffer. When its size reaches 16KB or more, some metadata about the documents
- * is flushed to disk, immediately followed by a compressed representation of
- * the buffer using the
- * <a href="https://github.com/lz4/lz4">LZ4</a>
- * <a href="http://fastcompression.blogspot.fr/2011/05/lz4-explained.html">compression format</a>.</p>
- * <p>Notes
- * <ul>
- * <li>If documents are larger than 16KB then chunks will likely contain only
- * one document. However, documents can never spread across several chunks (all
- * fields of a single document are in the same chunk).</li>
- * <li>When at least one document in a chunk is large enough so that the chunk
- * is larger than 32KB, the chunk will actually be compressed in several LZ4
- * blocks of 16KB. This allows {@link StoredFieldVisitor}s which are only
- * interested in the first fields of a document to not have to decompress 10MB
- * of data if the document is 10MB, but only 16KB.</li>
- * <li>Given that the original lengths are written in the metadata of the chunk,
- * the decompressor can leverage this information to stop decoding as soon as
- * enough data has been decompressed.</li>
- * <li>In case documents are incompressible, the overhead of the compression format
- * is less than 0.5%.</li>
- * </ul>
- * </li>
- * <li><a id="field_index"></a>
- * <p>A fields index file (extension <code>.fdx</code>). This file stores two
- * {@link DirectMonotonicWriter monotonic arrays}, one for the first doc IDs of
- * each block of compressed documents, and another one for the corresponding
- * offsets on disk. At search time, the array containing doc IDs is
- * binary-searched in order to find the block that contains the expected doc ID,
- * and the associated offset on disk is retrieved from the second array.</p>
- * <li><a id="field_meta"></a>
- * <p>A fields meta file (extension <code>.fdm</code>). This file stores metadata
- * about the monotonic arrays stored in the index file.</p>
- * </li>
- * </ol>
- * <p><b>Known limitations</b>
- * <p>This {@link StoredFieldsFormat} does not support individual documents
- * larger than (<code>2<sup>31</sup> - 2<sup>14</sup></code>) bytes.
- * @lucene.experimental
- */
-public final class Lucene50StoredFieldsFormat extends StoredFieldsFormat {
-  
-  /** Configuration option for stored fields. */
-  public static enum Mode {
-    /** Trade compression ratio for retrieval speed. */
-    BEST_SPEED,
-    /** Trade retrieval speed for compression ratio. */
-    BEST_COMPRESSION
-  }
-  
-  /** Attribute key for compression mode. */
-  public static final String MODE_KEY = Lucene50StoredFieldsFormat.class.getSimpleName() + ".mode";
-  
-  final Mode mode;
-  
-  /** Stored fields format with default options */
-  public Lucene50StoredFieldsFormat() {
-    this(Mode.BEST_SPEED);
-  }
-  
-  /** Stored fields format with specified mode */
-  public Lucene50StoredFieldsFormat(Mode mode) {
-    this.mode = Objects.requireNonNull(mode);
-  }
-
-  @Override
-  public StoredFieldsReader fieldsReader(Directory directory, SegmentInfo si, FieldInfos fn, IOContext context) throws IOException {
-    String value = si.getAttribute(MODE_KEY);
-    if (value == null) {
-      throw new IllegalStateException("missing value for " + MODE_KEY + " for segment: " + si.name);
-    }
-    Mode mode = Mode.valueOf(value);
-    return impl(mode).fieldsReader(directory, si, fn, context);
-  }
-
-  @Override
-  public StoredFieldsWriter fieldsWriter(Directory directory, SegmentInfo si, IOContext context) throws IOException {
-    String previous = si.putAttribute(MODE_KEY, mode.name());
-    if (previous != null && previous.equals(mode.name()) == false) {
-      throw new IllegalStateException("found existing value for " + MODE_KEY + " for segment: " + si.name +
-                                      "old=" + previous + ", new=" + mode.name());
-    }
-    return impl(mode).fieldsWriter(directory, si, context);
-  }
-  
-  StoredFieldsFormat impl(Mode mode) {
-    switch (mode) {
-      case BEST_SPEED: 
-        return new CompressingStoredFieldsFormat("Lucene50StoredFieldsFastData", CompressionMode.FAST, 1 << 14, 128, 10);
-      case BEST_COMPRESSION: 
-        return new CompressingStoredFieldsFormat("Lucene50StoredFieldsHighData", CompressionMode.HIGH_COMPRESSION, 61440, 512, 10);
-      default: throw new AssertionError();
-    }
-  }
-}
diff --git a/lucene/core/src/java/org/apache/lucene/codecs/lucene50/Lucene50TermVectorsFormat.java b/lucene/core/src/java/org/apache/lucene/codecs/lucene50/Lucene50TermVectorsFormat.java
index 00412d5..9b65fb4 100644
--- a/lucene/core/src/java/org/apache/lucene/codecs/lucene50/Lucene50TermVectorsFormat.java
+++ b/lucene/core/src/java/org/apache/lucene/codecs/lucene50/Lucene50TermVectorsFormat.java
@@ -20,6 +20,7 @@
 import org.apache.lucene.codecs.CodecUtil;
 import org.apache.lucene.codecs.TermVectorsFormat;
 import org.apache.lucene.codecs.compressing.FieldsIndexWriter;
+import org.apache.lucene.codecs.lucene87.Lucene87StoredFieldsFormat;
 import org.apache.lucene.codecs.compressing.CompressingTermVectorsFormat;
 import org.apache.lucene.codecs.compressing.CompressionMode;
 import org.apache.lucene.store.DataOutput;
@@ -29,7 +30,7 @@
 /**
  * Lucene 5.0 {@link TermVectorsFormat term vectors format}.
  * <p>
- * Very similarly to {@link Lucene50StoredFieldsFormat}, this format is based
+ * Much like {@link Lucene87StoredFieldsFormat}, this format is based
  * on compressed chunks of data, with document-level granularity so that a
  * document can never span across distinct chunks. Moreover, data is made as
  * compact as possible:<ul>
diff --git a/lucene/core/src/java/org/apache/lucene/codecs/lucene86/Lucene86Codec.java b/lucene/core/src/java/org/apache/lucene/codecs/lucene86/Lucene86Codec.java
deleted file mode 100644
index 3f69874..0000000
--- a/lucene/core/src/java/org/apache/lucene/codecs/lucene86/Lucene86Codec.java
+++ /dev/null
@@ -1,178 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package org.apache.lucene.codecs.lucene86;
-
-import java.util.Objects;
-
-import org.apache.lucene.codecs.Codec;
-import org.apache.lucene.codecs.CompoundFormat;
-import org.apache.lucene.codecs.DocValuesFormat;
-import org.apache.lucene.codecs.FieldInfosFormat;
-import org.apache.lucene.codecs.FilterCodec;
-import org.apache.lucene.codecs.LiveDocsFormat;
-import org.apache.lucene.codecs.NormsFormat;
-import org.apache.lucene.codecs.PointsFormat;
-import org.apache.lucene.codecs.PostingsFormat;
-import org.apache.lucene.codecs.SegmentInfoFormat;
-import org.apache.lucene.codecs.StoredFieldsFormat;
-import org.apache.lucene.codecs.TermVectorsFormat;
-import org.apache.lucene.codecs.lucene50.Lucene50CompoundFormat;
-import org.apache.lucene.codecs.lucene50.Lucene50LiveDocsFormat;
-import org.apache.lucene.codecs.lucene50.Lucene50StoredFieldsFormat;
-import org.apache.lucene.codecs.lucene50.Lucene50TermVectorsFormat;
-import org.apache.lucene.codecs.lucene60.Lucene60FieldInfosFormat;
-import org.apache.lucene.codecs.lucene80.Lucene80NormsFormat;
-import org.apache.lucene.codecs.lucene84.Lucene84PostingsFormat;
-import org.apache.lucene.codecs.perfield.PerFieldDocValuesFormat;
-import org.apache.lucene.codecs.perfield.PerFieldPostingsFormat;
-
-/**
- * Implements the Lucene 8.6 index format, with configurable per-field postings
- * and docvalues formats.
- * <p>
- * If you want to reuse functionality of this codec in another codec, extend
- * {@link FilterCodec}.
- *
- * @see org.apache.lucene.codecs.lucene86 package documentation for file format details.
- *
- * @lucene.experimental
- */
-public class Lucene86Codec extends Codec {
-  private final TermVectorsFormat vectorsFormat = new Lucene50TermVectorsFormat();
-  private final FieldInfosFormat fieldInfosFormat = new Lucene60FieldInfosFormat();
-  private final SegmentInfoFormat segmentInfosFormat = new Lucene86SegmentInfoFormat();
-  private final LiveDocsFormat liveDocsFormat = new Lucene50LiveDocsFormat();
-  private final CompoundFormat compoundFormat = new Lucene50CompoundFormat();
-  private final PointsFormat pointsFormat = new Lucene86PointsFormat();
-  private final PostingsFormat defaultFormat;
-
-  private final PostingsFormat postingsFormat = new PerFieldPostingsFormat() {
-    @Override
-    public PostingsFormat getPostingsFormatForField(String field) {
-      return Lucene86Codec.this.getPostingsFormatForField(field);
-    }
-  };
-
-  private final DocValuesFormat docValuesFormat = new PerFieldDocValuesFormat() {
-    @Override
-    public DocValuesFormat getDocValuesFormatForField(String field) {
-      return Lucene86Codec.this.getDocValuesFormatForField(field);
-    }
-  };
-
-  private final StoredFieldsFormat storedFieldsFormat;
-
-  /**
-   * Instantiates a new codec.
-   */
-  public Lucene86Codec() {
-    this(Lucene50StoredFieldsFormat.Mode.BEST_SPEED);
-  }
-
-  /**
-   * Instantiates a new codec, specifying the stored fields compression
-   * mode to use.
-   * @param mode stored fields compression mode to use for newly
-   *             flushed/merged segments.
-   */
-  public Lucene86Codec(Lucene50StoredFieldsFormat.Mode mode) {
-    super("Lucene86");
-    this.storedFieldsFormat = new Lucene50StoredFieldsFormat(Objects.requireNonNull(mode));
-    this.defaultFormat = new Lucene84PostingsFormat();
-  }
-
-  @Override
-  public final StoredFieldsFormat storedFieldsFormat() {
-    return storedFieldsFormat;
-  }
-
-  @Override
-  public final TermVectorsFormat termVectorsFormat() {
-    return vectorsFormat;
-  }
-
-  @Override
-  public final PostingsFormat postingsFormat() {
-    return postingsFormat;
-  }
-
-  @Override
-  public final FieldInfosFormat fieldInfosFormat() {
-    return fieldInfosFormat;
-  }
-
-  @Override
-  public final SegmentInfoFormat segmentInfoFormat() {
-    return segmentInfosFormat;
-  }
-
-  @Override
-  public final LiveDocsFormat liveDocsFormat() {
-    return liveDocsFormat;
-  }
-
-  @Override
-  public final CompoundFormat compoundFormat() {
-    return compoundFormat;
-  }
-
-  @Override
-  public final PointsFormat pointsFormat() {
-    return pointsFormat;
-  }
-
-  /** Returns the postings format that should be used for writing
-   *  new segments of <code>field</code>.
-   *
-   *  The default implementation always returns "Lucene84".
-   *  <p>
-   *  <b>WARNING:</b> if you subclass, you are responsible for index
-   *  backwards compatibility: future version of Lucene are only
-   *  guaranteed to be able to read the default implementation.
-   */
-  public PostingsFormat getPostingsFormatForField(String field) {
-    return defaultFormat;
-  }
-
-  /** Returns the docvalues format that should be used for writing
-   *  new segments of <code>field</code>.
-   *
-   *  The default implementation always returns "Lucene80".
-   *  <p>
-   *  <b>WARNING:</b> if you subclass, you are responsible for index
-   *  backwards compatibility: future version of Lucene are only
-   *  guaranteed to be able to read the default implementation.
-   */
-  public DocValuesFormat getDocValuesFormatForField(String field) {
-    return defaultDVFormat;
-  }
-
-  @Override
-  public final DocValuesFormat docValuesFormat() {
-    return docValuesFormat;
-  }
-
-  private final DocValuesFormat defaultDVFormat = DocValuesFormat.forName("Lucene80");
-
-  private final NormsFormat normsFormat = new Lucene80NormsFormat();
-
-  @Override
-  public final NormsFormat normsFormat() {
-    return normsFormat;
-  }
-}
diff --git a/lucene/core/src/java/org/apache/lucene/codecs/lucene86/package-info.java b/lucene/core/src/java/org/apache/lucene/codecs/lucene86/package-info.java
index 19be7eb..13f35a1 100644
--- a/lucene/core/src/java/org/apache/lucene/codecs/lucene86/package-info.java
+++ b/lucene/core/src/java/org/apache/lucene/codecs/lucene86/package-info.java
@@ -137,7 +137,7 @@
  *    This contains the set of field names used in the index.
  * </li>
  * <li>
- * {@link org.apache.lucene.codecs.lucene50.Lucene50StoredFieldsFormat Stored Field values}.
+ * Stored Field values.
  * This contains, for each document, a list of attribute-value pairs, where the attributes
  * are field names. These are used to store auxiliary information about the document, such as
  * its title, url, or an identifier to access a database. The set of stored fields are what is
@@ -250,12 +250,12 @@
  * <td>Stores information about the fields</td>
  * </tr>
  * <tr>
- * <td>{@link org.apache.lucene.codecs.lucene50.Lucene50StoredFieldsFormat Field Index}</td>
+ * <td>Field Index</td>
  * <td>.fdx</td>
  * <td>Contains pointers to field data</td>
  * </tr>
  * <tr>
- * <td>{@link org.apache.lucene.codecs.lucene50.Lucene50StoredFieldsFormat Field Data}</td>
+ * <td>Field Data</td>
  * <td>.fdt</td>
  * <td>The stored fields for documents</td>
  * </tr>
diff --git a/lucene/core/src/java/org/apache/lucene/codecs/lucene87/Lucene87Codec.java b/lucene/core/src/java/org/apache/lucene/codecs/lucene87/Lucene87Codec.java
new file mode 100644
index 0000000..5ff4073
--- /dev/null
+++ b/lucene/core/src/java/org/apache/lucene/codecs/lucene87/Lucene87Codec.java
@@ -0,0 +1,179 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.lucene.codecs.lucene87;
+
+import java.util.Objects;
+
+import org.apache.lucene.codecs.Codec;
+import org.apache.lucene.codecs.CompoundFormat;
+import org.apache.lucene.codecs.DocValuesFormat;
+import org.apache.lucene.codecs.FieldInfosFormat;
+import org.apache.lucene.codecs.FilterCodec;
+import org.apache.lucene.codecs.LiveDocsFormat;
+import org.apache.lucene.codecs.NormsFormat;
+import org.apache.lucene.codecs.PointsFormat;
+import org.apache.lucene.codecs.PostingsFormat;
+import org.apache.lucene.codecs.SegmentInfoFormat;
+import org.apache.lucene.codecs.StoredFieldsFormat;
+import org.apache.lucene.codecs.TermVectorsFormat;
+import org.apache.lucene.codecs.lucene50.Lucene50CompoundFormat;
+import org.apache.lucene.codecs.lucene50.Lucene50LiveDocsFormat;
+import org.apache.lucene.codecs.lucene50.Lucene50TermVectorsFormat;
+import org.apache.lucene.codecs.lucene60.Lucene60FieldInfosFormat;
+import org.apache.lucene.codecs.lucene80.Lucene80NormsFormat;
+import org.apache.lucene.codecs.lucene84.Lucene84PostingsFormat;
+import org.apache.lucene.codecs.lucene86.Lucene86PointsFormat;
+import org.apache.lucene.codecs.lucene86.Lucene86SegmentInfoFormat;
+import org.apache.lucene.codecs.perfield.PerFieldDocValuesFormat;
+import org.apache.lucene.codecs.perfield.PerFieldPostingsFormat;
+
+/**
+ * Implements the Lucene 8.7 index format, with configurable per-field postings
+ * and docvalues formats.
+ * <p>
+ * If you want to reuse functionality of this codec in another codec, extend
+ * {@link FilterCodec}.
+ *
+ * @see org.apache.lucene.codecs.lucene87 package documentation for file format details.
+ *
+ * @lucene.experimental
+ */
+public class Lucene87Codec extends Codec {
+  private final TermVectorsFormat vectorsFormat = new Lucene50TermVectorsFormat();
+  private final FieldInfosFormat fieldInfosFormat = new Lucene60FieldInfosFormat();
+  private final SegmentInfoFormat segmentInfosFormat = new Lucene86SegmentInfoFormat();
+  private final LiveDocsFormat liveDocsFormat = new Lucene50LiveDocsFormat();
+  private final CompoundFormat compoundFormat = new Lucene50CompoundFormat();
+  private final PointsFormat pointsFormat = new Lucene86PointsFormat();
+  private final PostingsFormat defaultFormat;
+
+  private final PostingsFormat postingsFormat = new PerFieldPostingsFormat() {
+    @Override
+    public PostingsFormat getPostingsFormatForField(String field) {
+      return Lucene87Codec.this.getPostingsFormatForField(field);
+    }
+  };
+
+  private final DocValuesFormat docValuesFormat = new PerFieldDocValuesFormat() {
+    @Override
+    public DocValuesFormat getDocValuesFormatForField(String field) {
+      return Lucene87Codec.this.getDocValuesFormatForField(field);
+    }
+  };
+
+  private final StoredFieldsFormat storedFieldsFormat;
+
+  /**
+   * Instantiates a new codec.
+   */
+  public Lucene87Codec() {
+    this(Lucene87StoredFieldsFormat.Mode.BEST_SPEED);
+  }
+
+  /**
+   * Instantiates a new codec, specifying the stored fields compression
+   * mode to use.
+   * @param mode stored fields compression mode to use for newly
+   *             flushed/merged segments.
+   */
+  public Lucene87Codec(Lucene87StoredFieldsFormat.Mode mode) {
+    super("Lucene87");
+    this.storedFieldsFormat = new Lucene87StoredFieldsFormat(Objects.requireNonNull(mode));
+    this.defaultFormat = new Lucene84PostingsFormat();
+  }
+
+  @Override
+  public final StoredFieldsFormat storedFieldsFormat() {
+    return storedFieldsFormat;
+  }
+
+  @Override
+  public final TermVectorsFormat termVectorsFormat() {
+    return vectorsFormat;
+  }
+
+  @Override
+  public final PostingsFormat postingsFormat() {
+    return postingsFormat;
+  }
+
+  @Override
+  public final FieldInfosFormat fieldInfosFormat() {
+    return fieldInfosFormat;
+  }
+
+  @Override
+  public final SegmentInfoFormat segmentInfoFormat() {
+    return segmentInfosFormat;
+  }
+
+  @Override
+  public final LiveDocsFormat liveDocsFormat() {
+    return liveDocsFormat;
+  }
+
+  @Override
+  public final CompoundFormat compoundFormat() {
+    return compoundFormat;
+  }
+
+  @Override
+  public final PointsFormat pointsFormat() {
+    return pointsFormat;
+  }
+
+  /** Returns the postings format that should be used for writing
+   *  new segments of <code>field</code>.
+   *
+   *  The default implementation always returns "Lucene84".
+   *  <p>
+   *  <b>WARNING:</b> if you subclass, you are responsible for index
+   *  backwards compatibility: future version of Lucene are only
+   *  guaranteed to be able to read the default implementation.
+   */
+  public PostingsFormat getPostingsFormatForField(String field) {
+    return defaultFormat;
+  }
+
+  /** Returns the docvalues format that should be used for writing
+   *  new segments of <code>field</code>.
+   *
+   *  The default implementation always returns "Lucene80".
+   *  <p>
+   *  <b>WARNING:</b> if you subclass, you are responsible for index
+   *  backwards compatibility: future version of Lucene are only
+   *  guaranteed to be able to read the default implementation.
+   */
+  public DocValuesFormat getDocValuesFormatForField(String field) {
+    return defaultDVFormat;
+  }
+
+  @Override
+  public final DocValuesFormat docValuesFormat() {
+    return docValuesFormat;
+  }
+
+  private final DocValuesFormat defaultDVFormat = DocValuesFormat.forName("Lucene80");
+
+  private final NormsFormat normsFormat = new Lucene80NormsFormat();
+
+  @Override
+  public final NormsFormat normsFormat() {
+    return normsFormat;
+  }
+}
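
In application code the codec above is wired in through IndexWriterConfig, as the stored-fields javadoc hints. A hedged usage sketch (analyzer and index path are arbitrary choices):

    IndexWriterConfig cfg = new IndexWriterConfig(new StandardAnalyzer());
    cfg.setCodec(new Lucene87Codec(Lucene87StoredFieldsFormat.Mode.BEST_COMPRESSION));
    try (Directory dir = FSDirectory.open(Paths.get("/tmp/idx"));
         IndexWriter writer = new IndexWriter(dir, cfg)) {
      // segments flushed here store fields with DEFLATE + preset dictionaries
    }
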
diff --git a/lucene/core/src/java/org/apache/lucene/codecs/lucene87/Lucene87StoredFieldsFormat.java b/lucene/core/src/java/org/apache/lucene/codecs/lucene87/Lucene87StoredFieldsFormat.java
new file mode 100644
index 0000000..c2bbced
--- /dev/null
+++ b/lucene/core/src/java/org/apache/lucene/codecs/lucene87/Lucene87StoredFieldsFormat.java
@@ -0,0 +1,378 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.lucene.codecs.lucene87;
+
+import java.io.IOException;
+import java.util.Objects;
+import java.util.zip.DataFormatException;
+import java.util.zip.Deflater;
+import java.util.zip.Inflater;
+
+import org.apache.lucene.codecs.StoredFieldsFormat;
+import org.apache.lucene.codecs.StoredFieldsReader;
+import org.apache.lucene.codecs.StoredFieldsWriter;
+import org.apache.lucene.codecs.compressing.CompressingStoredFieldsFormat;
+import org.apache.lucene.codecs.compressing.CompressionMode;
+import org.apache.lucene.codecs.compressing.Compressor;
+import org.apache.lucene.codecs.compressing.Decompressor;
+import org.apache.lucene.index.CorruptIndexException;
+import org.apache.lucene.index.FieldInfos;
+import org.apache.lucene.index.SegmentInfo;
+import org.apache.lucene.index.StoredFieldVisitor;
+import org.apache.lucene.store.DataInput;
+import org.apache.lucene.store.DataOutput;
+import org.apache.lucene.store.Directory;
+import org.apache.lucene.store.IOContext;
+import org.apache.lucene.util.ArrayUtil;
+import org.apache.lucene.util.BytesRef;
+import org.apache.lucene.util.packed.DirectMonotonicWriter;
+
+/**
+ * Lucene 8.7 stored fields format.
+ *
+ * <p><b>Principle</b>
+ * <p>This {@link StoredFieldsFormat} compresses blocks of documents in
+ * order to improve the compression ratio compared to document-level
+ * compression. It uses the <a href="http://code.google.com/p/lz4/">LZ4</a>
+ * compression algorithm by default in 16KB blocks, which is fast to compress
+ * and very fast to decompress. Although the default compression method
+ * that is used ({@link Mode#BEST_SPEED BEST_SPEED}) focuses more on speed than on 
+ * compression ratio, it should provide interesting compression ratios
+ * for redundant inputs (such as log files, HTML or plain text). For higher
+ * compression, you can choose {@link Mode#BEST_COMPRESSION BEST_COMPRESSION},
+ * which uses the <a href="http://en.wikipedia.org/wiki/DEFLATE">DEFLATE</a>
+ * algorithm with 48kB blocks and shared dictionaries for a better ratio at the
+ * expense of slower performance. These two options can be configured like this:
+ * <pre class="prettyprint">
+ *   // the default: for high performance
+ *   indexWriterConfig.setCodec(new Lucene87Codec(Mode.BEST_SPEED));
+ *   // instead for higher performance (but slower):
+ *   // indexWriterConfig.setCodec(new Lucene87Codec(Mode.BEST_COMPRESSION));
+ * </pre>
+ * <p><b>File formats</b>
+ * <p>Stored fields are represented by three files:
+ * <ol>
+ * <li><a id="field_data"></a>
+ * <p>A fields data file (extension <code>.fdt</code>). This file stores a compact
+ * representation of documents in compressed blocks of 16KB or more. When
+ * writing a segment, documents are appended to an in-memory <code>byte[]</code>
+ * buffer. When its size reaches 16KB or more, some metadata about the documents
+ * is flushed to disk, immediately followed by a compressed representation of
+ * the buffer using the
+ * <a href="https://github.com/lz4/lz4">LZ4</a>
+ * <a href="http://fastcompression.blogspot.fr/2011/05/lz4-explained.html">compression format</a>.</p>
+ * <p>Notes
+ * <ul>
+ * <li>When at least one document in a chunk is large enough so that the chunk
+ * is larger than 32KB, the chunk will actually be compressed in several LZ4
+ * blocks of 16KB. This allows {@link StoredFieldVisitor}s which are only
+ * interested in the first fields of a document to not have to decompress 10MB
+ * of data if the document is 10MB, but only 16KB.</li>
+ * <li>Given that the original lengths are written in the metadata of the chunk,
+ * the decompressor can leverage this information to stop decoding as soon as
+ * enough data has been decompressed.</li>
+ * <li>In case documents are incompressible, the overhead of the compression format
+ * is less than 0.5%.</li>
+ * </ul>
+ * </li>
+ * <li><a id="field_index"></a>
+ * <p>A fields index file (extension <code>.fdx</code>). This file stores two
+ * {@link DirectMonotonicWriter monotonic arrays}, one for the first doc IDs of
+ * each block of compressed documents, and another one for the corresponding
+ * offsets on disk. At search time, the array containing doc IDs is
+ * binary-searched in order to find the block that contains the expected doc ID,
+ * and the associated offset on disk is retrieved from the second array.</p>
+ * <li><a id="field_meta"></a>
+ * <p>A fields meta file (extension <code>.fdm</code>). This file stores metadata
+ * about the monotonic arrays stored in the index file.</p>
+ * </li>
+ * </ol>
+ * <p><b>Known limitations</b>
+ * <p>This {@link StoredFieldsFormat} does not support individual documents
+ * larger than (<code>2<sup>31</sup> - 2<sup>14</sup></code>) bytes.
+ * @lucene.experimental
+ */
+public class Lucene87StoredFieldsFormat extends StoredFieldsFormat {
+  
+  /** Configuration option for stored fields. */
+  public static enum Mode {
+    /** Trade compression ratio for retrieval speed. */
+    BEST_SPEED,
+    /** Trade retrieval speed for compression ratio. */
+    BEST_COMPRESSION
+  }
+  
+  /** Attribute key for compression mode. */
+  public static final String MODE_KEY = Lucene87StoredFieldsFormat.class.getSimpleName() + ".mode";
+  
+  final Mode mode;
+  
+  /** Stored fields format with default options */
+  public Lucene87StoredFieldsFormat() {
+    this(Mode.BEST_SPEED);
+  }
+
+  /** Stored fields format with specified mode */
+  public Lucene87StoredFieldsFormat(Mode mode) {
+    this.mode = Objects.requireNonNull(mode);
+  }
+
+  @Override
+  public StoredFieldsReader fieldsReader(Directory directory, SegmentInfo si, FieldInfos fn, IOContext context) throws IOException {
+    String value = si.getAttribute(MODE_KEY);
+    if (value == null) {
+      throw new IllegalStateException("missing value for " + MODE_KEY + " for segment: " + si.name);
+    }
+    Mode mode = Mode.valueOf(value);
+    return impl(mode).fieldsReader(directory, si, fn, context);
+  }
+
+  @Override
+  public StoredFieldsWriter fieldsWriter(Directory directory, SegmentInfo si, IOContext context) throws IOException {
+    String previous = si.putAttribute(MODE_KEY, mode.name());
+    if (previous != null && previous.equals(mode.name()) == false) {
+      throw new IllegalStateException("found existing value for " + MODE_KEY + " for segment: " + si.name +
+                                      "old=" + previous + ", new=" + mode.name());
+    }
+    return impl(mode).fieldsWriter(directory, si, context);
+  }
+  
+  StoredFieldsFormat impl(Mode mode) {
+    switch (mode) {
+      case BEST_SPEED:
+        return new CompressingStoredFieldsFormat("Lucene87StoredFieldsFastData", CompressionMode.FAST, 16*1024, 128, 10);
+      case BEST_COMPRESSION:
+        return new CompressingStoredFieldsFormat("Lucene87StoredFieldsHighData", BEST_COMPRESSION_MODE, BEST_COMPRESSION_BLOCK_LENGTH, 512, 10);
+      default: throw new AssertionError();
+    }
+  }
+
+  // 8kB seems to be a good trade-off between higher compression rates by not
+  // having to fully bootstrap a dictionary, and indexing rate by not spending
+  // too much CPU initializing data-structures to find strings in this preset
+  // dictionary.
+  private static final int BEST_COMPRESSION_DICT_LENGTH = 8 * 1024;
+  // 48kB seems like a nice trade-off because it's small enough to keep
+  // retrieval fast, yet sub blocks can find strings in a window of 26kB of
+  // data on average (the window grows from 8kB to 32kB in the first 24kB, and
+  // then DEFLATE can use 32kB for the last 24kB) which is close enough to the
+  // maximum window length of DEFLATE of 32kB.
+  private static final int BEST_COMPRESSION_SUB_BLOCK_LENGTH = 48 * 1024;
+  // We shoot for 10 sub blocks per block, which should hopefully amortize the
+  // space overhead of having the first 8kB compressed without any preset dict,
+  // and then remove 8kB in order to avoid creating a tiny 11th sub block if
+  // documents are small.
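+  // For the constants above this works out to 8kB + 10 * 48kB - 8kB = 480kB
+  // per block, i.e. exactly ten 48kB sub blocks once the dictionary cancels out.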
+  private static final int BEST_COMPRESSION_BLOCK_LENGTH = BEST_COMPRESSION_DICT_LENGTH + 10 * BEST_COMPRESSION_SUB_BLOCK_LENGTH - 8 * 1024;
+
+  /** Compression mode for {@link Mode#BEST_COMPRESSION} */
+  public static final DeflateWithPresetDict BEST_COMPRESSION_MODE = new DeflateWithPresetDict(BEST_COMPRESSION_DICT_LENGTH, BEST_COMPRESSION_SUB_BLOCK_LENGTH);
+
+  /**
+   * A compression mode that trades speed for compression ratio. Although
+   * compression and decompression might be slow, this compression mode should
+   * provide a good compression ratio. This mode might be interesting if/when
+   * your index size is much bigger than your OS cache.
+   */
+  public static class DeflateWithPresetDict extends CompressionMode {
+
+    private final int dictLength, subBlockLength;
+
+    /** Sole constructor. */
+    public DeflateWithPresetDict(int dictLength, int subBlockLength) {
+      this.dictLength = dictLength;
+      this.subBlockLength = subBlockLength;
+    }
+
+    @Override
+    public Compressor newCompressor() {
+      // notes:
+      // 3 is the highest level without lazy match evaluation,
+      // 6 is the zlib default; anything higher mostly wastes cpu
+      return new DeflateWithPresetDictCompressor(6, dictLength, subBlockLength);
+    }
+
+    @Override
+    public Decompressor newDecompressor() {
+      return new DeflateWithPresetDictDecompressor();
+    }
+
+    @Override
+    public String toString() {
+      return "BEST_COMPRESSION";
+    }
+
+  };
+
+  private static final class DeflateWithPresetDictDecompressor extends Decompressor {
+
+    byte[] compressed;
+
+    DeflateWithPresetDictDecompressor() {
+      compressed = new byte[0];
+    }
+
+    private void doDecompress(DataInput in, Inflater decompressor, BytesRef bytes) throws IOException {
+      final int compressedLength = in.readVInt();
+      if (compressedLength == 0) {
+        return;
+      }
+      // pad with extra "dummy byte": see javadocs for using Inflater(true)
+      // we do it for compliance, though zlib has not actually required it for years.
+      final int paddedLength = compressedLength + 1;
+      compressed = ArrayUtil.grow(compressed, paddedLength);
+      in.readBytes(compressed, 0, compressedLength);
+      compressed[compressedLength] = 0; // explicitly set dummy byte to 0
+
+      // extra "dummy byte"
+      decompressor.setInput(compressed, 0, paddedLength);
+      try {
+        bytes.length += decompressor.inflate(bytes.bytes, bytes.length, bytes.bytes.length - bytes.length);
+      } catch (DataFormatException e) {
+        throw new IOException(e);
+      }
+      if (decompressor.finished() == false) {
+        throw new CorruptIndexException("Invalid decoder state: needsInput=" + decompressor.needsInput()
+        + ", needsDict=" + decompressor.needsDictionary(), in);
+      }
+    }
+
+    @Override
+    public void decompress(DataInput in, int originalLength, int offset, int length, BytesRef bytes) throws IOException {
+      assert offset + length <= originalLength;
+      if (length == 0) {
+        bytes.length = 0;
+        return;
+      }
+      final int dictLength = in.readVInt();
+      final int blockLength = in.readVInt();
+      bytes.bytes = ArrayUtil.grow(bytes.bytes, dictLength);
+      bytes.offset = bytes.length = 0;
+
+      final Inflater decompressor = new Inflater(true);
+      try {
+        // Read the dictionary
+        doDecompress(in, decompressor, bytes);
+        if (dictLength != bytes.length) {
+          throw new CorruptIndexException("Unexpected dict length", in);
+        }
+
+        int offsetInBlock = dictLength;
+        int offsetInBytesRef = offset;
+
+        // Skip unneeded blocks
+        while (offsetInBlock + blockLength < offset) {
+          final int compressedLength = in.readVInt();
+          in.skipBytes(compressedLength);
+          offsetInBlock += blockLength;
+          offsetInBytesRef -= blockLength;
+        }
+
+        // Read blocks that intersect with the interval we need
+        while (offsetInBlock < offset + length) {
+          bytes.bytes = ArrayUtil.grow(bytes.bytes, bytes.length + blockLength);
+          decompressor.reset();
+          decompressor.setDictionary(bytes.bytes, 0, dictLength);
+          doDecompress(in, decompressor, bytes);
+          offsetInBlock += blockLength;
+        }
+
+        bytes.offset = offsetInBytesRef;
+        bytes.length = length;
+        assert bytes.isValid();
+      } finally {
+        decompressor.end();
+      }
+    }
+
+    @Override
+    public Decompressor clone() {
+      return new DeflateWithPresetDictDecompressor();
+    }
+
+  }
+
+  private static class DeflateWithPresetDictCompressor extends Compressor {
+
+    final int dictLength;
+    final int blockLength;
+    final Deflater compressor;
+    byte[] compressed;
+    boolean closed;
+
+    DeflateWithPresetDictCompressor(int level, int dictLength, int blockLength) {
+      compressor = new Deflater(level, true);
+      compressed = new byte[64];
+      this.dictLength = dictLength;
+      this.blockLength = blockLength;
+    }
+
+    private void doCompress(byte[] bytes, int off, int len, DataOutput out) throws IOException {
+      if (len == 0) {
+        out.writeVInt(0);
+        return;
+      }
+      compressor.setInput(bytes, off, len);
+      compressor.finish();
+      if (compressor.needsInput()) {
+        throw new IllegalStateException();
+      }
+
+      int totalCount = 0;
+      for (;;) {
+        final int count = compressor.deflate(compressed, totalCount, compressed.length - totalCount);
+        totalCount += count;
+        assert totalCount <= compressed.length;
+        if (compressor.finished()) {
+          break;
+        } else {
+          compressed = ArrayUtil.grow(compressed);
+        }
+      }
+
+      out.writeVInt(totalCount);
+      out.writeBytes(compressed, totalCount);
+    }
+
+    @Override
+    public void compress(byte[] bytes, int off, int len, DataOutput out) throws IOException {
+      final int dictLength = Math.min(this.dictLength, len);
+      out.writeVInt(dictLength);
+      out.writeVInt(blockLength);
+      final int end = off + len;
+
+      // Compress the dictionary first
+      compressor.reset();
+      doCompress(bytes, off, dictLength, out);
+
+      // And then sub blocks
+      for (int start = off + dictLength; start < end; start += blockLength) {
+        compressor.reset();
+        compressor.setDictionary(bytes, off, dictLength);
+        doCompress(bytes, start, Math.min(blockLength, off + len - start), out);
+      }
+    }
+
+    @Override
+    public void close() throws IOException {
+      if (closed == false) {
+        compressor.end();
+        closed = true;
+      }
+    }
+  }
+
+}
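The codec above compresses each chunk as a shared dictionary followed by fixed-size blocks, each deflated with the dictionary preset, so any single block can be inflated without decompressing its neighbors. A minimal standalone sketch of the same preset-dictionary round trip with plain java.util.zip (the strings and buffer sizes are illustrative, not part of the patch):

import java.util.zip.DataFormatException;
import java.util.zip.Deflater;
import java.util.zip.Inflater;

public class PresetDictSketch {
  public static void main(String[] args) throws DataFormatException {
    byte[] dict = "common prefix shared by many documents ".getBytes();
    byte[] block = "common prefix shared by many documents #42".getBytes();

    // Raw deflate (nowrap=true), as used by the codec above.
    Deflater deflater = new Deflater(Deflater.BEST_COMPRESSION, true);
    deflater.setDictionary(dict);
    deflater.setInput(block);
    deflater.finish();
    byte[] compressed = new byte[block.length + 64];
    int len = deflater.deflate(compressed);
    deflater.end();

    // Raw inflate: preset the dictionary up front and pad the input with one
    // extra dummy byte, mirroring doDecompress above.
    Inflater inflater = new Inflater(true);
    inflater.setDictionary(dict);
    byte[] padded = new byte[len + 1];
    System.arraycopy(compressed, 0, padded, 0, len);
    inflater.setInput(padded);
    byte[] out = new byte[block.length];
    int n = inflater.inflate(out);
    inflater.end();
    System.out.println(new String(out, 0, n));
  }
}

Because every block is compressed against the same dictionary, a reader can skip the compressed bytes of unneeded blocks outright, which is exactly what the skip loop in decompress() above does.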
diff --git a/lucene/core/src/java/org/apache/lucene/codecs/lucene87/package-info.java b/lucene/core/src/java/org/apache/lucene/codecs/lucene87/package-info.java
new file mode 100644
index 0000000..75facdb
--- /dev/null
+++ b/lucene/core/src/java/org/apache/lucene/codecs/lucene87/package-info.java
@@ -0,0 +1,416 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+/**
+ * Lucene 8.7 file format.
+ *
+ * <h2>Apache Lucene - Index File Formats</h2>
+ * <div>
+ * <ul>
+ * <li><a href="#Introduction">Introduction</a></li>
+ * <li><a href="#Definitions">Definitions</a>
+ *   <ul>
+ *   <li><a href="#Inverted_Indexing">Inverted Indexing</a></li>
+ *   <li><a href="#Types_of_Fields">Types of Fields</a></li>
+ *   <li><a href="#Segments">Segments</a></li>
+ *   <li><a href="#Document_Numbers">Document Numbers</a></li>
+ *   </ul>
+ * </li>
+ * <li><a href="#Overview">Index Structure Overview</a></li>
+ * <li><a href="#File_Naming">File Naming</a></li>
+ * <li><a href="#file-names">Summary of File Extensions</a>
+ *   <ul>
+ *   <li><a href="#Lock_File">Lock File</a></li>
+ *   <li><a href="#History">History</a></li>
+ *   <li><a href="#Limitations">Limitations</a></li>
+ *   </ul>
+ * </li>
+ * </ul>
+ * </div>
+ * <a id="Introduction"></a>
+ * <h3>Introduction</h3>
+ * <div>
+ * <p>This document defines the index file formats used in this version of Lucene.
+ * If you are using a different version of Lucene, please consult the copy of
+ * <code>docs/</code> that was distributed with
+ * the version you are using.</p>
+ * <p>This document attempts to provide a high-level definition of the Apache
+ * Lucene file formats.</p>
+ * </div>
+ * <a id="Definitions"></a>
+ * <h3>Definitions</h3>
+ * <div>
+ * <p>The fundamental concepts in Lucene are index, document, field and term.</p>
+ * <p>An index contains a sequence of documents.</p>
+ * <ul>
+ * <li>A document is a sequence of fields.</li>
+ * <li>A field is a named sequence of terms.</li>
+ * <li>A term is a sequence of bytes.</li>
+ * </ul>
+ * <p>The same sequence of bytes in two different fields is considered a different
+ * term. Thus terms are represented as a pair: the string naming the field, and the
+ * bytes within the field.</p>
+ * <a id="Inverted_Indexing"></a>
+ * <h4>Inverted Indexing</h4>
+ * <p>The index stores statistics about terms in order to make term-based search
+ * more efficient. Lucene's index falls into the family of indexes known as an
+ * <i>inverted index.</i> This is because it can list, for a term, the documents
+ * that contain it. This is the inverse of the natural relationship, in which
+ * documents list terms.</p>
+ * <a id="Types_of_Fields"></a>
+ * <h4>Types of Fields</h4>
+ * <p>In Lucene, fields may be <i>stored</i>, in which case their text is stored
+ * in the index literally, in a non-inverted manner. Fields that are inverted are
+ * called <i>indexed</i>. A field may be both stored and indexed.</p>
+ * <p>The text of a field may be <i>tokenized</i> into terms to be indexed, or the
+ * text of a field may be used literally as a term to be indexed. Most fields are
+ * tokenized, but sometimes it is useful for certain identifier fields to be
+ * indexed literally.</p>
+ * <p>See the {@link org.apache.lucene.document.Field Field}
+ * java docs for more information on Fields.</p>
+ * <a id="Segments"></a>
+ * <h4>Segments</h4>
+ * <p>Lucene indexes may be composed of multiple sub-indexes, or <i>segments</i>.
+ * Each segment is a fully independent index, which could be searched separately.
+ * Indexes evolve by:</p>
+ * <ol>
+ * <li>Creating new segments for newly added documents.</li>
+ * <li>Merging existing segments.</li>
+ * </ol>
+ * <p>Searches may involve multiple segments and/or multiple indexes, each index
+ * potentially composed of a set of segments.</p>
+ * <a id="Document_Numbers"></a>
+ * <h4>Document Numbers</h4>
+ * <p>Internally, Lucene refers to documents by an integer <i>document number</i>.
+ * The first document added to an index is numbered zero, and each subsequent
+ * document added gets a number one greater than the previous.</p>
+ * <p>Note that a document's number may change, so caution should be taken when
+ * storing these numbers outside of Lucene. In particular, numbers may change in
+ * the following situations:</p>
+ * <ul>
+ * <li>
+ * <p>The numbers stored in each segment are unique only within the segment, and
+ * must be converted before they can be used in a larger context. The standard
+ * technique is to allocate each segment a range of values, based on the range of
+ * numbers used in that segment. To convert a document number from a segment to an
+ * external value, the segment's <i>base</i> document number is added. To convert
+ * an external value back to a segment-specific value, the segment is identified
+ * by the range that the external value is in, and the segment's base value is
+ * subtracted. For example, two five-document segments might be combined, so that
+ * the first segment has a base value of zero, and the second of five. Document
+ * three from the second segment would have an external value of eight.</p>
+ * </li>
+ * <li>
+ * <p>When documents are deleted, gaps are created in the numbering. These are
+ * eventually removed as the index evolves through merging. Deleted documents are
+ * dropped when segments are merged. A freshly-merged segment thus has no gaps in
+ * its numbering.</p>
+ * </li>
+ * </ul>
+ * </div>
+ * <a id="Overview"></a>
+ * <h3>Index Structure Overview</h3>
+ * <div>
+ * <p>Each segment index maintains the following:</p>
+ * <ul>
+ * <li>
+ * {@link org.apache.lucene.codecs.lucene86.Lucene86SegmentInfoFormat Segment info}.
+ *    This contains metadata about a segment, such as the number of documents,
+ *    what files it uses, and information about how the segment is sorted
+ * </li>
+ * <li>
+ * {@link org.apache.lucene.codecs.lucene60.Lucene60FieldInfosFormat Field names}.
+ *    This contains the set of field names used in the index.
+ * </li>
+ * <li>
+ * {@link org.apache.lucene.codecs.lucene87.Lucene87StoredFieldsFormat Stored Field values}.
+ * This contains, for each document, a list of attribute-value pairs, where the attributes
+ * are field names. These are used to store auxiliary information about the document, such as
+ * its title, url, or an identifier to access a database. The set of stored fields are what is
+ * returned for each hit when searching. This is keyed by document number.
+ * </li>
+ * <li>
+ * {@link org.apache.lucene.codecs.lucene84.Lucene84PostingsFormat Term dictionary}.
+ * A dictionary containing all of the terms used in all of the
+ * indexed fields of all of the documents. The dictionary also contains the number
+ * of documents which contain the term, and pointers to the term's frequency and
+ * proximity data.
+ * </li>
+ * <li>
+ * {@link org.apache.lucene.codecs.lucene84.Lucene84PostingsFormat Term Frequency data}.
+ * For each term in the dictionary, the numbers of all the
+ * documents that contain that term, and the frequency of the term in that
+ * document, unless frequencies are omitted ({@link org.apache.lucene.index.IndexOptions#DOCS IndexOptions.DOCS})
+ * </li>
+ * <li>
+ * {@link org.apache.lucene.codecs.lucene84.Lucene84PostingsFormat Term Proximity data}.
+ * For each term in the dictionary, the positions that the
+ * term occurs in each document. Note that this will not exist if all fields in
+ * all documents omit position data.
+ * </li>
+ * <li>
+ * {@link org.apache.lucene.codecs.lucene80.Lucene80NormsFormat Normalization factors}.
+ * For each field in each document, a value is stored
+ * that is multiplied into the score for hits on that field.
+ * </li>
+ * <li>
+ * {@link org.apache.lucene.codecs.lucene50.Lucene50TermVectorsFormat Term Vectors}.
+ * For each field in each document, the term vector (sometimes
+ * called document vector) may be stored. A term vector consists of term text and
+ * term frequency. To add Term Vectors to your index see the
+ * {@link org.apache.lucene.document.Field Field} constructors
+ * </li>
+ * <li>
+ * {@link org.apache.lucene.codecs.lucene80.Lucene80DocValuesFormat Per-document values}.
+ * Like stored values, these are also keyed by document
+ * number, but are generally intended to be loaded into main memory for fast
+ * access. Whereas stored values are generally intended for summary results from
+ * searches, per-document values are useful for things like scoring factors.
+ * </li>
+ * <li>
+ * {@link org.apache.lucene.codecs.lucene50.Lucene50LiveDocsFormat Live documents}.
+ * An optional file indicating which documents are live.
+ * </li>
+ * <li>
+ * {@link org.apache.lucene.codecs.lucene86.Lucene86PointsFormat Point values}.
+ * Optional pair of files, recording dimensionally indexed fields, to enable fast
+ * numeric range filtering and large numeric values like BigInteger and BigDecimal (1D)
+ * and geographic shape intersection (2D, 3D).
+ * </li>
+ * </ul>
+ * <p>Details on each of these are provided in their linked pages.</p>
+ * </div>
+ * <a id="File_Naming"></a>
+ * <h3>File Naming</h3>
+ * <div>
+ * <p>All files belonging to a segment have the same name with varying extensions.
+ * The extensions correspond to the different file formats described below. When
+ * using the Compound File format (default for small segments) these files (except
+ * for the Segment info file, the Lock file, and Deleted documents file) are collapsed
+ * into a single .cfs file (see below for details)</p>
+ * <p>Typically, all segments in an index are stored in a single directory,
+ * although this is not required.</p>
+ * <p>File names are never re-used. That is, when any file is saved
+ * to the Directory it is given a never before used filename. This is achieved
+ * using a simple generations approach. For example, the first segments file is
+ * segments_1, then segments_2, etc. The generation is a sequential long integer
+ * represented in alpha-numeric (base 36) form.</p>
+ * </div>
+ * <a id="file-names"></a>
+ * <h3>Summary of File Extensions</h3>
+ * <div>
+ * <p>The following table summarizes the names and extensions of the files in
+ * Lucene:</p>
+ * <table class="padding4" style="border-spacing: 1px; border-collapse: separate">
+ * <caption>lucene filenames by extension</caption>
+ * <tr>
+ * <th>Name</th>
+ * <th>Extension</th>
+ * <th>Brief Description</th>
+ * </tr>
+ * <tr>
+ * <td>{@link org.apache.lucene.index.SegmentInfos Segments File}</td>
+ * <td>segments_N</td>
+ * <td>Stores information about a commit point</td>
+ * </tr>
+ * <tr>
+ * <td><a href="#Lock_File">Lock File</a></td>
+ * <td>write.lock</td>
+ * <td>The Write lock prevents multiple IndexWriters from writing to the same
+ * file.</td>
+ * </tr>
+ * <tr>
+ * <td>{@link org.apache.lucene.codecs.lucene86.Lucene86SegmentInfoFormat Segment Info}</td>
+ * <td>.si</td>
+ * <td>Stores metadata about a segment</td>
+ * </tr>
+ * <tr>
+ * <td>{@link org.apache.lucene.codecs.lucene50.Lucene50CompoundFormat Compound File}</td>
+ * <td>.cfs, .cfe</td>
+ * <td>An optional "virtual" file consisting of all the other index files for
+ * systems that frequently run out of file handles.</td>
+ * </tr>
+ * <tr>
+ * <td>{@link org.apache.lucene.codecs.lucene60.Lucene60FieldInfosFormat Fields}</td>
+ * <td>.fnm</td>
+ * <td>Stores information about the fields</td>
+ * </tr>
+ * <tr>
+ * <td>{@link org.apache.lucene.codecs.lucene87.Lucene87StoredFieldsFormat Field Index}</td>
+ * <td>.fdx</td>
+ * <td>Contains pointers to field data</td>
+ * </tr>
+ * <tr>
+ * <td>{@link org.apache.lucene.codecs.lucene87.Lucene87StoredFieldsFormat Field Data}</td>
+ * <td>.fdt</td>
+ * <td>The stored fields for documents</td>
+ * </tr>
+ * <tr>
+ * <td>{@link org.apache.lucene.codecs.lucene84.Lucene84PostingsFormat Term Dictionary}</td>
+ * <td>.tim</td>
+ * <td>The term dictionary, stores term info</td>
+ * </tr>
+ * <tr>
+ * <td>{@link org.apache.lucene.codecs.lucene84.Lucene84PostingsFormat Term Index}</td>
+ * <td>.tip</td>
+ * <td>The index into the Term Dictionary</td>
+ * </tr>
+ * <tr>
+ * <td>{@link org.apache.lucene.codecs.lucene84.Lucene84PostingsFormat Frequencies}</td>
+ * <td>.doc</td>
+ * <td>Contains the list of docs which contain each term along with frequency</td>
+ * </tr>
+ * <tr>
+ * <td>{@link org.apache.lucene.codecs.lucene84.Lucene84PostingsFormat Positions}</td>
+ * <td>.pos</td>
+ * <td>Stores position information about where a term occurs in the index</td>
+ * </tr>
+ * <tr>
+ * <td>{@link org.apache.lucene.codecs.lucene84.Lucene84PostingsFormat Payloads}</td>
+ * <td>.pay</td>
+ * <td>Stores additional per-position metadata information such as character offsets and user payloads</td>
+ * </tr>
+ * <tr>
+ * <td>{@link org.apache.lucene.codecs.lucene80.Lucene80NormsFormat Norms}</td>
+ * <td>.nvd, .nvm</td>
+ * <td>Encodes length and boost factors for docs and fields</td>
+ * </tr>
+ * <tr>
+ * <td>{@link org.apache.lucene.codecs.lucene80.Lucene80DocValuesFormat Per-Document Values}</td>
+ * <td>.dvd, .dvm</td>
+ * <td>Encodes additional scoring factors or other per-document information.</td>
+ * </tr>
+ * <tr>
+ * <td>{@link org.apache.lucene.codecs.lucene50.Lucene50TermVectorsFormat Term Vector Index}</td>
+ * <td>.tvx</td>
+ * <td>Stores offset into the document data file</td>
+ * </tr>
+ * <tr>
+ * <td>{@link org.apache.lucene.codecs.lucene50.Lucene50TermVectorsFormat Term Vector Data}</td>
+ * <td>.tvd</td>
+ * <td>Contains term vector data.</td>
+ * </tr>
+ * <tr>
+ * <td>{@link org.apache.lucene.codecs.lucene50.Lucene50LiveDocsFormat Live Documents}</td>
+ * <td>.liv</td>
+ * <td>Info about what documents are live</td>
+ * </tr>
+ * <tr>
+ * <td>{@link org.apache.lucene.codecs.lucene86.Lucene86PointsFormat Point values}</td>
+ * <td>.dii, .dim</td>
+ * <td>Holds indexed points, if any</td>
+ * </tr>
+ * </table>
+ * </div>
+ * <a id="Lock_File"></a>
+ * <h3>Lock File</h3>
+ * The write lock, which is stored in the index directory by default, is named
+ * "write.lock". If the lock directory is different from the index directory then
+ * the write lock will be named "XXXX-write.lock" where XXXX is a unique prefix
+ * derived from the full path to the index directory. When this file is present, a
+ * writer is currently modifying the index (adding or removing documents). This
+ * lock file ensures that only one writer is modifying the index at a time.
+ * <a id="History"></a>
+ * <h3>History</h3>
+ * <p>Compatibility notes are provided in this document, describing how file
+ * formats have changed from prior versions:</p>
+ * <ul>
+ * <li>In version 2.1, the file format was changed to allow lock-less commits (ie,
+ * no more commit lock). The change is fully backwards compatible: you can open a
+ * pre-2.1 index for searching or adding/deleting of docs. When the new segments
+ * file is saved (committed), it will be written in the new file format (meaning
+ * no specific "upgrade" process is needed). But note that once a commit has
+ * occurred, pre-2.1 Lucene will not be able to read the index.</li>
+ * <li>In version 2.3, the file format was changed to allow segments to share a
+ * single set of doc store (vectors &amp; stored fields) files. This allows for
+ * faster indexing in certain cases. The change is fully backwards compatible (in
+ * the same way as the lock-less commits change in 2.1).</li>
+ * <li>In version 2.4, Strings are now written as true UTF-8 byte sequence, not
+ * Java's modified UTF-8. See <a href="http://issues.apache.org/jira/browse/LUCENE-510">
+ * LUCENE-510</a> for details.</li>
+ * <li>In version 2.9, an optional opaque Map&lt;String,String&gt; CommitUserData
+ * may be passed to IndexWriter's commit methods (and later retrieved), which is
+ * recorded in the segments_N file. See <a href="http://issues.apache.org/jira/browse/LUCENE-1382">
+ * LUCENE-1382</a> for details. Also,
+ * diagnostics were added to each segment written recording details about why it
+ * was written (due to flush, merge; which OS/JRE was used; etc.). See issue
+ * <a href="http://issues.apache.org/jira/browse/LUCENE-1654">LUCENE-1654</a> for details.</li>
+ * <li>In version 3.0, compressed fields are no longer written to the index (they
+ * can still be read, but on merge the new segment will write them, uncompressed).
+ * See issue <a href="http://issues.apache.org/jira/browse/LUCENE-1960">LUCENE-1960</a>
+ * for details.</li>
+ * <li>In version 3.1, segments records the code version that created them. See
+ * <a href="http://issues.apache.org/jira/browse/LUCENE-2720">LUCENE-2720</a> for details.
+ * Additionally segments track explicitly whether or not they have term vectors.
+ * See <a href="http://issues.apache.org/jira/browse/LUCENE-2811">LUCENE-2811</a>
+ * for details.</li>
+ * <li>In version 3.2, numeric fields are written natively to the stored fields
+ * file; previously they were stored in text format only.</li>
+ * <li>In version 3.4, fields can omit position data while still indexing term
+ * frequencies.</li>
+ * <li>In version 4.0, the format of the inverted index became extensible via
+ * the {@link org.apache.lucene.codecs.Codec Codec} api. Fast per-document storage
+ * ({@code DocValues}) was introduced. Normalization factors need no longer be a
+ * single byte, they can be any {@link org.apache.lucene.index.NumericDocValues NumericDocValues}.
+ * Terms need not be unicode strings, they can be any byte sequence. Term offsets
+ * can optionally be indexed into the postings lists. Payloads can be stored in the
+ * term vectors.</li>
+ * <li>In version 4.1, the format of the postings list changed to use either
+ * of FOR compression or variable-byte encoding, depending upon the frequency
+ * of the term. Terms appearing only once were changed to inline directly into
+ * the term dictionary. Stored fields are compressed by default. </li>
+ * <li>In version 4.2, term vectors are compressed by default. DocValues has
+ * a new multi-valued type (SortedSet), that can be used for faceting/grouping/joining
+ * on multi-valued fields.</li>
+ * <li>In version 4.5, DocValues were extended to explicitly represent missing values.</li>
+ * <li>In version 4.6, FieldInfos were extended to support per-field DocValues generation, to
+ * allow updating NumericDocValues fields.</li>
+ * <li>In version 4.8, checksum footers were added to the end of each index file
+ * for improved data integrity. Specifically, the last 8 bytes of every index file
+ * contain the zlib-crc32 checksum of the file.</li>
+ * <li>In version 4.9, DocValues has a new multi-valued numeric type (SortedNumeric)
+ * that is suitable for faceting/sorting/analytics.
+ * <li>In version 5.4, DocValues have been improved to store more information on disk:
+ * addresses for binary fields and ord indexes for multi-valued fields.
+ * <li>In version 6.0, Points were added, for multi-dimensional range/distance search.
+ * <li>In version 6.2, new Segment info format that reads/writes the index sort, to support index sorting.
+ * <li>In version 7.0, DocValues have been improved to better support sparse doc values
+ * thanks to an iterator API.</li>
+ * <li>In version 8.0, postings have been enhanced to record, for each block of
+ * doc ids, the (term freq, normalization factor) pairs that may trigger the
+ * maximum score of the block. This information is recorded alongside skip data
+ * in order to be able to skip blocks of doc ids if they may not produce high
+ * enough scores.
+ * Additionally, doc values and norms have been extended with jump-tables to make access O(1)
+ * instead of O(n), where n is the number of elements to skip when advancing in the data.</li>
+ * <li>In version 8.4, postings, positions, offsets and payload lengths have moved to a more
+ * performant encoding that is vectorized.</li>
+ * <li>In version 8.6, index sort serialization is delegated to the sorts themselves, to
+ * allow user-defined sorts to be used.</li>
+ * </ul>
+ * <a id="Limitations"></a>
+ * <h3>Limitations</h3>
+ * <div>
+ * <p>Lucene uses a Java <code>int</code> to refer to
+ * document numbers, and the index file format uses an <code>Int32</code>
+ * on-disk to store document numbers. This is a limitation
+ * of both the index file format and the current implementation. Eventually these
+ * should be replaced with either <code>UInt64</code> values, or
+ * better yet, {@link org.apache.lucene.store.DataOutput#writeVInt VInt} values which have no limit.</p>
+ * </div>
+ */
+package org.apache.lucene.codecs.lucene87;
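The document-number discussion above includes a worked example: two five-document segments with bases zero and five, where local document three of the second segment becomes external document eight. A tiny hypothetical sketch of that arithmetic (the array and names are illustrative only):

public class DocNumberBases {
  public static void main(String[] args) {
    // Two five-document segments, as in the example above: bases 0 and 5.
    int[] bases = {0, 5};

    // Segment-local doc 3 in the second segment -> external doc number 8.
    int external = bases[1] + 3;

    // External number back to (segment, local): pick the segment whose range
    // contains the value, then subtract its base.
    int seg = (external >= bases[1]) ? 1 : 0;
    int local = external - bases[seg];

    System.out.println("external=" + external + ", segment=" + seg + ", local=" + local);
  }
}

The base-36 generation suffix mentioned under File Naming follows the same spirit: Long.toString(generation, Character.MAX_RADIX) yields the alpha-numeric part of segments_N.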
diff --git a/lucene/core/src/java/org/apache/lucene/index/DocumentsWriterDeleteQueue.java b/lucene/core/src/java/org/apache/lucene/index/DocumentsWriterDeleteQueue.java
index 92e7be7..cb3db7a 100644
--- a/lucene/core/src/java/org/apache/lucene/index/DocumentsWriterDeleteQueue.java
+++ b/lucene/core/src/java/org/apache/lucene/index/DocumentsWriterDeleteQueue.java
@@ -573,6 +573,12 @@
     }
   }
 
+  // we use a static method to create this lambda because an instance-method lambda would implicitly
+  // reference this.nextSeqNo and thereby hold on to this delete queue, a memory leak we previously
+  // introduced. See LUCENE-9478 for reference.
+  private static LongSupplier getPrevMaxSeqIdSupplier(AtomicLong nextSeqNo) {
+    return () -> nextSeqNo.get() - 1;
+  }
+
   /**
    * Advances the queue to the next queue on flush. This carries over the generation to the next queue and
    * sets the {@link #getMaxSeqNo()} based on the given maxNumPendingOps. This method can only be called once, subsequently
@@ -593,7 +599,7 @@
     return new DocumentsWriterDeleteQueue(infoStream, generation + 1, seqNo + 1,
        // don't pass ::getMaxCompletedSeqNo here b/c otherwise we keep a reference to this queue
         // and this will be a memory leak since the queues can't be GCed
-        () -> nextSeqNo.get() - 1);
+        getPrevMaxSeqIdSupplier(nextSeqNo));
 
   }
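The static factory above matters because a lambda that reads an instance field from an instance method captures this, keeping the entire delete queue reachable through the supplier. A minimal illustration of the difference (class and field names are hypothetical):

import java.util.concurrent.atomic.AtomicLong;
import java.util.function.LongSupplier;

class CaptureSketch {
  private final AtomicLong nextSeqNo = new AtomicLong();
  private final byte[] lotsOfState = new byte[1 << 20]; // stands in for the queue's buffers

  // Leaks: the field read compiles to this.nextSeqNo, so the returned supplier
  // pins the whole CaptureSketch instance, including lotsOfState.
  LongSupplier leaky() {
    return () -> nextSeqNo.get() - 1;
  }

  // Safe: the lambda inside the static method captures only its parameter.
  LongSupplier safe() {
    return prevMaxSeqId(nextSeqNo);
  }

  private static LongSupplier prevMaxSeqId(AtomicLong nextSeqNo) {
    return () -> nextSeqNo.get() - 1;
  }
}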
 
diff --git a/lucene/core/src/java/org/apache/lucene/index/DocumentsWriterFlushControl.java b/lucene/core/src/java/org/apache/lucene/index/DocumentsWriterFlushControl.java
index e3df505..5684bf4 100644
--- a/lucene/core/src/java/org/apache/lucene/index/DocumentsWriterFlushControl.java
+++ b/lucene/core/src/java/org/apache/lucene/index/DocumentsWriterFlushControl.java
@@ -324,12 +324,16 @@
     }
   }
 
+  /**
+   * To be called only by the owner of this object's monitor lock
+   */
   private void checkoutAndBlock(DocumentsWriterPerThread perThread) {
+    assert Thread.holdsLock(this);
     assert perThreadPool.isRegistered(perThread);
     assert perThread.isHeldByCurrentThread();
     assert perThread.isFlushPending() : "can not block non-pending threadstate";
     assert fullFlush : "can not block if fullFlush == false";
-    numPending--;
+    numPending--; // write access synced
     blockedFlushes.add(perThread);
     boolean checkedOut = perThreadPool.checkout(perThread);
     assert checkedOut;
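The new assert makes the locking contract of checkoutAndBlock executable: with assertions enabled, any caller that does not already hold the object's monitor fails fast. The same pattern in isolation, on a hypothetical class:

class MonitorContractSketch {
  private int numPending;

  // Contract: callers must hold this object's monitor.
  private void checkoutAndBlock() {
    assert Thread.holdsLock(this); // enforced only when running with -ea
    numPending--; // write access is synced by the caller's synchronized block
  }

  void checkout() {
    synchronized (this) {
      checkoutAndBlock(); // legal: the monitor is held here
    }
  }
}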
diff --git a/lucene/core/src/java/org/apache/lucene/index/DocumentsWriterPerThreadPool.java b/lucene/core/src/java/org/apache/lucene/index/DocumentsWriterPerThreadPool.java
index ce6956c..5db3ec4 100644
--- a/lucene/core/src/java/org/apache/lucene/index/DocumentsWriterPerThreadPool.java
+++ b/lucene/core/src/java/org/apache/lucene/index/DocumentsWriterPerThreadPool.java
@@ -96,6 +96,10 @@
         throw new ThreadInterruptedException(ie);
       }
     }
+    // we must check if we are closed since this might happen while we are waiting for the writer permit,
+    // and if we miss that we might release a new DWPT even though the pool is closed. That wouldn't be the
+    // end of the world, but it would violate the contract that we don't release any new DWPT after this pool is closed.
+    ensureOpen();
     DocumentsWriterPerThread dwpt = dwptFactory.get();
     dwpt.lock(); // lock so nobody else will get this DWPT
     dwpts.add(dwpt);
@@ -108,9 +112,7 @@
   /** This method is used by DocumentsWriter/FlushControl to obtain a DWPT to do an indexing operation (add/updateDocument). */
   DocumentsWriterPerThread getAndLock() throws IOException {
     synchronized (this) {
-      if (closed) {
-        throw new AlreadyClosedException("DWPTPool is already closed");
-      }
+      ensureOpen();
       // Important that we are LIFO here! This way if number of concurrent indexing threads was once high,
       // but has now reduced, we only use a limited number of DWPTs. This also guarantees that if we have suddenly
       // a single thread indexing
@@ -127,6 +129,12 @@
     }
   }
 
+  private void ensureOpen() {
+    if (closed) {
+      throw new AlreadyClosedException("DWPTPool is already closed");
+    }
+  }
+
   void marksAsFreeAndUnlock(DocumentsWriterPerThread state) {
     synchronized (this) {
       assert dwpts.contains(state) : "we tried to add a DWPT back to the pool but the pool doesn't know about this DWPT";
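The ensureOpen() call inserted after the permit wait closes a race: close() can run while a thread is blocked waiting, and without the re-check that thread would hand out a fresh DWPT from a closed pool. A compressed sketch of the check-after-wait pattern (the semaphore and names are illustrative):

import java.util.concurrent.Semaphore;

class PermitPoolSketch {
  private final Semaphore permits = new Semaphore(8);
  private volatile boolean closed;

  Object acquire() throws InterruptedException {
    permits.acquire();   // may block for a long time; close() can run meanwhile
    ensureOpen();        // re-check after the wait, mirroring the patch above
    return new Object(); // stands in for creating a new DWPT
  }

  private void ensureOpen() {
    if (closed) {
      throw new IllegalStateException("pool is already closed");
    }
  }

  void close() {
    closed = true;
  }
}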
diff --git a/lucene/core/src/java/org/apache/lucene/index/IndexWriter.java b/lucene/core/src/java/org/apache/lucene/index/IndexWriter.java
index 6c97794..424eb87 100644
--- a/lucene/core/src/java/org/apache/lucene/index/IndexWriter.java
+++ b/lucene/core/src/java/org/apache/lucene/index/IndexWriter.java
@@ -39,6 +39,7 @@
 import java.util.concurrent.atomic.AtomicLong;
 import java.util.concurrent.atomic.AtomicReference;
 import java.util.concurrent.locks.ReentrantLock;
+import java.util.function.BooleanSupplier;
 import java.util.function.IntPredicate;
 import java.util.stream.Collectors;
 import java.util.stream.StreamSupport;
@@ -401,7 +402,7 @@
   private final Set<MergePolicy.OneMerge> runningMerges = new HashSet<>();
   private final List<MergePolicy.OneMerge> mergeExceptions = new ArrayList<>();
   private long mergeGen;
-  private boolean stopMerges; // TODO make sure this is only changed once and never set back to false
+  private Merges merges = new Merges();
   private boolean didMessageState;
   private final AtomicInteger flushCount = new AtomicInteger();
   private final AtomicInteger flushDeletesCount = new AtomicInteger();
@@ -545,9 +546,10 @@
     // obtained during this flush are pooled, the first time
     // this method is called:
     readerPool.enableReaderPooling();
-    DirectoryReader r = null;
+    StandardDirectoryReader r = null;
     doBeforeFlush();
-    boolean anyChanges = false;
+    boolean anyChanges;
+    final long maxFullFlushMergeWaitMillis = config.getMaxFullFlushMergeWaitMillis();
     /*
      * for releasing a NRT reader we must ensure that 
      * DW doesn't add any segments or deletes until we are
@@ -555,8 +557,46 @@
      * We release the two stage full flush after we are done opening the
      * directory reader!
      */
+    MergePolicy.MergeSpecification onGetReaderMerges = null;
+    final AtomicBoolean stopCollectingMergedReaders = new AtomicBoolean(false);
+    final Map<String, SegmentReader> mergedReaders = new HashMap<>();
+    final Map<String, SegmentReader> openedReadOnlyClones = new HashMap<>();
+    // this function is used to control which SR are opened in order to keep track of them
+    // and to reuse them in the case we wait for merges in this getReader call.
+    IOUtils.IOFunction<SegmentCommitInfo, SegmentReader> readerFactory = sci -> {
+      final ReadersAndUpdates rld = getPooledInstance(sci, true);
+      try {
+        assert Thread.holdsLock(IndexWriter.this);
+        SegmentReader segmentReader = rld.getReadOnlyClone(IOContext.READ);
+        if (maxFullFlushMergeWaitMillis > 0) { // only track this if we actually do fullFlush merges
+          openedReadOnlyClones.put(sci.info.name, segmentReader);
+        }
+        return segmentReader;
+      } finally {
+        release(rld);
+      }
+    };
+    Closeable onGetReaderMergeResources = null;
+    SegmentInfos openingSegmentInfos = null;
     boolean success2 = false;
     try {
+      /* this is the essential part of the getReader method. We need to take care of the following things:
+       *  - flush all currently in-memory DWPTs to disk
+       *  - apply all deletes & updates to new and to the existing DWPTs
+       *  - prevent flushes and applying deletes of concurrently indexing DWPTs to be applied
+       *  - open a SDR on the updated SIS
+       *
+       * in order to prevent concurrent flushes we call DocumentsWriter#flushAllThreads that swaps out the deleteQueue
+       *  (this enforces a happens before relationship between this and the subsequent full flush) and informs the
+       * FlushControl (#markForFullFlush()) that it should prevent any new DWPTs from flushing until we are
+       * done (DocumentsWriter#finishFullFlush(boolean)). All this is guarded by the fullFlushLock to prevent multiple
+       * full flushes from happening concurrently. Once the DocWriter has initiated a full flush we can sequentially flush
+       * and apply deletes & updates to the written segments without worrying about concurrently indexing DWPTs. The important
+       * aspect is that it all happens between DocumentsWriter#flushAllThreads() and DocumentsWriter#finishFullFlush(boolean)
+       * since once the flush is marked as done deletes start to be applied to the segments on disk without guarantees that
+       * the corresponding added documents (in the update case) are flushed and visible when opening a SDR.
+       *
+       */
       boolean success = false;
       synchronized (fullFlushLock) {
         try {
@@ -573,7 +613,6 @@
           if (applyAllDeletes) {
             applyAllDeletesAndUpdates();
           }
-
           synchronized(this) {
 
             // NOTE: we cannot carry doc values updates in memory yet, so we always must write them through to disk and re-open each
@@ -581,16 +620,50 @@
 
             // TODO: we could instead just clone SIS and pull/incref readers in sync'd block, and then do this w/o IW's lock?
             // Must do this sync'd on IW to prevent a merge from completing at the last second and failing to write its DV updates:
-           writeReaderPool(writeAllDeletes);
+            writeReaderPool(writeAllDeletes);
 
             // Prevent segmentInfos from changing while opening the
             // reader; in theory we could instead do similar retry logic,
             // just like we do when loading segments_N
-            
-            r = StandardDirectoryReader.open(this, segmentInfos, applyAllDeletes, writeAllDeletes);
+            r = StandardDirectoryReader.open(this, readerFactory, segmentInfos, applyAllDeletes, writeAllDeletes);
             if (infoStream.isEnabled("IW")) {
               infoStream.message("IW", "return reader version=" + r.getVersion() + " reader=" + r);
             }
+            if (maxFullFlushMergeWaitMillis > 0) {
+              // we take the SIS from the reader which has already pruned away fully deleted readers
+              // this makes pulling the readers below after the merge simpler since we can be sure that
+              // they are not closed. Every segment has a corresponding SR in the SDR we opened if we use
+              // this SIS
+              // we need to do this rather complicated management of SRs and infos since we can't wait for merges
+              // while we hold the fullFlushLock since the merge might hit a tragic event and that must not be reported
+              // while holding that lock. Merging outside of the lock, i.e. after calling docWriter.finishFullFlush(boolean), would
+              // yield wrong results because deletes might sneak in during the merge
+              openingSegmentInfos = r.getSegmentInfos().clone();
+              onGetReaderMerges = preparePointInTimeMerge(openingSegmentInfos, stopCollectingMergedReaders::get, MergeTrigger.GET_READER,
+                  sci -> {
+                    assert stopCollectingMergedReaders.get() == false : "illegal state: merge reader must not be pulled since we already stopped waiting for merges";
+                    SegmentReader apply = readerFactory.apply(sci);
+                    mergedReaders.put(sci.info.name, apply);
+                    // we need to incRef the files of the opened SR otherwise it's possible that another merge
+                    // removes the segment before we pass it on to the SDR
+                    deleter.incRef(sci.files());
+                  });
+              onGetReaderMergeResources = () -> {
+                // this needs to be closed once after we are done. In the case of an exception it releases
+                // all resources, closes the merged readers and decrements the files references.
+                // this only happens for readers that haven't been removed from the mergedReaders map and released elsewhere
+                synchronized (this) {
+                  stopCollectingMergedReaders.set(true);
+                  IOUtils.close(mergedReaders.values().stream().map(sr -> (Closeable) () -> {
+                    try {
+                      deleter.decRef(sr.getSegmentInfo().files());
+                    } finally {
+                      sr.close();
+                    }
+                  }).collect(Collectors.toList()));
+                }
+              };
+            }
           }
           success = true;
         } finally {
@@ -607,6 +680,19 @@
           }
         }
       }
+      if (onGetReaderMerges != null) { // only relevant if we do merge on getReader
+        StandardDirectoryReader mergedReader = finishGetReaderMerge(stopCollectingMergedReaders, mergedReaders,
+            openedReadOnlyClones, openingSegmentInfos, applyAllDeletes,
+            writeAllDeletes, onGetReaderMerges, maxFullFlushMergeWaitMillis);
+        if (mergedReader != null) {
+          try {
+            r.close();
+          } finally {
+            r = mergedReader;
+          }
+        }
+      }
+
       anyChanges |= maybeMerge.getAndSet(false);
       if (anyChanges) {
         maybeMerge(config.getMergePolicy(), MergeTrigger.FULL_FLUSH, UNBOUNDED_MAX_MERGE_SEGMENTS);
@@ -621,15 +707,66 @@
     } finally {
       if (!success2) {
         try {
-          IOUtils.closeWhileHandlingException(r);
+          IOUtils.closeWhileHandlingException(r, onGetReaderMergeResources);
         } finally {
           maybeCloseOnTragicEvent();
         }
+      } else {
+        IOUtils.close(onGetReaderMergeResources);
       }
     }
     return r;
   }
 
+  private StandardDirectoryReader finishGetReaderMerge(AtomicBoolean stopCollectingMergedReaders, Map<String, SegmentReader> mergedReaders,
+                                                       Map<String, SegmentReader> openedReadOnlyClones, SegmentInfos openingSegmentInfos,
+                                                       boolean applyAllDeletes, boolean writeAllDeletes,
+                                                       MergePolicy.MergeSpecification pointInTimeMerges, long maxCommitMergeWaitMillis) throws IOException {
+    assert openingSegmentInfos != null;
+    mergeScheduler.merge(mergeSource, MergeTrigger.GET_READER);
+    pointInTimeMerges.await(maxCommitMergeWaitMillis, TimeUnit.MILLISECONDS);
+    synchronized (this) {
+      stopCollectingMergedReaders.set(true);
+      StandardDirectoryReader reader = maybeReopenMergedNRTReader(mergedReaders, openedReadOnlyClones, openingSegmentInfos,
+          applyAllDeletes, writeAllDeletes);
+      IOUtils.close(mergedReaders.values());
+      mergedReaders.clear();
+      return reader;
+    }
+  }
+
+  private StandardDirectoryReader maybeReopenMergedNRTReader(Map<String, SegmentReader> mergedReaders,
+                                                             Map<String, SegmentReader> openedReadOnlyClones, SegmentInfos openingSegmentInfos,
+                                                             boolean applyAllDeletes, boolean writeAllDeletes) throws IOException {
+    assert Thread.holdsLock(this);
+    if (mergedReaders.isEmpty() == false) {
+      Collection<String> files = new ArrayList<>();
+      try {
+        return StandardDirectoryReader.open(this,
+            sci -> {
+              // as soon as we remove the reader and return it the StandardDirectoryReader#open
+              // will take care of closing it. We only need to handle the readers that remain in the
+              // mergedReaders map and close them.
+              SegmentReader remove = mergedReaders.remove(sci.info.name);
+              if (remove == null) {
+                remove = openedReadOnlyClones.remove(sci.info.name);
+                assert remove != null;
+                // each of the readers we reuse from the previous reader needs to be incRef'd
+                // since we reuse them but don't have an implicit incRef in the SDR:open call
+                remove.incRef();
+              } else {
+                files.addAll(remove.getSegmentInfo().files());
+              }
+              return remove;
+            }, openingSegmentInfos, applyAllDeletes, writeAllDeletes);
+      } finally {
+        // now the SDR#open call has incRef'd the files so we can let them go
+        deleter.decRef(files);
+      }
+    }
+    return null;
+  }
+
   @Override
   public final long ramBytesUsed() {
     ensureOpen();
@@ -1097,7 +1234,6 @@
         flush(true, true);
         waitForMerges();
         commitInternal(config.getMergePolicy());
-        rollbackInternal(); // ie close, since we just committed
       } catch (Throwable t) {
         // Be certain to close the index on any exception
         try {
@@ -1107,6 +1243,7 @@
         }
         throw t;
       }
+      rollbackInternal(); // if we got that far lets rollback and close
     }
   }
 
@@ -2131,10 +2268,14 @@
   private final void maybeMerge(MergePolicy mergePolicy, MergeTrigger trigger, int maxNumSegments) throws IOException {
     ensureOpen(false);
     if (updatePendingMerges(mergePolicy, trigger, maxNumSegments) != null) {
-      mergeScheduler.merge(mergeSource, trigger);
+      executeMerge(trigger);
     }
   }
 
+  final void executeMerge(MergeTrigger trigger) throws IOException {
+    mergeScheduler.merge(mergeSource, trigger);
+  }
+
   private synchronized MergePolicy.MergeSpecification updatePendingMerges(MergePolicy mergePolicy, MergeTrigger trigger, int maxNumSegments)
     throws IOException {
 
@@ -2144,7 +2285,7 @@
 
     assert maxNumSegments == UNBOUNDED_MAX_MERGE_SEGMENTS || maxNumSegments > 0;
     assert trigger != null;
-    if (stopMerges) {
+    if (merges.areEnabled() == false) {
       return null;
     }
 
@@ -2168,6 +2309,7 @@
       }
     } else {
       switch (trigger) {
+        case GET_READER:
         case COMMIT:
           spec = mergePolicy.findFullFlushMerges(trigger, segmentInfos, this);
           break;
@@ -2261,10 +2403,9 @@
     
     try {
       synchronized (this) {
-        // must be synced otherwise register merge might throw and exception if stopMerges
+        // must be synced otherwise register merge might throw an exception if merges
         // changes concurrently, abortMerges is synced as well
-        stopMerges = true; // this disables merges forever
-        abortMerges();
+        abortMerges(); // this disables merges forever since we are closing and can't re-enable them
         assert mergingSegments.isEmpty() : "we aborted all merges but still have merging segments: " + mergingSegments;
       }
       if (infoStream.isEnabled("IW")) {
@@ -2427,7 +2568,16 @@
           synchronized (this) {
             try {
               // Abort any running merges
-              abortMerges();
+              try {
+                abortMerges();
+                assert merges.areEnabled() == false : "merges should be disabled - who enabled them?";
+                assert mergingSegments.isEmpty() : "found merging segments but merges are disabled: " + mergingSegments;
+              } finally {
+                // abortMerges disables all merges and we need to re-enable them here to make sure
+                // IW can function properly. An exception in abortMerges() might be fatal for IW, but just to be sure
+                // let's re-enable merges anyway.
+                merges.enable();
+              }
               adjustPendingNumDocs(-segmentInfos.totalMaxDoc());
               // Remove all segments
               segmentInfos.clear();
@@ -2451,6 +2601,7 @@
               return seqNo;
             } finally {
               if (success == false) {
+
                 if (infoStream.isEnabled("IW")) {
                   infoStream.message("IW", "hit exception during deleteAll");
                 }
@@ -2469,6 +2620,7 @@
    *  method: when you abort a long-running merge, you lose
    *  a lot of work that must later be redone. */
   private synchronized void abortMerges() throws IOException {
+    merges.disable();
     // Abort all pending & running merges:
     IOUtils.applyToAll(pendingMerges, merge -> {
       if (infoStream.isEnabled("IW")) {
@@ -2969,7 +3121,7 @@
 
       synchronized (this) {
         ensureOpen();
-        assert stopMerges == false;
+        assert merges.areEnabled();
         runningAddIndexesMerges.add(merger);
       }
       try {
@@ -2990,7 +3142,7 @@
       final MergePolicy mergePolicy = config.getMergePolicy();
       boolean useCompoundFile;
       synchronized(this) { // Guard segmentInfos
-        if (stopMerges) {
+        if (merges.areEnabled() == false) {
           // Safe: these files must exist
           deleteNewFiles(infoPerCommit.files());
 
@@ -3026,7 +3178,7 @@
 
       // Register the new segment
       synchronized(this) {
-        if (stopMerges) {
+        if (merges.areEnabled() == false) {
           // Safe: these files must exist
           deleteNewFiles(infoPerCommit.files());
 
@@ -3179,9 +3331,9 @@
       SegmentInfos toCommit = null;
       boolean anyChanges = false;
       long seqNo;
-      MergePolicy.MergeSpecification onCommitMerges = null;
-      AtomicBoolean includeInCommit = new AtomicBoolean(true);
-      final long maxCommitMergeWaitMillis = config.getMaxCommitMergeWaitMillis();
+      MergePolicy.MergeSpecification pointInTimeMerges = null;
+      AtomicBoolean stopAddingMergedSegments = new AtomicBoolean(false);
+      final long maxCommitMergeWaitMillis = config.getMaxFullFlushMergeWaitMillis();
       // This is copied from doFlush, except it's modified to
       // clone & incRef the flushed SegmentInfos inside the
       // sync block:
@@ -3242,9 +3394,9 @@
               // removed the files we are now syncing.
               deleter.incRef(toCommit.files(false));
               if (anyChanges && maxCommitMergeWaitMillis > 0) {
-                // we can safely call prepareOnCommitMerge since writeReaderPool(true) above wrote all
+                // we can safely call preparePointInTimeMerge since writeReaderPool(true) above wrote all
                 // necessary files to disk and checkpointed them.
-                onCommitMerges = prepareOnCommitMerge(toCommit, includeInCommit);
+                pointInTimeMerges = preparePointInTimeMerge(toCommit, stopAddingMergedSegments::get, MergeTrigger.COMMIT, sci->{});
               }
             }
             success = true;
@@ -3267,21 +3419,21 @@
         maybeCloseOnTragicEvent();
       }
 
-      if (onCommitMerges != null) {
+      if (pointInTimeMerges != null) {
         if (infoStream.isEnabled("IW")) {
-          infoStream.message("IW", "now run merges during commit: " + onCommitMerges.segString(directory));
+          infoStream.message("IW", "now run merges during commit: " + pointInTimeMerges.segString(directory));
         }
         mergeScheduler.merge(mergeSource, MergeTrigger.COMMIT);
-        onCommitMerges.await(maxCommitMergeWaitMillis, TimeUnit.MILLISECONDS);
+        pointInTimeMerges.await(maxCommitMergeWaitMillis, TimeUnit.MILLISECONDS);
         if (infoStream.isEnabled("IW")) {
           infoStream.message("IW", "done waiting for merges during commit");
         }
         synchronized (this) {
           // we need to call this under lock since mergeFinished above is also called under the IW lock
-          includeInCommit.set(false);
+          stopAddingMergedSegments.set(true);
         }
       }
-      // do this after handling any onCommitMerges since the files will have changed if any merges
+      // do this after handling any pointInTimeMerges since the files will have changed if any merges
       // did complete
       filesToCommit = toCommit.files(false);
       try {
@@ -3312,21 +3464,24 @@
   }
 
   /**
-   * This optimization allows a commit to wait for merges on smallish segments to
-   * reduce the eventual number of tiny segments in the commit point.  We wrap a {@code OneMerge} to
-   * update the {@code committingSegmentInfos} once the merge has finished.  We replace the source segments
-   * in the SIS that we are going to commit with the freshly merged segment, but ignore all deletions and updates
-   * that are made to documents in the merged segment while it was merging.  The updates that are made do not belong to
-   * the point-in-time commit point and should therefore not be included. See the clone call in {@code onMergeComplete}
+   * This optimization allows a commit/getReader to wait for merges on smallish segments to
+   * reduce the eventual number of tiny segments in the commit point / NRT Reader.  We wrap a {@code OneMerge} to
+   * update the {@code mergingSegmentInfos} once the merge has finished. We replace the source segments
+   * in the SIS that we are going to commit / open the reader on with the freshly merged segment, but ignore all deletions and updates
+   * that are made to documents in the merged segment while it was merging. The updates that are made do not belong to
+ * the point-in-time commit point / NRT reader and should therefore not be included. See the clone call in {@code onMergeComplete}
    * below.  We also ensure that we pull the merge readers while holding {@code IndexWriter}'s lock.  Otherwise
    * we could see concurrent deletions/updates applied that do not belong to the segment.
    */
-  private MergePolicy.MergeSpecification prepareOnCommitMerge(SegmentInfos committingSegmentInfos, AtomicBoolean includeInCommit) throws IOException {
+  private MergePolicy.MergeSpecification preparePointInTimeMerge(SegmentInfos mergingSegmentInfos, BooleanSupplier stopCollectingMergeResults,
+                                                                 MergeTrigger trigger,
+                                                                 IOUtils.IOConsumer<SegmentCommitInfo> mergeFinished) throws IOException {
     assert Thread.holdsLock(this);
-    MergePolicy.MergeSpecification onCommitMerges = updatePendingMerges(new OneMergeWrappingMergePolicy(config.getMergePolicy(), toWrap ->
+    assert trigger == MergeTrigger.GET_READER || trigger == MergeTrigger.COMMIT : "illegal trigger: " + trigger;
+    MergePolicy.MergeSpecification pointInTimeMerges = updatePendingMerges(new OneMergeWrappingMergePolicy(config.getMergePolicy(), toWrap ->
         new MergePolicy.OneMerge(toWrap.segments) {
           SegmentCommitInfo origInfo;
-          AtomicBoolean onlyOnce = new AtomicBoolean(false);
+          final AtomicBoolean onlyOnce = new AtomicBoolean(false);
 
           @Override
           public void mergeFinished(boolean committed, boolean segmentDropped) throws IOException {
@@ -3334,50 +3489,65 @@
 
             // includedInCommit will be set (above, by our caller) to false if the allowed max wall clock
             // time (IWC.getMaxCommitMergeWaitMillis()) has elapsed, which means we did not make the timeout
-            // and will not commit our merge to the to-be-commited SegmentInfos
-            
+            // and will not commit our merge to the to-be-committed SegmentInfos
             if (segmentDropped == false
                 && committed
-                && includeInCommit.get()) {
+                && stopCollectingMergeResults.getAsBoolean() == false) {
+
+              // make sure onMergeComplete really was called:
+              assert origInfo != null;
 
               if (infoStream.isEnabled("IW")) {
                 infoStream.message("IW", "now apply merge during commit: " + toWrap.segString());
               }
 
-              // make sure onMergeComplete really was called:
-              assert origInfo != null;
-
-              deleter.incRef(origInfo.files());
+              if (trigger == MergeTrigger.COMMIT) {
+                // if we do this in a getReader call here this is obsolete since we already hold a reader that has
+                // incRef'd these files
+                deleter.incRef(origInfo.files());
+              }
               Set<String> mergedSegmentNames = new HashSet<>();
               for (SegmentCommitInfo sci : segments) {
                 mergedSegmentNames.add(sci.info.name);
               }
               List<SegmentCommitInfo> toCommitMergedAwaySegments = new ArrayList<>();
-              for (SegmentCommitInfo sci : committingSegmentInfos) {
+              for (SegmentCommitInfo sci : mergingSegmentInfos) {
                 if (mergedSegmentNames.contains(sci.info.name)) {
                   toCommitMergedAwaySegments.add(sci);
-                  deleter.decRef(sci.files());
+                  if (trigger == MergeTrigger.COMMIT) {
+                    // if we do this in a getReader call here this is obsolete since we already hold a reader that has
+                    // incRef'd these files and will decRef them when it's closed
+                    deleter.decRef(sci.files());
+                  }
                 }
               }
               // Construct a OneMerge that applies to toCommit
               MergePolicy.OneMerge applicableMerge = new MergePolicy.OneMerge(toCommitMergedAwaySegments);
               applicableMerge.info = origInfo;
               long segmentCounter = Long.parseLong(origInfo.info.name.substring(1), Character.MAX_RADIX);
-              committingSegmentInfos.counter = Math.max(committingSegmentInfos.counter, segmentCounter + 1);
-              committingSegmentInfos.applyMergeChanges(applicableMerge, false);
+              mergingSegmentInfos.counter = Math.max(mergingSegmentInfos.counter, segmentCounter + 1);
+              mergingSegmentInfos.applyMergeChanges(applicableMerge, false);
             } else {
               if (infoStream.isEnabled("IW")) {
                 infoStream.message("IW", "skip apply merge during commit: " + toWrap.segString());
               }
             }
-            toWrap.mergeFinished(committed, false);
+            toWrap.mergeFinished(committed, segmentDropped);
             super.mergeFinished(committed, segmentDropped);
           }
 
           @Override
-          void onMergeComplete() {
-            // clone the target info to make sure we have the original info without the updated del and update gens
-            origInfo = info.clone();
+          void onMergeComplete() throws IOException {
+            assert Thread.holdsLock(IndexWriter.this);
+            if (stopCollectingMergeResults.getAsBoolean() == false
+                && isAborted() == false
+                && info.info.maxDoc() > 0 /* never do this if the segment is dropped / empty */) {
+              mergeFinished.accept(info);
+              // clone the target info to make sure we have the original info without the updated del and update gens
+              origInfo = info.clone();
+            }
+            toWrap.onMergeComplete();
+            super.onMergeComplete();
           }
 
           @Override
@@ -3394,11 +3564,11 @@
             return toWrap.wrapForMerge(reader); // must delegate
           }
         }
-    ), MergeTrigger.COMMIT, UNBOUNDED_MAX_MERGE_SEGMENTS);
-    if (onCommitMerges != null) {
+    ), trigger, UNBOUNDED_MAX_MERGE_SEGMENTS);
+    if (pointInTimeMerges != null) {
       boolean closeReaders = true;
       try {
-        for (MergePolicy.OneMerge merge : onCommitMerges.merges) {
+        for (MergePolicy.OneMerge merge : pointInTimeMerges.merges) {
           IOContext context = new IOContext(merge.getStoreMergeInfo());
           merge.initMergeReaders(
               sci -> {
@@ -3412,17 +3582,20 @@
         closeReaders = false;
       } finally {
         if (closeReaders) {
-          IOUtils.applyToAll(onCommitMerges.merges, merge -> {
+          IOUtils.applyToAll(pointInTimeMerges.merges, merge -> {
             // that merge is broken we need to clean up after it - it's fine we still have the IW lock to do this
             boolean removed = pendingMerges.remove(merge);
             assert removed: "merge should be pending but isn't: " + merge.segString();
-            abortOneMerge(merge);
-            mergeFinish(merge);
+            try {
+              abortOneMerge(merge);
+            } finally {
+              mergeFinish(merge);
+            }
           });
         }
       }
     }
-    return onCommitMerges;
+    return pointInTimeMerges;
   }
 
   /**
@@ -4232,7 +4405,7 @@
     }
     assert merge.segments.size() > 0;
 
-    if (stopMerges) {
+    if (merges.areEnabled() == false) {
       abortOneMerge(merge);
       throw new MergePolicy.MergeAbortedException("merge is aborted: " + segString(merge.segments));
     }
@@ -5728,4 +5901,24 @@
       return writer.segString();
     }
   }
+
+  private class Merges {
+    private boolean mergesEnabled = true;
+
+    boolean areEnabled() {
+      assert Thread.holdsLock(IndexWriter.this);
+      return mergesEnabled;
+    }
+
+    void disable() {
+      assert Thread.holdsLock(IndexWriter.this);
+      mergesEnabled = false;
+    }
+
+    void enable() {
+      ensureOpen();
+      assert Thread.holdsLock(IndexWriter.this);
+      mergesEnabled = true;
+    }
+  }
 }
diff --git a/lucene/core/src/java/org/apache/lucene/index/IndexWriterConfig.java b/lucene/core/src/java/org/apache/lucene/index/IndexWriterConfig.java
index 6dcdf83..25f78c7 100644
--- a/lucene/core/src/java/org/apache/lucene/index/IndexWriterConfig.java
+++ b/lucene/core/src/java/org/apache/lucene/index/IndexWriterConfig.java
@@ -110,8 +110,8 @@
   /** Default value for whether calls to {@link IndexWriter#close()} include a commit. */
   public final static boolean DEFAULT_COMMIT_ON_CLOSE = true;
 
-  /** Default value for time to wait for merges on commit (when using a {@link MergePolicy} that implements {@link MergePolicy#findFullFlushMerges}). */
-  public static final long DEFAULT_MAX_COMMIT_MERGE_WAIT_MILLIS = 0;
+  /** Default value for time to wait for merges on commit or getReader (when using a {@link MergePolicy} that implements {@link MergePolicy#findFullFlushMerges}). */
+  public static final long DEFAULT_MAX_FULL_FLUSH_MERGE_WAIT_MILLIS = 0;
   
   // indicates whether this config instance is already attached to a writer.
   // not final so that it can be cloned properly.
@@ -463,17 +463,18 @@
   }
 
   /**
-   * Expert: sets the amount of time to wait for merges (during {@link IndexWriter#commit}) returned by
+   * Expert: sets the amount of time to wait for merges (during {@link IndexWriter#commit}
+   * or {@link IndexWriter#getReader(boolean, boolean)}) returned by
    * MergePolicy.findFullFlushMerges(...).
    * If this time is reached, we proceed with the commit based on segments merged up to that point.
-   * The merges are not cancelled, and will still run to completion independent of the commit,
-   * like natural segment merges. The default is <code>{@value IndexWriterConfig#DEFAULT_MAX_COMMIT_MERGE_WAIT_MILLIS}</code>.
+   * The merges are not aborted, and will still run to completion independent of the commit or getReader call,
+   * like natural segment merges. The default is <code>{@value IndexWriterConfig#DEFAULT_MAX_FULL_FLUSH_MERGE_WAIT_MILLIS}</code>.
    *
   * Note: This setting has no effect unless {@link MergePolicy#findFullFlushMerges(MergeTrigger, SegmentInfos, MergePolicy.MergeContext)}
   * has an implementation that actually returns merges; the default implementation returns none.
    */
-  public IndexWriterConfig setMaxCommitMergeWaitMillis(long maxCommitMergeWaitMillis) {
-    this.maxCommitMergeWaitMillis = maxCommitMergeWaitMillis;
+  public IndexWriterConfig setMaxFullFlushMergeWaitMillis(long maxFullFlushMergeWaitMillis) {
+    this.maxFullFlushMergeWaitMillis = maxFullFlushMergeWaitMillis;
     return this;
   }
 
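A hedged usage sketch of the renamed setter (the 500 ms value and the StandardAnalyzer choice are assumptions of this example; the setting is a no-op unless the merge policy's findFullFlushMerges actually returns merges):

```java
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.index.IndexWriterConfig;

class FullFlushMergeWaitExample {
  static IndexWriterConfig newConfig() {
    // Wait up to 500 ms for merges returned by MergePolicy.findFullFlushMerges
    // during commit() or getReader(); 0 (the default) means don't wait at all.
    return new IndexWriterConfig(new StandardAnalyzer())
        .setMaxFullFlushMergeWaitMillis(500);
  }
}
```
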
diff --git a/lucene/core/src/java/org/apache/lucene/index/LiveIndexWriterConfig.java b/lucene/core/src/java/org/apache/lucene/index/LiveIndexWriterConfig.java
index 1450331..f979984 100644
--- a/lucene/core/src/java/org/apache/lucene/index/LiveIndexWriterConfig.java
+++ b/lucene/core/src/java/org/apache/lucene/index/LiveIndexWriterConfig.java
@@ -110,7 +110,7 @@
   protected String softDeletesField = null;
 
   /** Amount of time to wait for merges returned by MergePolicy.findFullFlushMerges(...) */
-  protected volatile long maxCommitMergeWaitMillis;
+  protected volatile long maxFullFlushMergeWaitMillis;
 
   // used by IndexWriterConfig
   LiveIndexWriterConfig(Analyzer analyzer) {
@@ -134,7 +134,7 @@
     flushPolicy = new FlushByRamOrCountsPolicy();
     readerPooling = IndexWriterConfig.DEFAULT_READER_POOLING;
     perThreadHardLimitMB = IndexWriterConfig.DEFAULT_RAM_PER_THREAD_HARD_LIMIT_MB;
-    maxCommitMergeWaitMillis = IndexWriterConfig.DEFAULT_MAX_COMMIT_MERGE_WAIT_MILLIS;
+    maxFullFlushMergeWaitMillis = IndexWriterConfig.DEFAULT_MAX_FULL_FLUSH_MERGE_WAIT_MILLIS;
   }
   
   /** Returns the default analyzer to use for indexing documents. */
@@ -469,8 +469,8 @@
    * If this time is reached, we proceed with the commit based on segments merged up to that point.
    * The merges are not cancelled, and may still run to completion independent of the commit.
    */
-  public long getMaxCommitMergeWaitMillis() {
-    return maxCommitMergeWaitMillis;
+  public long getMaxFullFlushMergeWaitMillis() {
+    return maxFullFlushMergeWaitMillis;
   }
 
   @Override
@@ -496,7 +496,7 @@
     sb.append("indexSort=").append(getIndexSort()).append("\n");
     sb.append("checkPendingFlushOnUpdate=").append(isCheckPendingFlushOnUpdate()).append("\n");
     sb.append("softDeletesField=").append(getSoftDeletesField()).append("\n");
-    sb.append("maxCommitMergeWaitMillis=").append(getMaxCommitMergeWaitMillis()).append("\n");
+    sb.append("maxFullFlushMergeWaitMillis=").append(getMaxFullFlushMergeWaitMillis()).append("\n");
     return sb.toString();
   }
 }
diff --git a/lucene/core/src/java/org/apache/lucene/index/MergePolicy.java b/lucene/core/src/java/org/apache/lucene/index/MergePolicy.java
index 796d3b8..641facf 100644
--- a/lucene/core/src/java/org/apache/lucene/index/MergePolicy.java
+++ b/lucene/core/src/java/org/apache/lucene/index/MergePolicy.java
@@ -421,7 +421,7 @@
     /**
      * Called just before the merge is applied to IndexWriter's SegmentInfos
      */
-    void onMergeComplete() {
+    void onMergeComplete() throws IOException {
     }
 
     /**
@@ -623,19 +623,20 @@
   /**
   * Identifies merges that we want to execute (synchronously) on commit or getReader. By default, this will do no merging.
    * If you implement this method in your {@code MergePolicy} you must also set a non-zero timeout using
-   * {@link IndexWriterConfig#setMaxCommitMergeWaitMillis}.
+   * {@link IndexWriterConfig#setMaxFullFlushMergeWaitMillis}.
    *
-   * Any merges returned here will make {@link IndexWriter#commit()} or {@link IndexWriter#prepareCommit()} block until
-   * the merges complete or until {@link IndexWriterConfig#getMaxCommitMergeWaitMillis()} has elapsed. This may be
-   * used to merge small segments that have just been flushed as part of the commit, reducing the number of segments in
-   * the commit. If a merge does not complete in the allotted time, it will continue to execute, and eventually finish and
-   * apply to future commits, but will not be reflected in the current commit.
+   * Any merges returned here will make {@link IndexWriter#commit()}, {@link IndexWriter#prepareCommit()}
+   * or {@link IndexWriter#getReader(boolean, boolean)} block until
+   * the merges complete or until {@link IndexWriterConfig#getMaxFullFlushMergeWaitMillis()} has elapsed. This may be
+   * used to merge small segments that have just been flushed, reducing the number of segments in
+   * the point-in-time snapshot. If a merge does not complete in the allotted time, it will continue to execute, eventually finish, and
+   * apply to future point-in-time snapshots, but will not be reflected in the current one.
    *
    * If a {@link OneMerge} in the returned {@link MergeSpecification} includes a segment already included in a registered
   * merge, then {@link IndexWriter#commit()} or {@link IndexWriter#prepareCommit()} will throw an {@link IllegalStateException}.
    * Use {@link MergeContext#getMergingSegments()} to determine which segments are currently registered to merge.
    *
-   * @param mergeTrigger the event that triggered the merge (COMMIT or FULL_FLUSH).
+   * @param mergeTrigger the event that triggered the merge (COMMIT or GET_READER).
    * @param segmentInfos the total set of segments in the index (while preparing the commit)
    * @param mergeContext the MergeContext to find the merges on, which should be used to determine which segments are
  *                     already in a registered merge (see {@link MergeContext#getMergingSegments()}).
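
For context, a hedged sketch of a policy that opts in, in the spirit of the MERGE_ON_COMMIT_POLICY the tests below drop (the class name and the merge-everything heuristic are illustrative, not part of this patch):

```java
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

import org.apache.lucene.index.FilterMergePolicy;
import org.apache.lucene.index.MergePolicy;
import org.apache.lucene.index.MergeTrigger;
import org.apache.lucene.index.SegmentCommitInfo;
import org.apache.lucene.index.SegmentInfos;

// Illustrative policy: on a full flush, merge every segment that is not
// already registered in another merge, per the javadoc's advice to consult
// MergeContext.getMergingSegments().
class MergeOnFullFlushPolicy extends FilterMergePolicy {
  MergeOnFullFlushPolicy(MergePolicy in) {
    super(in);
  }

  @Override
  public MergeSpecification findFullFlushMerges(MergeTrigger trigger, SegmentInfos infos,
                                                MergeContext context) throws IOException {
    List<SegmentCommitInfo> candidates = new ArrayList<>();
    for (SegmentCommitInfo sci : infos) {
      if (context.getMergingSegments().contains(sci) == false) {
        candidates.add(sci);
      }
    }
    if (candidates.size() < 2) {
      return null; // nothing worth merging
    }
    MergeSpecification spec = new MergeSpecification();
    spec.add(new OneMerge(candidates));
    return spec;
  }
}
```
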
diff --git a/lucene/core/src/java/org/apache/lucene/index/MergeTrigger.java b/lucene/core/src/java/org/apache/lucene/index/MergeTrigger.java
index 01a6b15..f11493f 100644
--- a/lucene/core/src/java/org/apache/lucene/index/MergeTrigger.java
+++ b/lucene/core/src/java/org/apache/lucene/index/MergeTrigger.java
@@ -53,4 +53,8 @@
    * Merge was triggered on commit.
    */
   COMMIT,
+  /**
+   * Merge was triggered on opening NRT readers.
+   */
+  GET_READER,
 }
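
The new trigger fires from the NRT path; a hedged sketch of when that happens (the ByteBuffersDirectory setup and the 100 ms timeout are assumptions of this example):

```java
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.index.DirectoryReader;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.store.ByteBuffersDirectory;
import org.apache.lucene.store.Directory;

class GetReaderTriggerExample {
  static void demo() throws Exception {
    try (Directory dir = new ByteBuffersDirectory();
         IndexWriter writer = new IndexWriter(dir,
             new IndexWriterConfig(new StandardAnalyzer())
                 .setMaxFullFlushMergeWaitMillis(100))) {
      // Opening an NRT reader performs a full flush; any merges the policy's
      // findFullFlushMerges returns now run under MergeTrigger.GET_READER,
      // and the reader waits up to the configured timeout for them.
      try (DirectoryReader reader = DirectoryReader.open(writer)) {
        // search against the point-in-time snapshot here
      }
    }
  }
}
```
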
diff --git a/lucene/core/src/java/org/apache/lucene/index/ReaderPool.java b/lucene/core/src/java/org/apache/lucene/index/ReaderPool.java
index b792be2..8a269c8 100644
--- a/lucene/core/src/java/org/apache/lucene/index/ReaderPool.java
+++ b/lucene/core/src/java/org/apache/lucene/index/ReaderPool.java
@@ -404,7 +404,7 @@
   private boolean noDups() {
     Set<String> seen = new HashSet<>();
     for(SegmentCommitInfo info : readerMap.keySet()) {
-      assert !seen.contains(info.info.name);
+      assert !seen.contains(info.info.name) : "seen twice: " + info.info.name;
       seen.add(info.info.name);
     }
     return true;
diff --git a/lucene/core/src/java/org/apache/lucene/index/SegmentInfos.java b/lucene/core/src/java/org/apache/lucene/index/SegmentInfos.java
index dc379ab..745b3d7 100644
--- a/lucene/core/src/java/org/apache/lucene/index/SegmentInfos.java
+++ b/lucene/core/src/java/org/apache/lucene/index/SegmentInfos.java
@@ -786,7 +786,6 @@
       // Must carefully compute fileName from "generation"
       // since lastGeneration isn't incremented:
       final String pending = IndexFileNames.fileNameFromGeneration(IndexFileNames.PENDING_SEGMENTS, "", generation);
-
       // Suppress so we keep throwing the original exception
       // in our caller
       IOUtils.deleteFilesIgnoringExceptions(dir, pending);
@@ -836,16 +835,24 @@
     if (pendingCommit == false) {
       throw new IllegalStateException("prepareCommit was not called");
     }
-    boolean success = false;
+    boolean successRenameAndSync = false;
     final String dest;
     try {
       final String src = IndexFileNames.fileNameFromGeneration(IndexFileNames.PENDING_SEGMENTS, "", generation);
       dest = IndexFileNames.fileNameFromGeneration(IndexFileNames.SEGMENTS, "", generation);
       dir.rename(src, dest);
-      dir.syncMetaData();
-      success = true;
+      try {
+        dir.syncMetaData();
+        successRenameAndSync = true;
+      } finally {
+        if (successRenameAndSync == false) {
+          // at this point we have already renamed the file but failed to sync the directory;
+          // let's also remove the renamed file
+          IOUtils.deleteFilesIgnoringExceptions(dir, dest);
+        }
+      }
     } finally {
-      if (!success) {
+      if (successRenameAndSync == false) {
         // deletes pending_segments_N:
         rollbackCommit(dir);
       }
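
The publish-then-sync cleanup in isolation, as a hedged sketch (the helper and file names are placeholders; in SegmentInfos the outer finally additionally rolls back the pending file):

```java
import java.io.IOException;

import org.apache.lucene.store.Directory;
import org.apache.lucene.util.IOUtils;

class DurableRenameExample {
  // Sketch of the cleanup this hunk adds: a rename is only durable once
  // syncMetaData() succeeds, so a failed sync must also remove the renamed
  // target before higher-level rollback deletes the pending file.
  static void publish(Directory dir, String src, String dest) throws IOException {
    dir.rename(src, dest);
    boolean synced = false;
    try {
      dir.syncMetaData();
      synced = true;
    } finally {
      if (synced == false) {
        IOUtils.deleteFilesIgnoringExceptions(dir, dest);
      }
    }
  }
}
```
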
diff --git a/lucene/core/src/java/org/apache/lucene/index/StandardDirectoryReader.java b/lucene/core/src/java/org/apache/lucene/index/StandardDirectoryReader.java
index 8904eef..1003acf 100644
--- a/lucene/core/src/java/org/apache/lucene/index/StandardDirectoryReader.java
+++ b/lucene/core/src/java/org/apache/lucene/index/StandardDirectoryReader.java
@@ -82,7 +82,8 @@
   }
 
   /** Used by near real-time search */
-  static DirectoryReader open(IndexWriter writer, SegmentInfos infos, boolean applyAllDeletes, boolean writeAllDeletes) throws IOException {
+  static StandardDirectoryReader open(IndexWriter writer, IOUtils.IOFunction<SegmentCommitInfo, SegmentReader> readerFunction,
+                                      SegmentInfos infos, boolean applyAllDeletes, boolean writeAllDeletes) throws IOException {
     // IndexWriter synchronizes externally before calling
     // us, which ensures infos will not change; so there's
     // no need to process segments in reverse order
@@ -101,19 +102,14 @@
         // IndexWriter's segmentInfos:
         final SegmentCommitInfo info = infos.info(i);
         assert info.info.dir == dir;
-        final ReadersAndUpdates rld = writer.getPooledInstance(info, true);
-        try {
-          final SegmentReader reader = rld.getReadOnlyClone(IOContext.READ);
-          if (reader.numDocs() > 0 || writer.getConfig().mergePolicy.keepFullyDeletedSegment(() -> reader)) {
-            // Steal the ref:
-            readers.add(reader);
-            infosUpto++;
-          } else {
-            reader.decRef();
-            segmentInfos.remove(infosUpto);
-          }
-        } finally {
-          writer.release(rld);
+        final SegmentReader reader = readerFunction.apply(info);
+        if (reader.numDocs() > 0 || writer.getConfig().mergePolicy.keepFullyDeletedSegment(() -> reader)) {
+          // Steal the ref:
+          readers.add(reader);
+          infosUpto++;
+        } else {
+          reader.decRef();
+          segmentInfos.remove(infosUpto);
         }
       }
 
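`IOUtils.IOFunction` is simply a `Function` whose `apply` may throw `IOException`; a minimal, self-contained sketch of the shape of the new `open` parameter (the string-length body is purely illustrative):

```java
import java.io.IOException;

import org.apache.lucene.util.IOUtils;

class IOFunctionExample {
  // IOUtils.IOFunction is a Function variant whose apply() may throw
  // IOException; StandardDirectoryReader.open now takes one so that the
  // caller decides how a SegmentCommitInfo becomes a SegmentReader.
  static int demo() throws IOException {
    IOUtils.IOFunction<String, Integer> length = s -> {
      if (s == null) {
        throw new IOException("null input");
      }
      return s.length();
    };
    return length.apply("segments_1");
  }
}
```
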
diff --git a/lucene/core/src/java/org/apache/lucene/search/BooleanQuery.java b/lucene/core/src/java/org/apache/lucene/search/BooleanQuery.java
index 2e2d81b..d23bed1 100644
--- a/lucene/core/src/java/org/apache/lucene/search/BooleanQuery.java
+++ b/lucene/core/src/java/org/apache/lucene/search/BooleanQuery.java
@@ -315,11 +315,13 @@
       }
     }
 
-    // remove FILTER clauses that are also MUST clauses
-    // or that match all documents
-    if (clauseSets.get(Occur.MUST).size() > 0 && clauseSets.get(Occur.FILTER).size() > 0) {
-      final Set<Query> filters = new HashSet<Query>(clauseSets.get(Occur.FILTER));
-      boolean modified = filters.remove(new MatchAllDocsQuery());
+    // remove FILTER clauses that are also MUST clauses or that match all documents
+    if (clauseSets.get(Occur.FILTER).size() > 0) {
+      final Set<Query> filters = new HashSet<>(clauseSets.get(Occur.FILTER));
+      boolean modified = false;
+      if (filters.size() > 1 || clauseSets.get(Occur.MUST).isEmpty() == false) {
+        modified = filters.remove(new MatchAllDocsQuery());
+      }
       modified |= filters.removeAll(clauseSets.get(Occur.MUST));
       if (modified) {
         BooleanQuery.Builder builder = new BooleanQuery.Builder();
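
A hedged example of an input the relaxed condition now simplifies (field and term are placeholders; the exact rewritten form depends on the remaining clauses):

```java
import org.apache.lucene.index.Term;
import org.apache.lucene.search.BooleanClause.Occur;
import org.apache.lucene.search.BooleanQuery;
import org.apache.lucene.search.MatchAllDocsQuery;
import org.apache.lucene.search.TermQuery;

class FilterClauseSimplification {
  // After this change, a MatchAllDocsQuery FILTER clause can be dropped even
  // without any MUST clause, as long as another clause still constrains
  // matching (here: a second FILTER clause).
  static BooleanQuery example() {
    return new BooleanQuery.Builder()
        .add(new MatchAllDocsQuery(), Occur.FILTER)           // removable
        .add(new TermQuery(new Term("f", "v")), Occur.FILTER) // kept
        .build();
  }
}
```
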
diff --git a/lucene/core/src/java/org/apache/lucene/search/DisjunctionMatchesIterator.java b/lucene/core/src/java/org/apache/lucene/search/DisjunctionMatchesIterator.java
index 986b8d9..9adbaf4 100644
--- a/lucene/core/src/java/org/apache/lucene/search/DisjunctionMatchesIterator.java
+++ b/lucene/core/src/java/org/apache/lucene/search/DisjunctionMatchesIterator.java
@@ -202,7 +202,8 @@
   @Override
   public boolean next() throws IOException {
     if (started == false) {
-      return started = true;
+      started = true;
+      return queue.size() > 0;
     }
     if (queue.top().next() == false) {
       queue.pop();
diff --git a/lucene/core/src/java/org/apache/lucene/search/FuzzyQuery.java b/lucene/core/src/java/org/apache/lucene/search/FuzzyQuery.java
index 041f0ca..e1359ab 100644
--- a/lucene/core/src/java/org/apache/lucene/search/FuzzyQuery.java
+++ b/lucene/core/src/java/org/apache/lucene/search/FuzzyQuery.java
@@ -159,11 +159,7 @@
   @Override
   public void visit(QueryVisitor visitor) {
     if (visitor.acceptField(field)) {
-      if (maxEdits == 0 || prefixLength >= term.text().length()) {
-        visitor.consumeTerms(this, term);
-      } else {
-        visitor.consumeTermsMatching(this, term.field(), () -> getAutomata().runAutomaton);
-      }
+      visitor.consumeTermsMatching(this, term.field(), () -> getAutomata().runAutomaton);
     }
   }
 
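Since the query now always reports itself through the automaton path, a visitor only needs `consumeTermsMatching`; a minimal sketch (the visitor and field name are illustrative):

```java
import java.util.function.Supplier;

import org.apache.lucene.index.Term;
import org.apache.lucene.search.FuzzyQuery;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.QueryVisitor;
import org.apache.lucene.util.automaton.ByteRunAutomaton;

class FuzzyVisitExample {
  // After this change FuzzyQuery reports through consumeTermsMatching even
  // when maxEdits == 0, so visitors only need the automaton path.
  static boolean reportsAutomaton(FuzzyQuery q) {
    boolean[] sawAutomaton = new boolean[1];
    q.visit(new QueryVisitor() {
      @Override
      public void consumeTermsMatching(Query query, String field,
                                       Supplier<ByteRunAutomaton> automaton) {
        sawAutomaton[0] = true;
      }
    });
    return sawAutomaton[0];
  }

  public static void main(String[] args) {
    System.out.println(reportsAutomaton(new FuzzyQuery(new Term("body", "lucene"))));
  }
}
```
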
diff --git a/lucene/core/src/java/org/apache/lucene/search/MultiTermQueryConstantScoreWrapper.java b/lucene/core/src/java/org/apache/lucene/search/MultiTermQueryConstantScoreWrapper.java
index 6ffc1c7..72eca50 100644
--- a/lucene/core/src/java/org/apache/lucene/search/MultiTermQueryConstantScoreWrapper.java
+++ b/lucene/core/src/java/org/apache/lucene/search/MultiTermQueryConstantScoreWrapper.java
@@ -208,9 +208,6 @@
         if (terms == null) {
           return null;
         }
-        if (terms.hasPositions() == false) {
-          return super.matches(context, doc);
-        }
         return MatchesUtils.forField(query.field, () -> DisjunctionMatchesIterator.fromTermsEnum(context, doc, query, query.field, query.getTermsEnum(terms)));
       }
 
diff --git a/lucene/core/src/java/org/apache/lucene/search/SynonymQuery.java b/lucene/core/src/java/org/apache/lucene/search/SynonymQuery.java
index b232833..ef7e3a2 100644
--- a/lucene/core/src/java/org/apache/lucene/search/SynonymQuery.java
+++ b/lucene/core/src/java/org/apache/lucene/search/SynonymQuery.java
@@ -226,7 +226,7 @@
     public Matches matches(LeafReaderContext context, int doc) throws IOException {
       String field = terms[0].term.field();
       Terms indexTerms = context.reader().terms(field);
-      if (indexTerms == null || indexTerms.hasPositions() == false) {
+      if (indexTerms == null) {
         return super.matches(context, doc);
       }
       List<Term> termList = Arrays.stream(terms)
diff --git a/lucene/core/src/java/org/apache/lucene/search/TermQuery.java b/lucene/core/src/java/org/apache/lucene/search/TermQuery.java
index 00196eb..2b1ae95 100644
--- a/lucene/core/src/java/org/apache/lucene/search/TermQuery.java
+++ b/lucene/core/src/java/org/apache/lucene/search/TermQuery.java
@@ -80,9 +80,6 @@
       if (te == null) {
         return null;
       }
-      if (context.reader().terms(term.field()).hasPositions() == false) {
-        return super.matches(context, doc);
-      }
       return MatchesUtils.forField(term.field(), () -> {
         PostingsEnum pe = te.postings(null, PostingsEnum.OFFSETS);
         if (pe.advance(doc) != doc) {
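
With the `hasPositions()` checks removed here and in SynonymQuery above, Matches come from postings even for fields indexed without positions; a hedged retrieval sketch (the searcher, query, and leaf context are assumed to exist):

```java
import java.io.IOException;

import org.apache.lucene.index.LeafReaderContext;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.Matches;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.ScoreMode;
import org.apache.lucene.search.Weight;

class MatchesWithoutPositionsExample {
  // TermQuery and SynonymQuery now build their Matches from postings
  // (PostingsEnum.OFFSETS) even when the field was indexed without
  // positions, instead of falling back to the generic super.matches() path.
  static boolean matchesDoc(IndexSearcher searcher, Query query,
                            LeafReaderContext ctx, int doc) throws IOException {
    Weight w = searcher.createWeight(searcher.rewrite(query), ScoreMode.COMPLETE_NO_SCORES, 1f);
    Matches m = w.matches(ctx, doc);
    return m != null; // non-null means the query matched this document
  }
}
```
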
diff --git a/lucene/core/src/java/org/apache/lucene/util/Version.java b/lucene/core/src/java/org/apache/lucene/util/Version.java
index f5dbcc1..6f7605b 100644
--- a/lucene/core/src/java/org/apache/lucene/util/Version.java
+++ b/lucene/core/src/java/org/apache/lucene/util/Version.java
@@ -117,6 +117,20 @@
   public static final Version LUCENE_8_6_0 = new Version(8, 6, 0);
 
   /**
+   * Match settings and bugs in Lucene's 8.6.1 release.
+   * @deprecated Use latest
+   */
+  @Deprecated
+  public static final Version LUCENE_8_6_1 = new Version(8, 6, 1);
+
+  /**
+   * Match settings and bugs in Lucene's 8.6.2 release.
+   * @deprecated Use latest
+   */
+  @Deprecated
+  public static final Version LUCENE_8_6_2 = new Version(8, 6, 2);
+
+  /**
    * Match settings and bugs in Lucene's 9.0.0 release.
    * <p>
    * Use this to get the latest &amp; greatest settings, bug
diff --git a/lucene/core/src/java/overview.html b/lucene/core/src/java/overview.html
index 95d1589..9338742 100644
--- a/lucene/core/src/java/overview.html
+++ b/lucene/core/src/java/overview.html
@@ -72,8 +72,8 @@
 A TokenStream can be composed by applying {@link org.apache.lucene.analysis.TokenFilter TokenFilter}s
 to the output of a {@link org.apache.lucene.analysis.Tokenizer Tokenizer}.&nbsp;
 Tokenizers and TokenFilters are strung together and applied with an {@link org.apache.lucene.analysis.Analyzer Analyzer}.&nbsp;
-<a href="../analyzers-common/overview-summary.html">analyzers-common</a> provides a number of Analyzer implementations, including 
-<a href="../analyzers-common/org/apache/lucene/analysis/core/StopAnalyzer.html">StopAnalyzer</a>
+<a href="../analysis/common/overview-summary.html">analyzers-common</a> provides a number of Analyzer implementations, including
+<a href="../analysis/common/org/apache/lucene/analysis/core/StopAnalyzer.html">StopAnalyzer</a>
 and the grammar-based <a href="org/apache/lucene/analysis/standard/StandardAnalyzer.html">StandardAnalyzer</a>.</li>
 
 <li>
diff --git a/lucene/core/src/resources/META-INF/services/org.apache.lucene.codecs.Codec b/lucene/core/src/resources/META-INF/services/org.apache.lucene.codecs.Codec
index 2897a8a..2be0f71 100644
--- a/lucene/core/src/resources/META-INF/services/org.apache.lucene.codecs.Codec
+++ b/lucene/core/src/resources/META-INF/services/org.apache.lucene.codecs.Codec
@@ -13,4 +13,4 @@
 #  See the License for the specific language governing permissions and
 #  limitations under the License.
 
-org.apache.lucene.codecs.lucene86.Lucene86Codec
+org.apache.lucene.codecs.lucene87.Lucene87Codec
diff --git a/lucene/core/src/test/org/apache/lucene/codecs/lucene50/TestLucene50StoredFieldsFormat.java b/lucene/core/src/test/org/apache/lucene/codecs/lucene50/TestLucene50StoredFieldsFormat.java
deleted file mode 100644
index 4c7bed4..0000000
--- a/lucene/core/src/test/org/apache/lucene/codecs/lucene50/TestLucene50StoredFieldsFormat.java
+++ /dev/null
@@ -1,29 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.lucene.codecs.lucene50;
-
-
-import org.apache.lucene.codecs.Codec;
-import org.apache.lucene.index.BaseStoredFieldsFormatTestCase;
-import org.apache.lucene.util.TestUtil;
-
-public class TestLucene50StoredFieldsFormat extends BaseStoredFieldsFormatTestCase {
-  @Override
-  protected Codec getCodec() {
-    return TestUtil.getDefaultCodec();
-  }
-}
diff --git a/lucene/core/src/test/org/apache/lucene/codecs/lucene50/TestLucene50StoredFieldsFormatHighCompression.java b/lucene/core/src/test/org/apache/lucene/codecs/lucene50/TestLucene50StoredFieldsFormatHighCompression.java
deleted file mode 100644
index cccee73..0000000
--- a/lucene/core/src/test/org/apache/lucene/codecs/lucene50/TestLucene50StoredFieldsFormatHighCompression.java
+++ /dev/null
@@ -1,80 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.lucene.codecs.lucene50;
-
-
-import com.carrotsearch.randomizedtesting.generators.RandomPicks;
-import org.apache.lucene.codecs.Codec;
-import org.apache.lucene.codecs.lucene50.Lucene50StoredFieldsFormat.Mode;
-import org.apache.lucene.codecs.lucene86.Lucene86Codec;
-import org.apache.lucene.document.Document;
-import org.apache.lucene.document.StoredField;
-import org.apache.lucene.index.BaseStoredFieldsFormatTestCase;
-import org.apache.lucene.index.DirectoryReader;
-import org.apache.lucene.index.IndexWriter;
-import org.apache.lucene.index.IndexWriterConfig;
-import org.apache.lucene.store.Directory;
-
-public class TestLucene50StoredFieldsFormatHighCompression extends BaseStoredFieldsFormatTestCase {
-  @Override
-  protected Codec getCodec() {
-    return new Lucene86Codec(Mode.BEST_COMPRESSION);
-  }
-  
-  /**
-   * Change compression params (leaving it the same for old segments)
-   * and tests that nothing breaks.
-   */
-  public void testMixedCompressions() throws Exception {
-    Directory dir = newDirectory();
-    for (int i = 0; i < 10; i++) {
-      IndexWriterConfig iwc = newIndexWriterConfig();
-      iwc.setCodec(new Lucene86Codec(RandomPicks.randomFrom(random(), Mode.values())));
-      IndexWriter iw = new IndexWriter(dir, newIndexWriterConfig());
-      Document doc = new Document();
-      doc.add(new StoredField("field1", "value1"));
-      doc.add(new StoredField("field2", "value2"));
-      iw.addDocument(doc);
-      if (random().nextInt(4) == 0) {
-        iw.forceMerge(1);
-      }
-      iw.commit();
-      iw.close();
-    }
-    
-    DirectoryReader ir = DirectoryReader.open(dir);
-    assertEquals(10, ir.numDocs());
-    for (int i = 0; i < 10; i++) {
-      Document doc = ir.document(i);
-      assertEquals("value1", doc.get("field1"));
-      assertEquals("value2", doc.get("field2"));
-    }
-    ir.close();
-    // checkindex
-    dir.close();
-  }
-  
-  public void testInvalidOptions() {
-    expectThrows(NullPointerException.class, () -> {
-      new Lucene86Codec(null);
-    });
-
-    expectThrows(NullPointerException.class, () -> {
-      new Lucene50StoredFieldsFormat(null);
-    });
-  }
-}
diff --git a/lucene/core/src/test/org/apache/lucene/codecs/lucene80/TestLucene80NormsFormat.java b/lucene/core/src/test/org/apache/lucene/codecs/lucene80/TestLucene80NormsFormat.java
index b6e7268..011d2ca 100644
--- a/lucene/core/src/test/org/apache/lucene/codecs/lucene80/TestLucene80NormsFormat.java
+++ b/lucene/core/src/test/org/apache/lucene/codecs/lucene80/TestLucene80NormsFormat.java
@@ -18,14 +18,14 @@
 
 
 import org.apache.lucene.codecs.Codec;
-import org.apache.lucene.codecs.lucene86.Lucene86Codec;
 import org.apache.lucene.index.BaseNormsFormatTestCase;
+import org.apache.lucene.util.TestUtil;
 
 /**
  * Tests Lucene80NormsFormat
  */
 public class TestLucene80NormsFormat extends BaseNormsFormatTestCase {
-  private final Codec codec = new Lucene86Codec();
+  private final Codec codec = TestUtil.getDefaultCodec();
   
   @Override
   protected Codec getCodec() {
diff --git a/lucene/core/src/test/org/apache/lucene/codecs/lucene86/TestLucene86PointsFormat.java b/lucene/core/src/test/org/apache/lucene/codecs/lucene86/TestLucene86PointsFormat.java
index 8d5ce08..9198301 100644
--- a/lucene/core/src/test/org/apache/lucene/codecs/lucene86/TestLucene86PointsFormat.java
+++ b/lucene/core/src/test/org/apache/lucene/codecs/lucene86/TestLucene86PointsFormat.java
@@ -49,7 +49,7 @@
   
   public TestLucene86PointsFormat() {
     // standard issue
-    Codec defaultCodec = new Lucene86Codec();
+    Codec defaultCodec = TestUtil.getDefaultCodec();
     if (random().nextBoolean()) {
       // randomize parameters
       maxPointsInLeafNode = TestUtil.nextInt(random(), 50, 500);
diff --git a/lucene/core/src/test/org/apache/lucene/codecs/lucene87/TestLucene87StoredFieldsFormat.java b/lucene/core/src/test/org/apache/lucene/codecs/lucene87/TestLucene87StoredFieldsFormat.java
new file mode 100644
index 0000000..5604d41
--- /dev/null
+++ b/lucene/core/src/test/org/apache/lucene/codecs/lucene87/TestLucene87StoredFieldsFormat.java
@@ -0,0 +1,28 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.lucene.codecs.lucene87;
+
+import org.apache.lucene.codecs.Codec;
+import org.apache.lucene.index.BaseStoredFieldsFormatTestCase;
+import org.apache.lucene.util.TestUtil;
+
+public class TestLucene87StoredFieldsFormat extends BaseStoredFieldsFormatTestCase {
+  @Override
+  protected Codec getCodec() {
+    return TestUtil.getDefaultCodec();
+  }
+}
diff --git a/lucene/core/src/test/org/apache/lucene/codecs/lucene87/TestLucene87StoredFieldsFormatHighCompression.java b/lucene/core/src/test/org/apache/lucene/codecs/lucene87/TestLucene87StoredFieldsFormatHighCompression.java
new file mode 100644
index 0000000..f4ebca6
--- /dev/null
+++ b/lucene/core/src/test/org/apache/lucene/codecs/lucene87/TestLucene87StoredFieldsFormatHighCompression.java
@@ -0,0 +1,80 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.lucene.codecs.lucene87;
+
+
+import org.apache.lucene.codecs.Codec;
+import org.apache.lucene.codecs.lucene87.Lucene87StoredFieldsFormat.Mode;
+import org.apache.lucene.document.Document;
+import org.apache.lucene.document.StoredField;
+import org.apache.lucene.index.BaseStoredFieldsFormatTestCase;
+import org.apache.lucene.index.DirectoryReader;
+import org.apache.lucene.index.IndexWriter;
+import org.apache.lucene.index.IndexWriterConfig;
+import org.apache.lucene.store.Directory;
+
+import com.carrotsearch.randomizedtesting.generators.RandomPicks;
+
+public class TestLucene87StoredFieldsFormatHighCompression extends BaseStoredFieldsFormatTestCase {
+  @Override
+  protected Codec getCodec() {
+    return new Lucene87Codec(Mode.BEST_COMPRESSION);
+  }
+  
+  /**
+   * Change compression params (leaving it the same for old segments)
+   * and tests that nothing breaks.
+   */
+  public void testMixedCompressions() throws Exception {
+    Directory dir = newDirectory();
+    for (int i = 0; i < 10; i++) {
+      IndexWriterConfig iwc = newIndexWriterConfig();
+      iwc.setCodec(new Lucene87Codec(RandomPicks.randomFrom(random(), Mode.values())));
+      IndexWriter iw = new IndexWriter(dir, newIndexWriterConfig());
+      Document doc = new Document();
+      doc.add(new StoredField("field1", "value1"));
+      doc.add(new StoredField("field2", "value2"));
+      iw.addDocument(doc);
+      if (random().nextInt(4) == 0) {
+        iw.forceMerge(1);
+      }
+      iw.commit();
+      iw.close();
+    }
+    
+    DirectoryReader ir = DirectoryReader.open(dir);
+    assertEquals(10, ir.numDocs());
+    for (int i = 0; i < 10; i++) {
+      Document doc = ir.document(i);
+      assertEquals("value1", doc.get("field1"));
+      assertEquals("value2", doc.get("field2"));
+    }
+    ir.close();
+    // checkindex
+    dir.close();
+  }
+  
+  public void testInvalidOptions() {
+    expectThrows(NullPointerException.class, () -> {
+      new Lucene87Codec(null);
+    });
+
+    expectThrows(NullPointerException.class, () -> {
+      new Lucene87StoredFieldsFormat(null);
+    });
+  }
+}
diff --git a/lucene/core/src/test/org/apache/lucene/codecs/lucene87/TestLucene87StoredFieldsFormatMergeInstance.java b/lucene/core/src/test/org/apache/lucene/codecs/lucene87/TestLucene87StoredFieldsFormatMergeInstance.java
new file mode 100644
index 0000000..0015fb2
--- /dev/null
+++ b/lucene/core/src/test/org/apache/lucene/codecs/lucene87/TestLucene87StoredFieldsFormatMergeInstance.java
@@ -0,0 +1,29 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.lucene.codecs.lucene87;
+
+/**
+ * Test the merge instance of the Lucene87 stored fields format.
+ */
+public class TestLucene87StoredFieldsFormatMergeInstance extends TestLucene87StoredFieldsFormat {
+
+  @Override
+  protected boolean shouldTestMergeInstance() {
+    return true;
+  }
+
+}
diff --git a/lucene/core/src/test/org/apache/lucene/index/TestDocumentsWriterDeleteQueue.java b/lucene/core/src/test/org/apache/lucene/index/TestDocumentsWriterDeleteQueue.java
index 8a1f30c..41909bc 100644
--- a/lucene/core/src/test/org/apache/lucene/index/TestDocumentsWriterDeleteQueue.java
+++ b/lucene/core/src/test/org/apache/lucene/index/TestDocumentsWriterDeleteQueue.java
@@ -16,6 +16,7 @@
  */
 package org.apache.lucene.index;
 
+import java.lang.ref.WeakReference;
 import java.util.HashSet;
 import java.util.Set;
 import java.util.concurrent.CountDownLatch;
@@ -36,6 +37,26 @@
  */
 public class TestDocumentsWriterDeleteQueue extends LuceneTestCase {
 
+  public void testAdvanceReferencesOriginal() {
+    WeakAndNext weakAndNext = new WeakAndNext();
+    DocumentsWriterDeleteQueue next = weakAndNext.next;
+    assertNotNull(next);
+    System.gc();
+    assertNull(weakAndNext.weak.get());
+  }
+
+  class WeakAndNext {
+    final WeakReference<DocumentsWriterDeleteQueue> weak;
+    final DocumentsWriterDeleteQueue next;
+
+    WeakAndNext() {
+      DocumentsWriterDeleteQueue deleteQueue = new DocumentsWriterDeleteQueue(null);
+      weak = new WeakReference<>(deleteQueue);
+      next = deleteQueue.advanceQueue(2);
+    }
+  }
+
   public void testUpdateDelteSlices() throws Exception {
     DocumentsWriterDeleteQueue queue = new DocumentsWriterDeleteQueue(null);
     final int size = 200 + random().nextInt(500) * RANDOM_MULTIPLIER;
diff --git a/lucene/core/src/test/org/apache/lucene/index/TestDocumentsWriterPerThreadPool.java b/lucene/core/src/test/org/apache/lucene/index/TestDocumentsWriterPerThreadPool.java
new file mode 100644
index 0000000..dc31750
--- /dev/null
+++ b/lucene/core/src/test/org/apache/lucene/index/TestDocumentsWriterPerThreadPool.java
@@ -0,0 +1,96 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.lucene.index;
+
+import java.io.IOException;
+import java.util.concurrent.CountDownLatch;
+import java.util.concurrent.atomic.AtomicLong;
+
+import org.apache.lucene.store.AlreadyClosedException;
+import org.apache.lucene.store.Directory;
+import org.apache.lucene.util.LuceneTestCase;
+import org.apache.lucene.util.Version;
+
+public class TestDocumentsWriterPerThreadPool extends LuceneTestCase {
+
+  public void testLockReleaseAndClose() throws IOException {
+    try (Directory directory = newDirectory()) {
+      DocumentsWriterPerThreadPool pool = new DocumentsWriterPerThreadPool(() ->
+          new DocumentsWriterPerThread(Version.LATEST.major, "", directory, directory,
+              newIndexWriterConfig(), new DocumentsWriterDeleteQueue(null), null, new AtomicLong(), false));
+
+      DocumentsWriterPerThread first = pool.getAndLock();
+      assertEquals(1, pool.size());
+      DocumentsWriterPerThread second = pool.getAndLock();
+      assertEquals(2, pool.size());
+      pool.marksAsFreeAndUnlock(first);
+      assertEquals(2, pool.size());
+      DocumentsWriterPerThread third = pool.getAndLock();
+      assertSame(first, third);
+      assertEquals(2, pool.size());
+      pool.checkout(third);
+      assertEquals(1, pool.size());
+
+      pool.close();
+      assertEquals(1, pool.size());
+      pool.marksAsFreeAndUnlock(second);
+      assertEquals(1, pool.size());
+      for (DocumentsWriterPerThread lastPerThread : pool.filterAndLock(x -> true)) {
+        pool.checkout(lastPerThread);
+        lastPerThread.unlock();
+      }
+      assertEquals(0, pool.size());
+    }
+  }
+
+  public void testCloseWhileNewWritersLocked() throws IOException, InterruptedException {
+    try (Directory directory = newDirectory()) {
+      DocumentsWriterPerThreadPool pool = new DocumentsWriterPerThreadPool(() ->
+          new DocumentsWriterPerThread(Version.LATEST.major, "", directory, directory,
+              newIndexWriterConfig(), new DocumentsWriterDeleteQueue(null), null, new AtomicLong(), false));
+
+      DocumentsWriterPerThread first = pool.getAndLock();
+      pool.lockNewWriters();
+      CountDownLatch latch = new CountDownLatch(1);
+      Thread t = new Thread(() -> {
+        try {
+          latch.countDown();
+          pool.getAndLock();
+          fail();
+        } catch (AlreadyClosedException e) {
+          // fine
+        } catch (IOException e) {
+          throw new AssertionError(e);
+        }
+      });
+      t.start();
+      latch.await();
+      while (t.getState().equals(Thread.State.WAITING) == false) {
+        Thread.yield();
+      }
+      first.unlock();
+      pool.close();
+      pool.unlockNewWriters();
+      for (DocumentsWriterPerThread perThread : pool.filterAndLock(x -> true)) {
+        assertTrue(pool.checkout(perThread));
+        perThread.unlock();
+      }
+      assertEquals(0, pool.size());
+    }
+  }
+}
diff --git a/lucene/core/src/test/org/apache/lucene/index/TestForTooMuchCloning.java b/lucene/core/src/test/org/apache/lucene/index/TestForTooMuchCloning.java
index 6be8e78..e7e6624 100644
--- a/lucene/core/src/test/org/apache/lucene/index/TestForTooMuchCloning.java
+++ b/lucene/core/src/test/org/apache/lucene/index/TestForTooMuchCloning.java
@@ -40,7 +40,9 @@
     final RandomIndexWriter w = new RandomIndexWriter(random(), dir,
                                                       newIndexWriterConfig(new MockAnalyzer(random()))
                                                         .setMaxBufferedDocs(2)
-                                                        .setMergePolicy(tmp));
+                                                        // use a FilterMP otherwise RIW will randomly reconfigure
+                                                        // the MP while the test runs
+                                                        .setMergePolicy(new FilterMergePolicy(tmp)));
     final int numDocs = 20;
     for(int docs=0;docs<numDocs;docs++) {
       StringBuilder sb = new StringBuilder();
@@ -54,7 +56,6 @@
     }
     final IndexReader r = w.getReader();
     w.close();
-
     //System.out.println("merge clone count=" + cloneCount);
     assertTrue("too many calls to IndexInput.clone during merging: " + dir.getInputCloneCount(), dir.getInputCloneCount() < 500);
 
diff --git a/lucene/core/src/test/org/apache/lucene/index/TestIndexWriter.java b/lucene/core/src/test/org/apache/lucene/index/TestIndexWriter.java
index 2c9bc23..3ede19e 100644
--- a/lucene/core/src/test/org/apache/lucene/index/TestIndexWriter.java
+++ b/lucene/core/src/test/org/apache/lucene/index/TestIndexWriter.java
@@ -1414,7 +1414,7 @@
           IndexFileNames.CODEC_FILE_PATTERN.matcher(file).matches()) {
         if (file.lastIndexOf('.') < 0
             // don't count stored fields and term vectors in, or any temporary files they might
-            || !Arrays.asList("fdt", "tvd", "tmp").contains(file.substring(file.lastIndexOf('.') + 1))) {
+            || !Arrays.asList("fdm", "fdt", "tvm", "tvd", "tmp").contains(file.substring(file.lastIndexOf('.') + 1))) {
           ++computedExtraFileCount;
         }
       }
@@ -4206,7 +4206,7 @@
   public void testMergeOnCommitKeepFullyDeletedSegments() throws Exception {
     Directory dir = newDirectory();
     IndexWriterConfig iwc = newIndexWriterConfig();
-    iwc.setMaxCommitMergeWaitMillis(30 * 1000);
+    iwc.setMaxFullFlushMergeWaitMillis(30 * 1000);
     iwc.mergePolicy = new FilterMergePolicy(newMergePolicy()) {
       @Override
       public boolean keepFullyDeletedSegment(IOSupplier<CodecReader> readerIOSupplier) {
diff --git a/lucene/core/src/test/org/apache/lucene/index/TestIndexWriterDelete.java b/lucene/core/src/test/org/apache/lucene/index/TestIndexWriterDelete.java
index e7dbfd2..1f9781a 100644
--- a/lucene/core/src/test/org/apache/lucene/index/TestIndexWriterDelete.java
+++ b/lucene/core/src/test/org/apache/lucene/index/TestIndexWriterDelete.java
@@ -300,10 +300,11 @@
     modifier.close();
     dir.close();
   }
-  
+
   public void testDeleteAllNoDeadLock() throws IOException, InterruptedException {
     Directory dir = newDirectory();
-    final RandomIndexWriter modifier = new RandomIndexWriter(random(), dir); 
+    final RandomIndexWriter modifier = new RandomIndexWriter(random(), dir,
+        newIndexWriterConfig().setMergePolicy(new MockRandomMergePolicy(random())));
     int numThreads = atLeast(2);
     Thread[] threads = new Thread[numThreads];
     final CountDownLatch latch = new CountDownLatch(1);
@@ -341,7 +342,7 @@
       threads[i].start();
     }
     latch.countDown();
-    while(!doneLatch.await(1, TimeUnit.MILLISECONDS)) {
+    while (!doneLatch.await(1, TimeUnit.MILLISECONDS)) {
       if (VERBOSE) {
         System.out.println("\nTEST: now deleteAll");
       }
diff --git a/lucene/core/src/test/org/apache/lucene/index/TestIndexWriterExceptions.java b/lucene/core/src/test/org/apache/lucene/index/TestIndexWriterExceptions.java
index ca17825..8aecf54 100644
--- a/lucene/core/src/test/org/apache/lucene/index/TestIndexWriterExceptions.java
+++ b/lucene/core/src/test/org/apache/lucene/index/TestIndexWriterExceptions.java
@@ -28,6 +28,7 @@
 import java.util.List;
 import java.util.Random;
 import java.util.concurrent.atomic.AtomicBoolean;
+import java.util.function.BooleanSupplier;
 
 import org.apache.lucene.analysis.Analyzer;
 import org.apache.lucene.analysis.MockAnalyzer;
@@ -40,6 +41,7 @@
 import org.apache.lucene.document.Document;
 import org.apache.lucene.document.Field;
 import org.apache.lucene.document.FieldType;
+import org.apache.lucene.document.IntPoint;
 import org.apache.lucene.document.NumericDocValuesField;
 import org.apache.lucene.document.SortedDocValuesField;
 import org.apache.lucene.document.SortedNumericDocValuesField;
@@ -2023,4 +2025,106 @@
 
     dir.close();
   }
+
+  public void testOnlyRollbackOnceOnException() throws IOException {
+    AtomicBoolean once = new AtomicBoolean(false);
+    InfoStream stream = new InfoStream() {
+      @Override
+      public void message(String component, String message) {
+        if ("TP".equals(component) && "rollback before checkpoint".equals(message)) {
+          if (once.compareAndSet(false, true)) {
+            throw new RuntimeException("boom");
+          } else {
+            throw new AssertionError("has been rolled back twice");
+          }
+        }
+      }
+
+      @Override
+      public boolean isEnabled(String component) {
+        return "TP".equals(component);
+      }
+
+      @Override
+      public void close() {
+      }
+    };
+    try (Directory dir = newDirectory()) {
+      try (IndexWriter writer = new IndexWriter(dir, newIndexWriterConfig().setInfoStream(stream)) {
+        @Override
+        protected boolean isEnableTestPoints() {
+          return true;
+        }
+      }) {
+        writer.rollback();
+        fail();
+      }
+    } catch (RuntimeException e) {
+      assertEquals("boom", e.getMessage());
+      assertEquals("has suppressed exceptions: " + Arrays.toString(e.getSuppressed()), 0, e.getSuppressed().length);
+      assertNull(e.getCause());
+    }
+  }
+
+  public void testExceptionOnSyncMetadata() throws IOException {
+    try (MockDirectoryWrapper dir = newMockDirectory()) {
+      IndexWriter writer = new IndexWriter(dir, newIndexWriterConfig().setCommitOnClose(false));
+      writer.commit();
+      AtomicBoolean maybeFailDelete = new AtomicBoolean(false);
+      BooleanSupplier failDelete = () -> random().nextBoolean() && maybeFailDelete.get();
+      dir.failOn(new MockDirectoryWrapper.Failure() {
+        @Override
+        public void eval(MockDirectoryWrapper dir) {
+          if (callStackContains(MockDirectoryWrapper.class, "syncMetaData")
+              && callStackContains(SegmentInfos.class, "finishCommit")) {
+            throw new RuntimeException("boom");
+          } else if (failDelete.getAsBoolean() &&
+              callStackContains(IndexWriter.class, "rollbackInternalNoCommit") && callStackContains(IndexFileDeleter.class, "deleteFiles")) {
+            throw new RuntimeException("bang");
+          }
+        }});
+      for (int i = 0; i < 5; i++) {
+        Document doc = new Document();
+        doc.add(newStringField("id", Integer.toString(i), Field.Store.NO));
+        doc.add(new NumericDocValuesField("dv", i));
+        doc.add(new BinaryDocValuesField("dv2", new BytesRef(Integer.toString(i))));
+        doc.add(new SortedDocValuesField("dv3", new BytesRef(Integer.toString(i))));
+        doc.add(new SortedSetDocValuesField("dv4", new BytesRef(Integer.toString(i))));
+        doc.add(new SortedSetDocValuesField("dv4", new BytesRef(Integer.toString(i - 1))));
+        doc.add(new SortedNumericDocValuesField("dv5", i));
+        doc.add(new SortedNumericDocValuesField("dv5", i - 1));
+        doc.add(newTextField("text1", TestUtil.randomAnalysisString(random(), 20, true), Field.Store.NO));
+        // ensure we store something
+        doc.add(new StoredField("stored1", "foo"));
+        doc.add(new StoredField("stored1", "bar"));
+        // ensure we get some payloads
+        doc.add(newTextField("text_payloads", TestUtil.randomAnalysisString(random(), 6, true), Field.Store.NO));
+        // ensure we get some vectors
+        FieldType ft = new FieldType(TextField.TYPE_NOT_STORED);
+        ft.setStoreTermVectors(true);
+        doc.add(newField("text_vectors", TestUtil.randomAnalysisString(random(), 6, true), ft));
+        doc.add(new IntPoint("point", random().nextInt()));
+        doc.add(new IntPoint("point2d", random().nextInt(), random().nextInt()));
+        writer.addDocument(doc); // index the populated document, not an empty one
+      }
+      try {
+        writer.commit();
+        fail();
+      } catch (RuntimeException e) {
+        assertEquals("boom", e.getMessage());
+      }
+      try {
+        maybeFailDelete.set(true);
+        writer.rollback();
+      } catch (RuntimeException e) {
+        assertEquals("bang", e.getMessage());
+      }
+      maybeFailDelete.set(false);
+      assertTrue(writer.isClosed());
+      assertTrue(DirectoryReader.indexExists(dir));
+      DirectoryReader.open(dir).close();
+    }
+  }
 }
diff --git a/lucene/core/src/test/org/apache/lucene/index/TestIndexWriterMergePolicy.java b/lucene/core/src/test/org/apache/lucene/index/TestIndexWriterMergePolicy.java
index f7f37d0..241f21f 100644
--- a/lucene/core/src/test/org/apache/lucene/index/TestIndexWriterMergePolicy.java
+++ b/lucene/core/src/test/org/apache/lucene/index/TestIndexWriterMergePolicy.java
@@ -32,32 +32,13 @@
 import org.apache.lucene.index.IndexWriterConfig.OpenMode;
 import org.apache.lucene.search.IndexSearcher;
 import org.apache.lucene.search.MatchAllDocsQuery;
+import org.apache.lucene.search.TermQuery;
 import org.apache.lucene.store.Directory;
 
 import org.apache.lucene.util.LuceneTestCase;
 
 public class TestIndexWriterMergePolicy extends LuceneTestCase {
 
-  private static final MergePolicy MERGE_ON_COMMIT_POLICY = new LogDocMergePolicy() {
-    @Override
-    public MergeSpecification findFullFlushMerges(MergeTrigger mergeTrigger, SegmentInfos segmentInfos, MergeContext mergeContext) {
-      // Optimize down to a single segment on commit
-      if (mergeTrigger == MergeTrigger.COMMIT && segmentInfos.size() > 1) {
-        List<SegmentCommitInfo> nonMergingSegments = new ArrayList<>();
-        for (SegmentCommitInfo sci : segmentInfos) {
-          if (mergeContext.getMergingSegments().contains(sci) == false) {
-            nonMergingSegments.add(sci);
-          }
-        }
-        if (nonMergingSegments.size() > 1) {
-          MergeSpecification mergeSpecification = new MergeSpecification();
-          mergeSpecification.add(new OneMerge(nonMergingSegments));
-          return mergeSpecification;
-        }
-      }
-      return null;
-    }
-  };
 
   // Test the normal case
   public void testNormalCase() throws IOException {
@@ -324,7 +305,7 @@
     firstWriter.close(); // When this writer closes, it does not merge on commit.
 
     IndexWriterConfig iwc = newIndexWriterConfig(new MockAnalyzer(random()))
-        .setMergePolicy(MERGE_ON_COMMIT_POLICY).setMaxCommitMergeWaitMillis(Integer.MAX_VALUE);
+        .setMergePolicy(new MergeOnXMergePolicy(newMergePolicy(), MergeTrigger.COMMIT)).setMaxFullFlushMergeWaitMillis(Integer.MAX_VALUE);
 
 
     IndexWriter writerWithMergePolicy = new IndexWriter(dir, iwc);
@@ -369,13 +350,14 @@
     // TODO: Add more checks for other non-double setters!
   }
 
-  public void testCarryOverNewDeletes() throws IOException, InterruptedException {
+  public void testCarryOverNewDeletesOnCommit() throws IOException, InterruptedException {
     try (Directory directory = newDirectory()) {
       boolean useSoftDeletes = random().nextBoolean();
       CountDownLatch waitForMerge = new CountDownLatch(1);
       CountDownLatch waitForUpdate = new CountDownLatch(1);
       try (IndexWriter writer = new IndexWriter(directory, newIndexWriterConfig()
-          .setMergePolicy(MERGE_ON_COMMIT_POLICY).setMaxCommitMergeWaitMillis(30 * 1000)
+          .setMergePolicy(new MergeOnXMergePolicy(NoMergePolicy.INSTANCE, MergeTrigger.COMMIT)).setMaxFullFlushMergeWaitMillis(30 * 1000)
           .setSoftDeletesField("soft_delete")
           .setMaxBufferedDocs(Integer.MAX_VALUE)
           .setRAMBufferSizeMB(100)
@@ -443,12 +425,22 @@
    * This test makes sure we release the merge readers on abort. MDW will fail if it
    * can't close all files
    */
-  public void testAbortCommitMerge() throws IOException, InterruptedException {
+  public void testAbortMergeOnCommit() throws IOException, InterruptedException {
+    abortMergeOnX(false);
+  }
+
+  public void testAbortMergeOnGetReader() throws IOException, InterruptedException {
+    abortMergeOnX(true);
+  }
+
+  void abortMergeOnX(boolean useGetReader) throws IOException, InterruptedException {
     try (Directory directory = newDirectory()) {
       CountDownLatch waitForMerge = new CountDownLatch(1);
       CountDownLatch waitForDeleteAll = new CountDownLatch(1);
       try (IndexWriter writer = new IndexWriter(directory, newIndexWriterConfig()
-          .setMergePolicy(MERGE_ON_COMMIT_POLICY).setMaxCommitMergeWaitMillis(30 * 1000)
+          .setMergePolicy(new MergeOnXMergePolicy(newMergePolicy(), useGetReader ? MergeTrigger.GET_READER : MergeTrigger.COMMIT))
+          .setMaxFullFlushMergeWaitMillis(30 * 1000)
           .setMergeScheduler(new SerialMergeScheduler() {
             @Override
             public synchronized void merge(MergeSource mergeSource, MergeTrigger trigger) throws IOException {
@@ -472,10 +464,20 @@
         writer.flush();
         writer.addDocument(d2);
         Thread t = new Thread(() -> {
+          boolean success = false;
           try {
-            writer.commit();
+            if (useGetReader) {
+              writer.getReader().close();
+            } else {
+              writer.commit();
+            }
+            success = true;
           } catch (IOException e) {
             throw new AssertionError(e);
+          } finally {
+            if (success == false) {
+              waitForMerge.countDown();
+            }
           }
         });
         t.start();
@@ -487,10 +489,100 @@
     }
   }
 
+  public void testForceMergeWhileGetReader() throws IOException, InterruptedException {
+    try (Directory directory = newDirectory()) {
+      CountDownLatch waitForMerge = new CountDownLatch(1);
+      CountDownLatch waitForForceMergeCalled = new CountDownLatch(1);
+      try (IndexWriter writer = new IndexWriter(directory, newIndexWriterConfig()
+          .setMergePolicy(new MergeOnXMergePolicy(newMergePolicy(), MergeTrigger.GET_READER))
+          .setMaxFullFlushMergeWaitMillis(30 * 1000)
+          .setMergeScheduler(new SerialMergeScheduler() {
+            @Override
+            public void merge(MergeSource mergeSource, MergeTrigger trigger) throws IOException {
+              waitForMerge.countDown();
+              try {
+                waitForForceMergeCalled.await();
+              } catch (InterruptedException e) {
+                throw new AssertionError(e);
+              }
+              super.merge(mergeSource, trigger);
+            }
+          }))) {
+        Document d1 = new Document();
+        d1.add(new StringField("id", "1", Field.Store.NO));
+        writer.addDocument(d1);
+        writer.flush();
+        Document d2 = new Document();
+        d2.add(new StringField("id", "2", Field.Store.NO));
+        writer.addDocument(d2);
+        Thread t = new Thread(() -> {
+          try (DirectoryReader reader = writer.getReader()) {
+            assertEquals(2, reader.maxDoc());
+          } catch (IOException e) {
+            throw new AssertionError(e);
+          }
+        });
+        t.start();
+        waitForMerge.await();
+        Document d3 = new Document();
+        d3.add(new StringField("id", "3", Field.Store.NO));
+        writer.addDocument(d3);
+        waitForForceMergeCalled.countDown();
+        writer.forceMerge(1);
+        t.join();
+      }
+    }
+  }
+
+  public void testFailAfterMergeCommitted() throws IOException {
+    try (Directory directory = newDirectory()) {
+      AtomicBoolean mergeAndFail = new AtomicBoolean(false);
+      try (IndexWriter writer = new IndexWriter(directory, newIndexWriterConfig()
+          .setMergePolicy(new MergeOnXMergePolicy(NoMergePolicy.INSTANCE, MergeTrigger.GET_READER))
+          .setMaxFullFlushMergeWaitMillis(30 * 1000)
+          .setMergeScheduler(new SerialMergeScheduler())) {
+        @Override
+        protected void doAfterFlush() throws IOException {
+          if (mergeAndFail.get() && hasPendingMerges()) {
+            executeMerge(MergeTrigger.GET_READER);
+            throw new RuntimeException("boom");
+          }
+        }
+      }) {
+        Document d1 = new Document();
+        d1.add(new StringField("id", "1", Field.Store.NO));
+        writer.addDocument(d1);
+        writer.flush();
+        Document d2 = new Document();
+        d2.add(new StringField("id", "2", Field.Store.NO));
+        writer.addDocument(d2);
+        writer.flush();
+        mergeAndFail.set(true);
+        try (DirectoryReader reader = writer.getReader()) {
+          assertNotNull(reader); // make compiler happy and use the reader
+          fail();
+        } catch (RuntimeException e) {
+          assertEquals("boom", e.getMessage());
+        } finally {
+          mergeAndFail.set(false);
+        }
+      }
+    }
+  }
+
+  public void testStressUpdateSameDocumentWithMergeOnGetReader() throws IOException, InterruptedException {
+    stressUpdateSameDocumentWithMergeOnX(true);
+  }
+
   public void testStressUpdateSameDocumentWithMergeOnCommit() throws IOException, InterruptedException {
+    stressUpdateSameDocumentWithMergeOnX(false);
+  }
+
+  void stressUpdateSameDocumentWithMergeOnX(boolean useGetReader) throws IOException, InterruptedException {
     try (Directory directory = newDirectory()) {
       try (RandomIndexWriter writer = new RandomIndexWriter(random(), directory, newIndexWriterConfig()
-          .setMergePolicy(MERGE_ON_COMMIT_POLICY).setMaxCommitMergeWaitMillis(10 + random().nextInt(2000))
+          .setMergePolicy(new MergeOnXMergePolicy(newMergePolicy(), useGetReader ? MergeTrigger.GET_READER : MergeTrigger.COMMIT))
+          .setMaxFullFlushMergeWaitMillis(10 + random().nextInt(2000))
           .setSoftDeletesField("soft_delete")
           .setMergeScheduler(new ConcurrentMergeScheduler()))) {
         Document d1 = new Document();
@@ -499,13 +591,17 @@
         writer.commit();
 
         AtomicInteger iters = new AtomicInteger(100 + random().nextInt(TEST_NIGHTLY ? 5000 : 1000));
+        AtomicInteger numFullFlushes = new AtomicInteger(10 + random().nextInt(TEST_NIGHTLY ? 500 : 100));
         AtomicBoolean done = new AtomicBoolean(false);
         Thread[] threads = new Thread[1 + random().nextInt(4)];
         for (int i = 0; i < threads.length; i++) {
           Thread t = new Thread(() -> {
             try {
-              while (iters.decrementAndGet() > 0) {
+              while (iters.decrementAndGet() > 0 || numFullFlushes.get() > 0) {
                 writer.updateDocument(new Term("id", "1"), d1);
+                if (random().nextBoolean()) {
+                  writer.addDocument(new Document());
+                }
               }
             } catch (Exception e) {
               throw new AssertionError(e);
@@ -519,14 +615,22 @@
         }
         try {
           while (done.get() == false) {
-            if (random().nextBoolean()) {
-              writer.commit();
+            if (useGetReader) {
+              try (DirectoryReader reader = writer.getReader()) {
+                assertEquals(1, new IndexSearcher(reader).search(new TermQuery(new Term("id", "1")), 10).totalHits.value);
+              }
+            } else {
+              if (random().nextBoolean()) {
+                writer.commit();
+              }
+              try (DirectoryReader open = new SoftDeletesDirectoryReaderWrapper(DirectoryReader.open(directory), "___soft_deletes")) {
+                assertEquals(1, new IndexSearcher(open).search(new TermQuery(new Term("id", "1")), 10).totalHits.value);
+              }
             }
-            try (DirectoryReader open = new SoftDeletesDirectoryReaderWrapper(DirectoryReader.open(directory), "___soft_deletes")) {
-              assertEquals(1, open.numDocs());
-            }
+            numFullFlushes.decrementAndGet();
           }
         } finally {
+          numFullFlushes.set(0);
           for (Thread t : threads) {
             t.join();
           }
@@ -534,4 +638,66 @@
       }
     }
   }
+
+  // Test basic semantics of merge on getReader
+  public void testMergeOnGetReader() throws IOException {
+    Directory dir = newDirectory();
+
+    IndexWriter firstWriter = new IndexWriter(dir, newIndexWriterConfig(new MockAnalyzer(random()))
+        .setMergePolicy(NoMergePolicy.INSTANCE));
+    for (int i = 0; i < 5; i++) {
+      TestIndexWriter.addDoc(firstWriter);
+      firstWriter.flush();
+    }
+    DirectoryReader firstReader = DirectoryReader.open(firstWriter);
+    assertEquals(5, firstReader.leaves().size());
+    firstReader.close();
+    firstWriter.close(); // When this writer closes, it does not merge on commit.
+
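+    // Wait (effectively) forever for merges triggered on getReader so the
+    // readers below always reflect the completed merge.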
+    IndexWriterConfig iwc = newIndexWriterConfig(new MockAnalyzer(random()))
+        .setMergePolicy(new MergeOnXMergePolicy(newMergePolicy(), MergeTrigger.GET_READER)).setMaxFullFlushMergeWaitMillis(Integer.MAX_VALUE);
+
+    IndexWriter writerWithMergePolicy = new IndexWriter(dir, iwc);
+
+    try (DirectoryReader unmergedReader = DirectoryReader.open(dir)) { // No changes. GetReader doesn't trigger a merge.
+      assertEquals(5, unmergedReader.leaves().size());
+    }
+
+    TestIndexWriter.addDoc(writerWithMergePolicy);
+    try (DirectoryReader mergedReader = writerWithMergePolicy.getReader()) {
+      // Doc added, do merge on getReader.
+      assertEquals(1, mergedReader.leaves().size());
+    }
+
+    writerWithMergePolicy.close();
+    dir.close();
+  }
+
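+  // Test-only policy: when the configured trigger fires during a full flush,
+  // merge every segment that is not already being merged into one segment.
+  // Wired up in the tests above roughly as:
+  //   iwc.setMergePolicy(new MergeOnXMergePolicy(newMergePolicy(), MergeTrigger.GET_READER))
+  //      .setMaxFullFlushMergeWaitMillis(...);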
+  private static class MergeOnXMergePolicy extends FilterMergePolicy {
+    private final MergeTrigger trigger;
+
+    private MergeOnXMergePolicy(MergePolicy in, MergeTrigger trigger) {
+      super(in);
+      this.trigger = trigger;
+    }
+
+    @Override
+    public MergeSpecification findFullFlushMerges(MergeTrigger mergeTrigger, SegmentInfos segmentInfos, MergeContext mergeContext) {
+      // Merge everything down to a single segment when the configured trigger fires
+      if (mergeTrigger == trigger && segmentInfos.size() > 1) {
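+        // Skip segments already registered for another merge; returning them
+        // again would clash with the in-flight merge.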
+        List<SegmentCommitInfo> nonMergingSegments = new ArrayList<>();
+        for (SegmentCommitInfo sci : segmentInfos) {
+          if (mergeContext.getMergingSegments().contains(sci) == false) {
+            nonMergingSegments.add(sci);
+          }
+        }
+        if (nonMergingSegments.size() > 1) {
+          MergeSpecification mergeSpecification = new MergeSpecification();
+          mergeSpecification.add(new OneMerge(nonMergingSegments));
+          return mergeSpecification;
+        }
+      }
+      return null;
+    }
+  }
 }
diff --git a/lucene/core/src/test/org/apache/lucene/index/TestIndexWriterOnVMError.java b/lucene/core/src/test/org/apache/lucene/index/TestIndexWriterOnVMError.java
index 73704dd..7c4ccac 100644
--- a/lucene/core/src/test/org/apache/lucene/index/TestIndexWriterOnVMError.java
+++ b/lucene/core/src/test/org/apache/lucene/index/TestIndexWriterOnVMError.java
@@ -224,7 +224,9 @@
       // assertTrue("hit OOM but writer is still open, WTF: ", writer.isClosed());
       try {
         writer.rollback();
-      } catch (Throwable t) {}
+      } catch (Throwable t) {
+        t.printStackTrace(log);
+      }
       return (VirtualMachineError) e;
     } else {
       Rethrow.rethrow(disaster);
diff --git a/lucene/core/src/test/org/apache/lucene/index/TestIndexWriterReader.java b/lucene/core/src/test/org/apache/lucene/index/TestIndexWriterReader.java
index 228c343..ab1b7e9 100644
--- a/lucene/core/src/test/org/apache/lucene/index/TestIndexWriterReader.java
+++ b/lucene/core/src/test/org/apache/lucene/index/TestIndexWriterReader.java
@@ -241,7 +241,7 @@
     boolean doFullMerge = false;
 
     Directory dir1 = getAssertNoDeletesDirectory(newDirectory());
-    IndexWriterConfig iwc = newIndexWriterConfig(new MockAnalyzer(random()));
+    IndexWriterConfig iwc = newIndexWriterConfig(new MockAnalyzer(random())).setMaxFullFlushMergeWaitMillis(0);
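+    // A wait of 0 disables merge-on-full-flush, keeping the segment layout
+    // these NRT reader tests expect.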
     if (iwc.getMaxBufferedDocs() < 20) {
       iwc.setMaxBufferedDocs(20);
     }
@@ -294,11 +294,11 @@
     boolean doFullMerge = false;
 
     Directory dir1 = getAssertNoDeletesDirectory(newDirectory());
-    IndexWriter writer = new IndexWriter(dir1, newIndexWriterConfig(new MockAnalyzer(random())));
+    IndexWriter writer = new IndexWriter(dir1, newIndexWriterConfig(new MockAnalyzer(random())).setMaxFullFlushMergeWaitMillis(0));
 
     // create a 2nd index
     Directory dir2 = newDirectory();
-    IndexWriter writer2 = new IndexWriter(dir2, newIndexWriterConfig(new MockAnalyzer(random())));
+    IndexWriter writer2 = new IndexWriter(dir2, newIndexWriterConfig(new MockAnalyzer(random())).setMaxFullFlushMergeWaitMillis(0));
     createIndexNoClose(!doFullMerge, "index2", writer2);
     writer2.close();
 
@@ -324,7 +324,7 @@
     boolean doFullMerge = true;
 
     Directory dir1 = getAssertNoDeletesDirectory(newDirectory());
-    IndexWriter writer = new IndexWriter(dir1, newIndexWriterConfig(new MockAnalyzer(random())));
+    IndexWriter writer = new IndexWriter(dir1, newIndexWriterConfig(new MockAnalyzer(random())).setMaxFullFlushMergeWaitMillis(0));
     // create the index
     createIndexNoClose(!doFullMerge, "index1", writer);
     writer.flush(false, true);
@@ -361,7 +361,7 @@
     writer.close();
         
     // reopen the writer to verify the delete made it to the directory
-    writer = new IndexWriter(dir1, newIndexWriterConfig(new MockAnalyzer(random())));
+    writer = new IndexWriter(dir1, newIndexWriterConfig(new MockAnalyzer(random())).setMaxFullFlushMergeWaitMillis(0));
     IndexReader w2r1 = writer.getReader();
     assertEquals(0, count(new Term("id", id10), w2r1));
     w2r1.close();
@@ -377,7 +377,8 @@
     Directory mainDir = getAssertNoDeletesDirectory(newDirectory());
 
     IndexWriter mainWriter = new IndexWriter(mainDir, newIndexWriterConfig(new MockAnalyzer(random()))
-                                                        .setMergePolicy(newLogMergePolicy()));
+                                                        .setMergePolicy(newLogMergePolicy())
+                                                        .setMaxFullFlushMergeWaitMillis(0));
     TestUtil.reduceOpenFiles(mainWriter);
 
     AddDirectoriesThreads addDirThreads = new AddDirectoriesThreads(numIter, mainWriter);
@@ -421,6 +422,7 @@
       this.mainWriter = mainWriter;
       addDir = newDirectory();
       IndexWriter writer = new IndexWriter(addDir, newIndexWriterConfig(new MockAnalyzer(random()))
+                                                     .setMaxFullFlushMergeWaitMillis(0)
                                                      .setMaxBufferedDocs(2));
       TestUtil.reduceOpenFiles(writer);
       for (int i = 0; i < NUM_INIT_DOCS; i++) {
@@ -533,7 +535,8 @@
    */
   public void doTestIndexWriterReopenSegment(boolean doFullMerge) throws Exception {
     Directory dir1 = getAssertNoDeletesDirectory(newDirectory());
-    IndexWriter writer = new IndexWriter(dir1, newIndexWriterConfig(new MockAnalyzer(random())));
+    IndexWriter writer = new IndexWriter(dir1, newIndexWriterConfig(new MockAnalyzer(random()))
+        .setMaxFullFlushMergeWaitMillis(0));
     IndexReader r1 = writer.getReader();
     assertEquals(0, r1.maxDoc());
     createIndexNoClose(false, "index1", writer);
@@ -569,7 +572,7 @@
     writer.close();
 
     // test whether the changes made it to the directory
-    writer = new IndexWriter(dir1, newIndexWriterConfig(new MockAnalyzer(random())));
+    writer = new IndexWriter(dir1, newIndexWriterConfig(new MockAnalyzer(random())).setMaxFullFlushMergeWaitMillis(0));
     IndexReader w2r1 = writer.getReader();
     // ensure the deletes were actually flushed to the directory
     assertEquals(200, w2r1.maxDoc());
@@ -619,6 +622,7 @@
         dir1,
         newIndexWriterConfig(new MockAnalyzer(random()))
           .setMaxBufferedDocs(2)
+          .setMaxFullFlushMergeWaitMillis(0)
           .setMergedSegmentWarmer((leafReader) -> warmCount.incrementAndGet())
           .setMergeScheduler(new ConcurrentMergeScheduler())
           .setMergePolicy(newLogMergePolicy())
@@ -653,7 +657,8 @@
   public void testAfterCommit() throws Exception {
     Directory dir1 = getAssertNoDeletesDirectory(newDirectory());
     IndexWriter writer = new IndexWriter(dir1, newIndexWriterConfig(new MockAnalyzer(random()))
-                                                 .setMergeScheduler(new ConcurrentMergeScheduler()));
+                                                 .setMergeScheduler(new ConcurrentMergeScheduler())
+                                                 .setMaxFullFlushMergeWaitMillis(0));
     writer.commit();
 
     // create the index
@@ -685,7 +690,7 @@
   // Make sure reader remains usable even if IndexWriter closes
   public void testAfterClose() throws Exception {
     Directory dir1 = getAssertNoDeletesDirectory(newDirectory());
-    IndexWriter writer = new IndexWriter(dir1, newIndexWriterConfig(new MockAnalyzer(random())));
+    IndexWriter writer = new IndexWriter(dir1, newIndexWriterConfig(new MockAnalyzer(random())).setMaxFullFlushMergeWaitMillis(0));
 
     // create the index
     createIndexNoClose(false, "test", writer);
@@ -715,7 +720,7 @@
     Directory dir1 = getAssertNoDeletesDirectory(newDirectory());
     final IndexWriter writer = new IndexWriter(
         dir1,
-        newIndexWriterConfig(new MockAnalyzer(random()))
+        newIndexWriterConfig(new MockAnalyzer(random())).setMaxFullFlushMergeWaitMillis(0)
           .setMergePolicy(newLogMergePolicy(2))
     );
 
@@ -1031,7 +1036,7 @@
     Directory d = getAssertNoDeletesDirectory(newDirectory());
     IndexWriter w = new IndexWriter(
         d,
-        newIndexWriterConfig(new MockAnalyzer(random())));
+        newIndexWriterConfig(new MockAnalyzer(random())).setMaxFullFlushMergeWaitMillis(0));
 
     DirectoryReader r = w.getReader(); // start pooling readers
 
@@ -1085,7 +1090,7 @@
       }
     });
     
-    IndexWriterConfig conf = newIndexWriterConfig(new MockAnalyzer(random()));
+    IndexWriterConfig conf = newIndexWriterConfig(new MockAnalyzer(random())).setMaxFullFlushMergeWaitMillis(0);
     conf.setMergePolicy(NoMergePolicy.INSTANCE); // prevent merges from getting in the way
     IndexWriter writer = new IndexWriter(dir, conf);
     
@@ -1116,7 +1121,7 @@
     Directory dir = getAssertNoDeletesDirectory(new ByteBuffersDirectory());
     // Don't use newIndexWriterConfig, because we need a
     // "sane" mergePolicy:
-    IndexWriterConfig iwc = new IndexWriterConfig(new MockAnalyzer(random()));
+    IndexWriterConfig iwc = new IndexWriterConfig(new MockAnalyzer(random())).setMaxFullFlushMergeWaitMillis(0);
     IndexWriter w = new IndexWriter(dir, iwc);
     // Create 500 segments:
     for(int i=0;i<500;i++) {
diff --git a/lucene/core/src/test/org/apache/lucene/index/TestIndexWriterWithThreads.java b/lucene/core/src/test/org/apache/lucene/index/TestIndexWriterWithThreads.java
index 4ff6686..c4f379e 100644
--- a/lucene/core/src/test/org/apache/lucene/index/TestIndexWriterWithThreads.java
+++ b/lucene/core/src/test/org/apache/lucene/index/TestIndexWriterWithThreads.java
@@ -59,7 +59,6 @@
     private final CyclicBarrier syncStart;
     boolean diskFull;
     Throwable error;
-    AlreadyClosedException ace;
     IndexWriter writer;
     boolean noErrors;
     volatile int addCount;
diff --git a/lucene/core/src/test/org/apache/lucene/index/TestPointValues.java b/lucene/core/src/test/org/apache/lucene/index/TestPointValues.java
index d982953..d937c2f 100644
--- a/lucene/core/src/test/org/apache/lucene/index/TestPointValues.java
+++ b/lucene/core/src/test/org/apache/lucene/index/TestPointValues.java
@@ -396,7 +396,7 @@
   public void testDifferentCodecs1() throws Exception {
     Directory dir = newDirectory();
     IndexWriterConfig iwc = new IndexWriterConfig(new MockAnalyzer(random()));
-    iwc.setCodec(Codec.forName("Lucene86"));
+    iwc.setCodec(TestUtil.getDefaultCodec());
     IndexWriter w = new IndexWriter(dir, iwc);
     Document doc = new Document();
     doc.add(new IntPoint("int", 1));
@@ -427,7 +427,7 @@
     w.close();
     
     iwc = new IndexWriterConfig(new MockAnalyzer(random()));
-    iwc.setCodec(Codec.forName("Lucene86"));
+    iwc.setCodec(TestUtil.getDefaultCodec());
     w = new IndexWriter(dir, iwc);
     doc = new Document();
     doc.add(new IntPoint("int", 1));
diff --git a/lucene/core/src/test/org/apache/lucene/index/TestSoftDeletesRetentionMergePolicy.java b/lucene/core/src/test/org/apache/lucene/index/TestSoftDeletesRetentionMergePolicy.java
index 9c845f6..1a10610 100644
--- a/lucene/core/src/test/org/apache/lucene/index/TestSoftDeletesRetentionMergePolicy.java
+++ b/lucene/core/src/test/org/apache/lucene/index/TestSoftDeletesRetentionMergePolicy.java
@@ -109,7 +109,7 @@
 
   public void testKeepFullyDeletedSegments() throws IOException {
     Directory dir = newDirectory();
-    IndexWriterConfig indexWriterConfig = newIndexWriterConfig();
+    IndexWriterConfig indexWriterConfig = newIndexWriterConfig().setMergePolicy(NoMergePolicy.INSTANCE);
     IndexWriter writer = new IndexWriter(dir, indexWriterConfig);
 
     Document doc = new Document();
diff --git a/lucene/core/src/test/org/apache/lucene/search/MultiCollectorTest.java b/lucene/core/src/test/org/apache/lucene/search/MultiCollectorTest.java
deleted file mode 100644
index 80a5a9a..0000000
--- a/lucene/core/src/test/org/apache/lucene/search/MultiCollectorTest.java
+++ /dev/null
@@ -1,338 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.lucene.search;
-
-
-import java.io.IOException;
-
-import org.apache.lucene.document.Document;
-import org.apache.lucene.index.DirectoryReader;
-import org.apache.lucene.index.LeafReaderContext;
-import org.apache.lucene.index.RandomIndexWriter;
-import org.apache.lucene.store.Directory;
-import org.apache.lucene.util.LuceneTestCase;
-import org.junit.Test;
-
-public class MultiCollectorTest extends LuceneTestCase {
-
-  private static class DummyCollector extends SimpleCollector {
-
-    boolean collectCalled = false;
-    boolean setNextReaderCalled = false;
-    boolean setScorerCalled = false;
-
-    @Override
-    public void collect(int doc) throws IOException {
-      collectCalled = true;
-    }
-
-    @Override
-    protected void doSetNextReader(LeafReaderContext context) throws IOException {
-      setNextReaderCalled = true;
-    }
-
-    @Override
-    public void setScorer(Scorable scorer) throws IOException {
-      setScorerCalled = true;
-    }
-
-    @Override
-    public ScoreMode scoreMode() {
-      return ScoreMode.COMPLETE;
-    }
-  }
-
-  @Test
-  public void testNullCollectors() throws Exception {
-    // Tests that the collector rejects all null collectors.
-    expectThrows(IllegalArgumentException.class, () -> {
-      MultiCollector.wrap(null, null);
-    });
-
-    // Tests that the collector handles some null collectors well. If it
-    // doesn't, an NPE would be thrown.
-    Collector c = MultiCollector.wrap(new DummyCollector(), null, new DummyCollector());
-    assertTrue(c instanceof MultiCollector);
-    final LeafCollector ac = c.getLeafCollector(null);
-    ac.collect(1);
-    c.getLeafCollector(null);
-    c.getLeafCollector(null).setScorer(new ScoreAndDoc());
-  }
-
-  @Test
-  public void testSingleCollector() throws Exception {
-    // Tests that if a single Collector is input, it is returned (and not MultiCollector).
-    DummyCollector dc = new DummyCollector();
-    assertSame(dc, MultiCollector.wrap(dc));
-    assertSame(dc, MultiCollector.wrap(dc, null));
-  }
-  
-  @Test
-  public void testCollector() throws Exception {
-    // Tests that the collector delegates calls to input collectors properly.
-
-    // Tests that the collector handles some null collectors well. If it
-    // doesn't, an NPE would be thrown.
-    DummyCollector[] dcs = new DummyCollector[] { new DummyCollector(), new DummyCollector() };
-    Collector c = MultiCollector.wrap(dcs);
-    LeafCollector ac = c.getLeafCollector(null);
-    ac.collect(1);
-    ac = c.getLeafCollector(null);
-    ac.setScorer(new ScoreAndDoc());
-
-    for (DummyCollector dc : dcs) {
-      assertTrue(dc.collectCalled);
-      assertTrue(dc.setNextReaderCalled);
-      assertTrue(dc.setScorerCalled);
-    }
-
-  }
-
-  private static Collector collector(ScoreMode scoreMode, Class<?> expectedScorer) {
-    return new Collector() {
-
-      @Override
-      public LeafCollector getLeafCollector(LeafReaderContext context) throws IOException {
-        return new LeafCollector() {
-
-          @Override
-          public void setScorer(Scorable scorer) throws IOException {
-            while (expectedScorer.equals(scorer.getClass()) == false && scorer instanceof FilterScorable) {
-              scorer = ((FilterScorable) scorer).in;
-            }
-            assertEquals(expectedScorer, scorer.getClass());
-          }
-
-          @Override
-          public void collect(int doc) throws IOException {}
-          
-        };
-      }
-
-      @Override
-      public ScoreMode scoreMode() {
-        return scoreMode;
-      }
-      
-    };
-  }
-
-  public void testCacheScoresIfNecessary() throws IOException {
-    Directory dir = newDirectory();
-    RandomIndexWriter iw = new RandomIndexWriter(random(), dir);
-    iw.addDocument(new Document());
-    iw.commit();
-    DirectoryReader reader = iw.getReader();
-    iw.close();
-    
-    final LeafReaderContext ctx = reader.leaves().get(0);
-
-    expectThrows(AssertionError.class, () -> {
-      collector(ScoreMode.COMPLETE_NO_SCORES, ScoreCachingWrappingScorer.class).getLeafCollector(ctx).setScorer(new ScoreAndDoc());
-    });
-
-    // no collector needs scores => no caching
-    Collector c1 = collector(ScoreMode.COMPLETE_NO_SCORES, ScoreAndDoc.class);
-    Collector c2 = collector(ScoreMode.COMPLETE_NO_SCORES, ScoreAndDoc.class);
-    MultiCollector.wrap(c1, c2).getLeafCollector(ctx).setScorer(new ScoreAndDoc());
-
-    // only one collector needs scores => no caching
-    c1 = collector(ScoreMode.COMPLETE, ScoreAndDoc.class);
-    c2 = collector(ScoreMode.COMPLETE_NO_SCORES, ScoreAndDoc.class);
-    MultiCollector.wrap(c1, c2).getLeafCollector(ctx).setScorer(new ScoreAndDoc());
-
-    // several collectors need scores => caching
-    c1 = collector(ScoreMode.COMPLETE, ScoreCachingWrappingScorer.class);
-    c2 = collector(ScoreMode.COMPLETE, ScoreCachingWrappingScorer.class);
-    MultiCollector.wrap(c1, c2).getLeafCollector(ctx).setScorer(new ScoreAndDoc());
-
-    reader.close();
-    dir.close();
-  }
-  
-  public void testScorerWrappingForTopScores() throws IOException {
-    Directory dir = newDirectory();
-    RandomIndexWriter iw = new RandomIndexWriter(random(), dir);
-    iw.addDocument(new Document());
-    DirectoryReader reader = iw.getReader();
-    iw.close();
-    final LeafReaderContext ctx = reader.leaves().get(0);
-    Collector c1 = collector(ScoreMode.TOP_SCORES, MultiCollector.MinCompetitiveScoreAwareScorable.class);
-    Collector c2 = collector(ScoreMode.TOP_SCORES, MultiCollector.MinCompetitiveScoreAwareScorable.class);
-    MultiCollector.wrap(c1, c2).getLeafCollector(ctx).setScorer(new ScoreAndDoc());
-    
-    c1 = collector(ScoreMode.TOP_SCORES, ScoreCachingWrappingScorer.class);
-    c2 = collector(ScoreMode.COMPLETE, ScoreCachingWrappingScorer.class);
-    MultiCollector.wrap(c1, c2).getLeafCollector(ctx).setScorer(new ScoreAndDoc());
-    
-    reader.close();
-    dir.close();
-  }
-  
-  public void testMinCompetitiveScore() throws IOException {
-    float[] currentMinScores = new float[3];
-    float[] minCompetitiveScore = new float[1];
-    Scorable scorer = new Scorable() {
-      
-      @Override
-      public float score() throws IOException {
-        return 0;
-      }
-      
-      @Override
-      public int docID() {
-        return 0;
-      }
-      
-      @Override
-      public void setMinCompetitiveScore(float minScore) throws IOException {
-        minCompetitiveScore[0] = minScore;
-      }
-    };
-    Scorable s0 = new MultiCollector.MinCompetitiveScoreAwareScorable(scorer, 0, currentMinScores);
-    Scorable s1 = new MultiCollector.MinCompetitiveScoreAwareScorable(scorer, 1, currentMinScores);
-    Scorable s2 = new MultiCollector.MinCompetitiveScoreAwareScorable(scorer, 2, currentMinScores);
-    assertEquals(0f, minCompetitiveScore[0], 0);
-    s0.setMinCompetitiveScore(0.5f);
-    assertEquals(0f, minCompetitiveScore[0], 0);
-    s1.setMinCompetitiveScore(0.8f);
-    assertEquals(0f, minCompetitiveScore[0], 0);
-    s2.setMinCompetitiveScore(0.3f);
-    assertEquals(0.3f, minCompetitiveScore[0], 0);
-    s2.setMinCompetitiveScore(0.1f);
-    assertEquals(0.3f, minCompetitiveScore[0], 0);
-    s1.setMinCompetitiveScore(Float.MAX_VALUE);
-    assertEquals(0.3f, minCompetitiveScore[0], 0);
-    s2.setMinCompetitiveScore(Float.MAX_VALUE);
-    assertEquals(0.5f, minCompetitiveScore[0], 0);
-    s0.setMinCompetitiveScore(Float.MAX_VALUE);
-    assertEquals(Float.MAX_VALUE, minCompetitiveScore[0], 0);
-  }
-  
-  public void testCollectionTermination() throws IOException {
-    Directory dir = newDirectory();
-    RandomIndexWriter iw = new RandomIndexWriter(random(), dir);
-    iw.addDocument(new Document());
-    DirectoryReader reader = iw.getReader();
-    iw.close();
-    final LeafReaderContext ctx = reader.leaves().get(0);
-    DummyCollector c1 = new TerminatingDummyCollector(1, ScoreMode.COMPLETE);
-    DummyCollector c2 = new TerminatingDummyCollector(2, ScoreMode.COMPLETE);
-
-    Collector mc = MultiCollector.wrap(c1, c2);
-    LeafCollector lc = mc.getLeafCollector(ctx);
-    lc.setScorer(new ScoreAndDoc());
-    lc.collect(0); // OK
-    assertTrue("c1's collect should be called", c1.collectCalled);
-    assertTrue("c2's collect should be called", c2.collectCalled);
-    c1.collectCalled = false;
-    c2.collectCalled = false;
-    lc.collect(1); // OK, but c1 should terminate
-    assertFalse("c1 should be removed already", c1.collectCalled);
-    assertTrue("c2's collect should be called", c2.collectCalled);
-    c2.collectCalled = false;
-    
-    expectThrows(CollectionTerminatedException.class, () -> {
-      lc.collect(2);
-    });
-    assertFalse("c1 should be removed already", c1.collectCalled);
-    assertFalse("c2 should be removed already", c2.collectCalled);
-    
-    reader.close();
-    dir.close();
-  }
-  
-  public void testSetScorerOnCollectionTerminationSkipNonCompetitive() throws IOException {
-    doTestSetScorerOnCollectionTermination(true);
-  }
-  
-  public void testSetScorerOnCollectionTerminationSkipNoSkips() throws IOException {
-    doTestSetScorerOnCollectionTermination(false);
-  }
-  
-  private void doTestSetScorerOnCollectionTermination(boolean allowSkipNonCompetitive) throws IOException {
-    Directory dir = newDirectory();
-    RandomIndexWriter iw = new RandomIndexWriter(random(), dir);
-    iw.addDocument(new Document());
-    DirectoryReader reader = iw.getReader();
-    iw.close();
-    final LeafReaderContext ctx = reader.leaves().get(0);
-    
-    DummyCollector c1 = new TerminatingDummyCollector(1, allowSkipNonCompetitive? ScoreMode.TOP_SCORES : ScoreMode.COMPLETE);
-    DummyCollector c2 = new TerminatingDummyCollector(2, allowSkipNonCompetitive? ScoreMode.TOP_SCORES : ScoreMode.COMPLETE);
-    
-    Collector mc = MultiCollector.wrap(c1, c2);
-    LeafCollector lc = mc.getLeafCollector(ctx);
-    assertFalse(c1.setScorerCalled);
-    assertFalse(c2.setScorerCalled);
-    lc.setScorer(new ScoreAndDoc());
-    assertTrue(c1.setScorerCalled);
-    assertTrue(c2.setScorerCalled);
-    c1.setScorerCalled = false;
-    c2.setScorerCalled = false;
-    lc.collect(0); // OK
-    
-    lc.setScorer(new ScoreAndDoc());
-    assertTrue(c1.setScorerCalled);
-    assertTrue(c2.setScorerCalled);
-    c1.setScorerCalled = false;
-    c2.setScorerCalled = false;
-    
-    lc.collect(1); // OK, but c1 should terminate
-    lc.setScorer(new ScoreAndDoc());
-    assertFalse(c1.setScorerCalled);
-    assertTrue(c2.setScorerCalled);
-    c2.setScorerCalled = false;
-    
-    expectThrows(CollectionTerminatedException.class, () -> {
-      lc.collect(2);
-    });
-    lc.setScorer(new ScoreAndDoc());
-    assertFalse(c1.setScorerCalled);
-    assertFalse(c2.setScorerCalled);
-    
-    reader.close();
-    dir.close();
-  }
-  
-  private static class TerminatingDummyCollector extends DummyCollector {
-    
-    private final int terminateOnDoc;
-    private final ScoreMode scoreMode;
-    
-    public TerminatingDummyCollector(int terminateOnDoc, ScoreMode scoreMode) {
-      super();
-      this.terminateOnDoc = terminateOnDoc;
-      this.scoreMode = scoreMode;
-    }
-    
-    @Override
-    public void collect(int doc) throws IOException {
-      if (doc == terminateOnDoc) {
-        throw new CollectionTerminatedException();
-      }
-      super.collect(doc);
-    }
-    
-    @Override
-    public ScoreMode scoreMode() {
-      return scoreMode;
-    }
-    
-  }
-
-}
diff --git a/lucene/core/src/test/org/apache/lucene/search/TestBoolean2.java b/lucene/core/src/test/org/apache/lucene/search/TestBoolean2.java
index 3400f0e..cac56e9 100644
--- a/lucene/core/src/test/org/apache/lucene/search/TestBoolean2.java
+++ b/lucene/core/src/test/org/apache/lucene/search/TestBoolean2.java
@@ -23,7 +23,6 @@
 import java.util.Random;
 
 import org.apache.lucene.analysis.MockAnalyzer;
-import org.apache.lucene.codecs.Codec;
 import org.apache.lucene.document.Document;
 import org.apache.lucene.document.Field;
 import org.apache.lucene.document.FieldType;
@@ -96,7 +95,7 @@
 
     IndexWriterConfig iwc = newIndexWriterConfig(new MockAnalyzer(random()));
     // randomized codecs are sometimes too costly for this test:
-    iwc.setCodec(Codec.forName("Lucene86"));
+    iwc.setCodec(TestUtil.getDefaultCodec());
     iwc.setMergePolicy(newLogMergePolicy());
     RandomIndexWriter writer= new RandomIndexWriter(random(), directory, iwc);
     // we'll make a ton of docs, disable store/norms/vectors
@@ -141,7 +140,7 @@
     iwc = newIndexWriterConfig(new MockAnalyzer(random()));
     // we need docID order to be preserved:
     // randomized codecs are sometimes too costly for this test:
-    iwc.setCodec(Codec.forName("Lucene86"));
+    iwc.setCodec(TestUtil.getDefaultCodec());
     iwc.setMergePolicy(newLogMergePolicy());
     try (IndexWriter w = new IndexWriter(singleSegmentDirectory, iwc)) {
       w.forceMerge(1, true);
@@ -167,7 +166,7 @@
 
       iwc = newIndexWriterConfig(new MockAnalyzer(random()));
       // randomized codecs are sometimes too costly for this test:
-      iwc.setCodec(Codec.forName("Lucene86"));
+      iwc.setCodec(TestUtil.getDefaultCodec());
       RandomIndexWriter w = new RandomIndexWriter(random(), dir2, iwc);
       w.addIndexes(copy);
       copy.close();
@@ -179,7 +178,7 @@
     iwc = newIndexWriterConfig(new MockAnalyzer(random()));
     iwc.setMaxBufferedDocs(TestUtil.nextInt(random(), 50, 1000));
     // randomized codecs are sometimes too costly for this test:
-    iwc.setCodec(Codec.forName("Lucene86"));
+    iwc.setCodec(TestUtil.getDefaultCodec());
     RandomIndexWriter w = new RandomIndexWriter(random(), dir2, iwc);
 
     doc = new Document();
diff --git a/lucene/core/src/test/org/apache/lucene/search/TestBooleanRewrites.java b/lucene/core/src/test/org/apache/lucene/search/TestBooleanRewrites.java
index 497035b..51ca8e5 100644
--- a/lucene/core/src/test/org/apache/lucene/search/TestBooleanRewrites.java
+++ b/lucene/core/src/test/org/apache/lucene/search/TestBooleanRewrites.java
@@ -312,12 +312,28 @@
         .add(new TermQuery(new Term("foo", "baz")), Occur.MUST)
         .add(new MatchAllDocsQuery(), Occur.FILTER)
         .build();
-    BooleanQuery expected = new BooleanQuery.Builder()
+    Query expected = new BooleanQuery.Builder()
         .setMinimumNumberShouldMatch(bq.getMinimumNumberShouldMatch())
         .add(new TermQuery(new Term("foo", "bar")), Occur.MUST)
         .add(new TermQuery(new Term("foo", "baz")), Occur.MUST)
         .build();
     assertEquals(expected, searcher.rewrite(bq));
+
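+    // Once only FILTER clauses remain, nothing contributes a score, so the
+    // rewrite collapses to a constant-score query with a boost of 0.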
+    bq = new BooleanQuery.Builder()
+            .add(new TermQuery(new Term("foo", "bar")), Occur.FILTER)
+            .add(new MatchAllDocsQuery(), Occur.FILTER)
+            .build();
+    expected = new BoostQuery(new ConstantScoreQuery(
+            new TermQuery(new Term("foo", "bar"))), 0.0f);
+    assertEquals(expected, searcher.rewrite(bq));
+
+    bq = new BooleanQuery.Builder()
+            .add(new MatchAllDocsQuery(), Occur.FILTER)
+            .add(new MatchAllDocsQuery(), Occur.FILTER)
+            .build();
+    expected = new BoostQuery(new ConstantScoreQuery(
+            new MatchAllDocsQuery()), 0.0f);
+    assertEquals(expected, searcher.rewrite(bq));
   }
 
   public void testRandom() throws IOException {
diff --git a/lucene/core/src/test/org/apache/lucene/search/TestMatchesIterator.java b/lucene/core/src/test/org/apache/lucene/search/TestMatchesIterator.java
index 3948406..9dce502 100644
--- a/lucene/core/src/test/org/apache/lucene/search/TestMatchesIterator.java
+++ b/lucene/core/src/test/org/apache/lucene/search/TestMatchesIterator.java
@@ -21,9 +21,11 @@
 import java.util.Arrays;
 import java.util.HashSet;
 import java.util.IdentityHashMap;
+import java.util.List;
 import java.util.Objects;
 import java.util.Set;
 import java.util.stream.Collectors;
+import java.util.stream.IntStream;
 
 import org.apache.lucene.analysis.MockAnalyzer;
 import org.apache.lucene.document.Document;
@@ -184,7 +186,11 @@
       Matches matches = w.matches(ctx, doc);
       if (expected[i]) {
         MatchesIterator mi = matches.getMatches(field);
-        assertNull(mi);
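+        // A match without positions now yields an explicit iterator reporting
+        // -1 positions instead of a null MatchesIterator.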
+        assertTrue(mi.next());
+        assertEquals(-1, mi.startPosition());
+        while (mi.next()) {
+          assertEquals(-1, mi.startPosition());
+        }
       }
       else {
         assertNull(matches);
@@ -754,6 +760,91 @@
     }
   }
 
+  public void testFromSubIteratorsMethod() throws IOException {
+    class CountIterator implements MatchesIterator {
+      private int count;
+      private int max;
+
+      CountIterator(int count) {
+        this.count = count;
+        this.max = count;
+      }
+
+      @Override
+      public boolean next() throws IOException {
+        if (count == 0) {
+          return false;
+        } else {
+          count--;
+          return true;
+        }
+      }
+
+      @Override
+      public int startPosition() {
+        return max - count;
+      }
+
+      @Override
+      public int endPosition() {
+        return max - count;
+      }
+
+      @Override
+      public int startOffset() throws IOException {
+        throw new AssertionError();
+      }
+
+      @Override
+      public int endOffset() throws IOException {
+        throw new AssertionError();
+      }
+
+      @Override
+      public MatchesIterator getSubMatches() throws IOException {
+        throw new AssertionError();
+      }
+
+      @Override
+      public Query getQuery() {
+        throw new AssertionError();
+      }
+    }
+
+    int[][] checks = {
+        { 0 },
+        { 1 },
+        { 0, 0 },
+        { 0, 1 },
+        { 1, 0 },
+        { 1, 1 },
+        { 0, 0, 0 },
+        { 0, 0, 1 },
+        { 0, 1, 0 },
+        { 1, 0, 0 },
+        { 1, 0, 1 },
+        { 1, 1, 0 },
+        { 1, 1, 1 },
+    };
+
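+    // Each row lists per-sub-iterator match counts; the merged disjunction
+    // must produce exactly their sum.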
+    for (int[] counts : checks) {
+      List<MatchesIterator> its = IntStream.of(counts)
+          .mapToObj(CountIterator::new)
+          .collect(Collectors.toList());
+
+      int expectedCount = IntStream.of(counts).sum();
+
+      MatchesIterator merged = DisjunctionMatchesIterator.fromSubIterators(its);
+      int actualCount = 0;
+      while (merged.next()) {
+        actualCount++;
+      }
+
+      assertEquals("Sub-iterator count is not right for: "
+          + Arrays.toString(counts), expectedCount, actualCount);
+    }
+  }
+
   private static class SeekCountingLeafReader extends FilterLeafReader {
 
     int seeks = 0;
diff --git a/lucene/core/src/test/org/apache/lucene/search/TestMultiCollector.java b/lucene/core/src/test/org/apache/lucene/search/TestMultiCollector.java
index dda314b..f1adc1b 100644
--- a/lucene/core/src/test/org/apache/lucene/search/TestMultiCollector.java
+++ b/lucene/core/src/test/org/apache/lucene/search/TestMultiCollector.java
@@ -36,6 +36,8 @@
 import org.apache.lucene.util.LuceneTestCase;
 import org.apache.lucene.util.TestUtil;
 
+import org.junit.Test;
+
 public class TestMultiCollector extends LuceneTestCase {
 
   private static class TerminateAfterCollector extends FilterCollector {
@@ -217,4 +219,311 @@
     reader.close();
     dir.close();
   }
+
+  private static class DummyCollector extends SimpleCollector {
+
+    boolean collectCalled = false;
+    boolean setNextReaderCalled = false;
+    boolean setScorerCalled = false;
+
+    @Override
+    public void collect(int doc) throws IOException {
+      collectCalled = true;
+    }
+
+    @Override
+    protected void doSetNextReader(LeafReaderContext context) throws IOException {
+      setNextReaderCalled = true;
+    }
+
+    @Override
+    public void setScorer(Scorable scorer) throws IOException {
+      setScorerCalled = true;
+    }
+
+    @Override
+    public ScoreMode scoreMode() {
+      return ScoreMode.COMPLETE;
+    }
+  }
+
+  @Test
+  public void testNullCollectors() throws Exception {
+    // Tests that the collector rejects all null collectors.
+    expectThrows(IllegalArgumentException.class, () -> {
+      MultiCollector.wrap(null, null);
+    });
+
+    // Tests that the collector handles some null collectors well. If it
+    // doesn't, an NPE would be thrown.
+    Collector c = MultiCollector.wrap(new DummyCollector(), null, new DummyCollector());
+    assertTrue(c instanceof MultiCollector);
+    final LeafCollector ac = c.getLeafCollector(null);
+    ac.collect(1);
+    c.getLeafCollector(null);
+    c.getLeafCollector(null).setScorer(new ScoreAndDoc());
+  }
+
+  @Test
+  public void testSingleCollector() throws Exception {
+    // Tests that if a single Collector is input, it is returned (and not MultiCollector).
+    DummyCollector dc = new DummyCollector();
+    assertSame(dc, MultiCollector.wrap(dc));
+    assertSame(dc, MultiCollector.wrap(dc, null));
+  }
+  
+  @Test
+  public void testCollector() throws Exception {
+    // Tests that the collector delegates calls to input collectors properly.
+
+    DummyCollector[] dcs = new DummyCollector[] { new DummyCollector(), new DummyCollector() };
+    Collector c = MultiCollector.wrap(dcs);
+    LeafCollector ac = c.getLeafCollector(null);
+    ac.collect(1);
+    ac = c.getLeafCollector(null);
+    ac.setScorer(new ScoreAndDoc());
+
+    for (DummyCollector dc : dcs) {
+      assertTrue(dc.collectCalled);
+      assertTrue(dc.setNextReaderCalled);
+      assertTrue(dc.setScorerCalled);
+    }
+
+  }
+
+  private static Collector collector(ScoreMode scoreMode, Class<?> expectedScorer) {
+    return new Collector() {
+
+      @Override
+      public LeafCollector getLeafCollector(LeafReaderContext context) throws IOException {
+        return new LeafCollector() {
+
+          @Override
+          public void setScorer(Scorable scorer) throws IOException {
+            while (expectedScorer.equals(scorer.getClass()) == false && scorer instanceof FilterScorable) {
+              scorer = ((FilterScorable) scorer).in;
+            }
+            assertEquals(expectedScorer, scorer.getClass());
+          }
+
+          @Override
+          public void collect(int doc) throws IOException {}
+          
+        };
+      }
+
+      @Override
+      public ScoreMode scoreMode() {
+        return scoreMode;
+      }
+      
+    };
+  }
+
+  public void testCacheScoresIfNecessary() throws IOException {
+    Directory dir = newDirectory();
+    RandomIndexWriter iw = new RandomIndexWriter(random(), dir);
+    iw.addDocument(new Document());
+    iw.commit();
+    DirectoryReader reader = iw.getReader();
+    iw.close();
+    
+    final LeafReaderContext ctx = reader.leaves().get(0);
+
+    expectThrows(AssertionError.class, () -> {
+      collector(ScoreMode.COMPLETE_NO_SCORES, ScoreCachingWrappingScorer.class).getLeafCollector(ctx).setScorer(new ScoreAndDoc());
+    });
+
+    // no collector needs scores => no caching
+    Collector c1 = collector(ScoreMode.COMPLETE_NO_SCORES, ScoreAndDoc.class);
+    Collector c2 = collector(ScoreMode.COMPLETE_NO_SCORES, ScoreAndDoc.class);
+    MultiCollector.wrap(c1, c2).getLeafCollector(ctx).setScorer(new ScoreAndDoc());
+
+    // only one collector needs scores => no caching
+    c1 = collector(ScoreMode.COMPLETE, ScoreAndDoc.class);
+    c2 = collector(ScoreMode.COMPLETE_NO_SCORES, ScoreAndDoc.class);
+    MultiCollector.wrap(c1, c2).getLeafCollector(ctx).setScorer(new ScoreAndDoc());
+
+    // several collectors need scores => caching
+    c1 = collector(ScoreMode.COMPLETE, ScoreCachingWrappingScorer.class);
+    c2 = collector(ScoreMode.COMPLETE, ScoreCachingWrappingScorer.class);
+    MultiCollector.wrap(c1, c2).getLeafCollector(ctx).setScorer(new ScoreAndDoc());
+
+    reader.close();
+    dir.close();
+  }
+  
+  public void testScorerWrappingForTopScores() throws IOException {
+    Directory dir = newDirectory();
+    RandomIndexWriter iw = new RandomIndexWriter(random(), dir);
+    iw.addDocument(new Document());
+    DirectoryReader reader = iw.getReader();
+    iw.close();
+    final LeafReaderContext ctx = reader.leaves().get(0);
+    Collector c1 = collector(ScoreMode.TOP_SCORES, MultiCollector.MinCompetitiveScoreAwareScorable.class);
+    Collector c2 = collector(ScoreMode.TOP_SCORES, MultiCollector.MinCompetitiveScoreAwareScorable.class);
+    MultiCollector.wrap(c1, c2).getLeafCollector(ctx).setScorer(new ScoreAndDoc());
+    
+    c1 = collector(ScoreMode.TOP_SCORES, ScoreCachingWrappingScorer.class);
+    c2 = collector(ScoreMode.COMPLETE, ScoreCachingWrappingScorer.class);
+    MultiCollector.wrap(c1, c2).getLeafCollector(ctx).setScorer(new ScoreAndDoc());
+    
+    reader.close();
+    dir.close();
+  }
+  
+  public void testMinCompetitiveScore() throws IOException {
+    float[] currentMinScores = new float[3];
+    float[] minCompetitiveScore = new float[1];
+    Scorable scorer = new Scorable() {
+      
+      @Override
+      public float score() throws IOException {
+        return 0;
+      }
+      
+      @Override
+      public int docID() {
+        return 0;
+      }
+      
+      @Override
+      public void setMinCompetitiveScore(float minScore) throws IOException {
+        minCompetitiveScore[0] = minScore;
+      }
+    };
+    Scorable s0 = new MultiCollector.MinCompetitiveScoreAwareScorable(scorer, 0, currentMinScores);
+    Scorable s1 = new MultiCollector.MinCompetitiveScoreAwareScorable(scorer, 1, currentMinScores);
+    Scorable s2 = new MultiCollector.MinCompetitiveScoreAwareScorable(scorer, 2, currentMinScores);
+    assertEquals(0f, minCompetitiveScore[0], 0);
+    s0.setMinCompetitiveScore(0.5f);
+    assertEquals(0f, minCompetitiveScore[0], 0);
+    s1.setMinCompetitiveScore(0.8f);
+    assertEquals(0f, minCompetitiveScore[0], 0);
+    s2.setMinCompetitiveScore(0.3f);
+    assertEquals(0.3f, minCompetitiveScore[0], 0);
+    s2.setMinCompetitiveScore(0.1f);
+    assertEquals(0.3f, minCompetitiveScore[0], 0);
+    s1.setMinCompetitiveScore(Float.MAX_VALUE);
+    assertEquals(0.3f, minCompetitiveScore[0], 0);
+    s2.setMinCompetitiveScore(Float.MAX_VALUE);
+    assertEquals(0.5f, minCompetitiveScore[0], 0);
+    s0.setMinCompetitiveScore(Float.MAX_VALUE);
+    assertEquals(Float.MAX_VALUE, minCompetitiveScore[0], 0);
+  }
+  
+  public void testCollectionTermination() throws IOException {
+    Directory dir = newDirectory();
+    RandomIndexWriter iw = new RandomIndexWriter(random(), dir);
+    iw.addDocument(new Document());
+    DirectoryReader reader = iw.getReader();
+    iw.close();
+    final LeafReaderContext ctx = reader.leaves().get(0);
+    DummyCollector c1 = new TerminatingDummyCollector(1, ScoreMode.COMPLETE);
+    DummyCollector c2 = new TerminatingDummyCollector(2, ScoreMode.COMPLETE);
+
+    Collector mc = MultiCollector.wrap(c1, c2);
+    LeafCollector lc = mc.getLeafCollector(ctx);
+    lc.setScorer(new ScoreAndDoc());
+    lc.collect(0); // OK
+    assertTrue("c1's collect should be called", c1.collectCalled);
+    assertTrue("c2's collect should be called", c2.collectCalled);
+    c1.collectCalled = false;
+    c2.collectCalled = false;
+    lc.collect(1); // OK, but c1 should terminate
+    assertFalse("c1 should be removed already", c1.collectCalled);
+    assertTrue("c2's collect should be called", c2.collectCalled);
+    c2.collectCalled = false;
+    
+    expectThrows(CollectionTerminatedException.class, () -> {
+      lc.collect(2);
+    });
+    assertFalse("c1 should be removed already", c1.collectCalled);
+    assertFalse("c2 should be removed already", c2.collectCalled);
+    
+    reader.close();
+    dir.close();
+  }
+  
+  public void testSetScorerOnCollectionTerminationSkipNonCompetitive() throws IOException {
+    doTestSetScorerOnCollectionTermination(true);
+  }
+  
+  public void testSetScorerOnCollectionTerminationSkipNoSkips() throws IOException {
+    doTestSetScorerOnCollectionTermination(false);
+  }
+  
+  private void doTestSetScorerOnCollectionTermination(boolean allowSkipNonCompetitive) throws IOException {
+    Directory dir = newDirectory();
+    RandomIndexWriter iw = new RandomIndexWriter(random(), dir);
+    iw.addDocument(new Document());
+    DirectoryReader reader = iw.getReader();
+    iw.close();
+    final LeafReaderContext ctx = reader.leaves().get(0);
+    
+    DummyCollector c1 = new TerminatingDummyCollector(1, allowSkipNonCompetitive ? ScoreMode.TOP_SCORES : ScoreMode.COMPLETE);
+    DummyCollector c2 = new TerminatingDummyCollector(2, allowSkipNonCompetitive ? ScoreMode.TOP_SCORES : ScoreMode.COMPLETE);
+    
+    Collector mc = MultiCollector.wrap(c1, c2);
+    LeafCollector lc = mc.getLeafCollector(ctx);
+    assertFalse(c1.setScorerCalled);
+    assertFalse(c2.setScorerCalled);
+    lc.setScorer(new ScoreAndDoc());
+    assertTrue(c1.setScorerCalled);
+    assertTrue(c2.setScorerCalled);
+    c1.setScorerCalled = false;
+    c2.setScorerCalled = false;
+    lc.collect(0); // OK
+    
+    lc.setScorer(new ScoreAndDoc());
+    assertTrue(c1.setScorerCalled);
+    assertTrue(c2.setScorerCalled);
+    c1.setScorerCalled = false;
+    c2.setScorerCalled = false;
+    
+    lc.collect(1); // OK, but c1 should terminate
+    lc.setScorer(new ScoreAndDoc());
+    assertFalse(c1.setScorerCalled);
+    assertTrue(c2.setScorerCalled);
+    c2.setScorerCalled = false;
+    
+    expectThrows(CollectionTerminatedException.class, () -> {
+      lc.collect(2);
+    });
+    lc.setScorer(new ScoreAndDoc());
+    assertFalse(c1.setScorerCalled);
+    assertFalse(c2.setScorerCalled);
+    
+    reader.close();
+    dir.close();
+  }
+  
+  private static class TerminatingDummyCollector extends DummyCollector {
+    
+    private final int terminateOnDoc;
+    private final ScoreMode scoreMode;
+    
+    public TerminatingDummyCollector(int terminateOnDoc, ScoreMode scoreMode) {
+      super();
+      this.terminateOnDoc = terminateOnDoc;
+      this.scoreMode = scoreMode;
+    }
+    
+    @Override
+    public void collect(int doc) throws IOException {
+      if (doc == terminateOnDoc) {
+        throw new CollectionTerminatedException();
+      }
+      super.collect(doc);
+    }
+    
+    @Override
+    public ScoreMode scoreMode() {
+      return scoreMode;
+    }
+    
+  }
+
 }
diff --git a/lucene/core/src/test/org/apache/lucene/util/TestVersion.java b/lucene/core/src/test/org/apache/lucene/util/TestVersion.java
index 02d566e..462d23e 100644
--- a/lucene/core/src/test/org/apache/lucene/util/TestVersion.java
+++ b/lucene/core/src/test/org/apache/lucene/util/TestVersion.java
@@ -193,7 +193,7 @@
     String commonBuildVersion = System.getProperty("tests.LUCENE_VERSION");
     assumeTrue("Null 'tests.LUCENE_VERSION' test property. You should run the tests with the official Lucene build file",
         commonBuildVersion != null);
-    assertEquals("Version.LATEST does not match the one given in common-build.xml",
+    assertEquals("Version.LATEST does not match the one given in tests.LUCENE_VERSION property",
         Version.LATEST.toString(), commonBuildVersion);
   }
 
diff --git a/lucene/default-nested-ivy-settings.xml b/lucene/default-nested-ivy-settings.xml
deleted file mode 100644
index 1b1603f..0000000
--- a/lucene/default-nested-ivy-settings.xml
+++ /dev/null
@@ -1,56 +0,0 @@
-<!--
-   Licensed to the Apache Software Foundation (ASF) under one
-   or more contributor license agreements.  See the NOTICE file
-   distributed with this work for additional information
-   regarding copyright ownership.  The ASF licenses this file
-   to you under the Apache License, Version 2.0 (the
-   "License"); you may not use this file except in compliance
-   with the License.  You may obtain a copy of the License at
-
-     http://www.apache.org/licenses/LICENSE-2.0
-
-   Unless required by applicable law or agreed to in writing,
-   software distributed under the License is distributed on an
-   "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-   KIND, either express or implied.  See the License for the
-   specific language governing permissions and limitations
-   under the License.
--->
-<ivysettings>
-  <!-- This file is included by default by top-level-ivy-settings.xml,
-       which loads ivy-versions.properties as Ivy variables.          -->
-
-  <settings defaultResolver="default"/>
-
-  <property name="local-maven2-dir" value="${user.home}/.m2/repository/" />
-
-  <include url="${ivy.default.settings.dir}/ivysettings-public.xml"/>
-  <include url="${ivy.default.settings.dir}/ivysettings-shared.xml"/>
-  <include url="${ivy.default.settings.dir}/ivysettings-local.xml"/>
-  <include url="${ivy.default.settings.dir}/ivysettings-main-chain.xml"/>
-
-  <caches lockStrategy="${ivy.lock-strategy}" resolutionCacheDir="${ivy.resolution-cache.dir}" />
-
-  <resolvers>
-    <ibiblio name="sonatype-releases" root="https://oss.sonatype.org/content/repositories/releases" m2compatible="true" />
-    <ibiblio name="maven.restlet.com" root="https://maven.restlet.com" m2compatible="true" />
-    <ibiblio name="releases.cloudera.com" root="https://repository.cloudera.com/artifactory/libs-release-local" m2compatible="true" />
-
-    <filesystem name="local-maven-2" m2compatible="true" local="true">
-      <artifact
-          pattern="${local-maven2-dir}/[organisation]/[module]/[revision]/[module]-[revision](-[classifier]).[ext]" />
-      <ivy
-          pattern="${local-maven2-dir}/[organisation]/[module]/[revision]/[module]-[revision].pom" />
-    </filesystem>
-
-    <chain name="default" returnFirst="true" checkmodified="true" changingPattern=".*SNAPSHOT">
-      <resolver ref="local"/>
-      <!-- <resolver ref="local-maven-2" /> -->
-      <resolver ref="main"/>
-      <resolver ref="maven.restlet.com" />
-      <resolver ref="sonatype-releases" />
-      <resolver ref="releases.cloudera.com"/>
-    </chain>
-  </resolvers>
-
-</ivysettings>
diff --git a/lucene/demo/build.xml b/lucene/demo/build.xml
deleted file mode 100644
index d5e50ad..0000000
--- a/lucene/demo/build.xml
+++ /dev/null
@@ -1,61 +0,0 @@
-<?xml version="1.0"?>
-
-<!--
-    Licensed to the Apache Software Foundation (ASF) under one or more
-    contributor license agreements.  See the NOTICE file distributed with
-    this work for additional information regarding copyright ownership.
-    The ASF licenses this file to You under the Apache License, Version 2.0
-    the "License"); you may not use this file except in compliance with
-    the License.  You may obtain a copy of the License at
- 
-        http://www.apache.org/licenses/LICENSE-2.0
- 
-    Unless required by applicable law or agreed to in writing, software
-    distributed under the License is distributed on an "AS IS" BASIS,
-    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-    See the License for the specific language governing permissions and
-    limitations under the License.
- -->
-
-<project name="demo" default="default" xmlns:artifact="antlib:org.apache.maven.artifact.ant">
-
-  <description>
-    Simple example code
-  </description>
-
-  <property name="demo.name" value="lucene-demos-${version}"/>
-
-  <import file="../module-build.xml"/>
-
-  <target name="init" depends="module-build.init,jar-lucene-core"/>
-  
-  <path id="classpath">
-   <pathelement path="${analyzers-common.jar}"/>
-   <pathelement path="${queryparser.jar}"/>
-   <pathelement path="${lucene-core.jar}"/>
-   <pathelement path="${queries.jar}"/>
-   <pathelement path="${facet.jar}"/>
-   <pathelement path="${expressions.jar}"/>
-   <fileset dir="../expressions/lib"/>
-  </path>
-
-  <target name="javadocs" depends="javadocs-analyzers-common,javadocs-queryparser,javadocs-facet,javadocs-expressions,compile-core,check-javadocs-uptodate"
-          unless="javadocs-uptodate-${name}">
-    <!-- we link the example source in the javadocs, as it's ref'ed elsewhere -->
-    <invoke-module-javadoc linksource="yes">
-      <links>
-        <link href="../analyzers-common"/>
-        <link href="../queryparser"/>
-        <link href="../queries"/>
-        <link href="../facet"/>
-        <link href="../expressions"/>
-      </links>
-    </invoke-module-javadoc>
-  </target>
-
-  <!-- we don't check for sysout in demo, because the demo is there to use sysout :-) -->
-  <target name="-check-forbidden-sysout"/>
-
-  <target name="compile-core" depends="jar-analyzers-common,jar-queryparser,jar-queries,jar-facet,jar-expressions,common.compile-core" />
-
-</project>
diff --git a/lucene/demo/ivy.xml b/lucene/demo/ivy.xml
deleted file mode 100644
index 69076d1..0000000
--- a/lucene/demo/ivy.xml
+++ /dev/null
@@ -1,21 +0,0 @@
-<!--
-   Licensed to the Apache Software Foundation (ASF) under one
-   or more contributor license agreements.  See the NOTICE file
-   distributed with this work for additional information
-   regarding copyright ownership.  The ASF licenses this file
-   to you under the Apache License, Version 2.0 (the
-   "License"); you may not use this file except in compliance
-   with the License.  You may obtain a copy of the License at
-
-     http://www.apache.org/licenses/LICENSE-2.0
-
-   Unless required by applicable law or agreed to in writing,
-   software distributed under the License is distributed on an
-   "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-   KIND, either express or implied.  See the License for the
-   specific language governing permissions and limitations
-   under the License.    
--->
-<ivy-module version="2.0">
-  <info organisation="org.apache.lucene" module="demo"/>
-</ivy-module>
diff --git a/lucene/demo/src/java/overview.html b/lucene/demo/src/java/overview.html
index 8f1a08a..d3a1c12 100644
--- a/lucene/demo/src/java/overview.html
+++ b/lucene/demo/src/java/overview.html
@@ -151,7 +151,7 @@
 rules for every language, and you should use the proper analyzer for each.
 Lucene currently provides Analyzers for a number of different languages (see
 the javadocs under <a href=
-"../analyzers-common/overview-summary.html">lucene/analysis/common/src/java/org/apache/lucene/analysis</a>).</p>
+"../analysis/common/overview-summary.html">lucene/analysis/common/src/java/org/apache/lucene/analysis</a>).</p>
 <p>The <span class="codefrag">IndexWriterConfig</span> instance holds all
 configuration for <span class="codefrag">IndexWriter</span>. For example, we
 set the <span class="codefrag">OpenMode</span> to use here based on the value
diff --git a/lucene/expressions/build.xml b/lucene/expressions/build.xml
deleted file mode 100644
index 61ae64f..0000000
--- a/lucene/expressions/build.xml
+++ /dev/null
@@ -1,120 +0,0 @@
-<?xml version="1.0"?>
-<!--
-    Licensed to the Apache Software Foundation (ASF) under one or more
-    contributor license agreements.  See the NOTICE file distributed with
-    this work for additional information regarding copyright ownership.
-    The ASF licenses this file to You under the Apache License, Version 2.0
-    the "License"); you may not use this file except in compliance with
-    the License.  You may obtain a copy of the License at
- 
-        http://www.apache.org/licenses/LICENSE-2.0
- 
-    Unless required by applicable law or agreed to in writing, software
-    distributed under the License is distributed on an "AS IS" BASIS,
-    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-    See the License for the specific language governing permissions and
-    limitations under the License.
- -->
-<project name="expressions" default="default">
-
-  <description>
-    Dynamically computed values to sort/facet/search on based on a pluggable grammar.
-  </description>
-
-  <import file="../module-build.xml"/>
-
-  <path id="classpath">
-    <path refid="base.classpath"/>
-    <fileset dir="lib"/>
-  </path>
-
-  <path id="test.classpath">
-    <path refid="test.base.classpath"/>
-    <fileset dir="lib"/>
-    <pathelement path="src/test-files"/>
-  </path>
-
-  <target name="regenerate" depends="run-antlr"/>
-
-  <target name="resolve-antlr" xmlns:ivy="antlib:org.apache.ivy.ant">
-    <ivy:cachepath organisation="org.antlr" module="antlr4" revision="4.5.1-1"
-                  inline="true" conf="default" type="jar" pathid="antlr.classpath"/>
-  </target>
-
-  <target name="run-antlr" depends="resolve-antlr">
-    <regen-grammar package="js" grammar="Javascript"/>
-  </target>
-  
-  <macrodef name="replace-value">
-    <attribute name="value" />
-    <attribute name="property" />
-    <attribute name="from" />
-    <attribute name="to" />
-    <sequential>
-      <loadresource property="@{property}">
-        <string value="@{value}"/>
-        <filterchain>
-          <tokenfilter>
-            <filetokenizer/>
-            <replacestring from="@{from}" to="@{to}"/>
-          </tokenfilter>
-        </filterchain>
-      </loadresource>
-    </sequential>
-  </macrodef>
-
-  <macrodef name="regen-grammar">
-    <attribute name="package" />
-    <attribute name="grammar" />
-    <sequential>
-      <local name="grammar.path"/>
-      <patternset id="grammar.@{grammar}.patternset">
-        <include name="@{grammar}Lexer.java" />
-        <include name="@{grammar}Parser.java" />
-        <include name="@{grammar}Visitor.java" />
-        <include name="@{grammar}BaseVisitor.java" />
-      </patternset>
-      <property name="grammar.path" location="src/java/org/apache/lucene/expressions/@{package}"/>
-      <!-- delete parser and lexer so files will be generated -->
-      <delete dir="${grammar.path}">
-        <patternset refid="grammar.@{grammar}.patternset"/>
-      </delete>
-      <!-- invoke ANTLR4 -->
-      <java classname="org.antlr.v4.Tool" fork="true" failonerror="true" classpathref="antlr.classpath" taskname="antlr">
-        <sysproperty key="file.encoding" value="UTF-8"/>
-        <sysproperty key="user.language" value="en"/>
-        <sysproperty key="user.country" value="US"/>
-        <sysproperty key="user.variant" value=""/>
-        <arg value="-package"/>
-        <arg value="org.apache.lucene.expressions.@{package}"/>
-        <arg value="-no-listener"/>
-        <arg value="-visitor"/>
-        <arg value="-o"/>
-        <arg path="${grammar.path}"/>
-        <arg path="${grammar.path}/@{grammar}.g4"/>
-      </java>
-      <!-- fileset with files to edit -->
-      <fileset id="grammar.fileset" dir="${grammar.path}">
-        <patternset refid="grammar.@{grammar}.patternset"/>
-      </fileset>
-      <!-- remove files that are not needed to compile or at runtime -->
-      <delete dir="${grammar.path}" includes="@{grammar}*.tokens"/>
-      <!-- make the generated classes package private -->
-      <replaceregexp match="public ((interface|class) \Q@{grammar}\E\w+)" replace="\1" encoding="UTF-8">
-        <fileset refid="grammar.fileset"/>
-      </replaceregexp>
-      <!-- nuke timestamps/filenames in generated files -->
-      <replaceregexp match="\Q// Generated from \E.*" replace="\/\/ ANTLR GENERATED CODE: DO NOT EDIT" encoding="UTF-8">
-        <fileset refid="grammar.fileset"/>
-      </replaceregexp>
-      <!-- remove tabs in antlr generated files -->
-      <replaceregexp match="\t" flags="g" replace="  " encoding="UTF-8">
-        <fileset refid="grammar.fileset"/>
-      </replaceregexp>
-      <!-- fix line endings -->
-      <fixcrlf srcdir="${grammar.path}">
-        <patternset refid="grammar.@{grammar}.patternset"/>
-      </fixcrlf>
-    </sequential>
-  </macrodef>
-</project>
diff --git a/lucene/expressions/ivy.xml b/lucene/expressions/ivy.xml
deleted file mode 100644
index 6065522..0000000
--- a/lucene/expressions/ivy.xml
+++ /dev/null
@@ -1,30 +0,0 @@
-<!--
-   Licensed to the Apache Software Foundation (ASF) under one
-   or more contributor license agreements.  See the NOTICE file
-   distributed with this work for additional information
-   regarding copyright ownership.  The ASF licenses this file
-   to you under the Apache License, Version 2.0 (the
-   "License"); you may not use this file except in compliance
-   with the License.  You may obtain a copy of the License at
-
-     http://www.apache.org/licenses/LICENSE-2.0
-
-   Unless required by applicable law or agreed to in writing,
-   software distributed under the License is distributed on an
-   "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-   KIND, either express or implied.  See the License for the
-   specific language governing permissions and limitations
-   under the License.    
--->
-<ivy-module version="2.0">
-  <info organisation="org.apache.lucene" module="expressions"/>
-  <configurations defaultconfmapping="compile->master">
-    <conf name="compile" transitive="false"/>
-  </configurations>
-  <dependencies>
-    <dependency org="org.antlr" name="antlr4-runtime" rev="${/org.antlr/antlr4-runtime}" conf="compile"/>
-    <dependency org="org.ow2.asm" name="asm" rev="${/org.ow2.asm/asm}" conf="compile"/>
-    <dependency org="org.ow2.asm" name="asm-commons" rev="${/org.ow2.asm/asm-commons}" conf="compile"/>
-    <exclude org="*" ext="*" matcher="regexp" type="${ivy.exclude.types}"/>
-  </dependencies>
-</ivy-module>
diff --git a/lucene/facet/build.xml b/lucene/facet/build.xml
deleted file mode 100644
index 2fc2f9a..0000000
--- a/lucene/facet/build.xml
+++ /dev/null
@@ -1,47 +0,0 @@
-<?xml version="1.0"?>
-
-<!--
-    Licensed to the Apache Software Foundation (ASF) under one or more
-    contributor license agreements.  See the NOTICE file distributed with
-    this work for additional information regarding copyright ownership.
-    The ASF licenses this file to You under the Apache License, Version 2.0
-    the "License"); you may not use this file except in compliance with
-    the License.  You may obtain a copy of the License at
- 
-        http://www.apache.org/licenses/LICENSE-2.0
- 
-    Unless required by applicable law or agreed to in writing, software
-    distributed under the License is distributed on an "AS IS" BASIS,
-    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-    See the License for the specific language governing permissions and
-    limitations under the License.
- -->
-
-<project name="facet" default="default">
-
-  <description>
-    Faceted indexing and search capabilities
-  </description>
-
-  <import file="../module-build.xml"/>
-
-  <path id="classpath">
-    <path refid="base.classpath"/>
-    <fileset dir="lib"/>
-  </path>
-
-  <path id="test.classpath">
-    <pathelement path="${queries.jar}"/>
-    <path refid="test.base.classpath"/>
-  </path>
-
-  <target name="compile-core" depends="jar-queries,common.compile-core"/>
-
-  <target name="run-encoding-benchmark" depends="compile-test">
-    <java classname="org.apache.lucene.util.encoding.EncodingSpeed" fork="true" failonerror="true">
-      <classpath refid="test.classpath" />
-      <classpath path="${build.dir}/classes/test" />
-    </java>
-  </target>
-
-</project>
diff --git a/lucene/facet/ivy.xml b/lucene/facet/ivy.xml
deleted file mode 100644
index 249c78e..0000000
--- a/lucene/facet/ivy.xml
+++ /dev/null
@@ -1,27 +0,0 @@
-<!--
-   Licensed to the Apache Software Foundation (ASF) under one
-   or more contributor license agreements.  See the NOTICE file
-   distributed with this work for additional information
-   regarding copyright ownership.  The ASF licenses this file
-   to you under the Apache License, Version 2.0 (the
-   "License"); you may not use this file except in compliance
-   with the License.  You may obtain a copy of the License at
-
-     http://www.apache.org/licenses/LICENSE-2.0
-
-   Unless required by applicable law or agreed to in writing,
-   software distributed under the License is distributed on an
-   "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-   KIND, either express or implied.  See the License for the
-   specific language governing permissions and limitations
-   under the License.    
--->
-<ivy-module version="2.0">
-  <info organisation="org.apache.lucene" module="facet"/>
-  <configurations defaultconfmapping="compile->master">
-    <conf name="compile" transitive="false"/>
-  </configurations>
-  <dependencies>
-    <dependency org="com.carrotsearch" name="hppc" rev="${/com.carrotsearch/hppc}" conf="compile"/>
-  </dependencies>
-</ivy-module>
diff --git a/lucene/grouping/build.xml b/lucene/grouping/build.xml
deleted file mode 100644
index 5fb1fd1..0000000
--- a/lucene/grouping/build.xml
+++ /dev/null
@@ -1,49 +0,0 @@
-<?xml version="1.0"?>
-
-<!--
-    Licensed to the Apache Software Foundation (ASF) under one or more
-    contributor license agreements.  See the NOTICE file distributed with
-    this work for additional information regarding copyright ownership.
-    The ASF licenses this file to You under the Apache License, Version 2.0
-    the "License"); you may not use this file except in compliance with
-    the License.  You may obtain a copy of the License at
-
-        http://www.apache.org/licenses/LICENSE-2.0
-
-    Unless required by applicable law or agreed to in writing, software
-    distributed under the License is distributed on an "AS IS" BASIS,
-    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-    See the License for the specific language governing permissions and
-    limitations under the License.
- -->
-
-<project name="grouping" default="default">
-  
-    <description>
-       Collectors for grouping search results.
-    </description>
-
-    <import file="../module-build.xml"/>
-
-    <path id="test.classpath">
-      <path refid="test.base.classpath" />
-      <pathelement path="${queries.jar}" />
-    </path>
-
-    <path id="classpath">
-      <pathelement path="${queries.jar}" />
-      <path refid="base.classpath"/>
-    </path>
-
-    <target name="init" depends="module-build.init,jar-queries"/>
-
-    <target name="javadocs" depends="javadocs-queries,compile-core,check-javadocs-uptodate"
-            unless="javadocs-uptodate-${name}">
-      <invoke-module-javadoc>
-        <links>
-          <link href="../queries"/>
-        </links>
-      </invoke-module-javadoc>
-    </target>
-
-</project>
diff --git a/lucene/grouping/ivy.xml b/lucene/grouping/ivy.xml
deleted file mode 100644
index d7e4eb3..0000000
--- a/lucene/grouping/ivy.xml
+++ /dev/null
@@ -1,21 +0,0 @@
-<!--
-   Licensed to the Apache Software Foundation (ASF) under one
-   or more contributor license agreements.  See the NOTICE file
-   distributed with this work for additional information
-   regarding copyright ownership.  The ASF licenses this file
-   to you under the Apache License, Version 2.0 (the
-   "License"); you may not use this file except in compliance
-   with the License.  You may obtain a copy of the License at
-
-     http://www.apache.org/licenses/LICENSE-2.0
-
-   Unless required by applicable law or agreed to in writing,
-   software distributed under the License is distributed on an
-   "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-   KIND, either express or implied.  See the License for the
-   specific language governing permissions and limitations
-   under the License.    
--->
-<ivy-module version="2.0">
-  <info organisation="org.apache.lucene" module="grouping"/>
-</ivy-module>
diff --git a/lucene/highlighter/build.gradle b/lucene/highlighter/build.gradle
index 6e105d5..6bd8426 100644
--- a/lucene/highlighter/build.gradle
+++ b/lucene/highlighter/build.gradle
@@ -28,4 +28,5 @@
 
   testImplementation project(':lucene:test-framework')
   testImplementation project(':lucene:analysis:common')
+  testImplementation project(':lucene:queryparser')
 }
diff --git a/lucene/highlighter/build.xml b/lucene/highlighter/build.xml
deleted file mode 100644
index 6ecd793..0000000
--- a/lucene/highlighter/build.xml
+++ /dev/null
@@ -1,54 +0,0 @@
-<?xml version="1.0"?>
-
-<!--
-    Licensed to the Apache Software Foundation (ASF) under one or more
-    contributor license agreements.  See the NOTICE file distributed with
-    this work for additional information regarding copyright ownership.
-    The ASF licenses this file to You under the Apache License, Version 2.0
-    the "License"); you may not use this file except in compliance with
-    the License.  You may obtain a copy of the License at
- 
-        http://www.apache.org/licenses/LICENSE-2.0
- 
-    Unless required by applicable law or agreed to in writing, software
-    distributed under the License is distributed on an "AS IS" BASIS,
-    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-    See the License for the specific language governing permissions and
-    limitations under the License.
- -->
-
-<project name="highlighter" default="default">
-
-  <description>
-    Highlights search keywords in results
-  </description>
-
-  <!-- some files for testing that do not have license headers -->
-  <property name="rat.excludes" value="**/*.utf8"/>
-
-  <import file="../module-build.xml"/>
-
-  <path id="classpath">
-    <pathelement path="${memory.jar}"/>
-    <pathelement path="${queries.jar}"/>
-    <path refid="base.classpath"/>
-  </path>
-
-  <path id="test.classpath">
-    <pathelement path="${memory.jar}"/>
-    <pathelement path="${queries.jar}"/>
-    <pathelement path="${analyzers-common.jar}"/>
-    <path refid="test.base.classpath"/>
-  </path>
-
-  <target name="compile-core" depends="jar-memory,jar-queries,jar-join,jar-analyzers-common,common.compile-core" />
-
-  <target name="javadocs" depends="javadocs-memory,compile-core,check-javadocs-uptodate"
-          unless="javadocs-uptodate-${name}">
-    <invoke-module-javadoc>
-      <links>
-        <link href="../memory"/>
-      </links>
-    </invoke-module-javadoc>
-  </target>
-</project>
diff --git a/lucene/highlighter/ivy.xml b/lucene/highlighter/ivy.xml
deleted file mode 100644
index 262fa53..0000000
--- a/lucene/highlighter/ivy.xml
+++ /dev/null
@@ -1,21 +0,0 @@
-<!--
-   Licensed to the Apache Software Foundation (ASF) under one
-   or more contributor license agreements.  See the NOTICE file
-   distributed with this work for additional information
-   regarding copyright ownership.  The ASF licenses this file
-   to you under the Apache License, Version 2.0 (the
-   "License"); you may not use this file except in compliance
-   with the License.  You may obtain a copy of the License at
-
-     http://www.apache.org/licenses/LICENSE-2.0
-
-   Unless required by applicable law or agreed to in writing,
-   software distributed under the License is distributed on an
-   "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-   KIND, either express or implied.  See the License for the
-   specific language governing permissions and limitations
-   under the License.    
--->
-<ivy-module version="2.0">
-  <info organisation="org.apache.lucene" module="highlighter"/>
-</ivy-module>
diff --git a/lucene/highlighter/src/java/org/apache/lucene/search/matchhighlight/BreakIteratorShrinkingAdjuster.java b/lucene/highlighter/src/java/org/apache/lucene/search/matchhighlight/BreakIteratorShrinkingAdjuster.java
new file mode 100644
index 0000000..21166da
--- /dev/null
+++ b/lucene/highlighter/src/java/org/apache/lucene/search/matchhighlight/BreakIteratorShrinkingAdjuster.java
@@ -0,0 +1,86 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.lucene.search.matchhighlight;
+
+import java.text.BreakIterator;
+import java.util.Locale;
+
+/**
+ * A {@link PassageAdjuster} that adjusts the {@link Passage} range to
+ * word boundaries hinted by the given {@link BreakIterator}.
+ */
+public class BreakIteratorShrinkingAdjuster implements PassageAdjuster {
+  private final BreakIterator bi;
+  private CharSequence value;
+
+  public BreakIteratorShrinkingAdjuster() {
+    this(BreakIterator.getWordInstance(Locale.ROOT));
+  }
+
+  public BreakIteratorShrinkingAdjuster(BreakIterator bi) {
+    this.bi = bi;
+  }
+
+  @Override
+  public void currentValue(CharSequence value) {
+    this.value = value;
+    bi.setText(new CharSequenceIterator(value));
+  }
+
+  @Override
+  public OffsetRange adjust(Passage passage) {
+    int from = passage.from;
+    if (from > 0) {
+      while (!bi.isBoundary(from)
+          || (from < value.length() && Character.isWhitespace(value.charAt(from)))) {
+        from = bi.following(from);
+        if (from == BreakIterator.DONE) {
+          from = passage.from;
+          break;
+        }
+      }
+      if (from == value.length()) {
+        from = passage.from;
+      }
+    }
+
+    int to = passage.to;
+    if (to != value.length()) {
+      while (!bi.isBoundary(to) || (to > 0 && Character.isWhitespace(value.charAt(to - 1)))) {
+        to = bi.preceding(to);
+        if (to == BreakIterator.DONE) {
+          to = passage.to;
+          break;
+        }
+      }
+      if (to == 0) {
+        to = passage.to;
+      }
+    }
+
+    for (OffsetRange r : passage.markers) {
+      from = Math.min(from, r.from);
+      to = Math.max(to, r.to);
+    }
+
+    if (from > to) {
+      from = to;
+    }
+
+    return new OffsetRange(from, to);
+  }
+}
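
A minimal usage sketch for the adjuster above (the demo class and input values are hypothetical, not part of the patch). It shows the class shrinking a passage so that it starts and ends on word boundaries:

    // Hypothetical demo; assumes it compiles in the matchhighlight package
    // alongside BreakIteratorShrinkingAdjuster, Passage and OffsetRange.
    package org.apache.lucene.search.matchhighlight;

    import java.util.Collections;

    public class BreakIteratorShrinkingAdjusterDemo {
      public static void main(String[] args) {
        CharSequence value = "The quick brown fox";
        BreakIteratorShrinkingAdjuster adjuster = new BreakIteratorShrinkingAdjuster();
        adjuster.currentValue(value);

        // The start bound (6) falls inside "quick"; the adjuster moves it forward
        // to the next word boundary. The end bound (15) is already a boundary.
        Passage passage = new Passage(6, 15, Collections.emptyList());
        OffsetRange adjusted = adjuster.adjust(passage);
        System.out.println(adjusted); // prints [from=10, to=15], i.e. "brown"
      }
    }
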
diff --git a/lucene/highlighter/src/java/org/apache/lucene/search/matchhighlight/CharSequenceIterator.java b/lucene/highlighter/src/java/org/apache/lucene/search/matchhighlight/CharSequenceIterator.java
new file mode 100644
index 0000000..701717f
--- /dev/null
+++ b/lucene/highlighter/src/java/org/apache/lucene/search/matchhighlight/CharSequenceIterator.java
@@ -0,0 +1,104 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.lucene.search.matchhighlight;
+
+import java.text.CharacterIterator;
+
+/**
+ * A {@link CharacterIterator} over a {@link CharSequence}.
+ */
+final class CharSequenceIterator implements CharacterIterator {
+  private final CharSequence text;
+
+  private int begin;
+  private int end;
+  private int pos;
+
+  public CharSequenceIterator(CharSequence text) {
+    this.text = text;
+    this.begin = 0;
+    this.end = text.length();
+  }
+
+  public char first() {
+    pos = begin;
+    return current();
+  }
+
+  public char last() {
+    if (end != begin) {
+      pos = end - 1;
+    } else {
+      pos = end;
+    }
+    return current();
+  }
+
+  public char setIndex(int p) {
+    if (p < begin || p > end) throw new IllegalArgumentException("Invalid index");
+    pos = p;
+    return current();
+  }
+
+  public char current() {
+    if (pos >= begin && pos < end) {
+      return text.charAt(pos);
+    } else {
+      return DONE;
+    }
+  }
+
+  public char next() {
+    if (pos < end - 1) {
+      pos++;
+      return text.charAt(pos);
+    } else {
+      pos = end;
+      return DONE;
+    }
+  }
+
+  public char previous() {
+    if (pos > begin) {
+      pos--;
+      return text.charAt(pos);
+    } else {
+      return DONE;
+    }
+  }
+
+  public int getBeginIndex() {
+    return begin;
+  }
+
+  public int getEndIndex() {
+    return end;
+  }
+
+  public int getIndex() {
+    return pos;
+  }
+
+  @Override
+  public Object clone() {
+    try {
+      return super.clone();
+    } catch (CloneNotSupportedException e) {
+      throw new RuntimeException(e);
+    }
+  }
+}
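
A brief sketch of why this adapter matters (the demo class is hypothetical): it lets a java.text.BreakIterator consume any CharSequence, such as a reusable StringBuilder, without first copying it into a String. The class is package-private, so it is reached through in-package users such as BreakIteratorShrinkingAdjuster:

    // Hypothetical demo; assumes the matchhighlight package.
    package org.apache.lucene.search.matchhighlight;

    import java.text.BreakIterator;
    import java.util.Locale;

    class CharSequenceIteratorDemo {
      public static void main(String[] args) {
        StringBuilder buffer = new StringBuilder("one two three");
        BreakIterator words = BreakIterator.getWordInstance(Locale.ROOT);
        words.setText(new CharSequenceIterator(buffer));
        // Walks the word boundaries of the buffer: 0, 3, 4, 7, 8, 13.
        for (int b = words.first(); b != BreakIterator.DONE; b = words.next()) {
          System.out.println(b);
        }
      }
    }
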
diff --git a/lucene/highlighter/src/java/org/apache/lucene/search/matchhighlight/MatchRegionRetriever.java b/lucene/highlighter/src/java/org/apache/lucene/search/matchhighlight/MatchRegionRetriever.java
new file mode 100644
index 0000000..16c9a11
--- /dev/null
+++ b/lucene/highlighter/src/java/org/apache/lucene/search/matchhighlight/MatchRegionRetriever.java
@@ -0,0 +1,304 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.lucene.search.matchhighlight;
+
+import org.apache.lucene.analysis.Analyzer;
+import org.apache.lucene.document.Document;
+import org.apache.lucene.index.FieldInfo;
+import org.apache.lucene.index.FieldInfos;
+import org.apache.lucene.index.IndexReader;
+import org.apache.lucene.index.LeafReader;
+import org.apache.lucene.index.LeafReaderContext;
+import org.apache.lucene.search.IndexSearcher;
+import org.apache.lucene.search.Matches;
+import org.apache.lucene.search.MatchesIterator;
+import org.apache.lucene.search.Query;
+import org.apache.lucene.search.QueryVisitor;
+import org.apache.lucene.search.ScoreMode;
+import org.apache.lucene.search.TopDocs;
+import org.apache.lucene.search.Weight;
+
+import java.io.IOException;
+import java.util.Arrays;
+import java.util.HashMap;
+import java.util.HashSet;
+import java.util.Iterator;
+import java.util.List;
+import java.util.Map;
+import java.util.PrimitiveIterator;
+import java.util.Set;
+import java.util.TreeMap;
+import java.util.TreeSet;
+import java.util.function.Predicate;
+
+/**
+ * Utility class to compute a list of "match regions" for a given query, searcher and
+ * document(s) using the {@link Matches} API.
+ */
+public class MatchRegionRetriever {
+  private final List<LeafReaderContext> leaves;
+  private final Weight weight;
+  private final TreeSet<String> affectedFields;
+  private final Map<String, OffsetsRetrievalStrategy> offsetStrategies;
+  private final Set<String> preloadFields;
+
+  /**
+   * A callback for accepting a single document (and its associated leaf reader, leaf document ID)
+   * and its match offset ranges, as indicated by the {@link Matches} interface retrieved for
+   * the query.
+   */
+  @FunctionalInterface
+  public interface MatchOffsetsConsumer {
+    void accept(int docId, LeafReader leafReader, int leafDocId, Map<String, List<OffsetRange>> hits)
+        throws IOException;
+  }
+
+  /**
+   * An abstraction that provides document values for a given field. The default implementation,
+   * {@link DocumentFieldValueProvider}, simply reads values from a preloaded {@link Document}. A
+   * more efficient implementation could be built on top of a reusable character buffer
+   * (reusing the buffer while retrieving hit regions for consecutive documents).
+   */
+  @FunctionalInterface
+  public interface FieldValueProvider {
+    List<CharSequence> getValues(String field);
+  }
+
+  /**
+   * A constructor with the default offset strategy supplier.
+   */
+  public MatchRegionRetriever(IndexSearcher searcher, Query query, Analyzer analyzer) throws IOException {
+    this(searcher, query, analyzer, computeOffsetRetrievalStrategies(searcher.getIndexReader(), analyzer));
+  }
+
+  /**
+   * @param searcher Index searcher to be used for retrieving matches.
+   * @param query The query for which matches should be retrieved. The query should be rewritten
+   *              against the provided searcher.
+   * @param analyzer An analyzer that may be used to reprocess (retokenize) document fields
+   *                 in the absence of position offsets in the index. Note that the analyzer must return
+   *                 tokens (positions and offsets) identical to the ones stored in the index.
+   * @param fieldOffsetStrategySupplier A custom supplier of per-field {@link OffsetsRetrievalStrategy}
+   *                                    instances.
+   */
+  public MatchRegionRetriever(IndexSearcher searcher, Query query, Analyzer analyzer,
+                              OffsetsRetrievalStrategySupplier fieldOffsetStrategySupplier)
+      throws IOException {
+    leaves = searcher.getIndexReader().leaves();
+    assert checkOrderConsistency(leaves);
+
+    // We need full scoring mode so that we can receive matches from all sub-clauses
+    // (no optimizations in Boolean queries take place).
+    weight = searcher.createWeight(query, ScoreMode.COMPLETE, 0);
+
+    // Compute the subset of fields affected by this query so that we don't load or scan
+    // fields that are irrelevant.
+    affectedFields = new TreeSet<>();
+    query.visit(
+        new QueryVisitor() {
+          @Override
+          public boolean acceptField(String field) {
+            affectedFields.add(field);
+            return false;
+          }
+        });
+
+    // Compute value offset retrieval strategy for all affected fields.
+    offsetStrategies = new HashMap<>();
+    for (String field : affectedFields) {
+      offsetStrategies.put(field, fieldOffsetStrategySupplier.apply(field));
+    }
+
+    // Ask offset strategies if they'll need field values.
+    preloadFields = new HashSet<>();
+    offsetStrategies.forEach(
+        (field, strategy) -> {
+          if (strategy.requiresDocument()) {
+            preloadFields.add(field);
+          }
+        });
+
+    // Only preload those field values that can be affected by the query and are required
+    // by strategies.
+    preloadFields.retainAll(affectedFields);
+  }
+
+  public void highlightDocuments(TopDocs topDocs, MatchOffsetsConsumer consumer) throws IOException {
+    highlightDocuments(Arrays.stream(topDocs.scoreDocs)
+        .mapToInt(scoreDoc -> scoreDoc.doc)
+        .sorted()
+        .iterator(), consumer);
+  }
+
+  /**
+   * Low-level, high-efficiency method for highlighting large numbers of documents at once in a
+   * streaming fashion.
+   *
+   * @param docIds A stream of <em>sorted</em> document identifiers for which hit ranges should
+   *               be returned.
+   * @param consumer A streaming consumer for document-hits pairs.
+   */
+  public void highlightDocuments(PrimitiveIterator.OfInt docIds, MatchOffsetsConsumer consumer)
+      throws IOException {
+    if (leaves.isEmpty()) {
+      return;
+    }
+
+    Iterator<LeafReaderContext> ctx = leaves.iterator();
+    LeafReaderContext currentContext = ctx.next();
+    int previousDocId = -1;
+    Map<String, List<OffsetRange>> highlights = new TreeMap<>();
+    while (docIds.hasNext()) {
+      int docId = docIds.nextInt();
+
+      if (docId < previousDocId) {
+        throw new RuntimeException("Input document IDs must be sorted (increasing).");
+      }
+      previousDocId = docId;
+
+      while (docId >= currentContext.docBase + currentContext.reader().maxDoc()) {
+        currentContext = ctx.next();
+      }
+
+      int contextRelativeDocId = docId - currentContext.docBase;
+
+      // Only preload fields we may potentially need.
+      FieldValueProvider documentSupplier;
+      if (preloadFields.isEmpty()) {
+        documentSupplier = null;
+      } else {
+        Document doc = currentContext.reader().document(contextRelativeDocId, preloadFields);
+        documentSupplier = new DocumentFieldValueProvider(doc);
+      }
+
+      highlights.clear();
+      highlightDocument(currentContext, contextRelativeDocId, documentSupplier, (field) -> true, highlights);
+      consumer.accept(docId, currentContext.reader(), contextRelativeDocId, highlights);
+    }
+  }
+
+  /**
+   * Low-level method for retrieving hit ranges for a single document. This method can be used with
+   * custom document {@link FieldValueProvider}.
+   */
+  public void highlightDocument(
+      LeafReaderContext leafReaderContext,
+      int contextDocId,
+      FieldValueProvider doc,
+      Predicate<String> acceptField,
+      Map<String, List<OffsetRange>> outputHighlights)
+      throws IOException {
+    Matches matches = weight.matches(leafReaderContext, contextDocId);
+    if (matches == null) {
+      return;
+    }
+
+    for (String field : affectedFields) {
+      if (acceptField.test(field)) {
+        MatchesIterator matchesIterator = matches.getMatches(field);
+        if (matchesIterator == null) {
+          // No matches on this field, even though the field was part of the query. This can happen
+          // with complex queries that involve non-text fields (which have no "hit regions" in any
+          // textual representation). Skip such fields.
+        } else {
+          OffsetsRetrievalStrategy offsetStrategy = offsetStrategies.get(field);
+          if (offsetStrategy == null) {
+            throw new IOException(
+                "Non-empty matches but no offset retrieval strategy for field: " + field);
+          }
+          List<OffsetRange> ranges = offsetStrategy.get(matchesIterator, doc);
+          if (!ranges.isEmpty()) {
+            outputHighlights.put(field, ranges);
+          }
+        }
+      }
+    }
+  }
+
+  private boolean checkOrderConsistency(List<LeafReaderContext> leaves) {
+    for (int i = 1; i < leaves.size(); i++) {
+      LeafReaderContext prev = leaves.get(i - 1);
+      LeafReaderContext next = leaves.get(i);
+      assert prev.docBase <= next.docBase;
+      assert prev.docBase + prev.reader().maxDoc() == next.docBase;
+    }
+    return true;
+  }
+
+  /**
+   * Compute default strategies for retrieving offsets from {@link MatchesIterator}
+   * instances for a set of given fields.
+   */
+  public static OffsetsRetrievalStrategySupplier computeOffsetRetrievalStrategies(
+      IndexReader reader, Analyzer analyzer) {
+    FieldInfos fieldInfos = FieldInfos.getMergedFieldInfos(reader);
+    return (field) -> {
+      FieldInfo fieldInfo = fieldInfos.fieldInfo(field);
+      if (fieldInfo == null) {
+        return (mi, doc) -> {
+          throw new IOException("FieldInfo is null for field: " + field);
+        };
+      }
+
+      switch (fieldInfo.getIndexOptions()) {
+        case DOCS_AND_FREQS_AND_POSITIONS_AND_OFFSETS:
+          return new OffsetsFromMatchIterator(field);
+
+        case DOCS_AND_FREQS_AND_POSITIONS:
+          return new OffsetsFromPositions(field, analyzer);
+
+        case DOCS_AND_FREQS:
+        case DOCS:
+          // By default retrieve offsets from individual tokens
+          // retrieved by the analyzer (possibly narrowed down to
+          // only those terms that the query hinted at when passed
+          // a QueryVisitor).
+          //
+          // Alternative strategies are also possible and may make sense
+          // depending on the use case (OffsetsFromValues, for example).
+          return new OffsetsFromTokens(field, analyzer);
+
+        default:
+          return
+              (matchesIterator, doc) -> {
+                throw new IOException(
+                    "Field is indexed without positions and/or offsets: "
+                        + field
+                        + ", "
+                        + fieldInfo.getIndexOptions());
+              };
+      }
+    };
+  }
+
+  /**
+   * Implements {@link FieldValueProvider} wrapping a preloaded
+   * {@link Document}.
+   */
+  private static final class DocumentFieldValueProvider implements FieldValueProvider {
+    private final Document doc;
+
+    public DocumentFieldValueProvider(Document doc) {
+      this.doc = doc;
+    }
+
+    @Override
+    public List<CharSequence> getValues(String field) {
+      return Arrays.asList(doc.getValues(field));
+    }
+  }
+}
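
An end-to-end sketch of the flow above, assuming lucene-core on the classpath (the index setup, field name and query are illustrative, not part of the patch): index one document, search, and print the per-field hit ranges the retriever reports.

    // Hypothetical demo class.
    package org.apache.lucene.search.matchhighlight;

    import org.apache.lucene.analysis.standard.StandardAnalyzer;
    import org.apache.lucene.document.Document;
    import org.apache.lucene.document.Field;
    import org.apache.lucene.document.TextField;
    import org.apache.lucene.index.DirectoryReader;
    import org.apache.lucene.index.IndexWriter;
    import org.apache.lucene.index.IndexWriterConfig;
    import org.apache.lucene.index.Term;
    import org.apache.lucene.search.IndexSearcher;
    import org.apache.lucene.search.Query;
    import org.apache.lucene.search.TermQuery;
    import org.apache.lucene.search.TopDocs;
    import org.apache.lucene.store.ByteBuffersDirectory;
    import org.apache.lucene.store.Directory;

    public class MatchRegionRetrieverDemo {
      public static void main(String[] args) throws Exception {
        StandardAnalyzer analyzer = new StandardAnalyzer();
        try (Directory dir = new ByteBuffersDirectory()) {
          try (IndexWriter writer = new IndexWriter(dir, new IndexWriterConfig(analyzer))) {
            Document doc = new Document();
            // TextField indexes positions (no offsets), so OffsetsFromPositions
            // is selected and the stored value is re-analyzed.
            doc.add(new TextField("body", "the quick brown fox", Field.Store.YES));
            writer.addDocument(doc);
          }
          try (DirectoryReader reader = DirectoryReader.open(dir)) {
            IndexSearcher searcher = new IndexSearcher(reader);
            Query query = new TermQuery(new Term("body", "fox"));
            TopDocs topDocs = searcher.search(query, 10);

            MatchRegionRetriever retriever = new MatchRegionRetriever(searcher, query, analyzer);
            retriever.highlightDocuments(topDocs, (docId, leafReader, leafDocId, hits) -> {
              // hits maps field -> match ranges; here: body -> [[from=16, to=19]] ("fox").
              hits.forEach((field, ranges) -> System.out.println(field + " -> " + ranges));
            });
          }
        }
      }
    }
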
diff --git a/lucene/highlighter/src/java/org/apache/lucene/search/matchhighlight/OffsetRange.java b/lucene/highlighter/src/java/org/apache/lucene/search/matchhighlight/OffsetRange.java
new file mode 100644
index 0000000..7ed397c
--- /dev/null
+++ b/lucene/highlighter/src/java/org/apache/lucene/search/matchhighlight/OffsetRange.java
@@ -0,0 +1,65 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.lucene.search.matchhighlight;
+
+import java.util.Objects;
+
+/**
+ * A non-empty range of offset positions.
+ */
+public class OffsetRange {
+  /** Start index, inclusive. */
+  public final int from;
+
+  /** End index, exclusive. */
+  public final int to;
+
+  /**
+   * @param from Start index, inclusive.
+   * @param to End index, exclusive.
+   */
+  public OffsetRange(int from, int to) {
+    assert from <= to : "A non-empty offset range is required: " + from + "-" + to;
+    this.from = from;
+    this.to = to;
+  }
+
+  public int length() {
+    return to - from;
+  }
+
+  @Override
+  public String toString() {
+    return "[from=" + from + ", to=" + to + "]";
+  }
+
+  @Override
+  public boolean equals(Object other) {
+    if (other == this) return true;
+    if (other instanceof OffsetRange) {
+      OffsetRange that = (OffsetRange) other;
+      return from == that.from && to == that.to;
+    } else {
+      return false;
+    }
+  }
+
+  @Override
+  public int hashCode() {
+    return Objects.hash(from, to);
+  }
+}
diff --git a/lucene/highlighter/src/java/org/apache/lucene/search/matchhighlight/OffsetsFromMatchIterator.java b/lucene/highlighter/src/java/org/apache/lucene/search/matchhighlight/OffsetsFromMatchIterator.java
new file mode 100644
index 0000000..1352776
--- /dev/null
+++ b/lucene/highlighter/src/java/org/apache/lucene/search/matchhighlight/OffsetsFromMatchIterator.java
@@ -0,0 +1,49 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.lucene.search.matchhighlight;
+
+import org.apache.lucene.search.MatchesIterator;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.List;
+
+/**
+ * This strategy retrieves offsets directly from {@link MatchesIterator}.
+ */
+public final class OffsetsFromMatchIterator implements OffsetsRetrievalStrategy {
+  private final String field;
+
+  OffsetsFromMatchIterator(String field) {
+    this.field = field;
+  }
+
+  @Override
+  public List<OffsetRange> get(MatchesIterator matchesIterator, MatchRegionRetriever.FieldValueProvider doc)
+      throws IOException {
+    ArrayList<OffsetRange> ranges = new ArrayList<>();
+    while (matchesIterator.next()) {
+      int from = matchesIterator.startOffset();
+      int to = matchesIterator.endOffset();
+      if (from < 0 || to < 0) {
+        throw new IOException("Matches API returned negative offsets for field: " + field);
+      }
+      ranges.add(new OffsetRange(from, to));
+    }
+    return ranges;
+  }
+}
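
This strategy only applies when offsets were recorded in postings at index time. A sketch, using a hypothetical helper class that is not part of the patch, of a field type for which the default supplier in MatchRegionRetriever would select OffsetsFromMatchIterator:

    import org.apache.lucene.document.Field;
    import org.apache.lucene.document.FieldType;
    import org.apache.lucene.document.TextField;
    import org.apache.lucene.index.IndexOptions;

    class OffsetsFieldExample { // hypothetical helper
      static Field withOffsets(String name, String value) {
        FieldType type = new FieldType(TextField.TYPE_STORED);
        type.setIndexOptions(IndexOptions.DOCS_AND_FREQS_AND_POSITIONS_AND_OFFSETS);
        type.freeze();
        // Matches on such a field report offsets directly from postings, so
        // OffsetsFromMatchIterator needs no re-analysis and no stored value.
        return new Field(name, value, type);
      }
    }
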
diff --git a/lucene/highlighter/src/java/org/apache/lucene/search/matchhighlight/OffsetsFromPositions.java b/lucene/highlighter/src/java/org/apache/lucene/search/matchhighlight/OffsetsFromPositions.java
new file mode 100644
index 0000000..8eb932f
--- /dev/null
+++ b/lucene/highlighter/src/java/org/apache/lucene/search/matchhighlight/OffsetsFromPositions.java
@@ -0,0 +1,154 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.lucene.search.matchhighlight;
+
+import org.apache.lucene.analysis.Analyzer;
+import org.apache.lucene.analysis.TokenStream;
+import org.apache.lucene.analysis.tokenattributes.OffsetAttribute;
+import org.apache.lucene.analysis.tokenattributes.PositionIncrementAttribute;
+import org.apache.lucene.search.MatchesIterator;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+
+/**
+ * This strategy applies to fields with stored positions but no offsets. We re-analyze
+ * the field's value to recover the offsets of match positions.
+ * <p>
+ * Note that this may fail if index data (positions stored in the index) is out of sync
+ * with the field values or the analyzer. This strategy assumes that never happens.
+ */
+public final class OffsetsFromPositions implements OffsetsRetrievalStrategy {
+  private final String field;
+  private final Analyzer analyzer;
+
+  OffsetsFromPositions(String field, Analyzer analyzer) {
+    this.field = field;
+    this.analyzer = analyzer;
+  }
+
+  @Override
+  public List<OffsetRange> get(MatchesIterator matchesIterator, MatchRegionRetriever.FieldValueProvider doc)
+      throws IOException {
+    ArrayList<OffsetRange> ranges = new ArrayList<>();
+    while (matchesIterator.next()) {
+      int from = matchesIterator.startPosition();
+      int to = matchesIterator.endPosition();
+      if (from < 0 || to < 0) {
+        throw new IOException("Matches API returned negative positions for field: " + field);
+      }
+      ranges.add(new OffsetRange(from, to));
+    }
+
+    // Convert from positions to offsets.
+    ranges = convertPositionsToOffsets(ranges, analyzer, field, doc.getValues(field));
+
+    return ranges;
+  }
+
+  @Override
+  public boolean requiresDocument() {
+    return true;
+  }
+
+  private static ArrayList<OffsetRange> convertPositionsToOffsets(
+      ArrayList<OffsetRange> ranges,
+      Analyzer analyzer,
+      String fieldName,
+      List<CharSequence> values)
+      throws IOException {
+
+    if (ranges.isEmpty()) {
+      return ranges;
+    }
+
+    class LeftRight {
+      int left = Integer.MAX_VALUE;
+      int right = Integer.MIN_VALUE;
+
+      @Override
+      public String toString() {
+        return "[" + "L: " + left + ", R: " + right + ']';
+      }
+    }
+
+    Map<Integer, LeftRight> requiredPositionSpans = new HashMap<>();
+    int minPosition = Integer.MAX_VALUE;
+    int maxPosition = Integer.MIN_VALUE;
+    for (OffsetRange range : ranges) {
+      requiredPositionSpans.computeIfAbsent(range.from, (key) -> new LeftRight());
+      requiredPositionSpans.computeIfAbsent(range.to, (key) -> new LeftRight());
+      minPosition = Math.min(minPosition, range.from);
+      maxPosition = Math.max(maxPosition, range.to);
+    }
+
+    int position = -1;
+    int valueOffset = 0;
+    for (int valueIndex = 0, max = values.size(); valueIndex < max; valueIndex++) {
+      final String value = values.get(valueIndex).toString();
+      final boolean lastValue = valueIndex + 1 == max;
+
+      TokenStream ts = analyzer.tokenStream(fieldName, value);
+      OffsetAttribute offsetAttr = ts.getAttribute(OffsetAttribute.class);
+      PositionIncrementAttribute posAttr = ts.getAttribute(PositionIncrementAttribute.class);
+      ts.reset();
+      while (ts.incrementToken()) {
+        position += posAttr.getPositionIncrement();
+
+        if (position >= minPosition) {
+          LeftRight leftRight = requiredPositionSpans.get(position);
+          if (leftRight != null) {
+            int startOffset = valueOffset + offsetAttr.startOffset();
+            int endOffset = valueOffset + offsetAttr.endOffset();
+
+            leftRight.left = Math.min(leftRight.left, startOffset);
+            leftRight.right = Math.max(leftRight.right, endOffset);
+          }
+
+          // Only short-circuit if we're on the last value (which should be the common
+          // case since most fields would only have a single value anyway). We need
+          // to make sure of this because otherwise offsetAttr would hold an incorrect value.
+          if (position > maxPosition && lastValue) {
+            break;
+          }
+        }
+      }
+      ts.end();
+      position += posAttr.getPositionIncrement() + analyzer.getPositionIncrementGap(fieldName);
+      valueOffset += offsetAttr.endOffset() + analyzer.getOffsetGap(fieldName);
+      ts.close();
+    }
+
+    ArrayList<OffsetRange> converted = new ArrayList<>();
+    for (OffsetRange range : ranges) {
+      LeftRight left = requiredPositionSpans.get(range.from);
+      LeftRight right = requiredPositionSpans.get(range.to);
+      if (left == null
+          || right == null
+          || left.left == Integer.MAX_VALUE
+          || right.right == Integer.MIN_VALUE) {
+        throw new RuntimeException("Position not properly initialized for range: " + range);
+      }
+      converted.add(new OffsetRange(left.left, right.right));
+    }
+
+    return converted;
+  }
+}
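
A small fragment showing where this strategy slots in; reader, analyzer and the field name "body" are assumptions, not part of the patch:

    // For a field indexed with DOCS_AND_FREQS_AND_POSITIONS (e.g. a standard
    // TextField), the default supplier hands back OffsetsFromPositions, and
    // requiresDocument() signals that stored values must be preloaded.
    OffsetsRetrievalStrategySupplier defaults =
        MatchRegionRetriever.computeOffsetRetrievalStrategies(reader, analyzer);
    OffsetsRetrievalStrategy strategy = defaults.apply("body");
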
diff --git a/lucene/highlighter/src/java/org/apache/lucene/search/matchhighlight/OffsetsFromTokens.java b/lucene/highlighter/src/java/org/apache/lucene/search/matchhighlight/OffsetsFromTokens.java
new file mode 100644
index 0000000..85c4f40
--- /dev/null
+++ b/lucene/highlighter/src/java/org/apache/lucene/search/matchhighlight/OffsetsFromTokens.java
@@ -0,0 +1,97 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.lucene.search.matchhighlight;
+
+import org.apache.lucene.analysis.Analyzer;
+import org.apache.lucene.analysis.TokenStream;
+import org.apache.lucene.analysis.tokenattributes.OffsetAttribute;
+import org.apache.lucene.analysis.tokenattributes.TermToBytesRefAttribute;
+import org.apache.lucene.index.Term;
+import org.apache.lucene.search.MatchesIterator;
+import org.apache.lucene.search.Query;
+import org.apache.lucene.search.QueryVisitor;
+import org.apache.lucene.util.BytesRef;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Set;
+
+/**
+ * This strategy works for fields where we know the match occurred but there are
+ * no known positions or offsets.
+ * <p>
+ * We re-analyze field values and return offset ranges for those tokens that
+ * also match the terms collected from the query.
+ */
+public final class OffsetsFromTokens implements OffsetsRetrievalStrategy {
+  private final String field;
+  private final Analyzer analyzer;
+
+  public OffsetsFromTokens(String field, Analyzer analyzer) {
+    this.field = field;
+    this.analyzer = analyzer;
+  }
+
+  @Override
+  public List<OffsetRange> get(MatchesIterator matchesIterator, MatchRegionRetriever.FieldValueProvider doc) throws IOException {
+    List<CharSequence> values = doc.getValues(field);
+
+    Set<BytesRef> matchTerms = new HashSet<>();
+    while (matchesIterator.next()) {
+      Query q = matchesIterator.getQuery();
+      q.visit(new QueryVisitor() {
+        @Override
+        public void consumeTerms(Query query, Term... terms) {
+          for (Term t : terms) {
+            if (field.equals(t.field())) {
+              matchTerms.add(t.bytes());
+            }
+          }
+        }
+      });
+    }
+
+    ArrayList<OffsetRange> ranges = new ArrayList<>();
+    int valueOffset = 0;
+    for (int valueIndex = 0, max = values.size(); valueIndex < max; valueIndex++) {
+      final String value = values.get(valueIndex).toString();
+
+      TokenStream ts = analyzer.tokenStream(field, value);
+      OffsetAttribute offsetAttr = ts.getAttribute(OffsetAttribute.class);
+      TermToBytesRefAttribute termAttr = ts.getAttribute(TermToBytesRefAttribute.class);
+      ts.reset();
+      while (ts.incrementToken()) {
+        if (matchTerms.contains(termAttr.getBytesRef())) {
+          int startOffset = valueOffset + offsetAttr.startOffset();
+          int endOffset = valueOffset + offsetAttr.endOffset();
+          ranges.add(new OffsetRange(startOffset, endOffset));
+        }
+      }
+      ts.end();
+      valueOffset += offsetAttr.endOffset() + analyzer.getOffsetGap(field);
+      ts.close();
+    }
+    return ranges;
+  }
+
+  @Override
+  public boolean requiresDocument() {
+    return true;
+  }
+}
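
For completeness, a sketch (hypothetical helper, not part of the patch) of a stored field indexed with term-only postings, the case this strategy serves:

    import org.apache.lucene.document.Field;
    import org.apache.lucene.document.FieldType;
    import org.apache.lucene.document.TextField;
    import org.apache.lucene.index.IndexOptions;

    class DocsOnlyFieldExample { // hypothetical helper
      static Field docsOnly(String name, String value) {
        FieldType type = new FieldType(TextField.TYPE_STORED);
        type.setIndexOptions(IndexOptions.DOCS); // terms only: no freqs, positions, offsets
        type.freeze();
        // Matches can confirm a hit but carry no positions or offsets, so
        // OffsetsFromTokens re-analyzes the stored value and keeps the offsets
        // of tokens whose terms were collected from the query.
        return new Field(name, value, type);
      }
    }
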
diff --git a/lucene/highlighter/src/java/org/apache/lucene/search/matchhighlight/OffsetsFromValues.java b/lucene/highlighter/src/java/org/apache/lucene/search/matchhighlight/OffsetsFromValues.java
new file mode 100644
index 0000000..d32650d
--- /dev/null
+++ b/lucene/highlighter/src/java/org/apache/lucene/search/matchhighlight/OffsetsFromValues.java
@@ -0,0 +1,74 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.lucene.search.matchhighlight;
+
+import org.apache.lucene.analysis.Analyzer;
+import org.apache.lucene.analysis.TokenStream;
+import org.apache.lucene.analysis.tokenattributes.OffsetAttribute;
+import org.apache.lucene.search.MatchesIterator;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.List;
+
+/**
+ * This strategy works for fields where we know the match occurred but there are
+ * no known positions or offsets.
+ * <p>
+ * We re-analyze field values and return offset ranges for entire values
+ * (not individual tokens). Re-analysis is required because the analyzer may
+ * return an unknown offset gap between values.
+ */
+public final class OffsetsFromValues implements OffsetsRetrievalStrategy {
+  private final String field;
+  private final Analyzer analyzer;
+
+  public OffsetsFromValues(String field, Analyzer analyzer) {
+    this.field = field;
+    this.analyzer = analyzer;
+  }
+
+  @Override
+  public List<OffsetRange> get(MatchesIterator matchesIterator, MatchRegionRetriever.FieldValueProvider doc) throws IOException {
+    List<CharSequence> values = doc.getValues(field);
+
+    ArrayList<OffsetRange> ranges = new ArrayList<>();
+    int valueOffset = 0;
+    for (CharSequence charSequence : values) {
+      final String value = charSequence.toString();
+
+      TokenStream ts = analyzer.tokenStream(field, value);
+      OffsetAttribute offsetAttr = ts.getAttribute(OffsetAttribute.class);
+      ts.reset();
+      int startOffset = valueOffset;
+      while (ts.incrementToken()) {
+        // Consume all tokens so that the offset attribute advances to the end of the value.
+      }
+      ts.end();
+      valueOffset += offsetAttr.endOffset();
+      ranges.add(new OffsetRange(startOffset, valueOffset));
+      valueOffset += analyzer.getOffsetGap(field);
+      ts.close();
+    }
+    return ranges;
+  }
+
+  @Override
+  public boolean requiresDocument() {
+    return true;
+  }
+}
diff --git a/lucene/highlighter/src/java/org/apache/lucene/search/matchhighlight/OffsetsRetrievalStrategy.java b/lucene/highlighter/src/java/org/apache/lucene/search/matchhighlight/OffsetsRetrievalStrategy.java
new file mode 100644
index 0000000..62eb25f
--- /dev/null
+++ b/lucene/highlighter/src/java/org/apache/lucene/search/matchhighlight/OffsetsRetrievalStrategy.java
@@ -0,0 +1,42 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.lucene.search.matchhighlight;
+
+import org.apache.lucene.search.MatchesIterator;
+
+import java.io.IOException;
+import java.util.List;
+
+/**
+ * Determines how match offset regions are computed from {@link MatchesIterator}. Several
+ * possibilities exist, ranging from retrieving offsets directly from a match instance
+ * to re-evaluating the document's field and recomputing offsets from there.
+ */
+public interface OffsetsRetrievalStrategy {
+  /**
+   * Return value offsets (match ranges) acquired from the given {@link MatchesIterator}.
+   */
+  List<OffsetRange> get(MatchesIterator matchesIterator, MatchRegionRetriever.FieldValueProvider doc)
+      throws IOException;
+
+  /**
+   * Whether this strategy requires document field access.
+   */
+  default boolean requiresDocument() {
+    return false;
+  }
+}
diff --git a/lucene/highlighter/src/java/org/apache/lucene/search/matchhighlight/OffsetsRetrievalStrategySupplier.java b/lucene/highlighter/src/java/org/apache/lucene/search/matchhighlight/OffsetsRetrievalStrategySupplier.java
new file mode 100644
index 0000000..15f4c04
--- /dev/null
+++ b/lucene/highlighter/src/java/org/apache/lucene/search/matchhighlight/OffsetsRetrievalStrategySupplier.java
@@ -0,0 +1,26 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.lucene.search.matchhighlight;
+
+import java.util.function.Function;
+
+/**
+ * A per-field supplier of {@link OffsetsRetrievalStrategy}.
+ */
+@FunctionalInterface
+public interface OffsetsRetrievalStrategySupplier extends Function<String, OffsetsRetrievalStrategy> {
+}
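
Because the supplier is just a Function, a custom per-field strategy is a short lambda. A sketch, where searcher, query, analyzer and the "tags" field are assumptions:

    // Fragment; searcher, query and analyzer are assumed to exist.
    OffsetsRetrievalStrategySupplier defaults =
        MatchRegionRetriever.computeOffsetRetrievalStrategies(searcher.getIndexReader(), analyzer);
    OffsetsRetrievalStrategySupplier custom = (field) ->
        "tags".equals(field) ? new OffsetsFromValues(field, analyzer) : defaults.apply(field);
    MatchRegionRetriever retriever = new MatchRegionRetriever(searcher, query, analyzer, custom);
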
diff --git a/lucene/highlighter/src/java/org/apache/lucene/search/matchhighlight/Passage.java b/lucene/highlighter/src/java/org/apache/lucene/search/matchhighlight/Passage.java
new file mode 100644
index 0000000..9a4dc4b
--- /dev/null
+++ b/lucene/highlighter/src/java/org/apache/lucene/search/matchhighlight/Passage.java
@@ -0,0 +1,39 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.lucene.search.matchhighlight;
+
+import java.util.List;
+
+/**
+ * A passage is a scored fragment of source text, possibly with a list of sub-offsets (markers)
+ * to be highlighted. The markers can be overlapping or nested, but they're always contained within
+ * the passage.
+ */
+public class Passage extends OffsetRange {
+  public List<OffsetRange> markers;
+
+  public Passage(int from, int to, List<OffsetRange> markers) {
+    super(from, to);
+
+    this.markers = markers;
+  }
+
+  @Override
+  public String toString() {
+    return "[" + super.toString() + ", markers=" + markers + "]";
+  }
+}
diff --git a/lucene/highlighter/src/java/org/apache/lucene/search/matchhighlight/PassageAdjuster.java b/lucene/highlighter/src/java/org/apache/lucene/search/matchhighlight/PassageAdjuster.java
new file mode 100644
index 0000000..8415a2e
--- /dev/null
+++ b/lucene/highlighter/src/java/org/apache/lucene/search/matchhighlight/PassageAdjuster.java
@@ -0,0 +1,27 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.lucene.search.matchhighlight;
+
+/**
+ * Adjusts the range of one or more passages over a given value. For example,
+ * an adjuster could shift passage boundaries to the nearest word delimiter or
+ * whitespace character.
+ */
+public interface PassageAdjuster {
+  void currentValue(CharSequence value);
+  OffsetRange adjust(Passage p);
+}
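
A minimal sketch of an adjuster that shrinks passage boundaries to whitespace-delimited word boundaries; the whitespace heuristic is an assumption for illustration, and note that adjusters must only ever shrink a passage (the selector asserts this):

```java
PassageAdjuster toWordBoundaries = new PassageAdjuster() {
  private CharSequence value;

  @Override
  public void currentValue(CharSequence value) {
    this.value = value;
  }

  @Override
  public OffsetRange adjust(Passage p) {
    int from = p.from;
    int to = p.to;
    // Move the start forward until it sits at the beginning of a word.
    while (from < to && from > 0 && !Character.isWhitespace(value.charAt(from - 1))) {
      from++;
    }
    // Move the end backward until it sits at the end of a word.
    while (to > from && to < value.length() && !Character.isWhitespace(value.charAt(to))) {
      to--;
    }
    return new OffsetRange(from, to);
  }
};
```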
diff --git a/lucene/highlighter/src/java/org/apache/lucene/search/matchhighlight/PassageFormatter.java b/lucene/highlighter/src/java/org/apache/lucene/search/matchhighlight/PassageFormatter.java
new file mode 100644
index 0000000..4aef7aa
--- /dev/null
+++ b/lucene/highlighter/src/java/org/apache/lucene/search/matchhighlight/PassageFormatter.java
@@ -0,0 +1,214 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.lucene.search.matchhighlight;
+
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.Collections;
+import java.util.Comparator;
+import java.util.List;
+import java.util.RandomAccess;
+import java.util.function.Function;
+
+/**
+ * Formats a collection of {@linkplain Passage passages} over a given string,
+ * resolving overlaps, restricting output to the permitted sub-ranges of the
+ * input string, and applying length constraints.
+ *
+ * Passages are demarcated with constructor-provided ellipsis and start/end marker
+ * sequences.
+ */
+public class PassageFormatter {
+  private final String ellipsis;
+  private final Function<OffsetRange, String> markerStart;
+  private final Function<OffsetRange, String> markerEnd;
+
+  private final ArrayList<OffsetRange> markerStack = new ArrayList<>();
+
+  public PassageFormatter(String ellipsis, String markerStart, String markerEnd) {
+    this(ellipsis, (m) -> markerStart, (m) -> markerEnd);
+  }
+
+  public PassageFormatter(
+      String ellipsis,
+      Function<OffsetRange, String> markerStart,
+      Function<OffsetRange, String> markerEnd) {
+    this.ellipsis = ellipsis;
+    this.markerStart = markerStart;
+    this.markerEnd = markerEnd;
+  }
+
+  public List<String> format(CharSequence value, List<Passage> passages, List<OffsetRange> ranges) {
+    assert PassageSelector.sortedAndNonOverlapping(passages);
+    assert PassageSelector.sortedAndNonOverlapping(ranges);
+    assert ranges instanceof RandomAccess;
+
+    if (ranges.isEmpty()) {
+      return Collections.emptyList();
+    }
+
+    ArrayList<String> result = new ArrayList<>();
+    StringBuilder buf = new StringBuilder();
+
+    int rangeIndex = 0;
+    OffsetRange range = ranges.get(rangeIndex);
+    passageFormatting:
+    for (Passage passage : passages) {
+      // Move to the range of the current passage.
+      while (passage.from >= range.to) {
+        if (++rangeIndex == ranges.size()) {
+          break passageFormatting;
+        }
+        range = ranges.get(rangeIndex);
+      }
+
+      assert range.from <= passage.from && range.to >= passage.to : range + " ? " + passage;
+
+      buf.setLength(0);
+      if (range.from < passage.from) {
+        buf.append(ellipsis);
+      }
+      format(buf, value, passage);
+      if (range.to > passage.to) {
+        buf.append(ellipsis);
+      }
+      result.add(buf.toString());
+    }
+    return result;
+  }
+
+  public StringBuilder format(StringBuilder buf, CharSequence value, final Passage passage) {
+    switch (passage.markers.size()) {
+      case 0:
+        // No markers, full passage appended.
+        buf.append(value, passage.from, passage.to);
+        break;
+
+      case 1:
+        // One marker: a trivial and frequent case, handled separately.
+        OffsetRange m = passage.markers.iterator().next();
+        buf.append(value, passage.from, m.from);
+        buf.append(markerStart.apply(m));
+        buf.append(value, m.from, m.to);
+        buf.append(markerEnd.apply(m));
+        buf.append(value, m.to, passage.to);
+        break;
+
+      default:
+        // Multiple markers, possibly overlapping or nested.
+        markerStack.clear();
+        multipleMarkers(value, passage, buf, markerStack);
+        break;
+    }
+
+    return buf;
+  }
+
+  /** Handle multiple markers, possibly overlapping or nested. */
+  private void multipleMarkers(
+      CharSequence value, final Passage p, StringBuilder b, ArrayList<OffsetRange> markerStack) {
+    int at = p.from;
+    int max = p.to;
+    SlicePoint[] slicePoints = slicePoints(p);
+    for (SlicePoint slicePoint : slicePoints) {
+      b.append(value, at, slicePoint.offset);
+      OffsetRange currentMarker = slicePoint.marker;
+      switch (slicePoint.type) {
+        case START:
+          markerStack.add(currentMarker);
+          b.append(markerStart.apply(currentMarker));
+          break;
+
+        case END:
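+          // Close the marker ending here together with any markers opened
+          // after it, then reopen the still-active ones so that start/end
+          // tags remain properly nested.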
+          int markerIndex = markerStack.lastIndexOf(currentMarker);
+          for (int k = markerIndex; k < markerStack.size(); k++) {
+            b.append(markerEnd.apply(markerStack.get(k)));
+          }
+          markerStack.remove(markerIndex);
+          for (int k = markerIndex; k < markerStack.size(); k++) {
+            b.append(markerStart.apply(markerStack.get(k)));
+          }
+          break;
+
+        default:
+          throw new RuntimeException();
+      }
+
+      at = slicePoint.offset;
+    }
+
+    if (at < max) {
+      b.append(value, at, max);
+    }
+  }
+
+  private static SlicePoint[] slicePoints(Passage p) {
+    SlicePoint[] slicePoints = new SlicePoint[p.markers.size() * 2];
+    int x = 0;
+    for (OffsetRange m : p.markers) {
+      slicePoints[x++] = new SlicePoint(SlicePoint.Type.START, m.from, m);
+      slicePoints[x++] = new SlicePoint(SlicePoint.Type.END, m.to, m);
+    }
+
+    // Order slice points by their offset
+    Comparator<SlicePoint> c =
+        Comparator.<SlicePoint>comparingInt(pt -> pt.offset)
+            .thenComparingInt(pt -> pt.type.ordering)
+            .thenComparing(
+                (a, b) -> {
+                  if (a.type == SlicePoint.Type.START) {
+                    // Longer start slice points come first.
+                    return Integer.compare(b.marker.to, a.marker.to);
+                  } else {
+                    // Shorter end slice points come first.
+                    return Integer.compare(b.marker.from, a.marker.from);
+                  }
+                });
+
+    Arrays.sort(slicePoints, c);
+
+    return slicePoints;
+  }
+
+  static class SlicePoint {
+    enum Type {
+      START(2),
+      END(1);
+
+      private final int ordering;
+
+      Type(int ordering) {
+        this.ordering = ordering;
+      }
+    }
+
+    public final int offset;
+    public final Type type;
+    public final OffsetRange marker;
+
+    public SlicePoint(Type t, int offset, OffsetRange m) {
+      this.type = t;
+      this.offset = offset;
+      this.marker = m;
+    }
+
+    @Override
+    public String toString() {
+      return "(" + type + ", " + marker + ")";
+    }
+  }
+}
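
A usage sketch with the same ellipsis and marker conventions as the tests in this patch (assuming the obvious `java.util` imports):

```java
// Format one passage over "foo bar baz", highlighting "bar".
PassageFormatter formatter = new PassageFormatter("...", ">", "<");
List<Passage> passages = List.of(
    new Passage(0, 11, List.of(new OffsetRange(4, 7))));
List<OffsetRange> permittedRanges = List.of(new OffsetRange(0, 11));
List<String> snippets = formatter.format("foo bar baz", passages, permittedRanges);
// snippets: ["foo >bar< baz"]
```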
diff --git a/lucene/highlighter/src/java/org/apache/lucene/search/matchhighlight/PassageSelector.java b/lucene/highlighter/src/java/org/apache/lucene/search/matchhighlight/PassageSelector.java
new file mode 100644
index 0000000..3ac18fe
--- /dev/null
+++ b/lucene/highlighter/src/java/org/apache/lucene/search/matchhighlight/PassageSelector.java
@@ -0,0 +1,273 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.lucene.search.matchhighlight;
+
+import org.apache.lucene.util.ArrayUtil;
+import org.apache.lucene.util.PriorityQueue;
+
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.Collections;
+import java.util.Comparator;
+import java.util.Iterator;
+import java.util.List;
+import java.util.RandomAccess;
+
+/** Selects fragments of text that score best for the given set of highlight markers. */
+public class PassageSelector {
+  public static final Comparator<Passage> DEFAULT_SCORER =
+      (a, b) -> {
+        // Compare the number of highlights first.
+        int v;
+        v = Integer.compare(a.markers.size(), b.markers.size());
+        if (v != 0) {
+          return v;
+        }
+
+        // Total number of characters covered by the highlights.
+        int len1 = 0, len2 = 0;
+        for (OffsetRange o : a.markers) {
+          len1 += o.length();
+        }
+        for (OffsetRange o : b.markers) {
+          len2 += o.length();
+        }
+        if (len1 != len2) {
+          return Integer.compare(len1, len2);
+        }
+
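+        // All else being equal, prefer the passage that starts earlier.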
+        return Integer.compare(b.from, a.from);
+      };
+
+  private final Comparator<Passage> passageScorer;
+  private final PassageAdjuster passageAdjuster;
+
+  public PassageSelector() {
+    this(DEFAULT_SCORER, null);
+  }
+
+  public PassageSelector(Comparator<Passage> passageScorer, PassageAdjuster passageAdjuster) {
+    this.passageScorer = passageScorer;
+    this.passageAdjuster = passageAdjuster;
+  }
+
+  public List<Passage> pickBest(
+      CharSequence value,
+      List<? extends OffsetRange> markers,
+      int maxPassageWindow,
+      int maxPassages) {
+    return pickBest(
+        value, markers, maxPassageWindow, maxPassages, List.of(new OffsetRange(0, value.length())));
+  }
+
+  public List<Passage> pickBest(
+      CharSequence value,
+      List<? extends OffsetRange> markers,
+      int maxPassageWindow,
+      int maxPassages,
+      List<OffsetRange> permittedPassageRanges) {
+    assert markers instanceof RandomAccess && permittedPassageRanges instanceof RandomAccess;
+
+    // Handle odd special cases early.
+    if (value.length() == 0 || maxPassageWindow == 0) {
+      return Collections.emptyList();
+    }
+
+    // Sort markers by start offset; for equal starts, shorter markers first.
+    markers.sort(
+        (a, b) -> {
+          int v = Integer.compare(a.from, b.from);
+          return v != 0 ? v : Integer.compare(a.to, b.to);
+        });
+
+    // Determine a maximum offset window around each highlight marker and
+    // pick the best scoring passage candidates.
+    PriorityQueue<Passage> pq =
+        new PriorityQueue<>(maxPassages) {
+          @Override
+          protected boolean lessThan(Passage a, Passage b) {
+            return passageScorer.compare(a, b) < 0;
+          }
+        };
+
+    assert sortedAndNonOverlapping(permittedPassageRanges);
+
+    final int max = markers.size();
+    int markerIndex = 0;
+    nextRange:
+    for (OffsetRange range : permittedPassageRanges) {
+      final int rangeTo = Math.min(range.to, value.length());
+
+      // Skip ranges that fall entirely outside of the value window.
+      if (range.from >= rangeTo) {
+        continue;
+      }
+
+      while (markerIndex < max) {
+        OffsetRange m = markers.get(markerIndex);
+
+        // Markers are sorted, so if the current marker starts past this range
+        // we can advance to the next range; the same marker must then be
+        // rechecked against that new range.
+        if (m.from >= rangeTo) {
+          continue nextRange;
+        }
+
+        // Check if current marker falls within the range and is smaller than the largest allowed
+        // passage window.
+        if (m.from >= range.from && m.to <= rangeTo && m.length() <= maxPassageWindow) {
+
+          // Adjust the window range to center the highlight marker.
+          int from = (m.from + m.to - maxPassageWindow) / 2;
+          int to = (m.from + m.to + maxPassageWindow) / 2;
+          if (from < range.from) {
+            to += range.from - from;
+            from = range.from;
+          }
+          if (to > rangeTo) {
+            from -= to - rangeTo;
+            to = rangeTo;
+            if (from < range.from) {
+              from = range.from;
+            }
+          }
+
+          if (from < to && to <= value.length()) {
+            // Find other markers that are completely inside the passage window.
+            ArrayList<OffsetRange> inside = new ArrayList<>();
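+            // Rewind to pick up preceding markers that also start inside the window.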
+            int i = markerIndex;
+            while (i > 0 && markers.get(i - 1).from >= from) {
+              i--;
+            }
+
+            OffsetRange c;
+            for (; i < max && (c = markers.get(i)).from < to; i++) {
+              if (c.to <= to) {
+                inside.add(c);
+              }
+            }
+
+            if (!inside.isEmpty()) {
+              pq.insertWithOverflow(new Passage(from, to, inside));
+            }
+          }
+        }
+
+        // Advance to the next marker.
+        markerIndex++;
+      }
+    }
+
+    // Collect from the priority queue (reverse the order so that highest-scoring are first).
+    Passage[] passages;
+    if (pq.size() > 0) {
+      passages = new Passage[pq.size()];
+      for (int i = pq.size(); --i >= 0; ) {
+        passages[i] = pq.pop();
+      }
+    } else {
+      // Handle the default, no highlighting markers case.
+      passages = pickDefaultPassage(value, maxPassageWindow, permittedPassageRanges);
+    }
+
+    // Let the adjuster correct passage boundaries, typically shrinking them
+    // until they land on a proper word or sentence boundary.
+    if (passageAdjuster != null) {
+      passageAdjuster.currentValue(value);
+      for (int x = 0; x < passages.length; x++) {
+        Passage p = passages[x];
+        OffsetRange newRange = passageAdjuster.adjust(p);
+        if (newRange.from != p.from || newRange.to != p.to) {
+          assert newRange.from >= p.from && newRange.to <= p.to
+              : "Adjusters must not expand the passage's range: was "
+                  + p
+                  + " => changed to "
+                  + newRange;
+          passages[x] = new Passage(newRange.from, newRange.to, p.markers);
+        }
+      }
+    }
+
+    // Ensure passages don't overlap; in case of conflict, the better-scoring passage wins.
+    int last = 0;
+    for (int i = 0; i < passages.length; i++) {
+      Passage a = passages[i];
+      if (a != null && a.length() > 0) {
+        passages[last++] = a;
+        for (int j = i + 1; j < passages.length; j++) {
+          Passage b = passages[j];
+          if (b != null) {
+            if (adjacentOrOverlapping(a, b)) {
+              passages[j] = null;
+            }
+          }
+        }
+      }
+    }
+
+    // Remove nullified slots.
+    if (passages.length != last) {
+      passages = ArrayUtil.copyOfSubArray(passages, 0, last);
+    }
+
+    // Sort in the offset order again.
+    Arrays.sort(passages, (a, b) -> Integer.compare(a.from, b.from));
+
+    return Arrays.asList(passages);
+  }
+
+  static boolean sortedAndNonOverlapping(List<? extends OffsetRange> permittedPassageRanges) {
+    if (permittedPassageRanges.size() > 1) {
+      Iterator<? extends OffsetRange> i = permittedPassageRanges.iterator();
+      for (OffsetRange next, previous = i.next(); i.hasNext(); previous = next) {
+        next = i.next();
+        if (previous.to > next.from) {
+          throw new AssertionError(
+              "Ranges must be sorted and non-overlapping: " + permittedPassageRanges);
+        }
+      }
+    }
+
+    return true;
+  }
+
+  /**
+   * Invoked when no passages could be selected (due to constraints or lack of highlight markers).
+   */
+  protected Passage[] pickDefaultPassage(
+      CharSequence value, int maxCharacterWindow, List<OffsetRange> permittedPassageRanges) {
+    // Search for the first range that is not empty.
+    for (OffsetRange o : permittedPassageRanges) {
+      int to = Math.min(value.length(), o.to);
+      if (o.from < to) {
+        return new Passage[] {
+          new Passage(
+              o.from, o.from + Math.min(maxCharacterWindow, o.length()), Collections.emptyList())
+        };
+      }
+    }
+
+    return new Passage[] {};
+  }
+
+  private static boolean adjacentOrOverlapping(Passage a, Passage b) {
+    if (a.from >= b.from) {
+      return a.from <= b.to - 1;
+    } else {
+      return a.to - 1 >= b.from;
+    }
+  }
+}
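
End to end, selection and formatting compose as in the sketch below. The marker offsets would normally come from MatchRegionRetriever; here a single marker is hard-coded:

```java
String value = "foo bar baz abc";
// pickBest sorts the marker list in place, so it must be mutable.
List<OffsetRange> markers = new ArrayList<>(List.of(new OffsetRange(4, 7))); // "bar"

PassageSelector selector = new PassageSelector();
List<Passage> passages = selector.pickBest(value, markers, 160, 10);

PassageFormatter formatter = new PassageFormatter("...", ">", "<");
List<String> snippets =
    formatter.format(value, passages, List.of(new OffsetRange(0, value.length())));
// snippets: ["foo >bar< baz abc"]
```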
diff --git a/lucene/highlighter/src/java/org/apache/lucene/search/matchhighlight/package-info.java b/lucene/highlighter/src/java/org/apache/lucene/search/matchhighlight/package-info.java
new file mode 100644
index 0000000..49af79f
--- /dev/null
+++ b/lucene/highlighter/src/java/org/apache/lucene/search/matchhighlight/package-info.java
@@ -0,0 +1,28 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+/**
+ * This package contains several components useful for building a highlighter
+ * on top of the {@link org.apache.lucene.search.Matches} API.
+ *
+ * {@link org.apache.lucene.search.matchhighlight.MatchRegionRetriever} can be
+ * used to retrieve hit areas for a given {@link org.apache.lucene.search.Query}
+ * and one (or more) indexed documents. These hit areas can then be passed to
+ * {@link org.apache.lucene.search.matchhighlight.PassageSelector} and formatted
+ * with {@link org.apache.lucene.search.matchhighlight.PassageFormatter}.
+ */
+package org.apache.lucene.search.matchhighlight;
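
A sketch of that flow, mirroring the test code added later in this patch; `reader` and `analyzer` are assumed to exist, and the consumer body is left as a stub:

```java
IndexSearcher searcher = new IndexSearcher(reader);
Query query = searcher.rewrite(new TermQuery(new Term("field", "foo")));
TopDocs topDocs = searcher.search(query, 10);

MatchRegionRetriever retriever = new MatchRegionRetriever(searcher, query, analyzer,
    MatchRegionRetriever.computeOffsetRetrievalStrategies(reader, analyzer));

// For each hit, fieldHighlights maps field names to the matched OffsetRange
// regions; feed these into PassageSelector and PassageFormatter.
retriever.highlightDocuments(topDocs, (docId, leafReader, leafDocId, fieldHighlights) -> {
  // select and format passages here
});
```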
diff --git a/lucene/highlighter/src/test/org/apache/lucene/search/matchhighlight/AsciiMatchRangeHighlighter.java b/lucene/highlighter/src/test/org/apache/lucene/search/matchhighlight/AsciiMatchRangeHighlighter.java
new file mode 100644
index 0000000..fa01af8
--- /dev/null
+++ b/lucene/highlighter/src/test/org/apache/lucene/search/matchhighlight/AsciiMatchRangeHighlighter.java
@@ -0,0 +1,81 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.lucene.search.matchhighlight;
+
+import org.apache.lucene.analysis.Analyzer;
+import org.apache.lucene.document.Document;
+
+import java.util.ArrayList;
+import java.util.LinkedHashMap;
+import java.util.List;
+import java.util.Map;
+
+/**
+ * A simple ASCII match range highlighter for tests.
+ */
+final class AsciiMatchRangeHighlighter {
+  private final Analyzer analyzer;
+  private final PassageFormatter passageFormatter;
+  private final PassageSelector selector;
+
+  private int maxPassageWindow = 160;
+  private int maxPassages = 10;
+
+  public AsciiMatchRangeHighlighter(Analyzer analyzer) {
+    this.passageFormatter = new PassageFormatter("...", ">", "<");
+    this.selector = new PassageSelector();
+    this.analyzer = analyzer;
+  }
+
+  public Map<String, List<String>> apply(Document document, Map<String, List<OffsetRange>> fieldHighlights) {
+    ArrayList<OffsetRange> valueRanges = new ArrayList<>();
+    Map<String, List<String>> fieldSnippets = new LinkedHashMap<>();
+
+    fieldHighlights.forEach(
+        (field, matchRanges) -> {
+          int offsetGap = analyzer.getOffsetGap(field);
+
+          String[] values = document.getValues(field);
+          String value;
+          if (values.length == 1) {
+            value = values[0];
+          } else {
+            // This can be inefficient if the offset gap is large, but recomputing
+            // offsets in a smarter way isn't worth it for tests.
+            String fieldGapPadding = " ".repeat(offsetGap);
+            value = String.join(fieldGapPadding, values);
+          }
+
+          // Create permitted range windows for passages so that they don't
+          // cross multi-value boundaries.
+          valueRanges.clear();
+          int offset = 0;
+          for (CharSequence v : values) {
+            valueRanges.add(new OffsetRange(offset, offset + v.length()));
+            offset += v.length();
+            offset += offsetGap;
+          }
+
+          List<Passage> passages =
+              selector.pickBest(value, matchRanges, maxPassageWindow, maxPassages, valueRanges);
+
+          fieldSnippets.put(field, passageFormatter.format(value, passages, valueRanges));
+        });
+
+    return fieldSnippets;
+  }
+}
diff --git a/lucene/highlighter/src/test/org/apache/lucene/search/matchhighlight/MissingAnalyzer.java b/lucene/highlighter/src/test/org/apache/lucene/search/matchhighlight/MissingAnalyzer.java
new file mode 100644
index 0000000..6ff2067
--- /dev/null
+++ b/lucene/highlighter/src/test/org/apache/lucene/search/matchhighlight/MissingAnalyzer.java
@@ -0,0 +1,39 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.lucene.search.matchhighlight;
+
+import org.apache.lucene.analysis.Analyzer;
+
+import java.io.Reader;
+
+/** An {@link Analyzer} that throws a runtime exception when used for anything. */
+final class MissingAnalyzer extends Analyzer {
+  @Override
+  protected Reader initReader(String fieldName, Reader reader) {
+    throw new RuntimeException("Field must have an explicit Analyzer: " + fieldName);
+  }
+
+  @Override
+  protected TokenStreamComponents createComponents(String fieldName) {
+    throw new RuntimeException("Field must have an explicit Analyzer: " + fieldName);
+  }
+
+  @Override
+  public int getOffsetGap(String fieldName) {
+    return 0;
+  }
+}
diff --git a/lucene/highlighter/src/test/org/apache/lucene/search/matchhighlight/TestMatchRegionRetriever.java b/lucene/highlighter/src/test/org/apache/lucene/search/matchhighlight/TestMatchRegionRetriever.java
new file mode 100644
index 0000000..0fd9ca0
--- /dev/null
+++ b/lucene/highlighter/src/test/org/apache/lucene/search/matchhighlight/TestMatchRegionRetriever.java
@@ -0,0 +1,767 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.lucene.search.matchhighlight;
+
+import com.carrotsearch.randomizedtesting.RandomizedTest;
+import org.apache.lucene.analysis.Analyzer;
+import org.apache.lucene.analysis.TokenStream;
+import org.apache.lucene.analysis.Tokenizer;
+import org.apache.lucene.analysis.core.WhitespaceTokenizer;
+import org.apache.lucene.analysis.miscellaneous.PerFieldAnalyzerWrapper;
+import org.apache.lucene.analysis.synonym.SynonymGraphFilter;
+import org.apache.lucene.analysis.synonym.SynonymMap;
+import org.apache.lucene.analysis.util.CharTokenizer;
+import org.apache.lucene.document.Document;
+import org.apache.lucene.document.Field;
+import org.apache.lucene.document.FieldType;
+import org.apache.lucene.document.StringField;
+import org.apache.lucene.document.TextField;
+import org.apache.lucene.index.DirectoryReader;
+import org.apache.lucene.index.IndexOptions;
+import org.apache.lucene.index.IndexReader;
+import org.apache.lucene.index.IndexWriter;
+import org.apache.lucene.index.IndexWriterConfig;
+import org.apache.lucene.index.IndexableField;
+import org.apache.lucene.index.Term;
+import org.apache.lucene.queries.intervals.IntervalQuery;
+import org.apache.lucene.queries.intervals.Intervals;
+import org.apache.lucene.queryparser.flexible.core.QueryNodeException;
+import org.apache.lucene.queryparser.flexible.standard.StandardQueryParser;
+import org.apache.lucene.queryparser.flexible.standard.config.StandardQueryConfigHandler;
+import org.apache.lucene.search.BooleanClause;
+import org.apache.lucene.search.BooleanQuery;
+import org.apache.lucene.search.IndexSearcher;
+import org.apache.lucene.search.MatchAllDocsQuery;
+import org.apache.lucene.search.PhraseQuery;
+import org.apache.lucene.search.Query;
+import org.apache.lucene.search.TermQuery;
+import org.apache.lucene.search.TopDocs;
+import org.apache.lucene.search.spans.SpanNearQuery;
+import org.apache.lucene.search.spans.SpanTermQuery;
+import org.apache.lucene.store.ByteBuffersDirectory;
+import org.apache.lucene.store.Directory;
+import org.apache.lucene.util.CharsRef;
+import org.apache.lucene.util.IOUtils;
+import org.apache.lucene.util.LuceneTestCase;
+import org.hamcrest.Matchers;
+import org.junit.Before;
+import org.junit.Test;
+
+import java.io.IOException;
+import java.io.UncheckedIOException;
+import java.util.ArrayList;
+import java.util.Collection;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Locale;
+import java.util.Map;
+import java.util.TreeMap;
+import java.util.function.BiFunction;
+import java.util.stream.Collectors;
+import java.util.stream.Stream;
+
+import static org.hamcrest.Matchers.containsInAnyOrder;
+import static org.hamcrest.Matchers.emptyArray;
+import static org.hamcrest.Matchers.not;
+
+public class TestMatchRegionRetriever extends LuceneTestCase {
+  private static final String FLD_ID = "field_id";
+
+  private static final String FLD_TEXT_POS_OFFS1 = "field_text_offs1";
+  private static final String FLD_TEXT_POS_OFFS2 = "field_text_offs2";
+
+  private static final String FLD_TEXT_POS_OFFS = "field_text_offs";
+  private static final String FLD_TEXT_POS = "field_text";
+
+  private static final String FLD_TEXT_SYNONYMS_POS_OFFS = "field_text_syns_offs";
+  private static final String FLD_TEXT_SYNONYMS_POS = "field_text_syns";
+
+  private static final String FLD_TEXT_NOPOS = "field_text_nopos";
+
+  private static final String FLD_NON_EXISTING = "field_missing";
+
+  private FieldType TYPE_STORED_WITH_OFFSETS;
+  private FieldType TYPE_STORED_NO_POSITIONS;
+
+  private Analyzer analyzer;
+
+  @Before
+  public void setup() {
+    TYPE_STORED_WITH_OFFSETS = new FieldType(TextField.TYPE_STORED);
+    TYPE_STORED_WITH_OFFSETS.setIndexOptions(IndexOptions.DOCS_AND_FREQS_AND_POSITIONS_AND_OFFSETS);
+    TYPE_STORED_WITH_OFFSETS.freeze();
+
+    TYPE_STORED_NO_POSITIONS = new FieldType(TextField.TYPE_STORED);
+    TYPE_STORED_NO_POSITIONS.setIndexOptions(IndexOptions.DOCS_AND_FREQS);
+    TYPE_STORED_NO_POSITIONS.freeze();
+
+    Analyzer whitespaceAnalyzer =
+        new Analyzer() {
+          final int offsetGap = RandomizedTest.randomIntBetween(0, 2);
+          final int positionGap = RandomizedTest.randomFrom(new int[]{0, 1, 100});
+
+          @Override
+          protected TokenStreamComponents createComponents(String fieldName) {
+            WhitespaceTokenizer tokenizer =
+                new WhitespaceTokenizer(CharTokenizer.DEFAULT_MAX_WORD_LEN);
+            return new TokenStreamComponents(tokenizer);
+          }
+
+          @Override
+          public int getOffsetGap(String fieldName) {
+            return offsetGap;
+          }
+
+          @Override
+          public int getPositionIncrementGap(String fieldName) {
+            return positionGap;
+          }
+        };
+
+    Map<String, Analyzer> fieldAnalyzers = new HashMap<>();
+    fieldAnalyzers.put(FLD_TEXT_POS, whitespaceAnalyzer);
+    fieldAnalyzers.put(FLD_TEXT_POS_OFFS, whitespaceAnalyzer);
+    fieldAnalyzers.put(FLD_TEXT_POS_OFFS1, whitespaceAnalyzer);
+    fieldAnalyzers.put(FLD_TEXT_POS_OFFS2, whitespaceAnalyzer);
+    fieldAnalyzers.put(FLD_TEXT_NOPOS, whitespaceAnalyzer);
+
+    try {
+      SynonymMap.Builder b = new SynonymMap.Builder();
+      b.add(new CharsRef("foo\u0000bar"), new CharsRef("syn1"), true);
+      b.add(new CharsRef("baz"), new CharsRef("syn2\u0000syn3"), true);
+      SynonymMap synonymMap = b.build();
+      Analyzer synonymsAnalyzer =
+          new Analyzer() {
+            @Override
+            protected TokenStreamComponents createComponents(String fieldName) {
+              Tokenizer tokenizer = new WhitespaceTokenizer();
+              TokenStream tokenStream = new SynonymGraphFilter(tokenizer, synonymMap, true);
+              return new TokenStreamComponents(tokenizer, tokenStream);
+            }
+          };
+      fieldAnalyzers.put(FLD_TEXT_SYNONYMS_POS_OFFS, synonymsAnalyzer);
+      fieldAnalyzers.put(FLD_TEXT_SYNONYMS_POS, synonymsAnalyzer);
+    } catch (IOException e) {
+      throw new UncheckedIOException(e);
+    }
+
+    analyzer = new PerFieldAnalyzerWrapper(new MissingAnalyzer(), fieldAnalyzers);
+  }
+
+  BiFunction<String, String, Query> stdQueryParser =
+      (query, defField) -> {
+        try {
+          StandardQueryParser parser = new StandardQueryParser(analyzer);
+          parser.setDefaultOperator(StandardQueryConfigHandler.Operator.AND);
+          return parser.parse(query, defField);
+        } catch (QueryNodeException e) {
+          throw new RuntimeException(e);
+        }
+      };
+
+  @Test
+  public void testTermQueryWithOffsets() throws IOException {
+    checkTermQuery(FLD_TEXT_POS_OFFS);
+  }
+
+  @Test
+  public void testTermQueryWithPositions() throws IOException {
+    checkTermQuery(FLD_TEXT_POS);
+  }
+
+  private void checkTermQuery(String field) throws IOException {
+    withReader(
+        List.of(
+            Map.of(field, values("foo bar baz")),
+            Map.of(field, values("bar foo baz")),
+            Map.of(field, values("bar baz foo")),
+            Map.of(field, values("bar bar bar irrelevant"))),
+        reader -> {
+          assertThat(highlights(reader, new TermQuery(new Term(field, "foo"))),
+              containsInAnyOrder(
+                  fmt("0: (%s: '>foo< bar baz')", field),
+                  fmt("1: (%s: 'bar >foo< baz')", field),
+                  fmt("2: (%s: 'bar baz >foo<')", field)));
+        });
+  }
+
+  @Test
+  public void testBooleanMultifieldQueryWithOffsets() throws IOException {
+    checkBooleanMultifieldQuery(FLD_TEXT_POS_OFFS);
+  }
+
+  @Test
+  public void testBooleanMultifieldQueryWithPositions() throws IOException {
+    checkBooleanMultifieldQuery(FLD_TEXT_POS);
+  }
+
+  private void checkBooleanMultifieldQuery(String field) throws IOException {
+    Query query =
+        new BooleanQuery.Builder()
+            .add(new PhraseQuery(1, field, "foo", "baz"), BooleanClause.Occur.SHOULD)
+            .add(new TermQuery(new Term(FLD_NON_EXISTING, "abc")), BooleanClause.Occur.SHOULD)
+            .add(new TermQuery(new Term(field, "xyz")), BooleanClause.Occur.MUST_NOT)
+            .build();
+
+    withReader(
+        List.of(
+            Map.of(field, values("foo bar baz abc")),
+            Map.of(field, values("bar foo baz def")),
+            Map.of(field, values("bar baz foo xyz"))),
+        reader -> {
+          assertThat(highlights(reader, query),
+              containsInAnyOrder(
+                  fmt("0: (%s: '>foo bar baz< abc')", field),
+                  fmt("1: (%s: 'bar >foo baz< def')", field)));
+        });
+  }
+
+  @Test
+  public void testVariousQueryTypesWithOffsets() throws IOException {
+    checkVariousQueryTypes(FLD_TEXT_POS_OFFS);
+  }
+
+  @Test
+  public void testVariousQueryTypesWithPositions() throws IOException {
+    checkVariousQueryTypes(FLD_TEXT_POS);
+  }
+
+  private void checkVariousQueryTypes(String field) throws IOException {
+    withReader(
+        List.of(
+            Map.of(field, values("foo bar baz abc")),
+            Map.of(field, values("bar foo baz def")),
+            Map.of(field, values("bar baz foo xyz"))),
+        reader -> {
+          assertThat(highlights(reader, stdQueryParser.apply("foo baz", field)),
+              containsInAnyOrder(
+                  fmt("0: (%s: '>foo< bar >baz< abc')", field),
+                  fmt("1: (%s: 'bar >foo< >baz< def')", field),
+                  fmt("2: (%s: 'bar >baz< >foo< xyz')", field)));
+
+          assertThat(highlights(reader, stdQueryParser.apply("foo OR xyz", field)),
+              containsInAnyOrder(
+                  fmt("0: (%s: '>foo< bar baz abc')", field),
+                  fmt("1: (%s: 'bar >foo< baz def')", field),
+                  fmt("2: (%s: 'bar baz >foo< >xyz<')", field)));
+
+          assertThat(highlights(reader, stdQueryParser.apply("bas~2", field)),
+              containsInAnyOrder(
+                  fmt("0: (%s: 'foo >bar< >baz< >abc<')", field),
+                  fmt("1: (%s: '>bar< foo >baz< def')", field),
+                  fmt("2: (%s: '>bar< >baz< foo xyz')", field)));
+
+          assertThat(highlights(reader, stdQueryParser.apply("\"foo bar\"", field)),
+              containsInAnyOrder((fmt("0: (%s: '>foo bar< baz abc')", field))));
+
+          assertThat(highlights(reader, stdQueryParser.apply("\"foo bar\"~3", field)),
+              containsInAnyOrder(
+                  fmt("0: (%s: '>foo bar< baz abc')", field),
+                  fmt("1: (%s: '>bar foo< baz def')", field),
+                  fmt("2: (%s: '>bar baz foo< xyz')", field)));
+
+          assertThat(highlights(reader, stdQueryParser.apply("ba*", field)),
+              containsInAnyOrder(
+                  fmt("0: (%s: 'foo >bar< >baz< abc')", field),
+                  fmt("1: (%s: '>bar< foo >baz< def')", field),
+                  fmt("2: (%s: '>bar< >baz< foo xyz')", field)));
+
+          assertThat(highlights(reader, stdQueryParser.apply("[bar TO bas]", field)),
+              containsInAnyOrder(
+                  fmt("0: (%s: 'foo >bar< baz abc')", field),
+                  fmt("1: (%s: '>bar< foo baz def')", field),
+                  fmt("2: (%s: '>bar< baz foo xyz')", field)));
+
+          // Note how document '2' has 'bar' that isn't highlighted (because that
+          // document is excluded by the first clause).
+          assertThat(
+              highlights(reader, stdQueryParser.apply("([bar TO baz] -xyz) OR baz", field)),
+              containsInAnyOrder(
+                  fmt("0: (%s: 'foo >bar< >>baz<< abc')", field),
+                  fmt("1: (%s: '>bar< foo >>baz<< def')", field),
+                  fmt("2: (%s: 'bar >baz< foo xyz')", field)));
+
+          assertThat(highlights(reader, new MatchAllDocsQuery()),
+              Matchers.hasSize(0));
+        });
+
+    withReader(
+        List.of(
+            Map.of(field, values("foo baz foo")),
+            Map.of(field, values("bas baz foo")),
+            Map.of(field, values("bar baz foo xyz"))),
+        reader -> {
+          assertThat(
+              highlights(reader, stdQueryParser.apply("[bar TO baz] -bar", field)),
+              containsInAnyOrder(
+                  fmt("0: (%s: 'foo >baz< foo')", field), fmt("1: (%s: '>bas< >baz< foo')", field)));
+        });
+  }
+
+  @Test
+  public void testIntervalQueries() throws IOException {
+    String field = FLD_TEXT_POS_OFFS;
+
+    withReader(
+        List.of(
+            Map.of(field, values("foo baz foo")),
+            Map.of(field, values("bas baz foo")),
+            Map.of(field, values("bar baz foo xyz"))),
+        reader -> {
+          assertThat(
+              highlights(reader, new IntervalQuery(field,
+                  Intervals.unordered(
+                      Intervals.term("foo"),
+                      Intervals.term("bas"),
+                      Intervals.term("baz")))),
+              containsInAnyOrder(
+                  fmt("1: (field_text_offs: '>bas baz foo<')", field)
+              ));
+
+          assertThat(
+              highlights(reader, new IntervalQuery(field,
+                  Intervals.maxgaps(1,
+                      Intervals.unordered(
+                          Intervals.term("foo"),
+                          Intervals.term("bar"))))),
+              containsInAnyOrder(
+                  fmt("2: (field_text_offs: '>bar baz foo< xyz')", field)
+              ));
+
+          assertThat(
+              highlights(reader, new IntervalQuery(field,
+                  Intervals.containing(
+                      Intervals.unordered(
+                          Intervals.term("foo"),
+                          Intervals.term("bar")),
+                      Intervals.term("foo")))),
+              containsInAnyOrder(
+                  fmt("2: (field_text_offs: '>bar baz foo< xyz')", field)
+              ));
+
+          assertThat(
+              highlights(reader, new IntervalQuery(field,
+                  Intervals.containedBy(
+                      Intervals.term("foo"),
+                      Intervals.unordered(
+                          Intervals.term("foo"),
+                          Intervals.term("bar"))))),
+              containsInAnyOrder(
+                  fmt("2: (field_text_offs: '>bar baz foo< xyz')", field)
+              ));
+
+          assertThat(
+              highlights(reader, new IntervalQuery(field,
+                  Intervals.overlapping(
+                      Intervals.unordered(
+                          Intervals.term("foo"),
+                          Intervals.term("bar")),
+                      Intervals.term("foo")))),
+              containsInAnyOrder(
+                  fmt("2: (field_text_offs: '>bar baz foo< xyz')", field)
+              ));
+        });
+  }
+
+  @Test
+  public void testMultivaluedFieldsWithOffsets() throws IOException {
+    checkMultivaluedFields(FLD_TEXT_POS_OFFS);
+  }
+
+  @Test
+  public void testMultivaluedFieldsWithPositions() throws IOException {
+    checkMultivaluedFields(FLD_TEXT_POS);
+  }
+
+  public void checkMultivaluedFields(String field) throws IOException {
+    withReader(
+        List.of(
+            Map.of(field, values("foo bar", "baz abc", "bad baz")),
+            Map.of(field, values("bar foo", "baz def")),
+            Map.of(field, values("bar baz", "foo xyz"))),
+        reader -> {
+          assertThat(highlights(reader, stdQueryParser.apply("baz", field)),
+              containsInAnyOrder(
+                  fmt("0: (%s: '>baz< abc | bad >baz<')", field),
+                  fmt("1: (%s: '>baz< def')", field),
+                  fmt("2: (%s: 'bar >baz<')", field)));
+        });
+  }
+
+  @Test
+  public void testMultiFieldHighlights() throws IOException {
+    for (String[] fields :
+        new String[][]{
+            {FLD_TEXT_POS_OFFS1, FLD_TEXT_POS_OFFS2},
+            {FLD_TEXT_POS, FLD_TEXT_POS_OFFS2},
+            {FLD_TEXT_POS_OFFS1, FLD_TEXT_POS}
+        }) {
+      String field1 = fields[0];
+      String field2 = fields[1];
+      withReader(
+          List.of(
+              Map.of(
+                  field1, values("foo bar", "baz abc"),
+                  field2, values("foo baz", "loo bar"))),
+          reader -> {
+            String ordered =
+                Stream.of(fmt("(%s: '>baz< abc')", field1), fmt("(%s: 'loo >bar<')", field2))
+                    .sorted()
+                    .collect(Collectors.joining(""));
+
+            assertThat(
+                highlights(
+                    reader,
+                    stdQueryParser.apply(field1 + ":baz" + " OR " + field2 + ":bar", field1)),
+                containsInAnyOrder(fmt("0: %s", ordered)));
+          });
+    }
+  }
+
+  /**
+   * Rewritten Boolean queries may omit matches from {@link
+   * org.apache.lucene.search.BooleanClause.Occur#SHOULD} clauses. Check that this isn't the case.
+   */
+  @Test
+  public void testNoRewrite() throws IOException {
+    String field1 = FLD_TEXT_POS_OFFS1;
+    String field2 = FLD_TEXT_POS_OFFS2;
+    withReader(
+        List.of(
+            Map.of(
+                field1, values("0100"),
+                field2, values("loo bar")),
+            Map.of(
+                field1, values("0200"),
+                field2, values("foo bar"))),
+        reader -> {
+          String expected = fmt("0: (%s: '>0100<')(%s: 'loo >bar<')", field1, field2);
+          assertThat(
+              highlights(
+                  reader,
+                  stdQueryParser.apply(fmt("+%s:01* OR %s:bar", field1, field2), field1)),
+              containsInAnyOrder(expected));
+
+          assertThat(
+              highlights(
+                  reader,
+                  stdQueryParser.apply(fmt("+%s:01* AND %s:bar", field1, field2), field1)),
+              containsInAnyOrder(expected));
+        });
+  }
+
+  @Test
+  public void testNestedQueryHitsWithOffsets() throws IOException {
+    checkNestedQueryHits(FLD_TEXT_POS_OFFS);
+  }
+
+  @Test
+  public void testNestedQueryHitsWithPositions() throws IOException {
+    checkNestedQueryHits(FLD_TEXT_POS);
+  }
+
+  public void checkNestedQueryHits(String field) throws IOException {
+    withReader(
+        List.of(Map.of(field, values("foo bar baz abc"))),
+        reader -> {
+          assertThat(
+              highlights(
+                  reader,
+                  new BooleanQuery.Builder()
+                      .add(new PhraseQuery(1, field, "foo", "baz"), BooleanClause.Occur.SHOULD)
+                      .add(new TermQuery(new Term(field, "bar")), BooleanClause.Occur.SHOULD)
+                      .build()),
+              containsInAnyOrder(fmt("0: (%s: '>foo >bar< baz< abc')", field)));
+
+          assertThat(
+              highlights(
+                  reader,
+                  new BooleanQuery.Builder()
+                      .add(new PhraseQuery(1, field, "foo", "baz"), BooleanClause.Occur.SHOULD)
+                      .add(new TermQuery(new Term(field, "bar")), BooleanClause.Occur.SHOULD)
+                      .add(new TermQuery(new Term(field, "baz")), BooleanClause.Occur.SHOULD)
+                      .build()),
+              containsInAnyOrder(fmt("0: (%s: '>foo >bar< >baz<< abc')", field)));
+        });
+  }
+
+  @Test
+  public void testGraphQueryWithOffsets() throws Exception {
+    checkGraphQuery(FLD_TEXT_SYNONYMS_POS_OFFS);
+  }
+
+  @Test
+  public void testGraphQueryWithPositions() throws Exception {
+    checkGraphQuery(FLD_TEXT_SYNONYMS_POS);
+  }
+
+  private void checkGraphQuery(String field) throws IOException {
+    withReader(
+        List.of(
+            Map.of(field, values("foo bar baz")),
+            Map.of(field, values("bar foo baz")),
+            Map.of(field, values("bar baz foo")),
+            Map.of(field, values("bar bar bar irrelevant"))),
+        reader -> {
+          assertThat(highlights(reader, new TermQuery(new Term(field, "syn1"))),
+              containsInAnyOrder(fmt("0: (%s: '>foo bar< baz')", field)));
+
+          // [syn2 syn3] = baz
+          // so both these queries highlight baz.
+          assertThat(highlights(reader, new TermQuery(new Term(field, "syn3"))),
+              containsInAnyOrder(
+                  fmt("0: (%s: 'foo bar >baz<')", field),
+                  fmt("1: (%s: 'bar foo >baz<')", field),
+                  fmt("2: (%s: 'bar >baz< foo')", field)));
+          assertThat(
+              highlights(reader, stdQueryParser.apply(field + ":\"syn2 syn3\"", field)),
+              containsInAnyOrder(
+                  fmt("0: (%s: 'foo bar >baz<')", field),
+                  fmt("1: (%s: 'bar foo >baz<')", field),
+                  fmt("2: (%s: 'bar >baz< foo')", field)));
+          assertThat(
+              highlights(reader, stdQueryParser.apply(field + ":\"foo syn2 syn3\"", field)),
+              containsInAnyOrder(fmt("1: (%s: 'bar >foo baz<')", field)));
+        });
+  }
+
+  @Test
+  public void testSpanQueryWithOffsets() throws Exception {
+    checkSpanQueries(FLD_TEXT_POS_OFFS);
+  }
+
+  @Test
+  public void testSpanQueryWithPositions() throws Exception {
+    checkSpanQueries(FLD_TEXT_POS);
+  }
+
+  private void checkSpanQueries(String field) throws IOException {
+    withReader(
+        List.of(
+            Map.of(field, values("foo bar baz")),
+            Map.of(field, values("bar foo baz")),
+            Map.of(field, values("bar baz foo")),
+            Map.of(field, values("bar bar bar irrelevant"))),
+        reader -> {
+          assertThat(
+              highlights(
+                  reader,
+                  SpanNearQuery.newOrderedNearQuery(field)
+                      .addClause(new SpanTermQuery(new Term(field, "bar")))
+                      .addClause(new SpanTermQuery(new Term(field, "foo")))
+                      .build()),
+              containsInAnyOrder(fmt("1: (%s: '>bar foo< baz')", field)));
+
+          assertThat(
+              highlights(
+                  reader,
+                  SpanNearQuery.newOrderedNearQuery(field)
+                      .addClause(new SpanTermQuery(new Term(field, "bar")))
+                      .addGap(1)
+                      .addClause(new SpanTermQuery(new Term(field, "foo")))
+                      .build()),
+              containsInAnyOrder(fmt("2: (%s: '>bar baz foo<')", field)));
+
+          assertThat(
+              highlights(
+                  reader,
+                  SpanNearQuery.newUnorderedNearQuery(field)
+                      .addClause(new SpanTermQuery(new Term(field, "foo")))
+                      .addClause(new SpanTermQuery(new Term(field, "bar")))
+                      .build()),
+              containsInAnyOrder(
+                  fmt("0: (%s: '>foo bar< baz')", field), fmt("1: (%s: '>bar foo< baz')", field)));
+
+          assertThat(
+              highlights(
+                  reader,
+                  SpanNearQuery.newUnorderedNearQuery(field)
+                      .addClause(new SpanTermQuery(new Term(field, "foo")))
+                      .addClause(new SpanTermQuery(new Term(field, "bar")))
+                      .setSlop(1)
+                      .build()),
+              containsInAnyOrder(
+                  fmt("0: (%s: '>foo bar< baz')", field),
+                  fmt("1: (%s: '>bar foo< baz')", field),
+                  fmt("2: (%s: '>bar baz foo<')", field)));
+        });
+  }
+
+  /**
+   * This test runs a term query against a field with no stored
+   * positions or offsets. This test checks the {@link OffsetsFromValues}
+   * strategy that returns highlights over entire indexed values.
+   */
+  @Test
+  public void testTextFieldNoPositionsOffsetFromValues() throws Exception {
+    String field = FLD_TEXT_NOPOS;
+
+    withReader(
+        List.of(
+            Map.of(FLD_TEXT_NOPOS, values("foo bar")),
+            Map.of(FLD_TEXT_NOPOS, values("foo bar", "baz baz"))
+        ),
+        reader -> {
+          OffsetsRetrievalStrategySupplier defaults = MatchRegionRetriever
+              .computeOffsetRetrievalStrategies(reader, analyzer);
+          OffsetsRetrievalStrategySupplier customSuppliers = (fld) -> {
+            if (fld.equals(field)) {
+              return new OffsetsFromValues(field, analyzer);
+            } else {
+              return defaults.apply(fld);
+            }
+          };
+
+          assertThat(
+              highlights(
+                  customSuppliers,
+                  reader,
+                  new TermQuery(new Term(field, "bar"))),
+              containsInAnyOrder(
+                  fmt("0: (%s: '>foo bar<')", field),
+                  fmt("1: (%s: '>foo bar< | >baz baz<')", field)));
+        });
+  }
+
+  /**
+   * This test runs a term query against a field with no stored
+   * positions or offsets; the offsets are recomputed from re-analyzed tokens.
+   * <p>
+   * Such a field structure is often useful for multivalued "keyword-like"
+   * fields.
+   */
+  @Test
+  public void testTextFieldNoPositionsOffsetsFromTokens() throws Exception {
+    String field = FLD_TEXT_NOPOS;
+
+    withReader(
+        List.of(
+            Map.of(FLD_TEXT_NOPOS, values("foo bar"),
+                   FLD_TEXT_POS, values("bar bar")),
+            Map.of(FLD_TEXT_NOPOS, values("foo bar", "baz bar"))
+        ),
+        reader -> {
+          assertThat(
+              highlights(
+                  reader,
+                  new TermQuery(new Term(field, "bar"))),
+              containsInAnyOrder(
+                  fmt("0: (%s: 'foo >bar<')", field),
+                  fmt("1: (%s: 'foo >bar< | baz >bar<')", field)));
+        });
+  }
+
+  private List<String> highlights(IndexReader reader, Query query) throws IOException {
+    return highlights(MatchRegionRetriever.computeOffsetRetrievalStrategies(reader, analyzer),
+        reader, query);
+  }
+
+  private List<String> highlights(OffsetsRetrievalStrategySupplier offsetsStrategySupplier,
+                                  IndexReader reader, Query query) throws IOException {
+    IndexSearcher searcher = new IndexSearcher(reader);
+    int maxDocs = 1000;
+
+    Query rewrittenQuery = searcher.rewrite(query);
+    TopDocs topDocs = searcher.search(rewrittenQuery, maxDocs);
+
+    ArrayList<String> highlights = new ArrayList<>();
+
+    AsciiMatchRangeHighlighter formatter = new AsciiMatchRangeHighlighter(analyzer);
+
+    MatchRegionRetriever.MatchOffsetsConsumer highlightCollector =
+        (docId, leafReader, leafDocId, fieldHighlights) -> {
+          StringBuilder sb = new StringBuilder();
+
+          Document document = leafReader.document(leafDocId);
+          formatter
+              .apply(document, new TreeMap<>(fieldHighlights))
+              .forEach(
+                  (field, snippets) -> {
+                    sb.append(
+                        String.format(
+                            Locale.ROOT, "(%s: '%s')", field, String.join(" | ", snippets)));
+                  });
+
+          if (sb.length() > 0) {
+            sb.insert(0, document.get(FLD_ID) + ": ");
+            highlights.add(sb.toString());
+          }
+        };
+
+    MatchRegionRetriever highlighter = new MatchRegionRetriever(searcher, rewrittenQuery, analyzer,
+        offsetsStrategySupplier);
+    highlighter.highlightDocuments(topDocs, highlightCollector);
+
+    return highlights;
+  }
+
+  private String[] values(String... values) {
+    assertThat(values, not(emptyArray()));
+    return values;
+  }
+
+  private void withReader(
+      Collection<Map<String, String[]>> docs, IOUtils.IOConsumer<DirectoryReader> block)
+      throws IOException {
+    IndexWriterConfig config = new IndexWriterConfig(analyzer);
+
+    try (Directory directory = new ByteBuffersDirectory()) {
+      IndexWriter iw = new IndexWriter(directory, config);
+
+      int seq = 0;
+      for (Map<String, String[]> fields : docs) {
+        Document doc = new Document();
+        doc.add(new StringField(FLD_ID, Integer.toString(seq++), Field.Store.YES));
+        for (Map.Entry<String, String[]> field : fields.entrySet()) {
+          for (String value : field.getValue()) {
+            doc.add(toField(field.getKey(), value));
+          }
+        }
+        iw.addDocument(doc);
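+        // Randomly commit so that documents spread across more than one segment.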
+        if (RandomizedTest.randomBoolean()) {
+          iw.commit();
+        }
+      }
+      iw.flush();
+
+      try (DirectoryReader reader = DirectoryReader.open(iw)) {
+        block.accept(reader);
+      }
+    }
+  }
+
+  private IndexableField toField(String name, String value) {
+    switch (name) {
+      case FLD_TEXT_NOPOS:
+        return new Field(name, value, TYPE_STORED_NO_POSITIONS);
+      case FLD_TEXT_POS:
+      case FLD_TEXT_SYNONYMS_POS:
+        return new TextField(name, value, Field.Store.YES);
+      case FLD_TEXT_POS_OFFS:
+      case FLD_TEXT_POS_OFFS1:
+      case FLD_TEXT_POS_OFFS2:
+      case FLD_TEXT_SYNONYMS_POS_OFFS:
+        return new Field(name, value, TYPE_STORED_WITH_OFFSETS);
+      default:
+        throw new AssertionError("Don't know how to handle this field: " + name);
+    }
+  }
+
+  private static String fmt(String string, Object... args) {
+    return String.format(Locale.ROOT, string, args);
+  }
+}
diff --git a/lucene/highlighter/src/test/org/apache/lucene/search/matchhighlight/TestPassageSelector.java b/lucene/highlighter/src/test/org/apache/lucene/search/matchhighlight/TestPassageSelector.java
new file mode 100644
index 0000000..3d03d6b
--- /dev/null
+++ b/lucene/highlighter/src/test/org/apache/lucene/search/matchhighlight/TestPassageSelector.java
@@ -0,0 +1,284 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.lucene.search.matchhighlight;
+
+import static com.carrotsearch.randomizedtesting.RandomizedTest.randomAsciiLettersOfLengthBetween;
+import static com.carrotsearch.randomizedtesting.RandomizedTest.randomBoolean;
+import static com.carrotsearch.randomizedtesting.RandomizedTest.randomIntBetween;
+import static com.carrotsearch.randomizedtesting.RandomizedTest.randomRealisticUnicodeOfCodepointLengthBetween;
+
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.List;
+import java.util.Objects;
+
+import org.apache.lucene.util.LuceneTestCase;
+import org.hamcrest.Matchers;
+import org.junit.Test;
+
+public class TestPassageSelector extends LuceneTestCase {
+  @Test
+  public void checkEmptyExtra() {
+    checkPassages(
+        "foo >>bar<< baz abc",
+        "foo bar baz abc",
+        300,
+        100,
+        new OffsetRange(4, 7),
+        new OffsetRange(4, 7));
+
+    checkPassages(
+        ">foo >bar< >baz<< abc",
+        "foo bar baz abc",
+        300,
+        100,
+        new OffsetRange(0, 11),
+        new OffsetRange(4, 7),
+        new OffsetRange(8, 11));
+
+    checkPassages(
+        ">>foo< bar >baz<< abc",
+        "foo bar baz abc",
+        300,
+        100,
+        new OffsetRange(0, 11),
+        new OffsetRange(0, 3),
+        new OffsetRange(8, 11));
+  }
+
+  @Test
+  public void oneMarker() {
+    checkPassages(">0<123456789a", "0123456789a", 300, 1, new OffsetRange(0, 1));
+    checkPassages("0123456789>a<", "0123456789a", 300, 1, new OffsetRange(10, 11));
+    checkPassages(">0123456789a<", "0123456789a", 300, 1, new OffsetRange(0, 11));
+  }
+
+  @Test
+  public void noHighlights() {
+    checkPassages("0123456789a", "0123456789a", 300, 1);
+    checkPassages("01234...", "0123456789a", 5, 1);
+    checkPassages(
+        "0123",
+        "0123456789a",
+        15,
+        2,
+        new OffsetRange[0],
+        new OffsetRange[] {new OffsetRange(0, 4), new OffsetRange(4, 9)});
+  }
+
+  @Test
+  public void oneMarkerTruncated() {
+    checkPassages(">0<12...", "0123456789a", 4, 1, new OffsetRange(0, 1));
+    checkPassages("...789>a<", "0123456789a", 4, 1, new OffsetRange(10, 11));
+    checkPassages("...>3456<...", "0123456789a", 4, 1, new OffsetRange(3, 7));
+    checkPassages("...3>45<6...", "0123456789a", 4, 1, new OffsetRange(4, 6));
+  }
+
+  @Test
+  public void highlightLargerThanWindow() {
+    String value = "0123456789a";
+    checkPassages("0123...", value, 4, 1, new OffsetRange(0, value.length()));
+  }
+
+  @Test
+  public void twoMarkers() {
+    checkPassages(
+        "0>12<3>45<6789a", "0123456789a", 300, 1, new OffsetRange(1, 3), new OffsetRange(4, 6));
+    checkPassages(
+        "0>123<>45<6789a", "0123456789a", 300, 1, new OffsetRange(1, 4), new OffsetRange(4, 6));
+  }
+
+  @Test
+  public void noMarkers() {
+    checkPassages("0123456789a", "0123456789a", 300, 1);
+    checkPassages("0123...", "0123456789a", 4, 1);
+  }
+
+  @Test
+  public void markersOutsideValue() {
+    checkPassages("0123456789a", "0123456789a", 300, 1, new OffsetRange(100, 200));
+  }
+
+  @Test
+  public void twoPassages() {
+    checkPassages(
+        "0>12<3...|...6>78<9...",
+        "0123456789a",
+        4,
+        2,
+        new OffsetRange(1, 3),
+        new OffsetRange(7, 9));
+  }
+
+  @Test
+  public void emptyRanges() {
+    // The highlight is covered only by empty ranges, so it is omitted;
+    // the first non-empty range is taken as the default passage instead.
+    checkPassages(
+        "6789...",
+        "0123456789a",
+        4,
+        2,
+        ranges(new OffsetRange(0, 1)),
+        ranges(new OffsetRange(0, 0), new OffsetRange(2, 2), new OffsetRange(6, 11)));
+  }
+
+  @Test
+  public void passageScoring() {
+    // More highlights per passage -> better passage
+    checkPassages(
+        ">01<>23<...",
+        "0123456789a",
+        4,
+        1,
+        new OffsetRange(0, 2),
+        new OffsetRange(2, 4),
+        new OffsetRange(8, 10));
+
+    checkPassages(
+        "...>01<23>45<67>89<...",
+        "__________0123456789a__________",
+        10,
+        1,
+        new OffsetRange(10, 12),
+        new OffsetRange(14, 16),
+        new OffsetRange(18, 20));
+
+    // ...if tied, the one with longer highlight length overall.
+    checkPassages(
+        "...6>789<...", "0123456789a", 4, 1, new OffsetRange(0, 2), new OffsetRange(7, 10));
+
+    // ...if tied, the first one in order.
+    checkPassages(">01<23...", "0123456789a", 4, 1, new OffsetRange(0, 2), new OffsetRange(8, 10));
+  }
+
+  @Test
+  public void rangeWindows() {
+    // Add constraint windows to split the three highlights.
+    checkPassages(
+        "..._______>01<2",
+        "__________0123456789a__________",
+        10,
+        3,
+        ranges(new OffsetRange(10, 12), new OffsetRange(14, 16), new OffsetRange(18, 20)),
+        ranges(new OffsetRange(0, 13)));
+
+    checkPassages(
+        ">89<a_______...",
+        "__________0123456789a__________",
+        10,
+        3,
+        ranges(new OffsetRange(10, 12), new OffsetRange(14, 16), new OffsetRange(18, 20)),
+        ranges(new OffsetRange(18, Integer.MAX_VALUE)));
+
+    checkPassages(
+        "...________>01<|23>45<67|>89<a_______...",
+        "__________0123456789a__________",
+        10,
+        3,
+        ranges(new OffsetRange(10, 12), new OffsetRange(14, 16), new OffsetRange(18, 20)),
+        ranges(
+            new OffsetRange(0, 12),
+            new OffsetRange(12, 18),
+            new OffsetRange(18, Integer.MAX_VALUE)));
+  }
+
+  @Test
+  public void randomizedSanityCheck() {
+    PassageSelector selector = new PassageSelector();
+    PassageFormatter formatter = new PassageFormatter("...", ">", "<");
+    ArrayList<OffsetRange> highlights = new ArrayList<>();
+    ArrayList<OffsetRange> ranges = new ArrayList<>();
+    for (int i = 0; i < 5000; i++) {
+      String value =
+          randomBoolean()
+              ? randomAsciiLettersOfLengthBetween(0, 100)
+              : randomRealisticUnicodeOfCodepointLengthBetween(0, 1000);
+
+      ranges.clear();
+      highlights.clear();
+      for (int j = randomIntBetween(0, 10); --j >= 0; ) {
+        int from = randomIntBetween(0, value.length());
+        highlights.add(new OffsetRange(from, from + randomIntBetween(1, 10)));
+      }
+
+      int charWindow = randomIntBetween(1, 100);
+      int maxPassages = randomIntBetween(1, 10);
+
+      if (randomIntBetween(0, 5) == 0) {
+        int increment = value.length() / 10;
+        for (int c = randomIntBetween(0, 20), start = 0; --c >= 0; ) {
+          int step = randomIntBetween(0, increment);
+          ranges.add(new OffsetRange(start, start + step));
+          start += step + randomIntBetween(0, 3);
+        }
+      } else {
+        ranges.add(new OffsetRange(0, value.length()));
+      }
+
+      // Just make sure there are no exceptions.
+      List<Passage> passages =
+          selector.pickBest(value, highlights, charWindow, maxPassages, ranges);
+      formatter.format(value, passages, ranges);
+    }
+  }
+
+  private void checkPassages(
+      String expected, String value, int charWindow, int maxPassages, OffsetRange... highlights) {
+    checkPassages(
+        expected,
+        value,
+        charWindow,
+        maxPassages,
+        highlights,
+        ranges(new OffsetRange(0, value.length())));
+  }
+
+  private void checkPassages(
+      String expected,
+      String value,
+      int charWindow,
+      int maxPassages,
+      OffsetRange[] highlights,
+      OffsetRange[] ranges) {
+    String result = getPassages(value, charWindow, maxPassages, highlights, ranges);
+    if (!Objects.equals(result, expected)) {
+      System.out.println("Value:  " + value);
+      System.out.println("Result: " + result);
+      System.out.println("Expect: " + expected);
+    }
+    assertThat(result, Matchers.equalTo(expected));
+  }
+
+  protected String getPassages(
+      String value,
+      int charWindow,
+      int maxPassages,
+      OffsetRange[] highlights,
+      OffsetRange[] ranges) {
+    PassageFormatter passageFormatter = new PassageFormatter("...", ">", "<");
+    PassageSelector selector = new PassageSelector();
+    List<OffsetRange> hlist = Arrays.asList(highlights);
+    List<OffsetRange> rangeList = Arrays.asList(ranges);
+    List<Passage> passages = selector.pickBest(value, hlist, charWindow, maxPassages, rangeList);
+    return String.join("|", passageFormatter.format(value, passages, rangeList));
+  }
+
+  protected OffsetRange[] ranges(OffsetRange... offsets) {
+    return offsets;
+  }
+}
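For orientation, here is how the selector and formatter exercised above compose outside the test harness, mirroring the `getPassages` helper (the class name is illustrative; the expected output matches the `twoPassages` case):

```java
import java.util.Arrays;
import java.util.List;

import org.apache.lucene.search.matchhighlight.OffsetRange;
import org.apache.lucene.search.matchhighlight.Passage;
import org.apache.lucene.search.matchhighlight.PassageFormatter;
import org.apache.lucene.search.matchhighlight.PassageSelector;

public class PassageSelectorExample {
  public static void main(String[] args) {
    String value = "0123456789a";
    // Two highlighted offset ranges within the value.
    List<OffsetRange> highlights = Arrays.asList(new OffsetRange(1, 3), new OffsetRange(7, 9));
    // Permitted passage windows; here, the whole value.
    List<OffsetRange> ranges = Arrays.asList(new OffsetRange(0, value.length()));

    // Pick at most 2 passages of up to 4 characters each.
    List<Passage> passages = new PassageSelector().pickBest(value, highlights, 4, 2, ranges);

    PassageFormatter formatter = new PassageFormatter("...", ">", "<");
    // Prints: 0>12<3...|...6>78<9...
    System.out.println(String.join("|", formatter.format(value, passages, ranges)));
  }
}
```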
diff --git a/lucene/highlighter/src/test/org/apache/lucene/search/uhighlight/TestUnifiedHighlighterMTQ.java b/lucene/highlighter/src/test/org/apache/lucene/search/uhighlight/TestUnifiedHighlighterMTQ.java
index 098a89b..e0b5deb 100644
--- a/lucene/highlighter/src/test/org/apache/lucene/search/uhighlight/TestUnifiedHighlighterMTQ.java
+++ b/lucene/highlighter/src/test/org/apache/lucene/search/uhighlight/TestUnifiedHighlighterMTQ.java
@@ -240,7 +240,7 @@
     ir.close();
   }
 
-  public void testOneFuzzy() throws Exception {
+  public void testFuzzy() throws Exception {
     RandomIndexWriter iw = new RandomIndexWriter(random(), dir, indexAnalyzer);
 
     Field body = new Field("body", "", fieldType);
@@ -274,6 +274,15 @@
     assertEquals("This is a <b>test</b>.", snippets[0]);
     assertEquals("<b>Test</b> a one sentence document.", snippets[1]);
 
+    // with zero max edits (the fuzzy query then matches only the exact term)
+    query = new FuzzyQuery(new Term("body", "test"), 0, 2);
+    topDocs = searcher.search(query, 10, Sort.INDEXORDER);
+    assertEquals(2, topDocs.totalHits.value);
+    snippets = highlighter.highlight("body", query, topDocs);
+    assertEquals(2, snippets.length);
+    assertEquals("This is a <b>test</b>.", snippets[0]);
+    assertEquals("<b>Test</b> a one sentence document.", snippets[1]);
+
     // wrong field
     highlighter.setFieldMatcher(null);//default
     BooleanQuery bq = new BooleanQuery.Builder()
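A brief aside on the zero-max-edits case added above: a `FuzzyQuery` with `maxEdits = 0` degenerates to an exact term match, which is the edge the new assertions pin down for the highlighter. A minimal sketch of that query behavior (the class name is illustrative, not part of the patch):

```java
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.document.TextField;
import org.apache.lucene.index.DirectoryReader;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.index.Term;
import org.apache.lucene.search.FuzzyQuery;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.store.ByteBuffersDirectory;
import org.apache.lucene.store.Directory;

public class ZeroEditFuzzyExample {
  public static void main(String[] args) throws Exception {
    try (Directory dir = new ByteBuffersDirectory();
         IndexWriter w = new IndexWriter(dir, new IndexWriterConfig(new StandardAnalyzer()))) {
      Document doc = new Document();
      doc.add(new TextField("body", "this is a test", Field.Store.YES));
      w.addDocument(doc);
      try (DirectoryReader reader = DirectoryReader.open(w)) {
        IndexSearcher searcher = new IndexSearcher(reader);
        // maxEdits = 0: only the exact term "test" matches.
        System.out.println(searcher.count(new FuzzyQuery(new Term("body", "test"), 0, 2))); // 1
        // "tost" is one edit away, so with maxEdits = 0 it does not match.
        System.out.println(searcher.count(new FuzzyQuery(new Term("body", "tost"), 0, 2))); // 0
      }
    }
  }
}
```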
diff --git a/lucene/ivy-ignore-conflicts.properties b/lucene/ivy-ignore-conflicts.properties
deleted file mode 100644
index df3a2e5..0000000
--- a/lucene/ivy-ignore-conflicts.properties
+++ /dev/null
@@ -1,14 +0,0 @@
-# The /org/name keys in this file must be kept lexically sorted.
-# Blank lines, comment lines, and keys that aren't in /org/name format are ignored
-# when the lexical sort check is performed by the ant check-lib-versions target.
-#
-# The format is:
-#
-# /org/name = <version1> [, <version2> [ ... ] ]
-#
-# where each <versionX> is an indirect dependency version to ignore (i.e., not
-# trigger a conflict) when the ant check-lib-versions target is run.
-
-/com.google.guava/guava = 16.0.1
-/org.ow2.asm/asm = 5.0_BETA
-
diff --git a/lucene/ivy-versions.properties b/lucene/ivy-versions.properties
index 6e90ffd..9057499 100644
--- a/lucene/ivy-versions.properties
+++ b/lucene/ivy-versions.properties
@@ -47,9 +47,6 @@
 com.sun.jersey.version = 1.19
 /com.sun.jersey/jersey-servlet = ${com.sun.jersey.version}
 
-/com.sun.mail/gimap = 1.5.1
-/com.sun.mail/javax.mail = 1.5.1
-
 /com.tdunning/t-digest = 3.1
 /com.vaadin.external.google/android-json = 0.0.20131108.vaadin1
 /commons-cli/commons-cli = 1.4
@@ -96,7 +93,6 @@
 
 /io.sgr/s2-geometry-library-java = 1.0.0
 
-/javax.activation/activation = 1.1.1
 /javax.servlet/javax.servlet-api = 3.1.0
 /junit/junit = 4.12
 
@@ -141,8 +137,6 @@
 /org.apache.curator/curator-framework = ${org.apache.curator.version}
 /org.apache.curator/curator-recipes = ${org.apache.curator.version}
 
-/org.apache.derby/derby = 10.9.1.0
-
 org.apache.hadoop.version = 3.2.0
 /org.apache.hadoop/hadoop-annotations = ${org.apache.hadoop.version}
 /org.apache.hadoop/hadoop-auth = ${org.apache.hadoop.version}
@@ -290,9 +284,8 @@
 /org.gagravarr/vorbis-java-core = ${org.gagravarr.vorbis.java.version}
 /org.gagravarr/vorbis-java-tika = ${org.gagravarr.vorbis.java.version}
 
-/org.hamcrest/hamcrest-core = 1.3
+/org.hamcrest/hamcrest = 2.2
 
-/org.hsqldb/hsqldb = 2.4.0
 /org.jdom/jdom2 = 2.0.6
 
 /org.jsoup/jsoup = 1.12.1
diff --git a/lucene/join/build.xml b/lucene/join/build.xml
deleted file mode 100644
index c411dbe..0000000
--- a/lucene/join/build.xml
+++ /dev/null
@@ -1,27 +0,0 @@
-<?xml version="1.0"?>
-<!--
-   Licensed to the Apache Software Foundation (ASF) under one
-   or more contributor license agreements.  See the NOTICE file
-   distributed with this work for additional information
-   regarding copyright ownership.  The ASF licenses this file
-   to you under the Apache License, Version 2.0 (the
-   "License"); you may not use this file except in compliance
-   with the License.  You may obtain a copy of the License at
-
-     http://www.apache.org/licenses/LICENSE-2.0
-
-   Unless required by applicable law or agreed to in writing,
-   software distributed under the License is distributed on an
-   "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-   KIND, either express or implied.  See the License for the
-   specific language governing permissions and limitations
-   under the License.
--->
-<project name="join" default="default">
-  <description>
-    Index-time and Query-time joins for normalized content
-  </description>
-
-  <import file="../module-build.xml"/>
-
-</project>
diff --git a/lucene/join/ivy.xml b/lucene/join/ivy.xml
deleted file mode 100644
index 09ae9bd..0000000
--- a/lucene/join/ivy.xml
+++ /dev/null
@@ -1,21 +0,0 @@
-<!--
-   Licensed to the Apache Software Foundation (ASF) under one
-   or more contributor license agreements.  See the NOTICE file
-   distributed with this work for additional information
-   regarding copyright ownership.  The ASF licenses this file
-   to you under the Apache License, Version 2.0 (the
-   "License"); you may not use this file except in compliance
-   with the License.  You may obtain a copy of the License at
-
-     http://www.apache.org/licenses/LICENSE-2.0
-
-   Unless required by applicable law or agreed to in writing,
-   software distributed under the License is distributed on an
-   "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-   KIND, either express or implied.  See the License for the
-   specific language governing permissions and limitations
-   under the License.    
--->
-<ivy-module version="2.0">
-  <info organisation="org.apache.lucene" module="join"/>
-</ivy-module>
diff --git a/lucene/licenses/ant-1.8.2.jar.sha1 b/lucene/licenses/ant-1.8.2.jar.sha1
deleted file mode 100644
index 564db78..0000000
--- a/lucene/licenses/ant-1.8.2.jar.sha1
+++ /dev/null
@@ -1 +0,0 @@
-fc33bf7cd8c5309dd7b81228e8626515ee42efd9
diff --git a/lucene/licenses/ant-LICENSE-ASL.txt b/lucene/licenses/ant-LICENSE-ASL.txt
deleted file mode 100644
index ab3182e..0000000
--- a/lucene/licenses/ant-LICENSE-ASL.txt
+++ /dev/null
@@ -1,272 +0,0 @@
-/*
- *                                 Apache License
- *                           Version 2.0, January 2004
- *                        http://www.apache.org/licenses/
- *
- *   TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
- *
- *   1. Definitions.
- *
- *      "License" shall mean the terms and conditions for use, reproduction,
- *      and distribution as defined by Sections 1 through 9 of this document.
- *
- *      "Licensor" shall mean the copyright owner or entity authorized by
- *      the copyright owner that is granting the License.
- *
- *      "Legal Entity" shall mean the union of the acting entity and all
- *      other entities that control, are controlled by, or are under common
- *      control with that entity. For the purposes of this definition,
- *      "control" means (i) the power, direct or indirect, to cause the
- *      direction or management of such entity, whether by contract or
- *      otherwise, or (ii) ownership of fifty percent (50%) or more of the
- *      outstanding shares, or (iii) beneficial ownership of such entity.
- *
- *      "You" (or "Your") shall mean an individual or Legal Entity
- *      exercising permissions granted by this License.
- *
- *      "Source" form shall mean the preferred form for making modifications,
- *      including but not limited to software source code, documentation
- *      source, and configuration files.
- *
- *      "Object" form shall mean any form resulting from mechanical
- *      transformation or translation of a Source form, including but
- *      not limited to compiled object code, generated documentation,
- *      and conversions to other media types.
- *
- *      "Work" shall mean the work of authorship, whether in Source or
- *      Object form, made available under the License, as indicated by a
- *      copyright notice that is included in or attached to the work
- *      (an example is provided in the Appendix below).
- *
- *      "Derivative Works" shall mean any work, whether in Source or Object
- *      form, that is based on (or derived from) the Work and for which the
- *      editorial revisions, annotations, elaborations, or other modifications
- *      represent, as a whole, an original work of authorship. For the purposes
- *      of this License, Derivative Works shall not include works that remain
- *      separable from, or merely link (or bind by name) to the interfaces of,
- *      the Work and Derivative Works thereof.
- *
- *      "Contribution" shall mean any work of authorship, including
- *      the original version of the Work and any modifications or additions
- *      to that Work or Derivative Works thereof, that is intentionally
- *      submitted to Licensor for inclusion in the Work by the copyright owner
- *      or by an individual or Legal Entity authorized to submit on behalf of
- *      the copyright owner. For the purposes of this definition, "submitted"
- *      means any form of electronic, verbal, or written communication sent
- *      to the Licensor or its representatives, including but not limited to
- *      communication on electronic mailing lists, source code control systems,
- *      and issue tracking systems that are managed by, or on behalf of, the
- *      Licensor for the purpose of discussing and improving the Work, but
- *      excluding communication that is conspicuously marked or otherwise
- *      designated in writing by the copyright owner as "Not a Contribution."
- *
- *      "Contributor" shall mean Licensor and any individual or Legal Entity
- *      on behalf of whom a Contribution has been received by Licensor and
- *      subsequently incorporated within the Work.
- *
- *   2. Grant of Copyright License. Subject to the terms and conditions of
- *      this License, each Contributor hereby grants to You a perpetual,
- *      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
- *      copyright license to reproduce, prepare Derivative Works of,
- *      publicly display, publicly perform, sublicense, and distribute the
- *      Work and such Derivative Works in Source or Object form.
- *
- *   3. Grant of Patent License. Subject to the terms and conditions of
- *      this License, each Contributor hereby grants to You a perpetual,
- *      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
- *      (except as stated in this section) patent license to make, have made,
- *      use, offer to sell, sell, import, and otherwise transfer the Work,
- *      where such license applies only to those patent claims licensable
- *      by such Contributor that are necessarily infringed by their
- *      Contribution(s) alone or by combination of their Contribution(s)
- *      with the Work to which such Contribution(s) was submitted. If You
- *      institute patent litigation against any entity (including a
- *      cross-claim or counterclaim in a lawsuit) alleging that the Work
- *      or a Contribution incorporated within the Work constitutes direct
- *      or contributory patent infringement, then any patent licenses
- *      granted to You under this License for that Work shall terminate
- *      as of the date such litigation is filed.
- *
- *   4. Redistribution. You may reproduce and distribute copies of the
- *      Work or Derivative Works thereof in any medium, with or without
- *      modifications, and in Source or Object form, provided that You
- *      meet the following conditions:
- *
- *      (a) You must give any other recipients of the Work or
- *          Derivative Works a copy of this License; and
- *
- *      (b) You must cause any modified files to carry prominent notices
- *          stating that You changed the files; and
- *
- *      (c) You must retain, in the Source form of any Derivative Works
- *          that You distribute, all copyright, patent, trademark, and
- *          attribution notices from the Source form of the Work,
- *          excluding those notices that do not pertain to any part of
- *          the Derivative Works; and
- *
- *      (d) If the Work includes a "NOTICE" text file as part of its
- *          distribution, then any Derivative Works that You distribute must
- *          include a readable copy of the attribution notices contained
- *          within such NOTICE file, excluding those notices that do not
- *          pertain to any part of the Derivative Works, in at least one
- *          of the following places: within a NOTICE text file distributed
- *          as part of the Derivative Works; within the Source form or
- *          documentation, if provided along with the Derivative Works; or,
- *          within a display generated by the Derivative Works, if and
- *          wherever such third-party notices normally appear. The contents
- *          of the NOTICE file are for informational purposes only and
- *          do not modify the License. You may add Your own attribution
- *          notices within Derivative Works that You distribute, alongside
- *          or as an addendum to the NOTICE text from the Work, provided
- *          that such additional attribution notices cannot be construed
- *          as modifying the License.
- *
- *      You may add Your own copyright statement to Your modifications and
- *      may provide additional or different license terms and conditions
- *      for use, reproduction, or distribution of Your modifications, or
- *      for any such Derivative Works as a whole, provided Your use,
- *      reproduction, and distribution of the Work otherwise complies with
- *      the conditions stated in this License.
- *
- *   5. Submission of Contributions. Unless You explicitly state otherwise,
- *      any Contribution intentionally submitted for inclusion in the Work
- *      by You to the Licensor shall be under the terms and conditions of
- *      this License, without any additional terms or conditions.
- *      Notwithstanding the above, nothing herein shall supersede or modify
- *      the terms of any separate license agreement you may have executed
- *      with Licensor regarding such Contributions.
- *
- *   6. Trademarks. This License does not grant permission to use the trade
- *      names, trademarks, service marks, or product names of the Licensor,
- *      except as required for reasonable and customary use in describing the
- *      origin of the Work and reproducing the content of the NOTICE file.
- *
- *   7. Disclaimer of Warranty. Unless required by applicable law or
- *      agreed to in writing, Licensor provides the Work (and each
- *      Contributor provides its Contributions) on an "AS IS" BASIS,
- *      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
- *      implied, including, without limitation, any warranties or conditions
- *      of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
- *      PARTICULAR PURPOSE. You are solely responsible for determining the
- *      appropriateness of using or redistributing the Work and assume any
- *      risks associated with Your exercise of permissions under this License.
- *
- *   8. Limitation of Liability. In no event and under no legal theory,
- *      whether in tort (including negligence), contract, or otherwise,
- *      unless required by applicable law (such as deliberate and grossly
- *      negligent acts) or agreed to in writing, shall any Contributor be
- *      liable to You for damages, including any direct, indirect, special,
- *      incidental, or consequential damages of any character arising as a
- *      result of this License or out of the use or inability to use the
- *      Work (including but not limited to damages for loss of goodwill,
- *      work stoppage, computer failure or malfunction, or any and all
- *      other commercial damages or losses), even if such Contributor
- *      has been advised of the possibility of such damages.
- *
- *   9. Accepting Warranty or Additional Liability. While redistributing
- *      the Work or Derivative Works thereof, You may choose to offer,
- *      and charge a fee for, acceptance of support, warranty, indemnity,
- *      or other liability obligations and/or rights consistent with this
- *      License. However, in accepting such obligations, You may act only
- *      on Your own behalf and on Your sole responsibility, not on behalf
- *      of any other Contributor, and only if You agree to indemnify,
- *      defend, and hold each Contributor harmless for any liability
- *      incurred by, or claims asserted against, such Contributor by reason
- *      of your accepting any such warranty or additional liability.
- *
- *   END OF TERMS AND CONDITIONS
- *
- *   APPENDIX: How to apply the Apache License to your work.
- *
- *      To apply the Apache License to your work, attach the following
- *      boilerplate notice, with the fields enclosed by brackets "[]"
- *      replaced with your own identifying information. (Don't include
- *      the brackets!)  The text should be enclosed in the appropriate
- *      comment syntax for the file format. We also recommend that a
- *      file or class name and description of purpose be included on the
- *      same "printed page" as the copyright notice for easier
- *      identification within third-party archives.
- *
- *   Copyright [yyyy] [name of copyright owner]
- *
- *   Licensed under the Apache License, Version 2.0 (the "License");
- *   you may not use this file except in compliance with the License.
- *   You may obtain a copy of the License at
- *
- *       http://www.apache.org/licenses/LICENSE-2.0
- *
- *   Unless required by applicable law or agreed to in writing, software
- *   distributed under the License is distributed on an "AS IS" BASIS,
- *   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- *   See the License for the specific language governing permissions and
- *   limitations under the License.
- */
-
-W3C® SOFTWARE NOTICE AND LICENSE
-http://www.w3.org/Consortium/Legal/2002/copyright-software-20021231
-
-This work (and included software, documentation such as READMEs, or other
-related items) is being provided by the copyright holders under the following
-license. By obtaining, using and/or copying this work, you (the licensee) agree
-that you have read, understood, and will comply with the following terms and
-conditions.
-
-Permission to copy, modify, and distribute this software and its documentation,
-with or without modification, for any purpose and without fee or royalty is
-hereby granted, provided that you include the following on ALL copies of the
-software and documentation or portions thereof, including modifications:
-
-  1. The full text of this NOTICE in a location viewable to users of the
-     redistributed or derivative work. 
-  2. Any pre-existing intellectual property disclaimers, notices, or terms
-     and conditions. If none exist, the W3C Software Short Notice should be
-     included (hypertext is preferred, text is permitted) within the body
-     of any redistributed or derivative code.
-  3. Notice of any changes or modifications to the files, including the date
-     changes were made. (We recommend you provide URIs to the location from
-     which the code is derived.)
-     
-THIS SOFTWARE AND DOCUMENTATION IS PROVIDED "AS IS," AND COPYRIGHT HOLDERS MAKE
-NO REPRESENTATIONS OR WARRANTIES, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED
-TO, WARRANTIES OF MERCHANTABILITY OR FITNESS FOR ANY PARTICULAR PURPOSE OR THAT
-THE USE OF THE SOFTWARE OR DOCUMENTATION WILL NOT INFRINGE ANY THIRD PARTY
-PATENTS, COPYRIGHTS, TRADEMARKS OR OTHER RIGHTS.
-
-COPYRIGHT HOLDERS WILL NOT BE LIABLE FOR ANY DIRECT, INDIRECT, SPECIAL OR
-CONSEQUENTIAL DAMAGES ARISING OUT OF ANY USE OF THE SOFTWARE OR DOCUMENTATION.
-
-The name and trademarks of copyright holders may NOT be used in advertising or
-publicity pertaining to the software without specific, written prior permission.
-Title to copyright in this software and any associated documentation will at
-all times remain with copyright holders.
-
-____________________________________
-
-This formulation of W3C's notice and license became active on December 31 2002.
-This version removes the copyright ownership notice such that this license can
-be used with materials other than those owned by the W3C, reflects that ERCIM
-is now a host of the W3C, includes references to this specific dated version of
-the license, and removes the ambiguous grant of "use". Otherwise, this version
-is the same as the previous version and is written so as to preserve the Free
-Software Foundation's assessment of GPL compatibility and OSI's certification
-under the Open Source Definition. Please see our Copyright FAQ for common
-questions about using materials from our site, including specific terms and
-conditions for packages like libwww, Amaya, and Jigsaw. Other questions about
-this notice can be directed to site-policy@w3.org.
- 
-Joseph Reagle <site-policy@w3.org> 
-
-This license came from: http://www.megginson.com/SAX/copying.html
-  However please note future versions of SAX may be covered 
-  under http://saxproject.org/?selected=pd
-
-SAX2 is Free!
-
-I hereby abandon any property rights to SAX 2.0 (the Simple API for
-XML), and release all of the SAX 2.0 source code, compiled code, and
-documentation contained in this distribution into the Public Domain.
-SAX comes with NO WARRANTY or guarantee of fitness for any
-purpose.
-
-David Megginson, david@megginson.com
-2000-05-05
diff --git a/lucene/licenses/ant-NOTICE.txt b/lucene/licenses/ant-NOTICE.txt
deleted file mode 100644
index 4c88cc6..0000000
--- a/lucene/licenses/ant-NOTICE.txt
+++ /dev/null
@@ -1,26 +0,0 @@
-   =========================================================================
-   ==  NOTICE file corresponding to the section 4 d of                    ==
-   ==  the Apache License, Version 2.0,                                   ==
-   ==  in this case for the Apache Ant distribution.                      ==
-   =========================================================================
-
-   Apache Ant
-   Copyright 1999-2008 The Apache Software Foundation
-
-   This product includes software developed by
-   The Apache Software Foundation (http://www.apache.org/).
-
-   This product includes also software developed by :
-     - the W3C consortium (http://www.w3c.org) ,
-     - the SAX project (http://www.saxproject.org)
-
-   The <sync> task is based on code Copyright (c) 2002, Landmark
-   Graphics Corp that has been kindly donated to the Apache Software
-   Foundation.
-
-   Portions of this software were originally based on the following:
-     - software copyright (c) 1999, IBM Corporation., http://www.ibm.com.
-     - software copyright (c) 1999, Sun Microsystems., http://www.sun.com.
-     - voluntary contributions made by Paul Eng on behalf of the 
-       Apache Software Foundation that were originally developed at iClick, Inc.,
-       software copyright (c) 1999.
diff --git a/lucene/licenses/hamcrest-2.2.jar.sha1 b/lucene/licenses/hamcrest-2.2.jar.sha1
new file mode 100644
index 0000000..820b1fb
--- /dev/null
+++ b/lucene/licenses/hamcrest-2.2.jar.sha1
@@ -0,0 +1 @@
+1820c0968dba3a11a1b30669bb1f01978a91dedc
diff --git a/lucene/licenses/hamcrest-core-LICENSE-BSD.txt b/lucene/licenses/hamcrest-LICENSE-BSD.txt
similarity index 100%
rename from lucene/licenses/hamcrest-core-LICENSE-BSD.txt
rename to lucene/licenses/hamcrest-LICENSE-BSD.txt
diff --git a/lucene/licenses/hamcrest-core-NOTICE.txt b/lucene/licenses/hamcrest-NOTICE.txt
similarity index 100%
rename from lucene/licenses/hamcrest-core-NOTICE.txt
rename to lucene/licenses/hamcrest-NOTICE.txt
diff --git a/lucene/licenses/hamcrest-core-1.3.jar.sha1 b/lucene/licenses/hamcrest-core-1.3.jar.sha1
deleted file mode 100644
index 67add77..0000000
--- a/lucene/licenses/hamcrest-core-1.3.jar.sha1
+++ /dev/null
@@ -1 +0,0 @@
-42a25dc3219429f0e5d060061f71acb49bf010a0
diff --git a/lucene/licenses/ivy-2.4.0.jar.sha1 b/lucene/licenses/ivy-2.4.0.jar.sha1
deleted file mode 100644
index 3863b25..0000000
--- a/lucene/licenses/ivy-2.4.0.jar.sha1
+++ /dev/null
@@ -1 +0,0 @@
-5abe4c24bbe992a9ac07ca563d5bd3e8d569e9ed
diff --git a/lucene/licenses/ivy-LICENSE-ASL.txt b/lucene/licenses/ivy-LICENSE-ASL.txt
deleted file mode 100644
index eb06170..0000000
--- a/lucene/licenses/ivy-LICENSE-ASL.txt
+++ /dev/null
@@ -1,258 +0,0 @@
-                                 Apache License
-                           Version 2.0, January 2004
-                        http://www.apache.org/licenses/
-
-   TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
-
-   1. Definitions.
-
-      "License" shall mean the terms and conditions for use, reproduction,
-      and distribution as defined by Sections 1 through 9 of this document.
-
-      "Licensor" shall mean the copyright owner or entity authorized by
-      the copyright owner that is granting the License.
-
-      "Legal Entity" shall mean the union of the acting entity and all
-      other entities that control, are controlled by, or are under common
-      control with that entity. For the purposes of this definition,
-      "control" means (i) the power, direct or indirect, to cause the
-      direction or management of such entity, whether by contract or
-      otherwise, or (ii) ownership of fifty percent (50%) or more of the
-      outstanding shares, or (iii) beneficial ownership of such entity.
-
-      "You" (or "Your") shall mean an individual or Legal Entity
-      exercising permissions granted by this License.
-
-      "Source" form shall mean the preferred form for making modifications,
-      including but not limited to software source code, documentation
-      source, and configuration files.
-
-      "Object" form shall mean any form resulting from mechanical
-      transformation or translation of a Source form, including but
-      not limited to compiled object code, generated documentation,
-      and conversions to other media types.
-
-      "Work" shall mean the work of authorship, whether in Source or
-      Object form, made available under the License, as indicated by a
-      copyright notice that is included in or attached to the work
-      (an example is provided in the Appendix below).
-
-      "Derivative Works" shall mean any work, whether in Source or Object
-      form, that is based on (or derived from) the Work and for which the
-      editorial revisions, annotations, elaborations, or other modifications
-      represent, as a whole, an original work of authorship. For the purposes
-      of this License, Derivative Works shall not include works that remain
-      separable from, or merely link (or bind by name) to the interfaces of,
-      the Work and Derivative Works thereof.
-
-      "Contribution" shall mean any work of authorship, including
-      the original version of the Work and any modifications or additions
-      to that Work or Derivative Works thereof, that is intentionally
-      submitted to Licensor for inclusion in the Work by the copyright owner
-      or by an individual or Legal Entity authorized to submit on behalf of
-      the copyright owner. For the purposes of this definition, "submitted"
-      means any form of electronic, verbal, or written communication sent
-      to the Licensor or its representatives, including but not limited to
-      communication on electronic mailing lists, source code control systems,
-      and issue tracking systems that are managed by, or on behalf of, the
-      Licensor for the purpose of discussing and improving the Work, but
-      excluding communication that is conspicuously marked or otherwise
-      designated in writing by the copyright owner as "Not a Contribution."
-
-      "Contributor" shall mean Licensor and any individual or Legal Entity
-      on behalf of whom a Contribution has been received by Licensor and
-      subsequently incorporated within the Work.
-
-   2. Grant of Copyright License. Subject to the terms and conditions of
-      this License, each Contributor hereby grants to You a perpetual,
-      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
-      copyright license to reproduce, prepare Derivative Works of,
-      publicly display, publicly perform, sublicense, and distribute the
-      Work and such Derivative Works in Source or Object form.
-
-   3. Grant of Patent License. Subject to the terms and conditions of
-      this License, each Contributor hereby grants to You a perpetual,
-      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
-      (except as stated in this section) patent license to make, have made,
-      use, offer to sell, sell, import, and otherwise transfer the Work,
-      where such license applies only to those patent claims licensable
-      by such Contributor that are necessarily infringed by their
-      Contribution(s) alone or by combination of their Contribution(s)
-      with the Work to which such Contribution(s) was submitted. If You
-      institute patent litigation against any entity (including a
-      cross-claim or counterclaim in a lawsuit) alleging that the Work
-      or a Contribution incorporated within the Work constitutes direct
-      or contributory patent infringement, then any patent licenses
-      granted to You under this License for that Work shall terminate
-      as of the date such litigation is filed.
-
-   4. Redistribution. You may reproduce and distribute copies of the
-      Work or Derivative Works thereof in any medium, with or without
-      modifications, and in Source or Object form, provided that You
-      meet the following conditions:
-
-      (a) You must give any other recipients of the Work or
-          Derivative Works a copy of this License; and
-
-      (b) You must cause any modified files to carry prominent notices
-          stating that You changed the files; and
-
-      (c) You must retain, in the Source form of any Derivative Works
-          that You distribute, all copyright, patent, trademark, and
-          attribution notices from the Source form of the Work,
-          excluding those notices that do not pertain to any part of
-          the Derivative Works; and
-
-      (d) If the Work includes a "NOTICE" text file as part of its
-          distribution, then any Derivative Works that You distribute must
-          include a readable copy of the attribution notices contained
-          within such NOTICE file, excluding those notices that do not
-          pertain to any part of the Derivative Works, in at least one
-          of the following places: within a NOTICE text file distributed
-          as part of the Derivative Works; within the Source form or
-          documentation, if provided along with the Derivative Works; or,
-          within a display generated by the Derivative Works, if and
-          wherever such third-party notices normally appear. The contents
-          of the NOTICE file are for informational purposes only and
-          do not modify the License. You may add Your own attribution
-          notices within Derivative Works that You distribute, alongside
-          or as an addendum to the NOTICE text from the Work, provided
-          that such additional attribution notices cannot be construed
-          as modifying the License.
-
-      You may add Your own copyright statement to Your modifications and
-      may provide additional or different license terms and conditions
-      for use, reproduction, or distribution of Your modifications, or
-      for any such Derivative Works as a whole, provided Your use,
-      reproduction, and distribution of the Work otherwise complies with
-      the conditions stated in this License.
-
-   5. Submission of Contributions. Unless You explicitly state otherwise,
-      any Contribution intentionally submitted for inclusion in the Work
-      by You to the Licensor shall be under the terms and conditions of
-      this License, without any additional terms or conditions.
-      Notwithstanding the above, nothing herein shall supersede or modify
-      the terms of any separate license agreement you may have executed
-      with Licensor regarding such Contributions.
-
-   6. Trademarks. This License does not grant permission to use the trade
-      names, trademarks, service marks, or product names of the Licensor,
-      except as required for reasonable and customary use in describing the
-      origin of the Work and reproducing the content of the NOTICE file.
-
-   7. Disclaimer of Warranty. Unless required by applicable law or
-      agreed to in writing, Licensor provides the Work (and each
-      Contributor provides its Contributions) on an "AS IS" BASIS,
-      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
-      implied, including, without limitation, any warranties or conditions
-      of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
-      PARTICULAR PURPOSE. You are solely responsible for determining the
-      appropriateness of using or redistributing the Work and assume any
-      risks associated with Your exercise of permissions under this License.
-
-   8. Limitation of Liability. In no event and under no legal theory,
-      whether in tort (including negligence), contract, or otherwise,
-      unless required by applicable law (such as deliberate and grossly
-      negligent acts) or agreed to in writing, shall any Contributor be
-      liable to You for damages, including any direct, indirect, special,
-      incidental, or consequential damages of any character arising as a
-      result of this License or out of the use or inability to use the
-      Work (including but not limited to damages for loss of goodwill,
-      work stoppage, computer failure or malfunction, or any and all
-      other commercial damages or losses), even if such Contributor
-      has been advised of the possibility of such damages.
-
-   9. Accepting Warranty or Additional Liability. While redistributing
-      the Work or Derivative Works thereof, You may choose to offer,
-      and charge a fee for, acceptance of support, warranty, indemnity,
-      or other liability obligations and/or rights consistent with this
-      License. However, in accepting such obligations, You may act only
-      on Your own behalf and on Your sole responsibility, not on behalf
-      of any other Contributor, and only if You agree to indemnify,
-      defend, and hold each Contributor harmless for any liability
-      incurred by, or claims asserted against, such Contributor by reason
-      of your accepting any such warranty or additional liability.
-
-   END OF TERMS AND CONDITIONS
-
-   APPENDIX: How to apply the Apache License to your work.
-
-      To apply the Apache License to your work, attach the following
-      boilerplate notice, with the fields enclosed by brackets "[]"
-      replaced with your own identifying information. (Don't include
-      the brackets!)  The text should be enclosed in the appropriate
-      comment syntax for the file format. We also recommend that a
-      file or class name and description of purpose be included on the
-      same "printed page" as the copyright notice for easier
-      identification within third-party archives.
-
-   Copyright [yyyy] [name of copyright owner]
-
-   Licensed under the Apache License, Version 2.0 (the "License");
-   you may not use this file except in compliance with the License.
-   You may obtain a copy of the License at
-
-       http://www.apache.org/licenses/LICENSE-2.0
-
-   Unless required by applicable law or agreed to in writing, software
-   distributed under the License is distributed on an "AS IS" BASIS,
-   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-   See the License for the specific language governing permissions and
-   limitations under the License.
-
-
-------------------------------------------------------------------------------
-License for JCraft JSch package
-------------------------------------------------------------------------------
-Copyright (c) 2002,2003,2004,2005,2006,2007 Atsuhiko Yamanaka, JCraft,Inc. 
-All rights reserved.
-
-Redistribution and use in source and binary forms, with or without
-modification, are permitted provided that the following conditions are met:
-
-  1. Redistributions of source code must retain the above copyright notice,
-     this list of conditions and the following disclaimer.
-
-  2. Redistributions in binary form must reproduce the above copyright 
-     notice, this list of conditions and the following disclaimer in 
-     the documentation and/or other materials provided with the distribution.
-
-  3. The names of the authors may not be used to endorse or promote products
-     derived from this software without specific prior written permission.
-
-THIS SOFTWARE IS PROVIDED ``AS IS'' AND ANY EXPRESSED OR IMPLIED WARRANTIES,
-INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND
-FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL JCRAFT,
-INC. OR ANY CONTRIBUTORS TO THIS SOFTWARE BE LIABLE FOR ANY DIRECT, INDIRECT,
-INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
-LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA,
-OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF
-LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING
-NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE,
-EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
- 
-
-------------------------------------------------------------------------------
-License for jQuery
-------------------------------------------------------------------------------
-Copyright (c) 2007 John Resig, http://jquery.com/
-
-Permission is hereby granted, free of charge, to any person obtaining
-a copy of this software and associated documentation files (the
-"Software"), to deal in the Software without restriction, including
-without limitation the rights to use, copy, modify, merge, publish,
-distribute, sublicense, and/or sell copies of the Software, and to
-permit persons to whom the Software is furnished to do so, subject to
-the following conditions:
-
-The above copyright notice and this permission notice shall be
-included in all copies or substantial portions of the Software.
-
-THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
-EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
-MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
-NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE
-LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
-OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
-WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
- 
\ No newline at end of file
diff --git a/lucene/licenses/ivy-NOTICE.txt b/lucene/licenses/ivy-NOTICE.txt
deleted file mode 100644
index 33d7e07..0000000
--- a/lucene/licenses/ivy-NOTICE.txt
+++ /dev/null
@@ -1,16 +0,0 @@
-Apache Ivy (TM)
-Copyright 2007-2013 The Apache Software Foundation
-
-This product includes software developed by
-The Apache Software Foundation (http://www.apache.org/).
-   
-Portions of Ivy were originally developed by
-Jayasoft SARL (http://www.jayasoft.fr/)
-and are licensed to the Apache Software Foundation under the
-"Software Grant License Agreement"
-   
-SSH and SFTP support is provided by the JCraft JSch package, 
-which is open source software, available under
-the terms of a BSD style license.  
-The original software and related information is available
-at http://www.jcraft.com/jsch/. 
\ No newline at end of file
diff --git a/lucene/luke/build.gradle b/lucene/luke/build.gradle
index 6e32b1b..9b6f47b 100644
--- a/lucene/luke/build.gradle
+++ b/lucene/luke/build.gradle
@@ -14,14 +14,28 @@
  * See the License for the specific language governing permissions and
  * limitations under the License.
  */
+import org.apache.tools.ant.taskdefs.condition.Os
+import org.apache.tools.ant.filters.*
+import java.nio.file.Files
 
 apply plugin: 'java-library'
 
 description = 'Luke - Lucene Toolbox'
 
+ext {
+  standaloneDistDir = file("$buildDir/${archivesBaseName}-${project.version}")
+}
+
+configurations {
+  standalone
+  implementation.extendsFrom standalone
+}
+
 dependencies {
   api project(':lucene:core')
 
+  implementation 'org.apache.logging.log4j:log4j-core'
+
   implementation project(':lucene:codecs')
   implementation project(':lucene:backward-codecs')
   implementation project(':lucene:analysis:common')
@@ -29,7 +43,123 @@
   implementation project(':lucene:queryparser')
   implementation project(':lucene:misc')
 
-  implementation 'org.apache.logging.log4j:log4j-core'
+  standalone project(":lucene:highlighter")
+  standalone project(':lucene:analysis:icu')
+  standalone project(':lucene:analysis:kuromoji')
+  standalone project(':lucene:analysis:morfologik')
+  standalone project(':lucene:analysis:nori')
+  standalone project(':lucene:analysis:opennlp')
+  standalone project(':lucene:analysis:phonetic')
+  standalone project(':lucene:analysis:smartcn')
+  standalone project(':lucene:analysis:stempel')
+  standalone project(':lucene:suggest')
 
   testImplementation project(':lucene:test-framework')
 }
+
+// Configure main class name for all JARs.
+tasks.withType(Jar) {
+  manifest {
+    attributes("Main-Class": "org.apache.lucene.luke.app.desktop.LukeMain")
+  }
+}
+
+// Configure the default JAR without any class path information
+// (this may actually be wrong - perhaps we should add the
+// "distribution" paths here).
+jar {
+  manifest {
+  }
+}
+
+// Configure "stand-alone" JAR with proper dependency classpath links.
+task standaloneJar(type: Jar) {
+  dependsOn classes
+
+  archiveFileName = "${archivesBaseName}-${project.version}-standalone.jar"
+
+  from(sourceSets.main.output)
+
+  // Manifest attributes are resolved eagerly and we can't access runtimeClasspath
+  // at configuration time, so defer computing the Class-Path until execution.
+  doFirst {
+    manifest {
+      attributes("Class-Path": configurations.runtimeClasspath.collect {
+        "${it.getName()}"
+      }.join(' '))
+    }
+  }
+}
+
+task standaloneAssemble(type: Sync) {
+  def antHelper = new org.apache.tools.ant.Project()
+  afterEvaluate {
+    def substituteProperties = [
+        "required.java.version": project.java.targetCompatibility,
+        "luke.cmd": "${standaloneJar.archiveFileName.get()}"
+    ]
+    substituteProperties.each { k, v -> antHelper.setProperty(k.toString(), v.toString()) }
+  }
+
+  from standaloneJar
+  from configurations.runtimeClasspath
+
+  from(file("src/distribution"), {
+    filesMatching("README.md", {
+      filteringCharset = 'UTF-8'
+      filter(ExpandProperties, project: antHelper)
+    })
+  })
+
+  into standaloneDistDir
+
+  doLast {
+    logger.lifecycle("Standalone Luke distribution assembled. You can run it with:\n"
+        + "java -jar " + file("${standaloneDistDir}/${standaloneJar.archiveFileName.get()}"))
+  }
+}
+
+// Attach standalone distribution assembly to main assembly.
+assemble.dependsOn standaloneAssemble
+
+// Create a standalone package bundle.
+task standalonePackage(type: Tar) {
+  from standaloneAssemble
+
+  into "${archivesBaseName}-${project.version}/"
+
+  compression = Compression.GZIP
+  archiveFileName = "${archivesBaseName}-${project.version}-standalone.tgz"
+}
+
+// Create a set of artifacts for standalone distribution in a specific
+// exported configuration so that it can be pulled in by other projects.
+artifacts {
+  standalone standaloneDistDir, {
+    builtBy standaloneAssemble
+  }
+}
+
+// Utility to launch Luke (and fork it from the build).
+task run() {
+  dependsOn standaloneAssemble
+  description "Launches (spawns) Luke directly from the build process."
+  group "Utility launchers"
+
+  doFirst {
+    logger.lifecycle("Launching Luke ${project.version} right now...")
+    def javaExecutable = {
+      def registry = project.extensions.getByType(JavaInstallationRegistry)
+      def currentJvm = registry.installationForCurrentVirtualMachine.get()
+      currentJvm.getJavaExecutable().asFile
+    }()
+    ant.exec(
+        executable: javaExecutable,
+        spawn: true,
+        vmlauncher: true
+    ) {
+      arg(value: '-jar')
+      arg(value: file("${standaloneDistDir}/${standaloneJar.archiveFileName.get()}").absolutePath)
+    }
+  }
+}
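Usage note (assuming the standard `gradlew` wrapper at the repository root): the standalone distribution defined above can be built with `./gradlew :lucene:luke:standaloneAssemble`, packaged as a tarball via `:lucene:luke:standalonePackage`, and `./gradlew :lucene:luke:run` spawns Luke directly from the build, as the `run` task's log message indicates.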
diff --git a/lucene/luke/build.xml b/lucene/luke/build.xml
deleted file mode 100644
index 1584ef7..0000000
--- a/lucene/luke/build.xml
+++ /dev/null
@@ -1,82 +0,0 @@
-<?xml version="1.0"?>
-
-<!--
-    Licensed to the Apache Software Foundation (ASF) under one or more
-    contributor license agreements.  See the NOTICE file distributed with
-    this work for additional information regarding copyright ownership.
-    The ASF licenses this file to You under the Apache License, Version 2.0
-    the "License"); you may not use this file except in compliance with
-    the License.  You may obtain a copy of the License at
- 
-        http://www.apache.org/licenses/LICENSE-2.0
- 
-    Unless required by applicable law or agreed to in writing, software
-    distributed under the License is distributed on an "AS IS" BASIS,
-    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-    See the License for the specific language governing permissions and
-    limitations under the License.
- -->
-
-<project name="luke" default="default">
-
-  <description>
-    Luke - Lucene Toolbox
-  </description>
-
-  <!-- use full Java SE API (project default 'compact2' does not include Swing) -->
-  <property name="javac.profile.args" value=""/>
-
-  <import file="../module-build.xml"/>
-
-  <target name="init" depends="module-build.init,jar-lucene-core"/>
-
-  <path id="classpath">
-    <pathelement path="${lucene-core.jar}"/>
-    <pathelement path="${codecs.jar}"/>
-    <pathelement path="${backward-codecs.jar}"/>
-    <pathelement path="${analyzers-common.jar}"/>
-    <pathelement path="${misc.jar}"/>
-    <pathelement path="${queryparser.jar}"/>
-    <pathelement path="${queries.jar}"/>
-    <fileset dir="lib"/>
-    <path refid="base.classpath"/>
-  </path>
-
-  <target name="javadocs" depends="compile-core,javadocs-lucene-core,javadocs-analyzers-common,check-javadocs-uptodate"
-          unless="javadocs-uptodate-${name}">
-    <invoke-module-javadoc>
-      <links>
-        <link href="../analyzers-common"/>
-      </links>
-    </invoke-module-javadoc>
-  </target>
-
-  <target name="build-artifacts-and-tests" depends="jar, compile-test">
-    <!-- copy start scripts -->
-    <copy todir="${build.dir}">
-      <fileset dir="${common.dir}/luke/bin">
-        <include name="**/*.sh"/>
-        <include name="**/*.bat"/>
-      </fileset>
-    </copy>
-  </target>
-
-  <!-- launch Luke -->
-  <target name="run" depends="compile-core" description="Launch Luke GUI">
-    <java classname="org.apache.lucene.luke.app.desktop.LukeMain"
-          classpath="${build.dir}/classes/java"
-          fork="true"
-          maxmemory="512m">
-      <classpath refid="classpath"/>
-    </java>
-  </target>
-  
-  <target name="compile-core"
-          depends="jar-codecs,jar-backward-codecs,jar-analyzers-common,jar-misc,jar-queryparser,jar-queries,jar-misc,common.compile-core"/>
-
-  <!-- Luke has no Maven artifacts -->
-  <target name="-dist-maven"/>
-  <target name="-install-to-maven-local-repo"/>
-  <target name="-validate-maven-dependencies"/>
-  <target name="-append-module-dependencies-properties"/>
-</project>
diff --git a/lucene/luke/ivy.xml b/lucene/luke/ivy.xml
deleted file mode 100644
index 88d9d8c..0000000
--- a/lucene/luke/ivy.xml
+++ /dev/null
@@ -1,34 +0,0 @@
-<!--
-   Licensed to the Apache Software Foundation (ASF) under one
-   or more contributor license agreements.  See the NOTICE file
-   distributed with this work for additional information
-   regarding copyright ownership.  The ASF licenses this file
-   to you under the Apache License, Version 2.0 (the
-   "License"); you may not use this file except in compliance
-   with the License.  You may obtain a copy of the License at
-
-     http://www.apache.org/licenses/LICENSE-2.0
-
-   Unless required by applicable law or agreed to in writing,
-   software distributed under the License is distributed on an
-   "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-   KIND, either express or implied.  See the License for the
-   specific language governing permissions and limitations
-   under the License.    
--->
-<ivy-module version="2.0">
-  <info organisation="org.apache.lucene" module="luke"/>
-
-  <configurations defaultconfmapping="compile->default;logging->default">
-    <conf name="compile" transitive="false"/>
-    <conf name="logging" transitive="false"/>
-  </configurations>
-
-  <dependencies>
-    <dependency org="org.apache.logging.log4j" name="log4j-api" rev="${/org.apache.logging.log4j/log4j-api}"
-                conf="logging"/>
-    <dependency org="org.apache.logging.log4j" name="log4j-core" rev="${/org.apache.logging.log4j/log4j-core}"
-                conf="logging"/>
-    <exclude org="*" ext="*" matcher="regexp" type="${ivy.exclude.types}"/>
-  </dependencies>
-</ivy-module>
diff --git a/lucene/luke/src/distribution/README.md b/lucene/luke/src/distribution/README.md
new file mode 100644
index 0000000..da12b97
--- /dev/null
+++ b/lucene/luke/src/distribution/README.md
@@ -0,0 +1,8 @@
+# Luke
+
+This is Luke, Apache Lucene's low-level index inspection and repair utility.
+
+Luke requires Java ${required.java.version}. You can start it with:
+`java -jar ${luke.cmd}`
+
+Happy index hacking!
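
The `${...}` tokens in this README are template placeholders, presumably expanded when the distribution is assembled rather than shipped literally. A minimal sketch of such an expansion step, assuming hypothetical token values and output paths (the real build wires this up elsewhere):

```java
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Map;

// Hypothetical sketch of the packaging step that expands the ${...} tokens
// in the README template; token values and paths are assumptions only.
public class ExpandLukeReadme {
  public static void main(String[] args) throws Exception {
    String text = Files.readString(Path.of("lucene/luke/src/distribution/README.md"));
    Map<String, String> tokens = Map.of(
        "${required.java.version}", "11",   // assumed value
        "${luke.cmd}", "lucene-luke.jar");  // assumed value
    for (Map.Entry<String, String> e : tokens.entrySet()) {
      text = text.replace(e.getKey(), e.getValue());
    }
    Files.writeString(Path.of("README.md"), text);
  }
}
```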
diff --git a/lucene/luke/src/test/org/apache/lucene/luke/models/commits/CommitsImplTest.java b/lucene/luke/src/test/org/apache/lucene/luke/models/commits/CommitsImplTest.java
index 1c6e513..5588754 100644
--- a/lucene/luke/src/test/org/apache/lucene/luke/models/commits/CommitsImplTest.java
+++ b/lucene/luke/src/test/org/apache/lucene/luke/models/commits/CommitsImplTest.java
@@ -39,7 +39,7 @@
 
 // See: https://github.com/DmitryKey/luke/issues/111
 @LuceneTestCase.SuppressCodecs({
-   "SimpleText", "DummyCompressingStoredFieldsData", "HighCompressionCompressingStoredFieldsData", "FastCompressingStoredFieldsData", "FastDecompressionCompressingStoredFieldsData"
+   "SimpleText", "DeflateWithPresetCompressingStoredFieldsData", "DummyCompressingStoredFieldsData", "HighCompressionCompressingStoredFieldsData", "FastCompressingStoredFieldsData", "FastDecompressionCompressingStoredFieldsData"
 })
 public class CommitsImplTest extends LuceneTestCase {
 
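
For context, the hunk above only adds `DeflateWithPresetCompressingStoredFieldsData` to the existing suppression list. A minimal sketch of how `@SuppressCodecs` works in the test framework (class name and test body are illustrative only): codecs named in the annotation are excluded from the randomized per-class codec selection.

```java
import org.apache.lucene.util.LuceneTestCase;

// Illustrative only: tests annotated this way are never run with the named
// codecs, which the framework otherwise picks at random per test class.
@LuceneTestCase.SuppressCodecs({"SimpleText", "DeflateWithPresetCompressingStoredFieldsData"})
public class MyStoredFieldsSensitiveTest extends LuceneTestCase {
  public void testReadsStoredFields() throws Exception {
    // test body that assumes a standard stored-fields format
  }
}
```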
diff --git a/lucene/memory/build.xml b/lucene/memory/build.xml
deleted file mode 100644
index cae5677..0000000
--- a/lucene/memory/build.xml
+++ /dev/null
@@ -1,34 +0,0 @@
-<?xml version="1.0"?>
-
-<!--
-    Licensed to the Apache Software Foundation (ASF) under one or more
-    contributor license agreements.  See the NOTICE file distributed with
-    this work for additional information regarding copyright ownership.
-    The ASF licenses this file to You under the Apache License, Version 2.0
-    the "License"); you may not use this file except in compliance with
-    the License.  You may obtain a copy of the License at
- 
-        http://www.apache.org/licenses/LICENSE-2.0
- 
-    Unless required by applicable law or agreed to in writing, software
-    distributed under the License is distributed on an "AS IS" BASIS,
-    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-    See the License for the specific language governing permissions and
-    limitations under the License.
- -->
-
-<project name="memory" default="default">
-
-  <description>
-    Single-document in-memory index implementation
-  </description>
-
-  <import file="../module-build.xml"/>
-
-  <path id="test.classpath">
-    <pathelement path="${queryparser.jar}"/>
-    <path refid="test.base.classpath"/>
-  </path>
-
-  <target name="compile-core" depends="jar-queryparser,common.compile-core" />
-</project>
diff --git a/lucene/memory/ivy.xml b/lucene/memory/ivy.xml
deleted file mode 100644
index 61f06f6..0000000
--- a/lucene/memory/ivy.xml
+++ /dev/null
@@ -1,21 +0,0 @@
-<!--
-   Licensed to the Apache Software Foundation (ASF) under one
-   or more contributor license agreements.  See the NOTICE file
-   distributed with this work for additional information
-   regarding copyright ownership.  The ASF licenses this file
-   to you under the Apache License, Version 2.0 (the
-   "License"); you may not use this file except in compliance
-   with the License.  You may obtain a copy of the License at
-
-     http://www.apache.org/licenses/LICENSE-2.0
-
-   Unless required by applicable law or agreed to in writing,
-   software distributed under the License is distributed on an
-   "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-   KIND, either express or implied.  See the License for the
-   specific language governing permissions and limitations
-   under the License.    
--->
-<ivy-module version="2.0">
-  <info organisation="org.apache.lucene" module="memory"/>
-</ivy-module>
diff --git a/lucene/misc/build.xml b/lucene/misc/build.xml
deleted file mode 100644
index f076eba..0000000
--- a/lucene/misc/build.xml
+++ /dev/null
@@ -1,52 +0,0 @@
-<?xml version="1.0"?>
-
-<!--
-    Licensed to the Apache Software Foundation (ASF) under one or more
-    contributor license agreements.  See the NOTICE file distributed with
-    this work for additional information regarding copyright ownership.
-    The ASF licenses this file to You under the Apache License, Version 2.0
-    the "License"); you may not use this file except in compliance with
-    the License.  You may obtain a copy of the License at
- 
-        http://www.apache.org/licenses/LICENSE-2.0
- 
-    Unless required by applicable law or agreed to in writing, software
-    distributed under the License is distributed on an "AS IS" BASIS,
-    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-    See the License for the specific language governing permissions and
-    limitations under the License.
- -->
-
-<project name="misc" default="default" xmlns:ivy="antlib:org.apache.ivy.ant">
-
-  <description>
-    Index tools and other miscellaneous code
-  </description>
-
-  <import file="../module-build.xml"/>
-
-  <target name="install-cpptasks" unless="cpptasks.uptodate" depends="ivy-availability-check,ivy-fail,ivy-configure">
-    <property name="cpptasks.uptodate" value="true"/>
-    <ivy:cachepath organisation="ant-contrib" module="cpptasks" revision="1.0b5"
-             inline="true" conf="master" type="jar" pathid="cpptasks.classpath"/>
-    <taskdef resource="cpptasks.tasks" classpathref="cpptasks.classpath"/>
-  </target>
-
-  <target name="build-native-unix" depends="install-cpptasks">
-    <mkdir dir="${common.build.dir}/native"/>
-
-    <cc outtype="shared" name="c++" subsystem="console" outfile="${common.build.dir}/native/NativePosixUtil" >
-      <fileset file="${src.dir}/org/apache/lucene/store/NativePosixUtil.cpp" />  
-      <includepath>
-        <pathelement location="${java.home}/../include"/>
-        <pathelement location="${java.home}/include"/>
-        <pathelement location="${java.home}/../include/linux"/>
-        <pathelement location="${java.home}/../include/solaris"/>
-      </includepath>
-
-      <compilerarg value="-fPIC" />
-      <syslibset libs="stdc++"/>
-    </cc>
-  </target>
-
-</project>
diff --git a/lucene/misc/ivy.xml b/lucene/misc/ivy.xml
deleted file mode 100644
index c339fee..0000000
--- a/lucene/misc/ivy.xml
+++ /dev/null
@@ -1,21 +0,0 @@
-<!--
-   Licensed to the Apache Software Foundation (ASF) under one
-   or more contributor license agreements.  See the NOTICE file
-   distributed with this work for additional information
-   regarding copyright ownership.  The ASF licenses this file
-   to you under the Apache License, Version 2.0 (the
-   "License"); you may not use this file except in compliance
-   with the License.  You may obtain a copy of the License at
-
-     http://www.apache.org/licenses/LICENSE-2.0
-
-   Unless required by applicable law or agreed to in writing,
-   software distributed under the License is distributed on an
-   "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-   KIND, either express or implied.  See the License for the
-   specific language governing permissions and limitations
-   under the License.    
--->
-<ivy-module version="2.0">
-  <info organisation="org.apache.lucene" module="misc"/>
-</ivy-module>
diff --git a/lucene/module-build.xml b/lucene/module-build.xml
deleted file mode 100644
index 947b4b5..0000000
--- a/lucene/module-build.xml
+++ /dev/null
@@ -1,721 +0,0 @@
-<?xml version="1.0"?>
-
-<!--
-    Licensed to the Apache Software Foundation (ASF) under one or more
-    contributor license agreements.  See the NOTICE file distributed with
-    this work for additional information regarding copyright ownership.
-    The ASF licenses this file to You under the Apache License, Version 2.0
-    the "License"); you may not use this file except in compliance with
-    the License.  You may obtain a copy of the License at
- 
-        http://www.apache.org/licenses/LICENSE-2.0
- 
-    Unless required by applicable law or agreed to in writing, software
-    distributed under the License is distributed on an "AS IS" BASIS,
-    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-    See the License for the specific language governing permissions and
-    limitations under the License.
- -->
-
-<project name="module-build" xmlns:artifact="antlib:org.apache.maven.artifact.ant">
-  <!-- TODO: adjust build.dir/dist.dir appropriately when a module is run individually -->
-  <dirname file="${ant.file.module-build}" property="module-build.dir"/>
-  <property name="build.dir" location="${module-build.dir}/build/${ant.project.name}"/>
-  <property name="dist.dir" location="${module-build.dir}/dist/${ant.project.name}"/>
-  <property name="maven.dist.dir" location="${module-build.dir}/dist/maven"/>
-
-  <import file="common-build.xml"/>
-
-  <!-- if you extend the classpath refid in one contrib's build.xml (add JARs), use this as basis: -->
-  <path id="base.classpath">
-   <pathelement location="${common.dir}/build/core/classes/java"/>
-  </path>
-  
-  <!-- default classpath refid, can be overridden by contrib's build.xml (use the above base.classpath as basis): -->
-  <path id="classpath" refid="base.classpath"/>
-  
-  <path id="test.base.classpath">
-    <pathelement location="${common.dir}/build/test-framework/classes/java"/>
-    <pathelement location="${common.dir}/build/codecs/classes/java"/>
-    <path refid="classpath"/>
-    <path refid="junit-path"/>
-    <pathelement location="${build.dir}/classes/java"/>
-  </path>
-
-  <path id="test.classpath" refid="test.base.classpath"/>
-
-  <path id="junit.classpath">
-    <pathelement location="${build.dir}/classes/test"/>
-    <path refid="test.classpath"/>
-  </path>
-
-  <target name="init" depends="common.init,compile-lucene-core"/>
-  <target name="compile-test" depends="init" if="module.has.tests">
-    <antcall target="common.compile-test" inheritRefs="true" />
-  </target>
-  <target name="test" depends="init" if="module.has.tests">
-    <antcall target="common.test" inheritRefs="true" />
-  </target>
-  <target name="build-artifacts-and-tests" depends="jar, compile-test" />
-
-  <!-- TODO: why does this previous depend on compile-core? -->
-  <target name="javadocs" depends="compile-core,javadocs-lucene-core,check-javadocs-uptodate"
-                          unless="javadocs-uptodate-${name}">
-    <invoke-module-javadoc/>
-  </target>
-
-  <macrodef name="invoke-module-javadoc">
-    <!-- additional links for dependencies to other modules -->
-      <element name="links" optional="yes"/>
-    <!-- link source (don't do this unless it's example code) -->
-      <attribute name="linksource" default="no"/>
-    <sequential>
-      <mkdir dir="${javadoc.dir}/${name}"/>
-      <invoke-javadoc
-         destdir="${javadoc.dir}/${name}"
-         title="${Name} ${version} ${name} API"
-         linksource="@{linksource}">
-         <sources>
-           <link href="../core/"/>
-           <links/>
-           <link href=""/>
-           <packageset dir="${src.dir}"/>
-        </sources>
-      </invoke-javadoc>
-      
-      <!-- fix for Java 11 Javadoc tool that cannot handle split packages between modules correctly (by removing all the packages which are part of lucene-core): -->
-      <!-- problem description: [https://issues.apache.org/jira/browse/LUCENE-8738?focusedCommentId=16818106&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16818106] -->
-      <local name="element-list-regex"/><!-- contains a regex for all package names which are in lucene-core's javadoc! -->
-      <loadfile property="element-list-regex" srcFile="${javadoc.dir}/core/element-list" encoding="utf-8">
-        <filterchain>
-          <tokenfilter delimoutput="|">
-            <replacestring from="." to="\."/>
-          </tokenfilter>
-        </filterchain>
-      </loadfile>
-      <!--<echo>Regex: ^(${element-list-regex})$</echo>-->
-      <replaceregexp encoding="utf-8" file="${javadoc.dir}/${name}/element-list" byline="true" match="^(${element-list-regex})$" replace=""/>
-      
-      <jarify basedir="${javadoc.dir}/${name}" destfile="${build.dir}/${final.name}-javadoc.jar"/>
-    </sequential>
-  </macrodef>
-
-  <property name="test-framework.jar" value="${common.dir}/build/test-framework/lucene-test-framework-${version}.jar"/>
-  <target name="check-test-framework-uptodate" unless="test-framework.uptodate">
-    <module-uptodate name="test-framework" jarfile="${test-framework.jar}" property="test-framework.uptodate"/>
-  </target>
-  <target name="jar-test-framework" unless="test-framework.uptodate" depends="check-test-framework-uptodate">
-    <ant dir="${common.dir}/test-framework" target="jar-core" inheritall="false">
-      <propertyset refid="uptodate.and.compiled.properties"/>
-    </ant>
-    <property name="test-framework.uptodate" value="true"/>
-  </target>
-
-  <property name="test-framework-javadoc.jar" value="${common.dir}/build/test-framework/lucene-test-framework-${version}-javadoc.jar"/>
-  <target name="check-test-framework-javadocs-uptodate" unless="test-framework-javadocs.uptodate">
-    <module-uptodate name="test-framework" jarfile="${test-framework-javadoc.jar}" property="test-framework-javadocs.uptodate"/>
-  </target>
-  <target name="javadocs-test-framework" unless="test-framework-javadocs.uptodate" depends="check-test-framework-javadocs-uptodate">
-    <ant dir="${common.dir}/test-framework" target="javadocs" inheritAll="false">
-      <propertyset refid="uptodate.and.compiled.properties"/>
-    </ant>
-    <property name="test-framework-javadocs.uptodate" value="true"/>
-  </target>
-
-  <property name="queryparser.jar" value="${common.dir}/build/queryparser/lucene-queryparser-${version}.jar"/>
-  <target name="check-queryparser-uptodate" unless="queryparser.uptodate">
-    <module-uptodate name="queryparser" jarfile="${queryparser.jar}" property="queryparser.uptodate"/>
-  </target>
-  <target name="jar-queryparser" unless="queryparser.uptodate" depends="check-queryparser-uptodate">
-    <ant dir="${common.dir}/queryparser" target="jar-core" inheritall="false">
-      <propertyset refid="uptodate.and.compiled.properties"/>
-    </ant>
-    <property name="queryparser.uptodate" value="true"/>
-  </target>
-
-  <property name="queryparser-javadoc.jar" value="${common.dir}/build/queryparser/lucene-queryparser-${version}-javadoc.jar"/>
-  <target name="check-queryparser-javadocs-uptodate" unless="queryparser-javadocs.uptodate">
-    <module-uptodate name="queryparser" jarfile="${queryparser-javadoc.jar}" property="queryparser-javadocs.uptodate"/>
-  </target>
-  <target name="javadocs-queryparser" unless="queryparser-javadocs.uptodate" depends="check-queryparser-javadocs-uptodate">
-    <ant dir="${common.dir}/queryparser" target="javadocs" inheritAll="false">
-      <propertyset refid="uptodate.and.compiled.properties"/>
-    </ant>
-    <property name="queryparser-javadocs.uptodate" value="true"/>
-  </target>
-  
-  <property name="join.jar" value="${common.dir}/build/join/lucene-join-${version}.jar"/>
-  <target name="check-join-uptodate" unless="join.uptodate">
-    <module-uptodate name="join" jarfile="${join.jar}" property="join.uptodate"/>
-  </target>
-  <target name="jar-join" unless="join.uptodate" depends="check-join-uptodate">
-    <ant dir="${common.dir}/join" target="jar-core" inheritAll="false">
-      <propertyset refid="uptodate.and.compiled.properties"/>
-    </ant>
-    <property name="join.uptodate" value="true"/>
-  </target>  
-  
-  <property name="join-javadoc.jar" value="${common.dir}/build/join/lucene-join-${version}-javadoc.jar"/>
-  <target name="check-join-javadocs-uptodate" unless="join-javadocs.uptodate">
-    <module-uptodate name="join" jarfile="${join-javadoc.jar}" property="join-javadocs.uptodate"/>
-  </target>
-  <target name="javadocs-join" unless="join-javadocs.uptodate" depends="check-join-javadocs-uptodate">
-    <ant dir="${common.dir}/join" target="javadocs" inheritAll="false">
-      <propertyset refid="uptodate.and.compiled.properties"/>
-    </ant>
-    <property name="join-javadocs.uptodate" value="true"/>
-  </target>
-
-  <property name="analyzers-common.jar" value="${common.dir}/build/analysis/common/lucene-analyzers-common-${version}.jar"/>
-  <target name="check-analyzers-common-uptodate" unless="analyzers-common.uptodate">
-    <module-uptodate name="analysis/common" jarfile="${analyzers-common.jar}" property="analyzers-common.uptodate"/>
-  </target>
-  <target name="jar-analyzers-common" unless="analyzers-common.uptodate" depends="check-analyzers-common-uptodate">
-    <ant dir="${common.dir}/analysis/common" target="jar-core" inheritall="false">
-      <propertyset refid="uptodate.and.compiled.properties"/>
-    </ant>
-    <property name="analyzers-common.uptodate" value="true"/>
-  </target>
-
-  <property name="analyzers-common-javadoc.jar" value="${common.dir}/build/analysis/common/lucene-analyzers-common-${version}-javadoc.jar"/>
-  <target name="check-analyzers-common-javadocs-uptodate" unless="analyzers-common-javadocs.uptodate">
-    <module-uptodate name="analysis/common" jarfile="${analyzers-common-javadoc.jar}" property="analyzers-common-javadocs.uptodate"/>
-  </target>
-  <target name="javadocs-analyzers-common" unless="analyzers-common-javadocs.uptodate" depends="check-analyzers-common-javadocs-uptodate">
-    <ant dir="${common.dir}/analysis/common" target="javadocs" inheritAll="false">
-      <propertyset refid="uptodate.and.compiled.properties"/>
-    </ant>
-    <property name="analyzers-common-javadocs.uptodate" value="true"/>
-  </target>
-
-  <property name="queries.jar" value="${common.dir}/build/queries/lucene-queries-${version}.jar"/>
-  <target name="check-queries-uptodate" unless="queries.uptodate">
-    <module-uptodate name="queries" jarfile="${queries.jar}" property="queries.uptodate"/>
-  </target>
-  <target name="jar-queries" unless="queries.uptodate" depends="check-queries-uptodate">
-    <ant dir="${common.dir}/queries" target="jar-core" inheritAll="false">
-      <propertyset refid="uptodate.and.compiled.properties"/>
-    </ant>
-    <property name="queries.uptodate" value="true"/>
-  </target>
-
-  <property name="queries-javadoc.jar" value="${common.dir}/build/queries/lucene-queries-${version}-javadoc.jar"/>
-  <target name="check-queries-javadocs-uptodate" unless="queries-javadocs.uptodate">
-    <module-uptodate name="queries" jarfile="${queries-javadoc.jar}" property="queries-javadocs.uptodate"/>
-  </target>
-  <target name="javadocs-queries" unless="queries-javadocs.uptodate" depends="check-queries-javadocs-uptodate">
-    <ant dir="${common.dir}/queries" target="javadocs" inheritAll="false">
-      <propertyset refid="uptodate.and.compiled.properties"/>
-    </ant>
-    <property name="queries-javadocs.uptodate" value="true"/>
-  </target>
-
-  <property name="classification.jar" value="${common.dir}/build/classification/lucene-classification-${version}.jar"/>
-  <target name="check-classification-uptodate" unless="classification.uptodate">
-    <module-uptodate name="classification" jarfile="${classification.jar}" property="classification.uptodate"/>
-  </target>
-  <target name="jar-classification" unless="classification.uptodate" depends="check-classification-uptodate">
-    <ant dir="${common.dir}/classification" target="jar-core" inheritAll="false">
-      <propertyset refid="uptodate.and.compiled.properties"/>
-    </ant>
-    <property name="classification.uptodate" value="true"/>
-  </target>
-
-  <property name="classification-javadoc.jar" value="${common.dir}/build/classification/lucene-classification-${version}-javadoc.jar"/>
-  <target name="check-classification-javadocs-uptodate" unless="classification-javadocs.uptodate">
-    <module-uptodate name="classification" jarfile="${classification-javadoc.jar}" property="classification-javadocs.uptodate"/>
-  </target>
-  <target name="javadocs-classification" unless="classification-javadocs.uptodate" depends="check-classification-javadocs-uptodate">
-    <ant dir="${common.dir}/classification" target="javadocs" inheritAll="false">
-      <propertyset refid="uptodate.and.compiled.properties"/>
-    </ant>
-    <property name="classification-javadocs.uptodate" value="true"/>
-  </target>
-  
-  <property name="facet.jar" value="${common.dir}/build/facet/lucene-facet-${version}.jar"/>
-  <target name="check-facet-uptodate" unless="facet.uptodate">
-    <module-uptodate name="facet" jarfile="${facet.jar}" property="facet.uptodate"/>
-  </target>
-  <target name="jar-facet" unless="facet.uptodate" depends="check-facet-uptodate">
-    <ant dir="${common.dir}/facet" target="jar-core" inheritall="false">
-      <propertyset refid="uptodate.and.compiled.properties"/>
-    </ant>
-    <property name="facet.uptodate" value="true"/>
-  </target>
-
-  <property name="facet-javadoc.jar" value="${common.dir}/build/facet/lucene-facet-${version}-javadoc.jar"/>
-  <target name="check-facet-javadocs-uptodate" unless="facet-javadocs.uptodate">
-    <module-uptodate name="facet" jarfile="${facet-javadoc.jar}" property="facet-javadocs.uptodate"/>
-  </target>
-  <target name="javadocs-facet" unless="facet-javadocs.uptodate" depends="check-facet-javadocs-uptodate">
-    <ant dir="${common.dir}/facet" target="javadocs" inheritAll="false">
-      <propertyset refid="uptodate.and.compiled.properties"/>
-    </ant>
-    <property name="facet-javadocs.uptodate" value="true"/>
-  </target>
-
-  <property name="replicator.jar" value="${common.dir}/build/replicator/lucene-replicator-${version}.jar"/>
-  <target name="check-replicator-uptodate" unless="replicator.uptodate">
-    <module-uptodate name="replicator" jarfile="${replicator.jar}" property="replicator.uptodate"/>
-  </target>
-  <target name="jar-replicator" unless="replicator.uptodate" depends="check-replicator-uptodate">
-    <ant dir="${common.dir}/replicator" target="jar-core" inheritall="false">
-      <propertyset refid="uptodate.and.compiled.properties"/>
-    </ant>
-    <property name="replicator.uptodate" value="true"/>
-  </target>
-
-  <property name="replicator-javadoc.jar" value="${common.dir}/build/replicator/lucene-replicator-${version}-javadoc.jar"/>
-  <target name="check-replicator-javadocs-uptodate" unless="replicator-javadocs.uptodate">
-    <module-uptodate name="replicator" jarfile="${replicator-javadoc.jar}" property="replicator-javadocs.uptodate"/>
-  </target>
-  <target name="javadocs-replicator" unless="replicator-javadocs.uptodate" depends="check-replicator-javadocs-uptodate">
-    <ant dir="${common.dir}/replicator" target="javadocs" inheritAll="false">
-      <propertyset refid="uptodate.and.compiled.properties"/>
-    </ant>
-    <property name="replicator-javadocs.uptodate" value="true"/>
-  </target>
-
-  <property name="analyzers-icu.jar" value="${common.dir}/build/analysis/icu/lucene-analyzers-icu-${version}.jar"/>
-  <target name="check-analyzers-icu-uptodate" unless="analyzers-icu.uptodate">
-    <module-uptodate name="analysis/icu" jarfile="${analyzers-icu.jar}" property="analyzers-icu.uptodate"/>
-  </target>
-  <target name="jar-analyzers-icu" unless="analyzers-icu.uptodate" depends="check-analyzers-icu-uptodate">
-    <ant dir="${common.dir}/analysis/icu" target="jar-core" inheritAll="false">
-      <propertyset refid="uptodate.and.compiled.properties"/>
-    </ant>
-    <property name="analyzers-icu.uptodate" value="true"/>
-  </target>
-
-  <property name="analyzers-icu-javadoc.jar" value="${common.dir}/build/analysis/icu/lucene-analyzers-icu-${version}-javadoc.jar"/>
-  <target name="check-analyzers-icu-javadocs-uptodate" unless="analyzers-icu-javadocs.uptodate">
-    <module-uptodate name="analysis/icu" jarfile="${analyzers-icu-javadoc.jar}" property="analyzers-icu-javadocs.uptodate"/>
-  </target>
-  <target name="javadocs-analyzers-icu" unless="analyzers-icu-javadocs.uptodate" depends="check-analyzers-icu-javadocs-uptodate">
-    <ant dir="${common.dir}/analysis/icu" target="javadocs" inheritAll="false">
-      <propertyset refid="uptodate.and.compiled.properties"/>
-    </ant>
-    <property name="analyzers-icu-javadocs.uptodate" value="true"/>
-  </target>
-
-  <property name="analyzers-opennlp.jar" value="${common.dir}/build/analysis/opennlp/lucene-analyzers-opennlp-${version}.jar"/>
-  <target name="check-analyzers-opennlp-uptodate" unless="analyzers-opennlp.uptodate">
-    <module-uptodate name="analysis/opennlp" jarfile="${analyzers-opennlp.jar}" property="analyzers-opennlp.uptodate"/>
-  </target>
-  <target name="jar-analyzers-opennlp" unless="analyzers-opennlp.uptodate" depends="check-analyzers-opennlp-uptodate">
-    <ant dir="${common.dir}/analysis/opennlp" target="jar-core" inheritAll="false">
-      <propertyset refid="uptodate.and.compiled.properties"/>
-    </ant>
-    <property name="analyzers-opennlp.uptodate" value="true"/>
-  </target>
-
-  <property name="analyzers-opennlp-javadoc.jar" value="${common.dir}/build/analysis/opennlp/lucene-analyzers-opennlp-${version}-javadoc.jar"/>
-  <target name="check-analyzers-opennlp-javadocs-uptodate" unless="analyzers-opennlp-javadocs.uptodate">
-    <module-uptodate name="analysis/opennlp" jarfile="${analyzers-opennlp-javadoc.jar}" property="analyzers-opennlp-javadocs.uptodate"/>
-  </target>
-  <target name="javadocs-analyzers-opennlp" unless="analyzers-opennlp-javadocs.uptodate" depends="check-analyzers-opennlp-javadocs-uptodate">
-    <ant dir="${common.dir}/analysis/opennlp" target="javadocs" inheritAll="false">
-      <propertyset refid="uptodate.and.compiled.properties"/>
-    </ant>
-    <property name="analyzers-opennlp-javadocs.uptodate" value="true"/>
-  </target>
-
-  <property name="analyzers-phonetic.jar" value="${common.dir}/build/analysis/phonetic/lucene-analyzers-phonetic-${version}.jar"/>
-  <target name="check-analyzers-phonetic-uptodate" unless="analyzers-phonetic.uptodate">
-    <module-uptodate name="analysis/phonetic" jarfile="${analyzers-phonetic.jar}" property="analyzers-phonetic.uptodate"/>
-  </target>
-  <target name="jar-analyzers-phonetic" unless="analyzers-phonetic.uptodate" depends="check-analyzers-phonetic-uptodate">
-    <ant dir="${common.dir}/analysis/phonetic" target="jar-core" inheritAll="false">
-      <propertyset refid="uptodate.and.compiled.properties"/>
-    </ant>
-  </target>
-
-  <property name="analyzers-phonetic-javadoc.jar" value="${common.dir}/build/analysis/phonetic/lucene-analyzers-phonetic-${version}-javadoc.jar"/>
-  <target name="check-analyzers-phonetic-javadocs-uptodate" unless="analyzers-phonetic-javadocs.uptodate">
-    <module-uptodate name="analysis/phonetic" jarfile="${analyzers-phonetic-javadoc.jar}" property="analyzers-phonetic-javadocs.uptodate"/>
-  </target>
-  <target name="javadocs-analyzers-phonetic" unless="analyzers-phonetic-javadocs.uptodate" depends="check-analyzers-phonetic-javadocs-uptodate">
-    <ant dir="${common.dir}/analysis/phonetic" target="javadocs" inheritAll="false">
-      <propertyset refid="uptodate.and.compiled.properties"/>
-    </ant>
-    <property name="analyzers-phonetic-javadocs.uptodate" value="true"/>
-  </target>
-
-  <property name="analyzers-smartcn.jar" value="${common.dir}/build/analysis/smartcn/lucene-analyzers-smartcn-${version}.jar"/>
-  <target name="check-analyzers-smartcn-uptodate" unless="analyzers-smartcn.uptodate">
-    <module-uptodate name="analysis/smartcn" jarfile="${analyzers-smartcn.jar}" property="analyzers-smartcn.uptodate"/>
-  </target>
-  <target name="jar-analyzers-smartcn" unless="analyzers-smartcn.uptodate" depends="check-analyzers-smartcn-uptodate">
-    <ant dir="${common.dir}/analysis/smartcn" target="jar-core" inheritAll="false">
-      <propertyset refid="uptodate.and.compiled.properties"/>
-    </ant>
-    <property name="analyzers-smartcn.uptodate" value="true"/>
-  </target>
-
-  <property name="analyzers-smartcn-javadoc.jar" value="${common.dir}/build/analysis/smartcn/lucene-analyzers-smartcn-${version}-javadoc.jar"/>
-  <target name="check-analyzers-smartcn-javadocs-uptodate" unless="analyzers-smartcn-javadocs.uptodate">
-    <module-uptodate name="analysis/smartcn" jarfile="${analyzers-smartcn-javadoc.jar}" property="analyzers-smartcn-javadocs.uptodate"/>
-  </target>
-  <target name="javadocs-analyzers-smartcn" unless="analyzers-smartcn-javadocs.uptodate" depends="check-analyzers-smartcn-javadocs-uptodate">
-    <ant dir="${common.dir}/analysis/smartcn" target="javadocs" inheritAll="false">
-      <propertyset refid="uptodate.and.compiled.properties"/>
-    </ant>
-    <property name="analyzers-smartcn-javadocs.uptodate" value="true"/>
-  </target>
-
-  <property name="analyzers-stempel.jar" value="${common.dir}/build/analysis/stempel/lucene-analyzers-stempel-${version}.jar"/>
-  <target name="check-analyzers-stempel-uptodate" unless="analyzers-stempel.uptodate">
-    <module-uptodate name="analysis/stempel" jarfile="${analyzers-stempel.jar}" property="analyzers-stempel.uptodate"/>
-  </target>
-  <target name="jar-analyzers-stempel" unless="analyzers-stempel.uptodate" depends="check-analyzers-stempel-uptodate">
-    <ant dir="${common.dir}/analysis/stempel" target="jar-core" inheritAll="false">
-      <propertyset refid="uptodate.and.compiled.properties"/>
-    </ant>
-    <property name="analyzers-stempel.uptodate" value="true"/>
-  </target>
-
-  <property name="analyzers-stempel-javadoc.jar" value="${common.dir}/build/analysis/stempel/lucene-analyzers-stempel-${version}-javadoc.jar"/>
-  <target name="check-analyzers-stempel-javadocs-uptodate" unless="analyzers-stempel-javadocs.uptodate">
-    <module-uptodate name="analysis/stempel" jarfile="${analyzers-stempel-javadoc.jar}" property="analyzers-stempel-javadocs.uptodate"/>
-  </target>
-  <target name="javadocs-analyzers-stempel" unless="analyzers-stempel-javadocs.uptodate" depends="check-analyzers-stempel-javadocs-uptodate">
-    <ant dir="${common.dir}/analysis/stempel" target="javadocs" inheritAll="false">
-      <propertyset refid="uptodate.and.compiled.properties"/>
-    </ant>
-    <property name="analyzers-stempel-javadocs.uptodate" value="true"/>
-  </target>
-
-  <property name="analyzers-kuromoji.jar" value="${common.dir}/build/analysis/kuromoji/lucene-analyzers-kuromoji-${version}.jar"/>
-  <target name="check-analyzers-kuromoji-uptodate" unless="analyzers-kuromoji.uptodate">
-    <module-uptodate name="analysis/kuromoji" jarfile="${analyzers-kuromoji.jar}" property="analyzers-kuromoji.uptodate"/>
-  </target>
-  <target name="jar-analyzers-kuromoji" unless="analyzers-kuromoji.uptodate" depends="check-analyzers-kuromoji-uptodate">
-    <ant dir="${common.dir}/analysis/kuromoji" target="jar-core" inheritAll="false">
-      <propertyset refid="uptodate.and.compiled.properties"/>
-    </ant>
-    <property name="analyzers-kuromoji.uptodate" value="true"/>
-  </target>
-
-  <property name="analyzers-kuromoji-javadoc.jar" value="${common.dir}/build/analysis/kuromoji/lucene-analyzers-kuromoji-${version}-javadoc.jar"/>
-  <target name="check-analyzers-kuromoji-javadocs-uptodate" unless="analyzers-kuromoji-javadocs.uptodate">
-    <module-uptodate name="analysis/kuromoji" jarfile="${analyzers-kuromoji-javadoc.jar}" property="analyzers-kuromoji-javadocs.uptodate"/>
-  </target>
-  <target name="javadocs-analyzers-kuromoji" unless="analyzers-kuromoji-javadocs.uptodate" depends="check-analyzers-kuromoji-javadocs-uptodate">
-    <ant dir="${common.dir}/analysis/kuromoji" target="javadocs" inheritAll="false">
-      <propertyset refid="uptodate.and.compiled.properties"/>
-    </ant>
-    <property name="analyzers-kuromoji-javadocs.uptodate" value="true"/>
-  </target>
-
-  <property name="analyzers-morfologik.jar" value="${common.dir}/build/analysis/morfologik/lucene-analyzers-morfologik-${version}.jar"/>
-  <fileset id="analyzers-morfologik.fileset" dir="${common.dir}">
-    <include name="build/analysis/morfologik/lucene-analyzers-morfologik-${version}.jar" />
-    <include name="analysis/morfologik/lib/morfologik-*.jar" />
-  </fileset>
-  <target name="check-analyzers-morfologik-uptodate" unless="analyzers-morfologik.uptodate">
-    <module-uptodate name="analysis/morfologik" jarfile="${analyzers-morfologik.jar}" property="analyzers-morfologik.uptodate"/>
-  </target>
-  <target name="jar-analyzers-morfologik" unless="analyzers-morfologik.uptodate" depends="check-analyzers-morfologik-uptodate">
-    <ant dir="${common.dir}/analysis/morfologik" target="jar-core" inheritAll="false">
-      <propertyset refid="uptodate.and.compiled.properties"/>
-    </ant>
-    <property name="analyzers-morfologik.uptodate" value="true"/>
-  </target>
-
-  <property name="analyzers-morfologik-javadoc.jar" value="${common.dir}/build/analysis/morfologik/lucene-analyzers-morfologik-${version}-javadoc.jar"/>
-  <target name="check-analyzers-morfologik-javadocs-uptodate" unless="analyzers-morfologik-javadocs.uptodate">
-    <module-uptodate name="analysis/morfologik" jarfile="${analyzers-morfologik-javadoc.jar}" property="analyzers-morfologik-javadocs.uptodate"/>
-  </target>
-  <target name="javadocs-analyzers-morfologik" unless="analyzers-morfologik-javadocs.uptodate" depends="check-analyzers-morfologik-javadocs-uptodate">
-    <ant dir="${common.dir}/analysis/morfologik" target="javadocs" inheritAll="false">
-      <propertyset refid="uptodate.and.compiled.properties"/>
-    </ant>
-    <property name="analyzers-morfologik-javadocs.uptodate" value="true"/>
-  </target>
-
-  <property name="analyzers-nori.jar" value="${common.dir}/build/analysis/nori/lucene-analyzers-nori-${version}.jar"/>
-  <target name="check-analyzers-nori-uptodate" unless="analyzers-nori.uptodate">
-    <module-uptodate name="analysis/nori" jarfile="${analyzers-nori.jar}" property="analyzers-nori.uptodate"/>
-  </target>
-  <target name="jar-analyzers-nori" unless="analyzers-nori.uptodate" depends="check-analyzers-nori-uptodate">
-    <ant dir="${common.dir}/analysis/nori" target="jar-core" inheritAll="false">
-      <propertyset refid="uptodate.and.compiled.properties"/>
-    </ant>
-    <property name="analyzers-nori.uptodate" value="true"/>
-  </target>
-
-  <property name="analyzers-nori-javadoc.jar" value="${common.dir}/build/analysis/nori/lucene-analyzers-nori-${version}-javadoc.jar"/>
-  <target name="check-analyzers-nori-javadocs-uptodate" unless="analyzers-nori-javadocs.uptodate">
-    <module-uptodate name="analysis/nori" jarfile="${analyzers-nori-javadoc.jar}" property="analyzers-nori-javadocs.uptodate"/>
-  </target>
-  <target name="javadocs-analyzers-nori" unless="analyzers-nori-javadocs.uptodate" depends="check-analyzers-nori-javadocs-uptodate">
-    <ant dir="${common.dir}/analysis/nori" target="javadocs" inheritAll="false">
-      <propertyset refid="uptodate.and.compiled.properties"/>
-    </ant>
-    <property name="analyzers-nori-javadocs.uptodate" value="true"/>
-  </target>
-
-  <property name="codecs.jar" value="${common.dir}/build/codecs/lucene-codecs-${version}.jar"/>
-  <target name="check-codecs-uptodate" unless="codecs.uptodate">
-    <module-uptodate name="codecs" jarfile="${codecs.jar}" property="codecs.uptodate"/>
-  </target>
-  <target name="jar-codecs" unless="codecs.uptodate" depends="check-codecs-uptodate">
-    <ant dir="${common.dir}/codecs" target="jar-core" inheritall="false">
-      <propertyset refid="uptodate.and.compiled.properties"/>
-    </ant>
-    <property name="codecs.uptodate" value="true"/>
-  </target>
-
-  <property name="codecs-javadoc.jar" value="${common.dir}/build/codecs/lucene-codecs-${version}-javadoc.jar"/>
-  <target name="check-codecs-javadocs-uptodate" unless="codecs-javadocs.uptodate">
-    <module-uptodate name="codecs" jarfile="${codecs-javadoc.jar}" property="codecs-javadocs.uptodate"/>
-  </target>
-  <target name="javadocs-codecs" unless="codecs-javadocs.uptodate" depends="check-codecs-javadocs-uptodate">
-    <ant dir="${common.dir}/codecs" target="javadocs" inheritAll="false">
-      <propertyset refid="uptodate.and.compiled.properties"/>
-    </ant>
-    <property name="codecs-javadocs.uptodate" value="true"/>
-  </target>
-
-  <property name="backward-codecs.jar" value="${common.dir}/build/backward-codecs/lucene-backward-codecs-${version}.jar"/>
-  <target name="check-backward-codecs-uptodate" unless="backward-codecs.uptodate">
-    <module-uptodate name="backward-codecs" jarfile="${backward-codecs.jar}" property="backward-codecs.uptodate"/>
-  </target>
-  <target name="jar-backward-codecs" unless="backward-codecs.uptodate" depends="check-backward-codecs-uptodate">
-    <ant dir="${common.dir}/backward-codecs" target="jar-core" inheritall="false">
-      <propertyset refid="uptodate.and.compiled.properties"/>
-    </ant>
-    <property name="backward-codecs.uptodate" value="true"/>
-  </target>
-
-  <property name="backward-codecs-javadoc.jar" value="${common.dir}/build/backward-codecs/lucene-backward-codecs-${version}-javadoc.jar"/>
-  <target name="check-backward-codecs-javadocs-uptodate" unless="backward-codecs-javadocs.uptodate">
-    <module-uptodate name="backward-codecs" jarfile="${backward-codecs-javadoc.jar}" property="backward-codecs-javadocs.uptodate"/>
-  </target>
-  <target name="javadocs-backward-codecs" unless="backward-codecs-javadocs.uptodate" depends="check-backward-codecs-javadocs-uptodate">
-    <ant dir="${common.dir}/backward-codecs" target="javadocs" inheritAll="false">
-      <propertyset refid="uptodate.and.compiled.properties"/>
-    </ant>
-    <property name="backward-codecs-javadocs.uptodate" value="true"/>
-  </target>
-
-  <property name="expressions.jar" value="${common.dir}/build/expressions/lucene-expressions-${version}.jar"/>
-  <target name="check-expressions-uptodate" unless="expressions.uptodate">
-    <module-uptodate name="expressions" jarfile="${expressions.jar}" property="expressions.uptodate"/>
-  </target>
-  <target name="jar-expressions" unless="expressions.uptodate" depends="check-expressions-uptodate">
-    <ant dir="${common.dir}/expressions" target="jar-core" inheritAll="false">
-      <propertyset refid="uptodate.and.compiled.properties"/>
-    </ant>
-    <property name="expressions.uptodate" value="true"/>
-  </target>
-
-  <property name="expressions-javadoc.jar" value="${common.dir}/build/expressions/lucene-expressions-${version}-javadoc.jar"/>
-  <target name="check-expressions-javadocs-uptodate" unless="expressions-javadocs.uptodate">
-    <module-uptodate name="expressions" jarfile="${expressions-javadoc.jar}" property="expressions-javadocs.uptodate"/>
-  </target>
-  <target name="javadocs-expressions" unless="expressions-javadocs.uptodate" depends="check-expressions-javadocs-uptodate">
-    <ant dir="${common.dir}/expressions" target="javadocs" inheritAll="false">
-      <propertyset refid="uptodate.and.compiled.properties"/>
-    </ant>
-    <property name="expressions-javadocs.uptodate" value="true"/>
-  </target>
-
-  <property name="grouping.jar" value="${common.dir}/build/grouping/lucene-grouping-${version}.jar"/>
-  <target name="check-grouping-uptodate" unless="grouping.uptodate">
-    <module-uptodate name="grouping" jarfile="${grouping.jar}" property="grouping.uptodate"/>
-  </target>
-  <target name="jar-grouping" unless="grouping.uptodate" depends="check-grouping-uptodate">
-    <ant dir="${common.dir}/grouping" target="jar-core" inheritAll="false">
-      <propertyset refid="uptodate.and.compiled.properties"/>
-    </ant>
-    <property name="grouping.uptodate" value="true"/>
-  </target>
-
-  <property name="grouping-javadoc.jar" value="${common.dir}/build/grouping/lucene-grouping-${version}-javadoc.jar"/>
-  <target name="check-grouping-javadocs-uptodate" unless="grouping-javadocs.uptodate">
-    <module-uptodate name="grouping" jarfile="${grouping-javadoc.jar}" property="grouping-javadocs.uptodate"/>
-  </target>
-  <target name="javadocs-grouping" unless="grouping-javadocs.uptodate" depends="check-grouping-javadocs-uptodate">
-    <ant dir="${common.dir}/grouping" target="javadocs" inheritAll="false">
-      <propertyset refid="uptodate.and.compiled.properties"/>
-    </ant>
-    <property name="grouping-javadocs.uptodate" value="true"/>
-  </target>
-
-  <property name="highlighter.jar" value="${common.dir}/build/highlighter/lucene-highlighter-${version}.jar"/>
-  <target name="check-highlighter-uptodate" unless="highlighter.uptodate">
-    <module-uptodate name="highlighter" jarfile="${highlighter.jar}" property="highlighter.uptodate"/>
-  </target>
-  <target name="jar-highlighter" unless="highlighter.uptodate" depends="check-highlighter-uptodate">
-    <ant dir="${common.dir}/highlighter" target="jar-core" inheritall="false">
-      <propertyset refid="uptodate.and.compiled.properties"/>
-    </ant>
-    <property name="highlighter.uptodate" value="true"/>
-  </target>
-
-  <property name="highlighter-javadoc.jar" value="${common.dir}/build/highlighter/lucene-highlighter-${version}-javadoc.jar"/>
-  <target name="check-highlighter-javadocs-uptodate" unless="highlighter-javadocs.uptodate">
-    <module-uptodate name="highlighter" jarfile="${highlighter-javadoc.jar}" property="highlighter-javadocs.uptodate"/>
-  </target>
-  <target name="javadocs-highlighter" unless="highlighter-javadocs.uptodate" depends="check-highlighter-javadocs-uptodate">
-    <ant dir="${common.dir}/highlighter" target="javadocs" inheritAll="false">
-      <propertyset refid="uptodate.and.compiled.properties"/>
-    </ant>
-    <property name="highlighter-javadocs.uptodate" value="true"/>
-  </target>
-
-  <property name="memory.jar" value="${common.dir}/build/memory/lucene-memory-${version}.jar"/>
-  <target name="check-memory-uptodate" unless="memory.uptodate">
-    <module-uptodate name="memory" jarfile="${memory.jar}" property="memory.uptodate"/>
-  </target>
-  <target name="jar-memory" unless="memory.uptodate" depends="check-memory-uptodate">
-    <ant dir="${common.dir}/memory" target="jar-core" inheritall="false">
-      <propertyset refid="uptodate.and.compiled.properties"/>
-    </ant>
-    <property name="memory.uptodate" value="true"/>
-  </target>
-
-  <property name="memory-javadoc.jar" value="${common.dir}/build/memory/lucene-memory-${version}-javadoc.jar"/>
-  <target name="check-memory-javadocs-uptodate" unless="memory-javadocs.uptodate">
-    <module-uptodate name="memory" jarfile="${memory-javadoc.jar}" property="memory-javadocs.uptodate"/>
-  </target>
-  <target name="javadocs-memory" unless="memory-javadocs.uptodate" depends="check-memory-javadocs-uptodate">
-    <ant dir="${common.dir}/memory" target="javadocs" inheritAll="false">
-      <propertyset refid="uptodate.and.compiled.properties"/>
-    </ant>
-    <property name="memory-javadocs.uptodate" value="true"/>
-  </target>
-
-  <property name="misc.jar" value="${common.dir}/build/misc/lucene-misc-${version}.jar"/>
-  <target name="check-misc-uptodate" unless="misc.uptodate">
-    <module-uptodate name="misc" jarfile="${misc.jar}" property="misc.uptodate"/>
-  </target>
-  <target name="jar-misc" unless="misc.uptodate" depends="check-misc-uptodate">
-    <ant dir="${common.dir}/misc" target="jar-core" inheritAll="false">
-      <propertyset refid="uptodate.and.compiled.properties"/>
-    </ant>
-    <property name="misc.uptodate" value="true"/>
-  </target>
-
-  <property name="misc-javadoc.jar" value="${common.dir}/build/misc/lucene-misc-${version}-javadoc.jar"/>
-  <target name="check-misc-javadocs-uptodate" unless="misc-javadocs.uptodate">
-    <module-uptodate name="misc" jarfile="${misc-javadoc.jar}" property="misc-javadocs.uptodate"/>
-  </target>
-  <target name="javadocs-misc" unless="misc-javadocs.uptodate" depends="check-misc-javadocs-uptodate">
-    <ant dir="${common.dir}/misc" target="javadocs" inheritAll="false">
-      <propertyset refid="uptodate.and.compiled.properties"/>
-    </ant>
-    <property name="misc-javadocs.uptodate" value="true"/>
-  </target>
-
-  <property name="sandbox.jar" value="${common.dir}/build/sandbox/lucene-sandbox-${version}.jar"/>
-  <target name="check-sandbox-uptodate" unless="sandbox.uptodate">
-    <module-uptodate name="sandbox" jarfile="${sandbox.jar}" property="sandbox.uptodate"/>
-  </target>
-  <target name="jar-sandbox" unless="sandbox.uptodate" depends="check-sandbox-uptodate">
-    <ant dir="${common.dir}/sandbox" target="jar-core" inheritAll="false">
-      <propertyset refid="uptodate.and.compiled.properties"/>
-    </ant>
-    <property name="sandbox.uptodate" value="true"/>
-  </target>
-
-  <property name="sandbox-javadoc.jar" value="${common.dir}/build/sandbox/lucene-sandbox-${version}-javadoc.jar"/>
-  <target name="check-sandbox-javadocs-uptodate" unless="sandbox-javadocs.uptodate">
-    <module-uptodate name="sandbox" jarfile="${sandbox-javadoc.jar}" property="sandbox-javadocs.uptodate"/>
-  </target>
-  <target name="javadocs-sandbox" unless="sandbox-javadocs.uptodate" depends="check-sandbox-javadocs-uptodate">
-    <ant dir="${common.dir}/sandbox" target="javadocs" inheritAll="false">
-      <propertyset refid="uptodate.and.compiled.properties"/>
-    </ant>
-    <property name="sandbox-javadocs.uptodate" value="true"/>
-  </target>
-
-  <property name="spatial3d.jar" value="${common.dir}/build/spatial3d/lucene-spatial3d-${version}.jar"/>
-  <target name="check-spatial3d-uptodate" unless="spatial3d.uptodate">
-    <module-uptodate name="spatial3d" jarfile="${spatial3d.jar}" property="spatial3d.uptodate"/>
-  </target>
-  <target name="jar-spatial3d" unless="spatial3d.uptodate" depends="check-spatial3d-uptodate">
-    <ant dir="${common.dir}/spatial3d" target="jar-core" inheritAll="false">
-      <propertyset refid="uptodate.and.compiled.properties"/>
-    </ant>
-    <property name="spatial3d.uptodate" value="true"/>
-  </target>
-
-  <property name="spatial3d-javadoc.jar" value="${common.dir}/build/spatial3d/lucene-spatial3d-${version}-javadoc.jar"/>
-  <target name="check-spatial3d-javadocs-uptodate" unless="spatial3d-javadocs.uptodate">
-    <module-uptodate name="spatial3d" jarfile="${spatial3d-javadoc.jar}" property="spatial3d-javadocs.uptodate"/>
-  </target>
-  <target name="javadocs-spatial3d" unless="spatial3d-javadocs.uptodate" depends="check-spatial3d-javadocs-uptodate">
-    <ant dir="${common.dir}/spatial3d" target="javadocs" inheritAll="false">
-      <propertyset refid="uptodate.and.compiled.properties"/>
-    </ant>
-    <property name="spatial3d-javadocs.uptodate" value="true"/>
-  </target>
-
-  <property name="spatial-extras.jar" value="${common.dir}/build/spatial-extras/lucene-spatial-extras-${version}.jar"/>
-  <target name="check-spatial-extras-uptodate" unless="spatial-extras.uptodate">
-    <module-uptodate name="spatial-extras" jarfile="${spatial-extras.jar}" property="spatial-extras.uptodate"/>
-  </target>
-  <target name="jar-spatial-extras" unless="spatial-extras.uptodate" depends="check-spatial-extras-uptodate">
-    <ant dir="${common.dir}/spatial-extras" target="jar-core" inheritAll="false">
-      <propertyset refid="uptodate.and.compiled.properties"/>
-    </ant>
-    <property name="spatial-extras.uptodate" value="true"/>
-  </target>
-
-  <property name="spatial-extras-javadoc.jar" value="${common.dir}/build/spatial-extras/lucene-spatial-extras-${version}-javadoc.jar"/>
-  <target name="check-spatial-extras-javadocs-uptodate" unless="spatial-extras-javadocs.uptodate">
-    <module-uptodate name="spatial-extras" jarfile="${spatial-extras-javadoc.jar}" property="spatial-extras-javadocs.uptodate"/>
-  </target>
-  <target name="javadocs-spatial-extras" unless="spatial-extras-javadocs.uptodate" depends="check-spatial-extras-javadocs-uptodate">
-    <ant dir="${common.dir}/spatial-extras" target="javadocs" inheritAll="false">
-      <propertyset refid="uptodate.and.compiled.properties"/>
-    </ant>
-    <property name="spatial-extras-javadocs.uptodate" value="true"/>
-  </target>
-
-  <property name="suggest.jar" value="${common.dir}/build/suggest/lucene-suggest-${version}.jar"/>
-  <target name="check-suggest-uptodate" unless="suggest.uptodate">
-    <module-uptodate name="suggest" jarfile="${suggest.jar}" property="suggest.uptodate"/>
-  </target>
-  <target name="jar-suggest" unless="suggest.uptodate" depends="check-suggest-uptodate">
-    <ant dir="${common.dir}/suggest" target="jar-core" inheritAll="false">
-      <propertyset refid="uptodate.and.compiled.properties"/>
-    </ant>
-    <property name="suggest.uptodate" value="true"/>
-  </target>
-
-  <property name="suggest-javadoc.jar" value="${common.dir}/build/suggest/lucene-suggest-${version}-javadoc.jar"/>
-  <target name="check-suggest-javadocs-uptodate" unless="suggest-javadocs.uptodate">
-    <module-uptodate name="suggest" jarfile="${suggest-javadoc.jar}" property="suggest-javadocs.uptodate"/>
-  </target>
-  <target name="javadocs-suggest" unless="suggest-javadocs.uptodate" depends="check-suggest-javadocs-uptodate">
-    <ant dir="${common.dir}/suggest" target="javadocs" inheritAll="false">
-      <propertyset refid="uptodate.and.compiled.properties"/>
-    </ant>
-    <property name="suggest-javadocs.uptodate" value="true"/>
-  </target>
-
-  <property name="luke.jar" value="${common.dir}/build/luke/lucene-luke-${version}.jar"/>
-  <target name="check-luke-uptodate" unless="luke.uptodate">
-    <module-uptodate name="luke" jarfile="${luke.jar}" property="luke.uptodate"/>
-  </target>
-  <target name="jar-luke" unless="luke.uptodate" depends="check-luke-uptodate">
-    <ant dir="${common.dir}/luke" target="jar-core" inheritAll="false">
-      <propertyset refid="uptodate.and.compiled.properties"/>
-    </ant>
-    <property name="luke.uptodate" value="true"/>
-  </target>
-
-  <property name="luke-javadoc.jar" value="${common.dir}/build/luke/lucene-luke-${version}-javadoc.jar"/>
-  <target name="check-luke-javadocs-uptodate" unless="luke-javadocs.uptodate">
-    <module-uptodate name="luke" jarfile="${luke-javadoc.jar}" property="luke-javadocs.uptodate"/>
-  </target>
-  <target name="javadocs-luke" unless="luke-javadocs.uptodate" depends="check-luke-javadocs-uptodate">
-    <ant dir="${common.dir}/luke" target="javadocs" inheritAll="false">
-      <propertyset refid="uptodate.and.compiled.properties"/>
-    </ant>
-    <property name="luke-javadocs.uptodate" value="true"/>
-  </target>
-</project>
diff --git a/lucene/monitor/build.xml b/lucene/monitor/build.xml
deleted file mode 100644
index c378c1d..0000000
--- a/lucene/monitor/build.xml
+++ /dev/null
@@ -1,53 +0,0 @@
-<?xml version="1.0"?>
-
-<!--
-    Licensed to the Apache Software Foundation (ASF) under one or more
-    contributor license agreements.  See the NOTICE file distributed with
-    this work for additional information regarding copyright ownership.
-    The ASF licenses this file to You under the Apache License, Version 2.0
-    the "License"); you may not use this file except in compliance with
-    the License.  You may obtain a copy of the License at
-
-        http://www.apache.org/licenses/LICENSE-2.0
-
-    Unless required by applicable law or agreed to in writing, software
-    distributed under the License is distributed on an "AS IS" BASIS,
-    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-    See the License for the specific language governing permissions and
-    limitations under the License.
- -->
-
-<project name="monitor" default="default">
-
-  <description>
-    Reverse-search implementation for monitoring and classification
-  </description>
-
-  <import file="../module-build.xml"/>
-
-  <path id="test.classpath">
-    <path refid="test.base.classpath"/>
-    <pathelement path="${memory.jar}"/>
-  </path>
-
-  <path id="classpath">
-    <pathelement path="${memory.jar}"/>
-    <pathelement path="${analyzers-common.jar}"/>
-    <pathelement path="${queryparser.jar}"/>
-    <path refid="base.classpath"/>
-  </path>
-
-  <target name="init" depends="module-build.init,jar-analyzers-common,jar-queryparser,jar-memory"/>
-
-  <target name="javadocs" depends="javadocs-memory,compile-core,check-javadocs-uptodate"
-          unless="javadocs-uptodate-${name}">
-    <invoke-module-javadoc>
-      <links>
-        <link href="../memory"/>
-        <link href="../analyzers-common"/>
-        <link href="../queryparser"/>
-      </links>
-    </invoke-module-javadoc>
-  </target>
-
-</project>
diff --git a/lucene/monitor/ivy.xml b/lucene/monitor/ivy.xml
deleted file mode 100644
index 9485d48..0000000
--- a/lucene/monitor/ivy.xml
+++ /dev/null
@@ -1,21 +0,0 @@
-<!--
-   Licensed to the Apache Software Foundation (ASF) under one
-   or more contributor license agreements.  See the NOTICE file
-   distributed with this work for additional information
-   regarding copyright ownership.  The ASF licenses this file
-   to you under the Apache License, Version 2.0 (the
-   "License"); you may not use this file except in compliance
-   with the License.  You may obtain a copy of the License at
-
-     http://www.apache.org/licenses/LICENSE-2.0
-
-   Unless required by applicable law or agreed to in writing,
-   software distributed under the License is distributed on an
-   "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-   KIND, either express or implied.  See the License for the
-   specific language governing permissions and limitations
-   under the License.    
--->
-<ivy-module version="2.0">
-  <info organisation="org.apache.lucene" module="luwak"/>
-</ivy-module>
diff --git a/lucene/queries/build.xml b/lucene/queries/build.xml
deleted file mode 100644
index 20f9c4f..0000000
--- a/lucene/queries/build.xml
+++ /dev/null
@@ -1,32 +0,0 @@
-<!--
-  Licensed to the Apache Software Foundation (ASF) under one or more
-  contributor license agreements.  See the NOTICE file distributed with
-  this work for additional information regarding copyright ownership.
-  The ASF licenses this file to You under the Apache License, Version 2.0
-  (the "License"); you may not use this file except in compliance with
-  the License.  You may obtain a copy of the License at
-
-      http://www.apache.org/licenses/LICENSE-2.0
-
-  Unless required by applicable law or agreed to in writing, software
-  distributed under the License is distributed on an "AS IS" BASIS,
-  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-  See the License for the specific language governing permissions and
-  limitations under the License.
-  -->
-
-<project name="queries" default="default">
-  <description>
-    Filters and Queries that add to core Lucene
-  </description>
-
-  <import file="../module-build.xml"/>
-
-  <path id="test.classpath">
-    <pathelement path="${expressions.jar}"/>
-    <fileset dir="../expressions/lib"/>
-    <path refid="test.base.classpath"/>
-  </path>
-
-  <target name="compile-core" depends="jar-expressions,common.compile-core" />
-</project>
diff --git a/lucene/queries/ivy.xml b/lucene/queries/ivy.xml
deleted file mode 100644
index 403c495..0000000
--- a/lucene/queries/ivy.xml
+++ /dev/null
@@ -1,21 +0,0 @@
-<!--
-   Licensed to the Apache Software Foundation (ASF) under one
-   or more contributor license agreements.  See the NOTICE file
-   distributed with this work for additional information
-   regarding copyright ownership.  The ASF licenses this file
-   to you under the Apache License, Version 2.0 (the
-   "License"); you may not use this file except in compliance
-   with the License.  You may obtain a copy of the License at
-
-     http://www.apache.org/licenses/LICENSE-2.0
-
-   Unless required by applicable law or agreed to in writing,
-   software distributed under the License is distributed on an
-   "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-   KIND, either express or implied.  See the License for the
-   specific language governing permissions and limitations
-   under the License.    
--->
-<ivy-module version="2.0">
-  <info organisation="org.apache.lucene" module="queries"/>
-</ivy-module>
diff --git a/lucene/queries/src/java/org/apache/lucene/queries/function/FunctionMatchQuery.java b/lucene/queries/src/java/org/apache/lucene/queries/function/FunctionMatchQuery.java
index 39a51c1..bd2c2f5 100644
--- a/lucene/queries/src/java/org/apache/lucene/queries/function/FunctionMatchQuery.java
+++ b/lucene/queries/src/java/org/apache/lucene/queries/function/FunctionMatchQuery.java
@@ -44,17 +44,32 @@
  */
 public final class FunctionMatchQuery extends Query {
 
+  static final float DEFAULT_MATCH_COST = 100;
+
   private final DoubleValuesSource source;
   private final DoublePredicate filter;
+  private final float matchCost; // not used in equals/hashCode
 
   /**
-   * Create a FunctionMatchQuery
+   * Create a FunctionMatchQuery with the default {@link TwoPhaseIterator} match cost,
+   * {@link #DEFAULT_MATCH_COST} ({@value #DEFAULT_MATCH_COST}).
    * @param source  a {@link DoubleValuesSource} to use for values
    * @param filter  the predicate to match against
    */
   public FunctionMatchQuery(DoubleValuesSource source, DoublePredicate filter) {
+    this(source, filter, DEFAULT_MATCH_COST);
+  }
+
+  /**
+   * Create a FunctionMatchQuery
+   * @param source     a {@link DoubleValuesSource} to use for values
+   * @param filter     the predicate to match against
+   * @param matchCost  to be returned by {@link TwoPhaseIterator#matchCost()}
+   */
+  public FunctionMatchQuery(DoubleValuesSource source, DoublePredicate filter, float matchCost) {
     this.source = source;
     this.filter = filter;
+    this.matchCost = matchCost;
   }
 
   @Override
@@ -83,7 +98,7 @@
 
           @Override
           public float matchCost() {
-            return 100; // TODO maybe DoubleValuesSource should have a matchCost?
+            return matchCost; // TODO maybe DoubleValuesSource should have a matchCost?
           }
         };
         return new ConstantScoreScorer(this, score(), scoreMode, twoPhase);
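
The new three-argument constructor lets callers hint how expensive the predicate
is: two-phase conjunctions verify low-cost iterators first, so a higher matchCost
defers this clause until cheaper ones have matched. A minimal sketch, assuming a
hypothetical float field named "price":

    import java.util.function.DoublePredicate;

    import org.apache.lucene.queries.function.FunctionMatchQuery;
    import org.apache.lucene.search.DoubleValuesSource;
    import org.apache.lucene.search.Query;

    class MatchCostExample {
      static Query priceInRange() {
        DoubleValuesSource source = DoubleValuesSource.fromFloatField("price");
        DoublePredicate inRange = v -> v >= 10 && v < 100;
        // Implicitly uses DEFAULT_MATCH_COST (100):
        Query withDefault = new FunctionMatchQuery(source, inRange);
        // Declares the predicate expensive, deferring its verification:
        return new FunctionMatchQuery(source, inRange, 500f);
      }
    }
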
diff --git a/lucene/queries/src/test/org/apache/lucene/queries/function/TestFunctionMatchQuery.java b/lucene/queries/src/test/org/apache/lucene/queries/function/TestFunctionMatchQuery.java
index 38d8d8a..95bdada 100644
--- a/lucene/queries/src/test/org/apache/lucene/queries/function/TestFunctionMatchQuery.java
+++ b/lucene/queries/src/test/org/apache/lucene/queries/function/TestFunctionMatchQuery.java
@@ -18,20 +18,26 @@
 package org.apache.lucene.queries.function;
 
 import java.io.IOException;
+import java.util.function.DoublePredicate;
 
 import org.apache.lucene.index.DirectoryReader;
 import org.apache.lucene.index.IndexReader;
+import org.apache.lucene.index.LeafReaderContext;
 import org.apache.lucene.search.DoubleValuesSource;
 import org.apache.lucene.search.IndexSearcher;
 import org.apache.lucene.search.QueryUtils;
+import org.apache.lucene.search.ScoreMode;
 import org.apache.lucene.search.TopDocs;
 import org.junit.AfterClass;
 import org.junit.BeforeClass;
 
+import static org.apache.lucene.queries.function.FunctionMatchQuery.DEFAULT_MATCH_COST;
+
 public class TestFunctionMatchQuery extends FunctionTestSetup {
 
   static IndexReader reader;
   static IndexSearcher searcher;
+  private static final DoubleValuesSource in = DoubleValuesSource.fromFloatField(FLOAT_FIELD);
 
   @BeforeClass
   public static void beforeClass() throws Exception {
@@ -46,7 +52,6 @@
   }
 
   public void testRangeMatching() throws IOException {
-    DoubleValuesSource in = DoubleValuesSource.fromFloatField(FLOAT_FIELD);
     FunctionMatchQuery fmq = new FunctionMatchQuery(in, d -> d >= 2 && d < 4);
     TopDocs docs = searcher.search(fmq, 10);
 
@@ -58,4 +63,23 @@
 
   }
 
+  public void testTwoPhaseIteratorMatchCost() throws IOException {
+    DoublePredicate predicate = d -> true;
+
+    // should use default match cost
+    FunctionMatchQuery fmq = new FunctionMatchQuery(in, predicate);
+    assertEquals(DEFAULT_MATCH_COST, getMatchCost(fmq), 0.1);
+
+    // should use client defined match cost
+    fmq = new FunctionMatchQuery(in, predicate, 200);
+    assertEquals(200, getMatchCost(fmq), 0.1);
+  }
+
+  private static float getMatchCost(FunctionMatchQuery fmq) throws IOException {
+    LeafReaderContext ctx = reader.leaves().get(0);
+    return fmq.createWeight(searcher, ScoreMode.TOP_DOCS, 1)
+      .scorer(ctx)
+      .twoPhaseIterator()
+      .matchCost();
+  }
 }
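
The getMatchCost helper reaches the two-phase iterator the same way a searcher
would: Weight, then Scorer, then TwoPhaseIterator. The ScoreMode.TOP_DOCS and
boost of 1 passed to createWeight are arbitrary here; only matchCost() is under
test, and inspecting the first leaf of the reader is sufficient.
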
diff --git a/lucene/queryparser/build.xml b/lucene/queryparser/build.xml
deleted file mode 100644
index b6e43c2..0000000
--- a/lucene/queryparser/build.xml
+++ /dev/null
@@ -1,178 +0,0 @@
-<!--
-  Licensed to the Apache Software Foundation (ASF) under one or more
-  contributor license agreements.  See the NOTICE file distributed with
-  this work for additional information regarding copyright ownership.
-  The ASF licenses this file to You under the Apache License, Version 2.0
-  (the "License"); you may not use this file except in compliance with
-  the License.  You may obtain a copy of the License at
-
-      http://www.apache.org/licenses/LICENSE-2.0
-
-  Unless required by applicable law or agreed to in writing, software
-  distributed under the License is distributed on an "AS IS" BASIS,
-  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-  See the License for the specific language governing permissions and
-  limitations under the License.
-  -->
-
-<project name="queryparser" default="default">
-  <description>
-    Query parsers and parsing framework
-  </description>
-
-  <import file="../module-build.xml"/>
-
-  <path id="classpath">
-    <pathelement path="${queries.jar}"/>
-    <pathelement path="${sandbox.jar}"/>
-    <path refid="base.classpath"/>
-  </path>
-
-  <target name="compile-core" depends="jar-queries,jar-sandbox,common.compile-core"/>
-
-  <target name="javadocs" depends="javadocs-queries,javadocs-sandbox,compile-core,check-javadocs-uptodate"
-          unless="javadocs-uptodate-${name}">
-    <invoke-module-javadoc>
-      <links>
-        <link href="../queries"/>
-        <link href="../sandbox"/>
-      </links>
-    </invoke-module-javadoc>
-  </target>
-
-  <target name="javacc" depends="javacc-QueryParser,javacc-surround,javacc-flexible"/>
-  
-  <macrodef name="generalReplaces">
-    <attribute name="dir"/>
-    <sequential>
-      <!-- StringBuffer -> StringBuilder -->
-      <replace token="StringBuffer" value="StringBuilder" encoding="UTF-8">
-         <fileset dir="@{dir}" includes="ParseException.java TokenMgrError.java"/>
-      </replace>
-      <!-- Remove debug stream (violates forbidden-apis) -->
-      <replaceregexp match="/\*\* Debug.*debugStream\s*=\s*ds;\s*}" replace="" flags="s" encoding="UTF-8">
-         <fileset dir="@{dir}" includes="*TokenManager.java"/>
-      </replaceregexp>
-      <!-- Add warnings supression -->
-      <replaceregexp match="^\Qpublic class\E" replace="@SuppressWarnings(&quot;cast&quot;)${line.separator}\0" flags="m" encoding="UTF-8">
-         <fileset dir="@{dir}" includes="*TokenManager.java"/>
-      </replaceregexp>
-    </sequential>
-  </macrodef>
-  
-  <target name="javacc-QueryParser" depends="resolve-javacc">
-    <sequential>
-      <invoke-javacc target="src/java/org/apache/lucene/queryparser/classic/QueryParser.jj"
-                     outputDir="src/java/org/apache/lucene/queryparser/classic"/>
-
-      <!-- Change the incorrect public ctors for QueryParser to be protected instead -->
-      <replaceregexp file="src/java/org/apache/lucene/queryparser/classic/QueryParser.java"
-         byline="true"
-         match="public QueryParser\(CharStream "
-         replace="protected QueryParser(CharStream "/>
-      <replaceregexp file="src/java/org/apache/lucene/queryparser/classic/QueryParser.java"
-         byline="true"
-         match="public QueryParser\(QueryParserTokenManager "
-         replace="protected QueryParser(QueryParserTokenManager "/>
-      <generalReplaces dir="src/java/org/apache/lucene/queryparser/classic"/>
-    </sequential>
-  </target>
-
-  <target name="javacc-surround" depends="resolve-javacc" description="generate surround query parser">
-    <invoke-javacc target="src/java/org/apache/lucene/queryparser/surround/parser/QueryParser.jj"
-                   outputDir="src/java/org/apache/lucene/queryparser/surround/parser"
-    />
-    <generalReplaces dir="src/java/org/apache/lucene/queryparser/surround/parser"/>
-  </target>
-
-  <target name="javacc-flexible" depends="resolve-javacc">
-    <invoke-javacc target="src/java/org/apache/lucene/queryparser/flexible/standard/parser/StandardSyntaxParser.jj"
-                   outputDir="src/java/org/apache/lucene/queryparser/flexible/standard/parser"
-    />
-        <replaceregexp file="src/java/org/apache/lucene/queryparser/flexible/standard/parser/ParseException.java"
-                             match="public class ParseException extends Exception"
-                             replace="public class ParseException extends QueryNodeParseException"
-                             flags="g"
-                             byline="false"/>
-        <replaceregexp file="src/java/org/apache/lucene/queryparser/flexible/standard/parser/ParseException.java"
-                             match="package org.apache.lucene.queryparser.flexible.standard.parser;"
-                             replace="package org.apache.lucene.queryparser.flexible.standard.parser;${line.separator}
-${line.separator}
-import org.apache.lucene.queryparser.flexible.messages.Message;${line.separator}
-import org.apache.lucene.queryparser.flexible.messages.MessageImpl;${line.separator}
-import org.apache.lucene.queryparser.flexible.core.*;${line.separator}
-import org.apache.lucene.queryparser.flexible.core.messages.*;"
-                             flags="g"
-                             byline="false"/>
-        <replaceregexp file="src/java/org/apache/lucene/queryparser/flexible/standard/parser/ParseException.java"
-                             match="^  public ParseException\(Token currentTokenVal.*$(\s\s[^}].*\n)*  \}"
-                             replace="  public ParseException(Token currentTokenVal,${line.separator}
-    int[][] expectedTokenSequencesVal, String[] tokenImageVal) {${line.separator}
-    super(new MessageImpl(QueryParserMessages.INVALID_SYNTAX, initialise(${line.separator}
-    currentTokenVal, expectedTokenSequencesVal, tokenImageVal)));${line.separator}
-    this.currentToken = currentTokenVal;${line.separator}
-    this.expectedTokenSequences = expectedTokenSequencesVal;${line.separator}
-    this.tokenImage = tokenImageVal;${line.separator}
-  }"
-                             flags="gm"
-                             byline="false"/>
-        <replaceregexp file="src/java/org/apache/lucene/queryparser/flexible/standard/parser/ParseException.java"
-                             match="^  public ParseException\(String message.*$(\s\s[^}].*\n)*  \}"
-                             replace="  public ParseException(Message message) {${line.separator}
-    super(message);${line.separator}
-  }"
-                             flags="gm"
-                             byline="false"/>
-        <replaceregexp file="src/java/org/apache/lucene/queryparser/flexible/standard/parser/ParseException.java"
-                             match="^  public ParseException\(\).*$(\s\s[^}].*\n)*  \}"
-                             replace="  public ParseException() {${line.separator}
-    super(new MessageImpl(QueryParserMessages.INVALID_SYNTAX, &quot;Error&quot;));${line.separator}
-  }"
-                             flags="gm"
-                             byline="false"/>
-        <replaceregexp file="src/java/org/apache/lucene/queryparser/flexible/standard/parser/ParseException.java"
-                             match="^  public String getMessage\(\).*$(\s\s\s\s[^}].*\n)*    \}"
-                             replace="  private static String initialise(Token currentToken, int[][] expectedTokenSequences, String[] tokenImage) {${line.separator}
-    String eol = System.getProperty(&quot;line.separator&quot;, &quot;\n&quot;);"
-                             flags="gm"
-                             byline="false"/>
-        <replaceregexp file="src/java/org/apache/lucene/queryparser/flexible/standard/parser/ParseException.java"
-                             match="\s*protected String add_escapes.*"
-                             replace="  static private String add_escapes(String str) {"
-                             flags="g"
-                             byline="true"/>
-        <generalReplaces dir="src/java/org/apache/lucene/queryparser/flexible/standard/parser"/>
-  </target>
-
-  <target name="resolve-javacc" xmlns:ivy="antlib:org.apache.ivy.ant">
-    <!-- setup a "fake" JavaCC distribution folder in ${build.dir} to make JavaCC ANT task happy: -->
-    <ivy:retrieve organisation="net.java.dev.javacc" module="javacc" revision="5.0" symlink="${ivy.symlink}"
-      inline="true" conf="default" transitive="false" type="jar" sync="true"
-      pattern="${build.dir}/javacc/bin/lib/[artifact].[ext]"/>
-  </target>
-
-  <macrodef name="invoke-javacc">
-    <attribute name="target"/>
-    <attribute name="outputDir"/>
-    <sequential>
-      <mkdir dir="@{outputDir}"/>
-      <delete>
-        <fileset dir="@{outputDir}" includes="*.java">
-          <containsregexp expression="Generated.*By.*JavaCC"/>
-        </fileset>
-      </delete>
-      <javacc
-          target="@{target}"
-          outputDirectory="@{outputDir}"
-          javacchome="${build.dir}/javacc"
-          jdkversion="1.${javac.release}"
-      />
-      <fixcrlf srcdir="@{outputDir}" includes="*.java" encoding="UTF-8">
-        <containsregexp expression="Generated.*By.*JavaCC"/>
-      </fixcrlf>
-    </sequential>
-  </macrodef>
-
-  <target name="regenerate" depends="javacc"/>
-
-</project>
diff --git a/lucene/queryparser/ivy.xml b/lucene/queryparser/ivy.xml
deleted file mode 100644
index 9337401..0000000
--- a/lucene/queryparser/ivy.xml
+++ /dev/null
@@ -1,21 +0,0 @@
-<!--
-   Licensed to the Apache Software Foundation (ASF) under one
-   or more contributor license agreements.  See the NOTICE file
-   distributed with this work for additional information
-   regarding copyright ownership.  The ASF licenses this file
-   to you under the Apache License, Version 2.0 (the
-   "License"); you may not use this file except in compliance
-   with the License.  You may obtain a copy of the License at
-
-     http://www.apache.org/licenses/LICENSE-2.0
-
-   Unless required by applicable law or agreed to in writing,
-   software distributed under the License is distributed on an
-   "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-   KIND, either express or implied.  See the License for the
-   specific language governing permissions and limitations
-   under the License.    
--->
-<ivy-module version="2.0">
-  <info organisation="org.apache.lucene" module="queryparser"/>
-</ivy-module>
diff --git a/lucene/queryparser/xmldtddocbuild.xml b/lucene/queryparser/xmldtddocbuild.xml
deleted file mode 100644
index c36a6ad..0000000
--- a/lucene/queryparser/xmldtddocbuild.xml
+++ /dev/null
@@ -1,53 +0,0 @@
-<?xml version="1.0"?>
-
-<!--
-    Licensed to the Apache Software Foundation (ASF) under one or more
-    contributor license agreements.  See the NOTICE file distributed with
-    this work for additional information regarding copyright ownership.
-    The ASF licenses this file to You under the Apache License, Version 2.0
-    the "License"); you may not use this file except in compliance with
-    the License.  You may obtain a copy of the License at
-
-        http://www.apache.org/licenses/LICENSE-2.0
-
-    Unless required by applicable law or agreed to in writing, software
-    distributed under the License is distributed on an "AS IS" BASIS,
-    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-    See the License for the specific language governing permissions and
-    limitations under the License.
- -->
-
-<project name="DTDDocAnt" default="main">
-
-  <import file="../../lucene/module-build.xml"/>
-
-    <description>
-    This file generates DTDdocumentation
-    </description>
-
-    <!-- Tell ant where to find the code of the DTDDoc task.
-         Set dtddoc.home property to the directory where DTDdoc is installed on your system
-         -->
-
-    <taskdef name="DTDDoc"
-             classname="DTDDoc.DTDDocTask"
-             classpath="${dtddoc.home}/DTDDoc.jar"/>
-
-    <!-- Execute DTDDoc -->
-
-    <target name="main">
-
-
-        <DTDDoc showHiddenTags="false"
-                showFixmeTags="false"
-                sourceDir="."
-                destDir="docs"
-                docTitle = "Lucene XML Query syntax">
-                <include name="org/apache/lucene/queryparser/xml/*.dtd"/>
-        </DTDDoc>
-
-    </target>
-
-
-
-</project>
diff --git a/lucene/replicator/build.xml b/lucene/replicator/build.xml
deleted file mode 100644
index 796bf27..0000000
--- a/lucene/replicator/build.xml
+++ /dev/null
@@ -1,53 +0,0 @@
-<?xml version="1.0"?>
-<!--
-    Licensed to the Apache Software Foundation (ASF) under one or more
-    contributor license agreements.  See the NOTICE file distributed with
-    this work for additional information regarding copyright ownership.
-    The ASF licenses this file to You under the Apache License, Version 2.0
-    the "License"); you may not use this file except in compliance with
-    the License.  You may obtain a copy of the License at
-
-        http://www.apache.org/licenses/LICENSE-2.0
-
-    Unless required by applicable law or agreed to in writing, software
-    distributed under the License is distributed on an "AS IS" BASIS,
-    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-    See the License for the specific language governing permissions and
-    limitations under the License.
- -->
-<project name="replicator" default="default" xmlns:ivy="antlib:org.apache.ivy.ant">
-
-  <description>
-    Files replication utility
-  </description>
-
-  <!-- TODO: go fix this in jetty, its stupid -->
-  <property name="tests.policy" location="../tools/junit4/replicator-tests.policy"/>
-
-  <import file="../module-build.xml"/>
-
-  <path id="classpath">
-    <fileset dir="lib" />
-    <pathelement path="${facet.jar}"/>
-    <path refid="base.classpath"/>
-  </path>
-
-  <target name="resolve" depends="common.resolve">
-    <sequential>
-      <!-- javax.servlet jar -->
-      <ivy:retrieve conf="servlet" log="download-only" type="orbit" symlink="${ivy.symlink}"/>
-    </sequential>
-  </target>
-
-  <target name="init" depends="module-build.init,jar-facet"/>
-
-  <target name="javadocs" depends="javadocs-facet,compile-core,check-javadocs-uptodate"
-          unless="javadocs-uptodate-${name}">
-    <invoke-module-javadoc>
-      <links>
-        <link href="../facet"/>
-      </links>
-    </invoke-module-javadoc>
-  </target>
-
-</project>
diff --git a/lucene/replicator/ivy.xml b/lucene/replicator/ivy.xml
deleted file mode 100644
index 24b053c..0000000
--- a/lucene/replicator/ivy.xml
+++ /dev/null
@@ -1,47 +0,0 @@
-<!--
-   Licensed to the Apache Software Foundation (ASF) under one
-   or more contributor license agreements.  See the NOTICE file
-   distributed with this work for additional information
-   regarding copyright ownership.  The ASF licenses this file
-   to you under the Apache License, Version 2.0 (the
-   "License"); you may not use this file except in compliance
-   with the License.  You may obtain a copy of the License at
-
-     http://www.apache.org/licenses/LICENSE-2.0
-
-   Unless required by applicable law or agreed to in writing,
-   software distributed under the License is distributed on an
-   "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-   KIND, either express or implied.  See the License for the
-   specific language governing permissions and limitations
-   under the License.    
--->
-<ivy-module version="2.0">
-  <info organisation="org.apache.lucene" module="replicator"/>
-
-  <configurations defaultconfmapping="http->master;jetty->master;start->master;servlet->master;logging->master">
-    <conf name="http" description="httpclient jars" transitive="false"/>
-    <conf name="jetty" description="jetty jars" transitive="false"/>
-    <conf name="start" description="jetty start jar" transitive="false"/>
-    <conf name="servlet" description="servlet-api jar" transitive="false"/>
-    <conf name="logging" description="logging setup" transitive="false"/>
-  </configurations>
-
-  <dependencies>
-    <dependency org="org.apache.httpcomponents" name="httpclient" rev="${/org.apache.httpcomponents/httpclient}" conf="http"/>
-    <dependency org="org.apache.httpcomponents" name="httpcore" rev="${/org.apache.httpcomponents/httpcore}" conf="http"/>
-    
-    <dependency org="org.eclipse.jetty" name="jetty-server" rev="${/org.eclipse.jetty/jetty-server}" conf="jetty"/>
-    <dependency org="javax.servlet" name="javax.servlet-api" rev="${/javax.servlet/javax.servlet-api}" conf="jetty"/>
-    <dependency org="org.eclipse.jetty" name="jetty-servlet" rev="${/org.eclipse.jetty/jetty-servlet}" conf="jetty"/>
-    <dependency org="org.eclipse.jetty" name="jetty-util" rev="${/org.eclipse.jetty/jetty-util}" conf="jetty"/>
-    <dependency org="org.eclipse.jetty" name="jetty-io" rev="${/org.eclipse.jetty/jetty-io}" conf="jetty"/>
-    <dependency org="org.eclipse.jetty" name="jetty-continuation" rev="${/org.eclipse.jetty/jetty-continuation}" conf="jetty"/>
-    <dependency org="org.eclipse.jetty" name="jetty-http" rev="${/org.eclipse.jetty/jetty-http}" conf="jetty"/>
-
-    <dependency org="commons-logging" name="commons-logging" rev="${/commons-logging/commons-logging}" conf="logging"/>
-
-    <exclude org="*" ext="*" matcher="regexp" type="${ivy.exclude.types}"/>
-  </dependencies>
-
-</ivy-module>
diff --git a/lucene/sandbox/build.xml b/lucene/sandbox/build.xml
deleted file mode 100644
index 93bc275..0000000
--- a/lucene/sandbox/build.xml
+++ /dev/null
@@ -1,26 +0,0 @@
-<!--
-  Licensed to the Apache Software Foundation (ASF) under one or more
-  contributor license agreements.  See the NOTICE file distributed with
-  this work for additional information regarding copyright ownership.
-  The ASF licenses this file to You under the Apache License, Version 2.0
-  (the "License"); you may not use this file except in compliance with
-  the License.  You may obtain a copy of the License at
-
-      http://www.apache.org/licenses/LICENSE-2.0
-
-  Unless required by applicable law or agreed to in writing, software
-  distributed under the License is distributed on an "AS IS" BASIS,
-  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-  See the License for the specific language governing permissions and
-  limitations under the License.
-  -->
-
-<project name="sandbox" default="default">
-
-  <description>
-    Various third party contributions and new ideas
-  </description>
-
-  <import file="../module-build.xml"/>
-
-</project>
diff --git a/lucene/sandbox/ivy.xml b/lucene/sandbox/ivy.xml
deleted file mode 100644
index 4c5a43e..0000000
--- a/lucene/sandbox/ivy.xml
+++ /dev/null
@@ -1,21 +0,0 @@
-<!--
-   Licensed to the Apache Software Foundation (ASF) under one
-   or more contributor license agreements.  See the NOTICE file
-   distributed with this work for additional information
-   regarding copyright ownership.  The ASF licenses this file
-   to you under the Apache License, Version 2.0 (the
-   "License"); you may not use this file except in compliance
-   with the License.  You may obtain a copy of the License at
-
-     http://www.apache.org/licenses/LICENSE-2.0
-
-   Unless required by applicable law or agreed to in writing,
-   software distributed under the License is distributed on an
-   "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-   KIND, either express or implied.  See the License for the
-   specific language governing permissions and limitations
-   under the License.    
--->
-<ivy-module version="2.0">
-  <info organisation="org.apache.lucene" module="sandbox"/>
-</ivy-module>
diff --git a/lucene/sandbox/src/test/org/apache/lucene/document/TestFloatPointNearestNeighbor.java b/lucene/sandbox/src/test/org/apache/lucene/document/TestFloatPointNearestNeighbor.java
index a14204c..f77d594 100644
--- a/lucene/sandbox/src/test/org/apache/lucene/document/TestFloatPointNearestNeighbor.java
+++ b/lucene/sandbox/src/test/org/apache/lucene/document/TestFloatPointNearestNeighbor.java
@@ -18,7 +18,6 @@
 
 import java.util.Arrays;
 
-import org.apache.lucene.codecs.Codec;
 import org.apache.lucene.index.DirectoryReader;
 import org.apache.lucene.index.IndexWriter;
 import org.apache.lucene.index.IndexWriterConfig;
@@ -243,7 +242,7 @@
 
   private IndexWriterConfig getIndexWriterConfig() {
     IndexWriterConfig iwc = newIndexWriterConfig();
-    iwc.setCodec(Codec.forName("Lucene86"));
+    iwc.setCodec(TestUtil.getDefaultCodec());
     return iwc;
   }
 }
diff --git a/lucene/sandbox/src/test/org/apache/lucene/search/TestNearest.java b/lucene/sandbox/src/test/org/apache/lucene/search/TestNearest.java
index a149ace..98a3de1 100644
--- a/lucene/sandbox/src/test/org/apache/lucene/search/TestNearest.java
+++ b/lucene/sandbox/src/test/org/apache/lucene/search/TestNearest.java
@@ -19,7 +19,6 @@
 import java.util.Arrays;
 import java.util.Comparator;
 
-import org.apache.lucene.codecs.Codec;
 import org.apache.lucene.document.Document;
 import org.apache.lucene.document.Field;
 import org.apache.lucene.document.LatLonDocValuesField;
@@ -246,7 +245,7 @@
 
   private IndexWriterConfig getIndexWriterConfig() {
     IndexWriterConfig iwc = newIndexWriterConfig();
-    iwc.setCodec(Codec.forName("Lucene86"));
+    iwc.setCodec(TestUtil.getDefaultCodec());
     return iwc;
   }
 }
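
Both sandbox tests make the same swap: TestUtil.getDefaultCodec() replaces a
Codec.forName("Lucene86") lookup that would otherwise need editing on every
codec release. A minimal sketch, assuming it runs inside a LuceneTestCase
subclass (which provides newIndexWriterConfig()):

    IndexWriterConfig iwc = newIndexWriterConfig();
    // Tracks the current default codec instead of naming a release:
    iwc.setCodec(TestUtil.getDefaultCodec());
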
diff --git a/lucene/spatial-extras/build.xml b/lucene/spatial-extras/build.xml
deleted file mode 100644
index d95d935..0000000
--- a/lucene/spatial-extras/build.xml
+++ /dev/null
@@ -1,62 +0,0 @@
-<?xml version="1.0"?>
-<!--
-   Licensed to the Apache Software Foundation (ASF) under one
-   or more contributor license agreements.  See the NOTICE file
-   distributed with this work for additional information
-   regarding copyright ownership.  The ASF licenses this file
-   to you under the Apache License, Version 2.0 (the
-   "License"); you may not use this file except in compliance
-   with the License.  You may obtain a copy of the License at
-
-     http://www.apache.org/licenses/LICENSE-2.0
-
-   Unless required by applicable law or agreed to in writing,
-   software distributed under the License is distributed on an
-   "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-   KIND, either express or implied.  See the License for the
-   specific language governing permissions and limitations
-   under the License.
--->
-<project name="spatial-extras" default="default">
-  <description>
-    Geospatial search
-  </description>
-
-  <import file="../module-build.xml"/>
-
-  <path id="spatialjar">
-     <fileset dir="lib"/>
-  </path>
-
-  <path id="classpath">
-    <path refid="base.classpath"/>
-    <path refid="spatialjar"/>
-    <pathelement path="${spatial3d.jar}" />
-  </path>
-
-  <path id="test.classpath">
-    <path refid="test.base.classpath" />
-    <path refid="spatialjar"/>
-    <pathelement path="src/test-files" />
-    <pathelement path="${common.dir}/build/spatial3d/classes/test" />
-  </path>
-
-  <target name="compile-core" depends="jar-spatial3d,common.compile-core" />
-
-  <target name="compile-test" depends="compile-spatial3d-tests,common.compile-test" />
-
-  <target name="compile-spatial3d-tests">
-    <ant dir="${common.dir}/spatial3d" target="compile-test" inheritAll="false">
-      <propertyset refid="uptodate.and.compiled.properties"/>
-    </ant>
-  </target>
-
-  <target name="javadocs" depends="javadocs-spatial3d,compile-core,check-javadocs-uptodate"
-          unless="javadocs-uptodate-${name}">
-    <invoke-module-javadoc>
-      <links>
-        <link href="../spatial3d"/>
-      </links>
-    </invoke-module-javadoc>
-  </target>
-</project>
diff --git a/lucene/spatial-extras/ivy.xml b/lucene/spatial-extras/ivy.xml
deleted file mode 100644
index f8dba4e..0000000
--- a/lucene/spatial-extras/ivy.xml
+++ /dev/null
@@ -1,40 +0,0 @@
-<!--
-   Licensed to the Apache Software Foundation (ASF) under one
-   or more contributor license agreements.  See the NOTICE file
-   distributed with this work for additional information
-   regarding copyright ownership.  The ASF licenses this file
-   to you under the Apache License, Version 2.0 (the
-   "License"); you may not use this file except in compliance
-   with the License.  You may obtain a copy of the License at
-
-     http://www.apache.org/licenses/LICENSE-2.0
-
-   Unless required by applicable law or agreed to in writing,
-   software distributed under the License is distributed on an
-   "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-   KIND, either express or implied.  See the License for the
-   specific language governing permissions and limitations
-   under the License.
--->
-<ivy-module version="2.0"  xmlns:maven="http://ant.apache.org/ivy/maven">
-  <info organisation="org.apache.lucene" module="spatial-extras"/>
-  <configurations defaultconfmapping="compile->master;test->master">
-    <conf name="compile" transitive="false"/>
-    <conf name="test" transitive="false"/>
-  </configurations>
-  <dependencies>
-    <dependency org="org.locationtech.spatial4j" name="spatial4j" rev="${/org.locationtech.spatial4j/spatial4j}" conf="compile"/>
-
-    <dependency org="io.sgr" name="s2-geometry-library-java" rev="${/io.sgr/s2-geometry-library-java}" conf="compile"/>
-
-    <dependency org="org.locationtech.spatial4j" name="spatial4j" rev="${/org.locationtech.spatial4j/spatial4j}" conf="test">
-      <artifact name="spatial4j" type="test" ext="jar" maven:classifier="tests" />
-    </dependency>
-
-    <dependency org="org.locationtech.jts" name="jts-core" rev="${/org.locationtech.jts/jts-core}" conf="test" />
-
-    <!-- <dependency org="org.slf4j" name="slf4j-api" rev="${/org.slf4j/slf4j-api}" conf="test"/> -->
-
-    <exclude org="*" ext="*" matcher="regexp" type="${ivy.exclude.types}"/>
-  </dependencies>
-</ivy-module>
diff --git a/lucene/spatial3d/build.xml b/lucene/spatial3d/build.xml
deleted file mode 100644
index fbe9e06..0000000
--- a/lucene/spatial3d/build.xml
+++ /dev/null
@@ -1,30 +0,0 @@
-<?xml version="1.0"?>
-<!--
-   Licensed to the Apache Software Foundation (ASF) under one
-   or more contributor license agreements.  See the NOTICE file
-   distributed with this work for additional information
-   regarding copyright ownership.  The ASF licenses this file
-   to you under the Apache License, Version 2.0 (the
-   "License"); you may not use this file except in compliance
-   with the License.  You may obtain a copy of the License at
-
-     http://www.apache.org/licenses/LICENSE-2.0
-
-   Unless required by applicable law or agreed to in writing,
-   software distributed under the License is distributed on an
-   "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-   KIND, either express or implied.  See the License for the
-   specific language governing permissions and limitations
-   under the License.
--->
-<project name="spatial3d" default="default">
-  <description>
-    3D spatial planar geometry APIs
-  </description>
-
-  <import file="../module-build.xml"/>
-
-  <path id="classpath">
-    <path refid="base.classpath"/>
-  </path>
-</project>
diff --git a/lucene/spatial3d/ivy.xml b/lucene/spatial3d/ivy.xml
deleted file mode 100644
index e9d2ee7..0000000
--- a/lucene/spatial3d/ivy.xml
+++ /dev/null
@@ -1,21 +0,0 @@
-<!--
-   Licensed to the Apache Software Foundation (ASF) under one
-   or more contributor license agreements.  See the NOTICE file
-   distributed with this work for additional information
-   regarding copyright ownership.  The ASF licenses this file
-   to you under the Apache License, Version 2.0 (the
-   "License"); you may not use this file except in compliance
-   with the License.  You may obtain a copy of the License at
-
-     http://www.apache.org/licenses/LICENSE-2.0
-
-   Unless required by applicable law or agreed to in writing,
-   software distributed under the License is distributed on an
-   "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-   KIND, either express or implied.  See the License for the
-   specific language governing permissions and limitations
-   under the License.    
--->
-<ivy-module version="2.0">
-  <info organisation="org.apache.lucene" module="spatial3d"/>
-</ivy-module>
diff --git a/lucene/suggest/build.xml b/lucene/suggest/build.xml
deleted file mode 100644
index 47d4a63..0000000
--- a/lucene/suggest/build.xml
+++ /dev/null
@@ -1,47 +0,0 @@
-<?xml version="1.0"?>
-
-<!--
-    Licensed to the Apache Software Foundation (ASF) under one or more
-    contributor license agreements.  See the NOTICE file distributed with
-    this work for additional information regarding copyright ownership.
-    The ASF licenses this file to You under the Apache License, Version 2.0
-    the "License"); you may not use this file except in compliance with
-    the License.  You may obtain a copy of the License at
- 
-        http://www.apache.org/licenses/LICENSE-2.0
- 
-    Unless required by applicable law or agreed to in writing, software
-    distributed under the License is distributed on an "AS IS" BASIS,
-    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-    See the License for the specific language governing permissions and
-    limitations under the License.
- -->
-
-<project name="suggest" default="default">
-
-  <description>
-    Auto-suggest and Spellchecking support
-  </description>
-  
-  <!-- just a list of words for testing suggesters -->
-  <property name="rat.excludes" value="**/Top50KWiki.utf8,**/stop-snowball.txt"/>
-
-  <import file="../module-build.xml"/>
-
-  <path id="classpath">
-    <pathelement path="${analyzers-common.jar}"/>
-    <path refid="base.classpath"/>
-  </path>
-
-  <target name="javadocs" depends="compile-core,check-javadocs-uptodate"
-          unless="javadocs-uptodate-${name}">
-    <invoke-module-javadoc>
-      <links>
-        <link href="../analyzers-common"/>
-      </links>
-    </invoke-module-javadoc>
-  </target>
-
-  <target name="compile-core" depends="jar-expressions, jar-analyzers-common, common.compile-core" />
-
-</project>
diff --git a/lucene/suggest/ivy.xml b/lucene/suggest/ivy.xml
deleted file mode 100644
index 343853a..0000000
--- a/lucene/suggest/ivy.xml
+++ /dev/null
@@ -1,21 +0,0 @@
-<!--
-   Licensed to the Apache Software Foundation (ASF) under one
-   or more contributor license agreements.  See the NOTICE file
-   distributed with this work for additional information
-   regarding copyright ownership.  The ASF licenses this file
-   to you under the Apache License, Version 2.0 (the
-   "License"); you may not use this file except in compliance
-   with the License.  You may obtain a copy of the License at
-
-     http://www.apache.org/licenses/LICENSE-2.0
-
-   Unless required by applicable law or agreed to in writing,
-   software distributed under the License is distributed on an
-   "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-   KIND, either express or implied.  See the License for the
-   specific language governing permissions and limitations
-   under the License.    
--->
-<ivy-module version="2.0">
-  <info organisation="org.apache.lucene" module="suggest"/>
-</ivy-module>
diff --git a/lucene/suggest/src/java/org/apache/lucene/search/spell/SuggestWord.java b/lucene/suggest/src/java/org/apache/lucene/search/spell/SuggestWord.java
index f61eead..6b584d1 100644
--- a/lucene/suggest/src/java/org/apache/lucene/search/spell/SuggestWord.java
+++ b/lucene/suggest/src/java/org/apache/lucene/search/spell/SuggestWord.java
@@ -44,4 +44,9 @@
    */
   public String string;
 
+  @Override
+  public String toString() {
+    return "SuggestWord(string=" + string + ", score=" + score + ", freq=" + freq + ")";
+  }
+
 }
\ No newline at end of file
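
The added toString() makes suggester results readable in assertion messages and
logs. A small usage sketch (SuggestWord exposes public fields and a no-arg
constructor):

    import org.apache.lucene.search.spell.SuggestWord;

    class SuggestWordDemo {
      public static void main(String[] args) {
        SuggestWord w = new SuggestWord();
        w.string = "lucene";
        w.score = 0.92f;
        w.freq = 42;
        // Prints: SuggestWord(string=lucene, score=0.92, freq=42)
        System.out.println(w);
      }
    }
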
diff --git a/lucene/suggest/src/test/org/apache/lucene/search/suggest/document/TestSuggestField.java b/lucene/suggest/src/test/org/apache/lucene/search/suggest/document/TestSuggestField.java
index 12c8902..f4a7c99 100644
--- a/lucene/suggest/src/test/org/apache/lucene/search/suggest/document/TestSuggestField.java
+++ b/lucene/suggest/src/test/org/apache/lucene/search/suggest/document/TestSuggestField.java
@@ -39,7 +39,7 @@
 import org.apache.lucene.analysis.tokenattributes.TypeAttribute;
 import org.apache.lucene.codecs.Codec;
 import org.apache.lucene.codecs.PostingsFormat;
-import org.apache.lucene.codecs.lucene86.Lucene86Codec;
+import org.apache.lucene.codecs.lucene87.Lucene87Codec;
 import org.apache.lucene.document.Document;
 import org.apache.lucene.document.Field;
 import org.apache.lucene.document.IntPoint;
@@ -887,7 +887,7 @@
   static IndexWriterConfig iwcWithSuggestField(Analyzer analyzer, final Set<String> suggestFields) {
     IndexWriterConfig iwc = newIndexWriterConfig(random(), analyzer);
     iwc.setMergePolicy(newLogMergePolicy());
-    Codec filterCodec = new Lucene86Codec() {
+    Codec filterCodec = new Lucene87Codec() {
       CompletionPostingsFormat.FSTLoadMode fstLoadMode =
           RandomPicks.randomFrom(random(), CompletionPostingsFormat.FSTLoadMode.values());
       PostingsFormat postingsFormat = new Completion84PostingsFormat(fstLoadMode);
diff --git a/lucene/test-framework/build.gradle b/lucene/test-framework/build.gradle
index 4b0cade..ce9355b 100644
--- a/lucene/test-framework/build.gradle
+++ b/lucene/test-framework/build.gradle
@@ -22,9 +22,13 @@
 dependencies {
   api project(':lucene:core')
 
-  api ("com.carrotsearch.randomizedtesting:randomizedtesting-runner")
-  api ('org.hamcrest:hamcrest-core')
-  api ("junit:junit")
+  api ("com.carrotsearch.randomizedtesting:randomizedtesting-runner", {
+    exclude group: "junit"
+  })
+  api ("junit:junit", {
+    exclude group: "org.hamcrest"
+  })
+  api ('org.hamcrest:hamcrest')
 
   implementation project(':lucene:codecs')
 }
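
The excludes keep a single copy of each test dependency on the classpath: junit
no longer drags in the legacy hamcrest-core artifact alongside the newer
org.hamcrest:hamcrest, and randomizedtesting-runner no longer pins its own junit.
Assuming the standard Gradle project layout of this repository, the resolved
tree can be inspected with the built-in dependencies report:

    ./gradlew :lucene:test-framework:dependencies --configuration testRuntimeClasspath
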
diff --git a/lucene/test-framework/build.xml b/lucene/test-framework/build.xml
deleted file mode 100644
index 844c557..0000000
--- a/lucene/test-framework/build.xml
+++ /dev/null
@@ -1,82 +0,0 @@
-<?xml version="1.0"?>
-
-<!--
-    Licensed to the Apache Software Foundation (ASF) under one or more
-    contributor license agreements.  See the NOTICE file distributed with
-    this work for additional information regarding copyright ownership.
-    The ASF licenses this file to You under the Apache License, Version 2.0
-    the "License"); you may not use this file except in compliance with
-    the License.  You may obtain a copy of the License at
- 
-        http://www.apache.org/licenses/LICENSE-2.0
- 
-    Unless required by applicable law or agreed to in writing, software
-    distributed under the License is distributed on an "AS IS" BASIS,
-    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-    See the License for the specific language governing permissions and
-    limitations under the License.
- -->
-
-<project name="test-framework" default="default">
-  <description>Framework for testing Lucene-based applications</description>
-
-  <property name="build.dir" location="../build/test-framework"/>
-
-  <import file="../common-build.xml"/>
-
-  <path id="classpath">
-    <pathelement location="${common.dir}/build/core/classes/java"/>
-    <pathelement location="${common.dir}/build/codecs/classes/java"/>
-    <fileset dir="lib"/>
-  </path>
-
-  <path id="test.classpath"> 
-    <pathelement location="${build.dir}/classes/java"/>
-    <pathelement location="${build.dir}/classes/test"/>
-    <path refid="classpath"/>
-    <path refid="junit-path"/>
-  </path>
-
-  <path id="junit.classpath">
-    <path refid="test.classpath"/>
-  </path>
-
-  <!-- 
-      Specialize compile-core to depend on lucene-core and lucene-codecs compilation.
-   -->
-  <target name="compile-core" depends="init,compile-lucene-core,compile-codecs,common.compile-core"
-          description="Compiles test-framework classes"/>
-
-  <!-- redefine the forbidden apis for tests, as we check ourselves - no sysout testing -->
-  <target name="-check-forbidden-tests" depends="-init-forbidden-apis,compile-core">
-    <forbidden-apis suppressAnnotation="**.SuppressForbidden" signaturesFile="${common.dir}/tools/forbiddenApis/tests.txt" classpathref="forbidden-apis.allclasses.classpath"> 
-      <fileset dir="${build.dir}/classes/java"/>
-      <fileset dir="${build.dir}/classes/test"/>
-    </forbidden-apis>
-  </target>
-  <target name="-check-forbidden-sysout"/>
-
-  <target name="javadocs-core" depends="javadocs"/>
-  <target name="javadocs" depends="init,javadocs-lucene-core,javadocs-lucene-codecs,check-javadocs-uptodate"
-          unless="javadocs-uptodate-${name}">
-    <sequential>
-      <mkdir dir="${javadoc.dir}/test-framework"/>
-      <invoke-javadoc overview="${src.dir}/overview.html"
-                      destdir="${javadoc.dir}/test-framework"
-                      title="${Name} ${version} Test Framework API">
-        <sources>
-          <packageset dir="${src.dir}"/>
-          <link offline="true" href="${javadoc.link.junit}"
-                packagelistLoc="${javadoc.packagelist.dir}/junit"/>
-          <link href="../core/"/>
-          <link href="../codecs/"/>
-          <link href=""/>
-        </sources>
-      </invoke-javadoc>
-      <mkdir dir="${build.dir}"/>
-      <jarify basedir="${javadoc.dir}/test-framework" 
-              destfile="${build.dir}/${final.name}-javadoc.jar"
-              title="Lucene Search Engine: Test Framework" />
-    </sequential>
-  </target>
-</project>
diff --git a/lucene/test-framework/ivy.xml b/lucene/test-framework/ivy.xml
deleted file mode 100644
index 9b2932d..0000000
--- a/lucene/test-framework/ivy.xml
+++ /dev/null
@@ -1,33 +0,0 @@
-<!--
-   Licensed to the Apache Software Foundation (ASF) under one
-   or more contributor license agreements.  See the NOTICE file
-   distributed with this work for additional information
-   regarding copyright ownership.  The ASF licenses this file
-   to you under the Apache License, Version 2.0 (the
-   "License"); you may not use this file except in compliance
-   with the License.  You may obtain a copy of the License at
-
-     http://www.apache.org/licenses/LICENSE-2.0
-
-   Unless required by applicable law or agreed to in writing,
-   software distributed under the License is distributed on an
-   "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-   KIND, either express or implied.  See the License for the
-   specific language governing permissions and limitations
-   under the License.    
--->
-<ivy-module version="2.0">
-  <info organisation="org.apache.lucene" module="test-framework"/>
-
-  <configurations defaultconfmapping="compile->master">
-    <conf name="compile" transitive="false"/>
-  </configurations>
-
-  <dependencies>
-    <dependency org="junit" name="junit" rev="${/junit/junit}" conf="compile"/>
-    <dependency org="org.hamcrest" name="hamcrest-core" rev="${/org.hamcrest/hamcrest-core}" conf="compile"/>
-    <dependency org="com.carrotsearch.randomizedtesting" name="randomizedtesting-runner" rev="${/com.carrotsearch.randomizedtesting/randomizedtesting-runner}" conf="compile"/>
-
-    <exclude org="*" ext="*" matcher="regexp" type="${ivy.exclude.types}"/> 
-  </dependencies>
-</ivy-module>
diff --git a/lucene/test-framework/src/java/org/apache/lucene/codecs/compressing/CompressingCodec.java b/lucene/test-framework/src/java/org/apache/lucene/codecs/compressing/CompressingCodec.java
index 4f334ae..9fd243f 100644
--- a/lucene/test-framework/src/java/org/apache/lucene/codecs/compressing/CompressingCodec.java
+++ b/lucene/test-framework/src/java/org/apache/lucene/codecs/compressing/CompressingCodec.java
@@ -37,7 +37,7 @@
    * Create a random instance.
    */
   public static CompressingCodec randomInstance(Random random, int chunkSize, int maxDocsPerChunk, boolean withSegmentSuffix, int blockShift) {
-    switch (random.nextInt(4)) {
+    switch (random.nextInt(5)) {
     case 0:
       return new FastCompressingCodec(chunkSize, maxDocsPerChunk, withSegmentSuffix, blockShift);
     case 1:
@@ -46,6 +46,8 @@
       return new HighCompressionCompressingCodec(chunkSize, maxDocsPerChunk, withSegmentSuffix, blockShift);
     case 3:
       return new DummyCompressingCodec(chunkSize, maxDocsPerChunk, withSegmentSuffix, blockShift);
+    case 4:
+      return new DeflateWithPresetCompressingCodec(chunkSize, maxDocsPerChunk, withSegmentSuffix, blockShift);
     default:
       throw new AssertionError();
     }
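
Note that the bound passed to random.nextInt must match the number of cases: a
smaller bound would silently stop exercising the new codec, while a larger one
would trip the AssertionError in the default branch.
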
diff --git a/lucene/test-framework/src/java/org/apache/lucene/codecs/compressing/DeflateWithPresetCompressingCodec.java b/lucene/test-framework/src/java/org/apache/lucene/codecs/compressing/DeflateWithPresetCompressingCodec.java
new file mode 100644
index 0000000..9d1791e
--- /dev/null
+++ b/lucene/test-framework/src/java/org/apache/lucene/codecs/compressing/DeflateWithPresetCompressingCodec.java
@@ -0,0 +1,36 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.lucene.codecs.compressing;
+
+import org.apache.lucene.codecs.lucene87.Lucene87StoredFieldsFormat.DeflateWithPresetDict;
+
+/** CompressionCodec that uses {@link DeflateWithPresetDict}. */
+public class DeflateWithPresetCompressingCodec extends CompressingCodec {
+
+  /** Constructor that allows configuring the chunk size. */
+  public DeflateWithPresetCompressingCodec(int chunkSize, int maxDocsPerChunk, boolean withSegmentSuffix, int blockSize) {
+    super("DeflateWithPresetCompressingStoredFieldsData", 
+          withSegmentSuffix ? "DeflateWithPresetCompressingStoredFields" : "",
+          new DeflateWithPresetDict(chunkSize/10, chunkSize/3+1), chunkSize, maxDocsPerChunk, blockSize);
+  }
+
+  /** No-arg constructor. */
+  public DeflateWithPresetCompressingCodec() {
+    this(1<<18, 512, false, 10);
+  }
+
+}
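
This test-only codec stores fields with the Lucene87 DEFLATE + preset-dictionary
mode at a configurable granularity. A minimal sketch of wiring it into a writer
(it assumes lucene-test-framework is on the classpath so the codec can be
resolved by name when the index is reopened):

    import org.apache.lucene.codecs.compressing.DeflateWithPresetCompressingCodec;
    import org.apache.lucene.index.IndexWriter;
    import org.apache.lucene.index.IndexWriterConfig;
    import org.apache.lucene.store.ByteBuffersDirectory;
    import org.apache.lucene.store.Directory;

    class PresetDictCodecDemo {
      public static void main(String[] args) throws Exception {
        Directory dir = new ByteBuffersDirectory();
        IndexWriterConfig iwc = new IndexWriterConfig();
        // 64 KB chunks, up to 512 docs per chunk, no segment suffix, blockSize 10:
        iwc.setCodec(new DeflateWithPresetCompressingCodec(1 << 16, 512, false, 10));
        try (IndexWriter writer = new IndexWriter(dir, iwc)) {
          // index documents as usual; stored fields go through the preset dict
        }
      }
    }
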
diff --git a/lucene/test-framework/src/java/org/apache/lucene/geo/BaseGeoPointTestCase.java b/lucene/test-framework/src/java/org/apache/lucene/geo/BaseGeoPointTestCase.java
index f556c0d..c080db1 100644
--- a/lucene/test-framework/src/java/org/apache/lucene/geo/BaseGeoPointTestCase.java
+++ b/lucene/test-framework/src/java/org/apache/lucene/geo/BaseGeoPointTestCase.java
@@ -26,6 +26,7 @@
 import java.util.Set;
 
 import org.apache.lucene.analysis.MockAnalyzer;
+import org.apache.lucene.codecs.Codec;
 import org.apache.lucene.codecs.FilterCodec;
 import org.apache.lucene.codecs.PointsFormat;
 import org.apache.lucene.codecs.PointsReader;
@@ -1276,7 +1277,8 @@
     // Else seeds may not reproduce:
     iwc.setMergeScheduler(new SerialMergeScheduler());
     int pointsInLeaf = 2 + random().nextInt(4);
-    iwc.setCodec(new FilterCodec("Lucene86", TestUtil.getDefaultCodec()) {
+    final Codec in = TestUtil.getDefaultCodec();
+    iwc.setCodec(new FilterCodec(in.getName(), in) {
       @Override
       public PointsFormat pointsFormat() {
         return new PointsFormat() {
diff --git a/lucene/test-framework/src/java/org/apache/lucene/geo/BaseXYPointTestCase.java b/lucene/test-framework/src/java/org/apache/lucene/geo/BaseXYPointTestCase.java
index f60bd4c..c9240d7 100644
--- a/lucene/test-framework/src/java/org/apache/lucene/geo/BaseXYPointTestCase.java
+++ b/lucene/test-framework/src/java/org/apache/lucene/geo/BaseXYPointTestCase.java
@@ -26,6 +26,7 @@
 import java.util.Set;
 
 import org.apache.lucene.analysis.MockAnalyzer;
+import org.apache.lucene.codecs.Codec;
 import org.apache.lucene.codecs.FilterCodec;
 import org.apache.lucene.codecs.PointsFormat;
 import org.apache.lucene.codecs.PointsReader;
@@ -1190,7 +1191,8 @@
     // Else seeds may not reproduce:
     iwc.setMergeScheduler(new SerialMergeScheduler());
     int pointsInLeaf = 2 + random().nextInt(4);
-    iwc.setCodec(new FilterCodec("Lucene86", TestUtil.getDefaultCodec()) {
+    Codec in = TestUtil.getDefaultCodec();
+    iwc.setCodec(new FilterCodec(in.getName(), in) {
       @Override
       public PointsFormat pointsFormat() {
         return new PointsFormat() {
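
Both geo base test classes previously hard-coded "Lucene86" as the FilterCodec
name while delegating to whatever TestUtil.getDefaultCodec() returned; after the
default moved to Lucene87, the registered name and the delegate would have
disagreed. Deriving the name via in.getName() keeps the wrapper and its delegate
in sync across future codec bumps.
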
diff --git a/lucene/test-framework/src/java/org/apache/lucene/store/MockDirectoryWrapper.java b/lucene/test-framework/src/java/org/apache/lucene/store/MockDirectoryWrapper.java
index 12d5075..c0cbe57 100644
--- a/lucene/test-framework/src/java/org/apache/lucene/store/MockDirectoryWrapper.java
+++ b/lucene/test-framework/src/java/org/apache/lucene/store/MockDirectoryWrapper.java
@@ -249,7 +249,7 @@
     maybeYield();
     maybeThrowDeterministicException();
     if (crashed) {
-      throw new IOException("cannot rename after crash");
+      throw new IOException("cannot sync metadata after crash");
     }
     in.syncMetaData();
   }
diff --git a/lucene/test-framework/src/java/org/apache/lucene/util/LuceneTestCase.java b/lucene/test-framework/src/java/org/apache/lucene/util/LuceneTestCase.java
index 2adf318..5343eae 100644
--- a/lucene/test-framework/src/java/org/apache/lucene/util/LuceneTestCase.java
+++ b/lucene/test-framework/src/java/org/apache/lucene/util/LuceneTestCase.java
@@ -1003,7 +1003,7 @@
     if (rarely(r)) {
       c.setCheckPendingFlushUpdate(false);
     }
-    c.setMaxCommitMergeWaitMillis(rarely() ?  atLeast(r, 1000) : atLeast(r, 200));
+    c.setMaxFullFlushMergeWaitMillis(rarely() ?  atLeast(r, 1000) : atLeast(r, 200));
     return c;
   }
 
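
The setter was renamed because the wait now applies to merges triggered by any
full flush (commit as well as NRT reopen), not just commits. A minimal sketch of
the production-side knob this randomization exercises:

    import org.apache.lucene.index.IndexWriterConfig;

    class FullFlushMergeWait {
      static IndexWriterConfig configure() {
        IndexWriterConfig iwc = new IndexWriterConfig();
        // Wait up to 500 ms for merges kicked off by a full flush to
        // complete before the flush returns; 0 disables the wait.
        iwc.setMaxFullFlushMergeWaitMillis(500);
        return iwc;
      }
    }
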
diff --git a/lucene/test-framework/src/java/org/apache/lucene/util/TestRuleSetupAndRestoreClassEnv.java b/lucene/test-framework/src/java/org/apache/lucene/util/TestRuleSetupAndRestoreClassEnv.java
index aef11ac..81cb328 100644
--- a/lucene/test-framework/src/java/org/apache/lucene/util/TestRuleSetupAndRestoreClassEnv.java
+++ b/lucene/test-framework/src/java/org/apache/lucene/util/TestRuleSetupAndRestoreClassEnv.java
@@ -33,8 +33,8 @@
 import org.apache.lucene.codecs.asserting.AssertingPostingsFormat;
 import org.apache.lucene.codecs.cheapbastard.CheapBastardCodec;
 import org.apache.lucene.codecs.compressing.CompressingCodec;
-import org.apache.lucene.codecs.lucene50.Lucene50StoredFieldsFormat;
-import org.apache.lucene.codecs.lucene86.Lucene86Codec;
+import org.apache.lucene.codecs.lucene87.Lucene87StoredFieldsFormat;
+import org.apache.lucene.codecs.lucene87.Lucene87Codec;
 import org.apache.lucene.codecs.mockrandom.MockRandomPostingsFormat;
 import org.apache.lucene.codecs.simpletext.SimpleTextCodec;
 import org.apache.lucene.index.RandomCodec;
@@ -187,8 +187,8 @@
       codec = new AssertingCodec();
     } else if ("Compressing".equals(TEST_CODEC) || ("random".equals(TEST_CODEC) && randomVal == 6 && !shouldAvoidCodec("Compressing"))) {
       codec = CompressingCodec.randomInstance(random);
-    } else if ("Lucene84".equals(TEST_CODEC) || ("random".equals(TEST_CODEC) && randomVal == 5 && !shouldAvoidCodec("Lucene84"))) {
-      codec = new Lucene86Codec(RandomPicks.randomFrom(random, Lucene50StoredFieldsFormat.Mode.values())
+    } else if ("Lucene87".equals(TEST_CODEC) || ("random".equals(TEST_CODEC) && randomVal == 5 && !shouldAvoidCodec("Lucene87"))) {
+      codec = new Lucene87Codec(RandomPicks.randomFrom(random, Lucene87StoredFieldsFormat.Mode.values())
       );
     } else if (!"random".equals(TEST_CODEC)) {
       codec = Codec.forName(TEST_CODEC);
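
TEST_CODEC comes from the tests.codec system property, so this branch can be
pinned explicitly when reproducing failures. Assuming the Gradle build forwards
tests.* properties the way the Ant build did, for example:

    ./gradlew -p lucene/core test -Dtests.codec=Lucene87
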
diff --git a/lucene/test-framework/src/java/org/apache/lucene/util/TestUtil.java b/lucene/test-framework/src/java/org/apache/lucene/util/TestUtil.java
index 2dc9ead..7104a85 100644
--- a/lucene/test-framework/src/java/org/apache/lucene/util/TestUtil.java
+++ b/lucene/test-framework/src/java/org/apache/lucene/util/TestUtil.java
@@ -54,7 +54,7 @@
 import org.apache.lucene.codecs.blocktreeords.BlockTreeOrdsPostingsFormat;
 import org.apache.lucene.codecs.lucene80.Lucene80DocValuesFormat;
 import org.apache.lucene.codecs.lucene84.Lucene84PostingsFormat;
-import org.apache.lucene.codecs.lucene86.Lucene86Codec;
+import org.apache.lucene.codecs.lucene87.Lucene87Codec;
 import org.apache.lucene.codecs.perfield.PerFieldDocValuesFormat;
 import org.apache.lucene.codecs.perfield.PerFieldPostingsFormat;
 import org.apache.lucene.document.BinaryDocValuesField;
@@ -919,7 +919,7 @@
    * This may be different than {@link Codec#getDefault()} because that is randomized. 
    */
   public static Codec getDefaultCodec() {
-    return new Lucene86Codec();
+    return new Lucene87Codec();
   }
   
   /** 
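
Unlike Codec.getDefault(), which the test environment may randomize, this helper
always returns the concrete current default (now Lucene87Codec), so tests that
need a deterministic on-disk format keep working across codec bumps with a
one-line change here.
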
diff --git a/lucene/test-framework/src/resources/META-INF/services/org.apache.lucene.codecs.Codec b/lucene/test-framework/src/resources/META-INF/services/org.apache.lucene.codecs.Codec
index 282f5dd..5892cb0 100644
--- a/lucene/test-framework/src/resources/META-INF/services/org.apache.lucene.codecs.Codec
+++ b/lucene/test-framework/src/resources/META-INF/services/org.apache.lucene.codecs.Codec
@@ -15,6 +15,7 @@
 
 org.apache.lucene.codecs.asserting.AssertingCodec
 org.apache.lucene.codecs.cheapbastard.CheapBastardCodec
+org.apache.lucene.codecs.compressing.DeflateWithPresetCompressingCodec
 org.apache.lucene.codecs.compressing.FastCompressingCodec
 org.apache.lucene.codecs.compressing.FastDecompressionCompressingCodec
 org.apache.lucene.codecs.compressing.HighCompressionCompressingCodec
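
Codecs are resolved by name through Java's ServiceLoader, so the new test codec
has to be listed here before Codec.forName can find it. Registration can be
confirmed without guessing the codec's SPI name:

    import org.apache.lucene.codecs.Codec;

    class ListCodecs {
      public static void main(String[] args) {
        // Prints every codec discoverable on the classpath, including the
        // newly registered preset-dictionary test codec when
        // lucene-test-framework is present.
        for (String name : Codec.availableCodecs()) {
          System.out.println(name);
        }
      }
    }
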
diff --git a/lucene/tools/build.xml b/lucene/tools/build.xml
deleted file mode 100644
index b245dce..0000000
--- a/lucene/tools/build.xml
+++ /dev/null
@@ -1,64 +0,0 @@
-<?xml version="1.0"?>
-
-<!--
-    Licensed to the Apache Software Foundation (ASF) under one or more
-    contributor license agreements.  See the NOTICE file distributed with
-    this work for additional information regarding copyright ownership.
-    The ASF licenses this file to You under the Apache License, Version 2.0
-    the "License"); you may not use this file except in compliance with
-    the License.  You may obtain a copy of the License at
- 
-        http://www.apache.org/licenses/LICENSE-2.0
- 
-    Unless required by applicable law or agreed to in writing, software
-    distributed under the License is distributed on an "AS IS" BASIS,
-    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-    See the License for the specific language governing permissions and
-    limitations under the License.
- -->
-
-<project name="tools" default="default">
-  <description>Lucene Tools</description>
-
-  <property name="build.dir" location="../build/tools"/>
-  
-  <property name="rat.additional-includes" value="forbiddenApis/**,prettify/**"/>
-
-  <import file="../common-build.xml"/>
-
-  <path id="classpath">
-    <!-- TODO: we need this for forbidden-apis to be happy, because it does not support "includeantruntime": -->
-    <fileset dir="lib"/>
-  </path>
-
-  <path id="test.classpath"/>
-
-  <!-- redefine the test compilation, -test and -check-totals, so these are just no-ops -->
-  <target name="compile-test"/>
-  <target name="-test"/>
-  <target name="-check-totals"/>
-  
-  <!-- redefine the forbidden apis to be no-ops -->
-  <target name="-check-forbidden-tests"/>
-  <target name="-check-forbidden-sysout"/>
-
-  <!-- disable clover -->
-  <target name="-clover.setup" if="run.clover"/>
-
-  <!--  
-      Specialize compile-core to not depend on clover, to exclude a 
-      classpath reference when compiling, and to not attempt to copy
-      non-existent resource files to the build output directory.
-   -->
-  <target name="compile-core" depends="init" description="Compiles tools classes.">
-    <compile srcdir="${src.dir}" destdir="${build.dir}/classes/java" includeantruntime="true">
-      <classpath refid="classpath"/>
-    </compile>
-    <copy todir="${build.dir}/classes/java">
-      <fileset dir="${src.dir}" excludes="**/*.java" />
-    </copy>
-  </target>
-
-  <target name="javadocs"/> <!-- to make common-build.xml happy -->
-  <target name="pitest"/> <!-- to make common-build.xml happy -->
-</project>
diff --git a/lucene/tools/custom-tasks.xml b/lucene/tools/custom-tasks.xml
deleted file mode 100644
index 4b5c3ea..0000000
--- a/lucene/tools/custom-tasks.xml
+++ /dev/null
@@ -1,149 +0,0 @@
-<!--
-   Licensed to the Apache Software Foundation (ASF) under one
-   or more contributor license agreements.  See the NOTICE file
-   distributed with this work for additional information
-   regarding copyright ownership.  The ASF licenses this file
-   to you under the Apache License, Version 2.0 (the
-   "License"); you may not use this file except in compliance
-   with the License.  You may obtain a copy of the License at
-
-     http://www.apache.org/licenses/LICENSE-2.0
-
-   Unless required by applicable law or agreed to in writing,
-   software distributed under the License is distributed on an
-   "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-   KIND, either express or implied.  See the License for the
-   specific language governing permissions and limitations
-   under the License.
--->
-<project name="custom-tasks">
-  <description>
-    This file is designed for importing into a main build file, and not intended
-    for standalone use.
-  </description>
-
-  <target name="load-custom-tasks" unless="custom-tasks.loaded">
-    <dirname file="${ant.file.custom-tasks}" property="custom-tasks.dir"/>
-    <taskdef resource="lucene-solr.antlib.xml">
-      <classpath>
-        <pathelement location="${custom-tasks.dir}/../build/tools/classes/java" />
-      </classpath>
-    </taskdef>
-    <property name="custom-tasks.loaded" value="true"/>
-  </target>
-
-  <filtermapper id="license-mapper-defaults">
-    <!-- Normalize input paths. -->
-    <replacestring from="\" to="/" />
-    <replaceregex pattern="\.jar$" replace="" flags="gi" />
-    
-    <!-- Some typical snapshot/minimalized JAR suffixes. -->
-    <replaceregex pattern="-min$" replace="" flags="gi" />
-    <replaceregex pattern="SNAPSHOT" replace="" flags="gi" />
-
-    <!-- Typical version patterns. -->
-    <replaceregex pattern="\.rc[0-9]+" replace="" flags="gi" />
-    <replaceregex pattern="(?&lt;!log4j)\-(r)?([0-9\-\_\.])+(([a-zA-Z]+)([0-9\-\.])*)?" replace="" flags="gi" />
-    
-    <replaceregex pattern="[-]tests$" replace="-tests" flags="gi" />
-
-    <!-- git hashcode pattern: it's always 40 chars right? -->
-    <replaceregex pattern="\-[a-z0-9]{40,40}$" replace="" flags="gi" />
-  </filtermapper>
-
-  <macrodef name="license-check-macro">
-    <attribute name="dir" />
-    <attribute name="licensedir" />
-    <element name="additional-excludes" optional="true" />
-    <element name="additional-filters"  optional="true" />
-    <sequential>
-      <!-- LICENSE and NOTICE verification macro. -->
-      <echo>License check under: @{dir}</echo>
-      <licenses licenseDirectory="@{licensedir}" skipChecksum="${skipChecksum}" skipRegexChecksum="${skipRegexChecksum}" skipSnapshotsChecksum="${skipSnapshotsChecksum}">
-        <fileset dir="@{dir}">
-          <include name="**/*.jar" />
-          <!-- Speed up scanning a bit. -->
-          <exclude name="**/.git/**" />
-          <exclude name="**/.svn/**" />
-          <exclude name="**/bin/**" />
-          <exclude name="**/build/**" />
-          <exclude name="**/dist/**" />
-          <exclude name="**/src/**" />
-          <additional-excludes />
-        </fileset>
-
-        <licenseMapper>
-          <chainedmapper>
-            <filtermapper refid="license-mapper-defaults"/>
-            <filtermapper>
-              <!-- Non-typical version patterns. -->
-              <additional-filters />
-            </filtermapper>
-          </chainedmapper>
-        </licenseMapper>
-      </licenses>
-    </sequential>
-  </macrodef>
-
-  <macrodef name="lib-versions-check-macro">
-    <attribute name="dir"/>
-    <attribute name="centralized.versions.file"/>
-    <attribute name="top.level.ivy.settings.file"/>
-    <attribute name="ivy.resolution-cache.dir"/>
-    <attribute name="ivy.lock-strategy"/>
-    <attribute name="common.build.dir"/>
-    <attribute name="ignore.conflicts.file"/>
-    <sequential>
-      <!-- 
-        Verify that the '/org/name' keys in ivy-versions.properties are sorted
-        lexically and are neither duplicates nor orphans, and that all
-         dependencies in all ivy.xml files use rev="${/org/name}" format.
-        -->
-      <echo>Lib versions check under: @{dir}</echo>
-      <libversions centralizedVersionsFile="@{centralized.versions.file}"
-                   topLevelIvySettingsFile="@{top.level.ivy.settings.file}"
-                   ivyResolutionCacheDir="@{ivy.resolution-cache.dir}"
-                   ivyLockStrategy="@{ivy.lock-strategy}"
-                   commonBuildDir="@{common.build.dir}"
-                   ignoreConflictsFile="@{ignore.conflicts.file}">
-        <fileset dir="@{dir}">
-          <include name="**/ivy.xml" />
-          <!-- Speed up scanning a bit. -->
-          <exclude name="**/.git/**" />
-          <exclude name="**/.svn/**" />
-          <exclude name="**/bin/**" />
-          <exclude name="**/build/**" />
-          <exclude name="**/dist/**" />
-          <exclude name="**/src/**" />
-        </fileset>
-      </libversions>
-    </sequential>
-  </macrodef>
-
-  <macrodef name="get-maven-dependencies-macro">
-    <attribute name="dir"/>
-    <attribute name="centralized.versions.file"/>
-    <attribute name="module.dependencies.properties.file"/>
-    <attribute name="maven.dependencies.filters.file"/>
-    <sequential>
-      <echo>Get maven dependencies called under: @{dir}</echo>
-      <mvndeps centralizedVersionsFile="@{centralized.versions.file}"
-               moduleDependenciesPropertiesFile="@{module.dependencies.properties.file}"
-               mavenDependenciesFiltersFile="@{maven.dependencies.filters.file}">
-        <fileset dir="@{dir}">
-          <include name="**/ivy.xml" />
-          <!-- Speed up scanning a bit. -->
-          <exclude name="**/.git/**" />
-          <exclude name="**/.svn/**" />
-          <exclude name="**/bin/**" />
-          <exclude name="**/build/**" />
-          <exclude name="**/dist/**" />
-          <exclude name="**/src/**" />
-          <exclude name="**/maven-build/**" />
-          <exclude name="**/idea-build/**" />
-          <exclude name="**/dev-tools/**" />
-        </fileset>
-      </mvndeps>
-    </sequential>
-  </macrodef>
-</project>
diff --git a/lucene/tools/forbiddenApis/base.txt b/lucene/tools/forbiddenApis/base.txt
deleted file mode 100644
index 6ff080b..0000000
--- a/lucene/tools/forbiddenApis/base.txt
+++ /dev/null
@@ -1,64 +0,0 @@
-#  Licensed to the Apache Software Foundation (ASF) under one or more
-#  contributor license agreements.  See the NOTICE file distributed with
-#  this work for additional information regarding copyright ownership.
-#  The ASF licenses this file to You under the Apache License, Version 2.0
-#  (the "License"); you may not use this file except in compliance with
-#  the License.  You may obtain a copy of the License at
-#
-#       http://www.apache.org/licenses/LICENSE-2.0
-#
-#  Unless required by applicable law or agreed to in writing, software
-#  distributed under the License is distributed on an "AS IS" BASIS,
-#  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-#  See the License for the specific language governing permissions and
-#  limitations under the License.
-
-@defaultMessage Spawns threads with vague names; use a custom thread factory (Lucene's NamedThreadFactory, Solr's SolrNamedThreadFactory) and name threads so that you can tell (by its name) which executor it is associated with
-java.util.concurrent.Executors#newFixedThreadPool(int)
-java.util.concurrent.Executors#newSingleThreadExecutor()
-java.util.concurrent.Executors#newCachedThreadPool()
-java.util.concurrent.Executors#newSingleThreadScheduledExecutor()
-java.util.concurrent.Executors#newScheduledThreadPool(int)
-java.util.concurrent.Executors#defaultThreadFactory()
-java.util.concurrent.Executors#privilegedThreadFactory()
-
-@defaultMessage Properties files should be read/written with Reader/Writer, using UTF-8 charset. This allows reading older files with unicode escapes, too.
-java.util.Properties#load(java.io.InputStream)
-java.util.Properties#save(java.io.OutputStream,java.lang.String)
-java.util.Properties#store(java.io.OutputStream,java.lang.String)
-
-@defaultMessage The context classloader should never be used for resource lookups, unless there is a 3rd party library that needs it. Always pass a classloader down as method parameters.
-java.lang.Thread#getContextClassLoader()
-java.lang.Thread#setContextClassLoader(java.lang.ClassLoader)
-
-java.lang.Character#codePointBefore(char[],int) @ Implicit start offset is error-prone when the char[] is a buffer and the first chars are random chars
-java.lang.Character#codePointAt(char[],int) @ Implicit end offset is error-prone when the char[] is a buffer and the last chars are random chars
-
-java.io.File#delete() @ use Files.delete for a real exception, or IOUtils.deleteFilesIgnoringExceptions if you don't care
-
-java.util.Collections#shuffle(java.util.List) @ Use shuffle(List, Random) instead so that it can be reproduced
-
-java.util.Locale#forLanguageTag(java.lang.String) @ use new Locale.Builder().setLanguageTag(...).build() which has error handling
-java.util.Locale#toString() @ use Locale#toLanguageTag() for a standardized BCP47 locale name
-
-@defaultMessage Constructors for wrapper classes of Java primitives should be avoided in favor of the public static methods available or autoboxing
-java.lang.Integer#<init>(int)
-java.lang.Integer#<init>(java.lang.String)
-java.lang.Byte#<init>(byte)
-java.lang.Byte#<init>(java.lang.String)
-java.lang.Short#<init>(short)
-java.lang.Short#<init>(java.lang.String)
-java.lang.Long#<init>(long)
-java.lang.Long#<init>(java.lang.String)
-java.lang.Boolean#<init>(boolean)
-java.lang.Boolean#<init>(java.lang.String)
-java.lang.Character#<init>(char)
-java.lang.Float#<init>(float)
-java.lang.Float#<init>(double)
-java.lang.Float#<init>(java.lang.String)
-java.lang.Double#<init>(double)
-java.lang.Double#<init>(java.lang.String)
-
-@defaultMessage Java deserialization is unsafe when the data is untrusted. The java developer is powerless: no checks or casts help, exploitation can happen in places such as clinit or finalize!
-java.io.ObjectInputStream
-java.io.ObjectOutputStream
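The thread-pool rules at the top of this file name the fix alongside the problem: construct pools with a naming factory so thread dumps identify their owner. A minimal sketch on the Lucene side, assuming `org.apache.lucene.util.NamedThreadFactory` (the class the message cites; the pool name is illustrative):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import org.apache.lucene.util.NamedThreadFactory;

public class NamedPoolSketch {
  public static void main(String[] args) {
    // Forbidden: Executors.newFixedThreadPool(4) -- anonymous "pool-1-thread-1" names.
    // Preferred: the overload taking a ThreadFactory, fed a naming factory.
    ExecutorService pool =
        Executors.newFixedThreadPool(4, new NamedThreadFactory("merge-worker"));
    pool.submit(() -> System.out.println(Thread.currentThread().getName()));
    pool.shutdown();
  }
}
```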
diff --git a/lucene/tools/forbiddenApis/lucene.txt b/lucene/tools/forbiddenApis/lucene.txt
deleted file mode 100644
index 0cc4edd..0000000
--- a/lucene/tools/forbiddenApis/lucene.txt
+++ /dev/null
@@ -1,49 +0,0 @@
-#  Licensed to the Apache Software Foundation (ASF) under one or more
-#  contributor license agreements.  See the NOTICE file distributed with
-#  this work for additional information regarding copyright ownership.
-#  The ASF licenses this file to You under the Apache License, Version 2.0
-#  (the "License"); you may not use this file except in compliance with
-#  the License.  You may obtain a copy of the License at
-#
-#       http://www.apache.org/licenses/LICENSE-2.0
-#
-#  Unless required by applicable law or agreed to in writing, software
-#  distributed under the License is distributed on an "AS IS" BASIS,
-#  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-#  See the License for the specific language governing permissions and
-#  limitations under the License.
-
-@defaultMessage Use NIO.2 instead
-java.io.File
-java.io.FileInputStream
-java.io.FileOutputStream
-java.io.PrintStream#<init>(java.lang.String,java.lang.String)
-java.io.PrintWriter#<init>(java.lang.String,java.lang.String)
-java.util.Formatter#<init>(java.lang.String,java.lang.String,java.util.Locale)
-java.io.RandomAccessFile
-java.nio.file.Path#toFile()
-java.util.jar.JarFile
-java.util.zip.ZipFile
-@defaultMessage Prefer using ArrayUtil as Arrays#copyOfRange fills zeros for bad bounds
-java.util.Arrays#copyOfRange(byte[],int,int)
-java.util.Arrays#copyOfRange(char[],int,int)
-java.util.Arrays#copyOfRange(short[],int,int)
-java.util.Arrays#copyOfRange(int[],int,int)
-java.util.Arrays#copyOfRange(long[],int,int)
-java.util.Arrays#copyOfRange(float[],int,int)
-java.util.Arrays#copyOfRange(double[],int,int)
-java.util.Arrays#copyOfRange(boolean[],int,int)
-java.util.Arrays#copyOfRange(java.lang.Object[],int,int)
-java.util.Arrays#copyOfRange(java.lang.Object[],int,int,java.lang.Class)
-
-@defaultMessage Prefer using ArrayUtil as Arrays#copyOf fills zeros for bad bounds
-java.util.Arrays#copyOf(byte[],int)
-java.util.Arrays#copyOf(char[],int)
-java.util.Arrays#copyOf(short[],int)
-java.util.Arrays#copyOf(int[],int)
-java.util.Arrays#copyOf(long[],int)
-java.util.Arrays#copyOf(float[],int)
-java.util.Arrays#copyOf(double[],int)
-java.util.Arrays#copyOf(boolean[],int)
-java.util.Arrays#copyOf(java.lang.Object[],int)
-java.util.Arrays#copyOf(java.lang.Object[],int,java.lang.Class)
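Both `Arrays#copyOf*` bans share one rationale: the JDK methods silently zero-pad when the requested range runs past the source array, hiding bugs. A minimal sketch of the sanctioned alternative, assuming `ArrayUtil#copyOfSubArray` from `org.apache.lucene.util`, which throws on out-of-bounds ranges instead:

```java
import org.apache.lucene.util.ArrayUtil;

public class CopyRangeSketch {
  public static void main(String[] args) {
    int[] buffer = {1, 2, 3, 4, 5};
    int[] slice = ArrayUtil.copyOfSubArray(buffer, 1, 4); // {2, 3, 4}
    System.out.println(slice.length); // 3
    // ArrayUtil.copyOfSubArray(buffer, 1, 9) would throw rather than pad with zeros.
  }
}
```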
diff --git a/lucene/tools/forbiddenApis/servlet-api.txt b/lucene/tools/forbiddenApis/servlet-api.txt
deleted file mode 100644
index dc82e8f..0000000
--- a/lucene/tools/forbiddenApis/servlet-api.txt
+++ /dev/null
@@ -1,43 +0,0 @@
-#  Licensed to the Apache Software Foundation (ASF) under one or more
-#  contributor license agreements.  See the NOTICE file distributed with
-#  this work for additional information regarding copyright ownership.
-#  The ASF licenses this file to You under the Apache License, Version 2.0
-#  (the "License"); you may not use this file except in compliance with
-#  the License.  You may obtain a copy of the License at
-#
-#       http://www.apache.org/licenses/LICENSE-2.0
-#
-#  Unless required by applicable law or agreed to in writing, software
-#  distributed under the License is distributed on an "AS IS" BASIS,
-#  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-#  See the License for the specific language governing permissions and
-#  limitations under the License.
-
-@defaultMessage Servlet API method parses request parameters without using the correct encoding unless extra configuration is given in the servlet container
-
-javax.servlet.ServletRequest#getParameter(java.lang.String) 
-javax.servlet.ServletRequest#getParameterMap() 
-javax.servlet.ServletRequest#getParameterNames() 
-javax.servlet.ServletRequest#getParameterValues(java.lang.String) 
-
-javax.servlet.http.HttpServletRequest#getSession() @ Servlet API getter has side effect of creating sessions
-
-@defaultMessage Servlet API method is broken and slow in some environments (e.g., Jetty's UTF-8 readers)
-
-javax.servlet.ServletRequest#getReader()
-javax.servlet.ServletResponse#getWriter()
-javax.servlet.ServletInputStream#readLine(byte[],int,int) 
-javax.servlet.ServletOutputStream#print(boolean)
-javax.servlet.ServletOutputStream#print(char)
-javax.servlet.ServletOutputStream#print(double)
-javax.servlet.ServletOutputStream#print(float)
-javax.servlet.ServletOutputStream#print(int)
-javax.servlet.ServletOutputStream#print(long)
-javax.servlet.ServletOutputStream#print(java.lang.String)
-javax.servlet.ServletOutputStream#println(boolean)
-javax.servlet.ServletOutputStream#println(char)
-javax.servlet.ServletOutputStream#println(double)
-javax.servlet.ServletOutputStream#println(float)
-javax.servlet.ServletOutputStream#println(int)
-javax.servlet.ServletOutputStream#println(long)
-javax.servlet.ServletOutputStream#println(java.lang.String)
diff --git a/lucene/tools/forbiddenApis/solr.txt b/lucene/tools/forbiddenApis/solr.txt
deleted file mode 100644
index bb177be..0000000
--- a/lucene/tools/forbiddenApis/solr.txt
+++ /dev/null
@@ -1,61 +0,0 @@
-#  Licensed to the Apache Software Foundation (ASF) under one or more
-#  contributor license agreements.  See the NOTICE file distributed with
-#  this work for additional information regarding copyright ownership.
-#  The ASF licenses this file to You under the Apache License, Version 2.0
-#  (the "License"); you may not use this file except in compliance with
-#  the License.  You may obtain a copy of the License at
-#
-#       http://www.apache.org/licenses/LICENSE-2.0
-#
-#  Unless required by applicable law or agreed to in writing, software
-#  distributed under the License is distributed on an "AS IS" BASIS,
-#  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-#  See the License for the specific language governing permissions and
-#  limitations under the License.
-
-@defaultMessage Spawns threads without MDC logging context; use ExecutorUtil.newMDCAwareFixedThreadPool instead
-java.util.concurrent.Executors#newFixedThreadPool(int,java.util.concurrent.ThreadFactory)
-
-@defaultMessage Spawns threads without MDC logging context; use ExecutorUtil.newMDCAwareSingleThreadExecutor instead
-java.util.concurrent.Executors#newSingleThreadExecutor(java.util.concurrent.ThreadFactory)
-
-@defaultMessage Spawns threads without MDC logging context; use ExecutorUtil.newMDCAwareCachedThreadPool instead
-java.util.concurrent.Executors#newCachedThreadPool(java.util.concurrent.ThreadFactory)
-
-@defaultMessage Use ExecutorUtil.MDCAwareThreadPoolExecutor instead of ThreadPoolExecutor
-java.util.concurrent.ThreadPoolExecutor#<init>(int,int,long,java.util.concurrent.TimeUnit,java.util.concurrent.BlockingQueue,java.util.concurrent.ThreadFactory,java.util.concurrent.RejectedExecutionHandler)
-java.util.concurrent.ThreadPoolExecutor#<init>(int,int,long,java.util.concurrent.TimeUnit,java.util.concurrent.BlockingQueue)
-java.util.concurrent.ThreadPoolExecutor#<init>(int,int,long,java.util.concurrent.TimeUnit,java.util.concurrent.BlockingQueue,java.util.concurrent.ThreadFactory)
-java.util.concurrent.ThreadPoolExecutor#<init>(int,int,long,java.util.concurrent.TimeUnit,java.util.concurrent.BlockingQueue,java.util.concurrent.RejectedExecutionHandler)
-
-@defaultMessage Use slf4j classes instead
-org.apache.log4j.**
-org.apache.logging.log4j.**
-java.util.logging.**
-
-@defaultMessage Use RTimer/TimeOut/System.nanoTime for time comparisons, and `new Date()` for output/debugging/stats of timestamps. If for some miscellaneous reason you absolutely need to use this, use a SuppressForbidden.
-java.lang.System#currentTimeMillis()
-
-@defaultMessage Use corresponding Java 8 functional/streaming interfaces
-com.google.common.base.Function
-com.google.common.base.Joiner
-com.google.common.base.Predicate
-com.google.common.base.Supplier
-
-@defaultMessage Use java.nio.charset.StandardCharsets instead
-com.google.common.base.Charsets
-org.apache.commons.codec.Charsets
-
-@defaultMessage Use methods in java.util.Objects instead
-com.google.common.base.Objects#equal(java.lang.Object,java.lang.Object)
-com.google.common.base.Objects#hashCode(java.lang.Object[])
-com.google.common.base.Preconditions#checkNotNull(java.lang.Object)
-com.google.common.base.Preconditions#checkNotNull(java.lang.Object,java.lang.Object)
-
-@defaultMessage Use methods in java.util.Comparator instead
-com.google.common.collect.Ordering
-
-@defaultMessage Use org.apache.solr.common.annotation.JsonProperty instead
-com.fasterxml.jackson.annotation.JsonProperty
-
-
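The MDC rules above exist because Solr logs carry request context (collection, shard, core) via SLF4J's MDC, and plain `Executors` pools drop that context on worker threads. A minimal sketch of the suggested replacement, assuming `ExecutorUtil` and `SolrNamedThreadFactory` from `org.apache.solr.common.util` (the helpers the messages name):

```java
import java.util.concurrent.ExecutorService;
import org.apache.solr.common.util.ExecutorUtil;
import org.apache.solr.common.util.SolrNamedThreadFactory;

public class MdcPoolSketch {
  public static void main(String[] args) {
    // Copies the submitting thread's MDC map onto each worker thread, so
    // log lines from the pool keep their originating request's context.
    ExecutorService pool = ExecutorUtil.newMDCAwareFixedThreadPool(
        4, new SolrNamedThreadFactory("my-pool"));
    pool.submit(() -> {});
    ExecutorUtil.shutdownAndAwaitTermination(pool);
  }
}
```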
diff --git a/lucene/tools/forbiddenApis/tests.txt b/lucene/tools/forbiddenApis/tests.txt
deleted file mode 100644
index fbcc0dd..0000000
--- a/lucene/tools/forbiddenApis/tests.txt
+++ /dev/null
@@ -1,29 +0,0 @@
-#  Licensed to the Apache Software Foundation (ASF) under one or more
-#  contributor license agreements.  See the NOTICE file distributed with
-#  this work for additional information regarding copyright ownership.
-#  The ASF licenses this file to You under the Apache License, Version 2.0
-#  (the "License"); you may not use this file except in compliance with
-#  the License.  You may obtain a copy of the License at
-#
-#       http://www.apache.org/licenses/LICENSE-2.0
-#
-#  Unless required by applicable law or agreed to in writing, software
-#  distributed under the License is distributed on an "AS IS" BASIS,
-#  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-#  See the License for the specific language governing permissions and
-#  limitations under the License.
-
-junit.framework.TestCase @ All classes should derive from LuceneTestCase
-
-java.util.Random#<init>() @ Use RandomizedRunner's random() instead
-java.lang.Math#random() @ Use RandomizedRunner's random().nextDouble() instead
-
-# TODO: fix tests that do this!
-#java.lang.System#currentTimeMillis() @ Don't depend on wall clock times
-#java.lang.System#nanoTime() @ Don't depend on wall clock times
-
-com.carrotsearch.randomizedtesting.annotations.Seed @ Don't commit hardcoded seeds
-
-@defaultMessage Use LuceneTestCase.collate instead, which can avoid JDK-8071862
-java.text.Collator#compare(java.lang.Object,java.lang.Object)
-java.text.Collator#compare(java.lang.String,java.lang.String)
diff --git a/lucene/tools/ivy.xml b/lucene/tools/ivy.xml
deleted file mode 100644
index 1fa2974..0000000
--- a/lucene/tools/ivy.xml
+++ /dev/null
@@ -1,29 +0,0 @@
-<!--
-   Licensed to the Apache Software Foundation (ASF) under one
-   or more contributor license agreements.  See the NOTICE file
-   distributed with this work for additional information
-   regarding copyright ownership.  The ASF licenses this file
-   to you under the Apache License, Version 2.0 (the
-   "License"); you may not use this file except in compliance
-   with the License.  You may obtain a copy of the License at
-
-     http://www.apache.org/licenses/LICENSE-2.0
-
-   Unless required by applicable law or agreed to in writing,
-   software distributed under the License is distributed on an
-   "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-   KIND, either express or implied.  See the License for the
-   specific language governing permissions and limitations
-   under the License.    
--->
-<ivy-module version="2.0">
-  <info organisation="org.apache.lucene" module="tools"/>
-  <configurations defaultconfmapping="compile->master">
-    <conf name="compile" transitive="false"/>
-  </configurations>
-  <dependencies>
-    <dependency org="org.apache.ant" name="ant" rev="${/org.apache.ant/ant}" conf="compile"/>
-    <dependency org="org.apache.ivy" name="ivy" rev="${/org.apache.ivy/ivy}" conf="compile"/>
-    <exclude org="*" ext="*" matcher="regexp" type="${ivy.exclude.types}"/>
-  </dependencies>
-</ivy-module>
diff --git a/lucene/tools/junit4/cached-timehints.txt b/lucene/tools/junit4/cached-timehints.txt
deleted file mode 100644
index e69de29..0000000
--- a/lucene/tools/junit4/cached-timehints.txt
+++ /dev/null
diff --git a/lucene/tools/src/groovy/check-source-patterns.groovy b/lucene/tools/src/groovy/check-source-patterns.groovy
deleted file mode 100644
index 774023d..0000000
--- a/lucene/tools/src/groovy/check-source-patterns.groovy
+++ /dev/null
@@ -1,229 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-/** Task script that is called by Ant's build.xml file:
- * Checks that there are no @author javadoc tags, tabs, 
- * svn keywords, javadoc-style licenses, or nocommits.
- */
-
-import org.apache.tools.ant.BuildException;
-import org.apache.tools.ant.Project;
-import org.apache.rat.Defaults;
-import org.apache.rat.document.impl.FileDocument;
-import org.apache.rat.api.MetaData;
-
-def extensions = [
-  'java', 'jflex', 'py', 'pl', 'g4', 'jj', 'html', 'js',
-  'css', 'xml', 'xsl', 'vm', 'sh', 'cmd', 'bat', 'policy',
-  'properties', 'mdtext', 'groovy',
-  'template', 'adoc', 'json',
-];
-def invalidPatterns = [
-  (~$/@author\b/$) : '@author javadoc tag',
-  (~$/(?i)\bno(n|)commit\b/$) : 'nocommit',
-  (~$/\bTOOD:/$) : 'TOOD instead of TODO',
-  (~$/\t/$) : 'tabs instead spaces',
-  (~$/\Q/**\E((?:\s)|(?:\*))*\Q{@inheritDoc}\E((?:\s)|(?:\*))*\Q*/\E/$) : '{@inheritDoc} on its own is unnecessary',
-  (~$/\$$(?:LastChanged)?Date\b/$) : 'svn keyword',
-  (~$/\$$(?:(?:LastChanged)?Revision|Rev)\b/$) : 'svn keyword',
-  (~$/\$$(?:LastChangedBy|Author)\b/$) : 'svn keyword',
-  (~$/\$$(?:Head)?URL\b/$) : 'svn keyword',
-  (~$/\$$Id\b/$) : 'svn keyword',
-  (~$/\$$Header\b/$) : 'svn keyword',
-  (~$/\$$Source\b/$) : 'svn keyword',
-  (~$/^\uFEFF/$) : 'UTF-8 byte order mark',
-  (~$/import java\.lang\.\w+;/$) : 'java.lang import is unnecessary'
-]
-
-// Python and others merrily use var declarations; this is a problem _only_ in Java, at least for 8x, where we're forbidding var declarations
-def invalidJavaOnlyPatterns = [
-  (~$/\n\s*var\s+.*=.*<>.*/$) : 'Diamond operators should not be used with var',
-  (~$/\n\s*var\s+/$) : 'var is not allowed until we stop development on the 8x code line'
-]
-
-
-def baseDir = properties['basedir'];
-def baseDirLen = baseDir.length() + 1;
-
-def found = 0;
-def violations = new TreeSet();
-def reportViolation = { f, name ->
-  task.log(name + ': ' + f.toString().substring(baseDirLen).replace(File.separatorChar, (char)'/'), Project.MSG_ERR);
-  violations.add(name);
-  found++;
-}
-
-def javadocsPattern = ~$/(?sm)^\Q/**\E(.*?)\Q*/\E/$;
-def javaCommentPattern = ~$/(?sm)^\Q/*\E(.*?)\Q*/\E/$;
-def xmlCommentPattern = ~$/(?sm)\Q<!--\E(.*?)\Q-->\E/$;
-def lineSplitter = ~$/[\r\n]+/$;
-def singleLineSplitter = ~$/\r?\n/$;
-def licenseMatcher = Defaults.createDefaultMatcher();
-def validLoggerPattern = ~$/(?s)\b(private\s|static\s|final\s){3}+\s*Logger\s+\p{javaJavaIdentifierStart}+\s+=\s+\QLoggerFactory.getLogger(MethodHandles.lookup().lookupClass());\E/$;
-def validLoggerNamePattern = ~$/(?s)\b(private\s|static\s|final\s){3}+\s*Logger\s+log+\s+=\s+\QLoggerFactory.getLogger(MethodHandles.lookup().lookupClass());\E/$;
-def packagePattern = ~$/(?m)^\s*package\s+org\.apache.*;/$;
-def xmlTagPattern = ~$/(?m)\s*<[a-zA-Z].*/$;
-def sourceHeaderPattern = ~$/\[source\b.*/$;
-def blockBoundaryPattern = ~$/----\s*/$;
-def blockTitlePattern = ~$/\..*/$;
-def unescapedSymbolPattern = ~$/(?<=[^\\]|^)([-=]>|<[-=])/$; // SOLR-10883
-def extendsLuceneTestCasePattern = ~$/public.*?class.*?extends.*?LuceneTestCase[^\n]*?\n/$;
-def validSPINameJavadocTag = ~$/(?s)\s*\*\s*@lucene\.spi\s+\{@value #NAME\}/$;
-
-def isLicense = { matcher, ratDocument ->
-  licenseMatcher.reset();
-  return lineSplitter.split(matcher.group(1)).any{ licenseMatcher.match(ratDocument, it) };
-}
-
-def checkLicenseHeaderPrecedes = { f, description, contentPattern, commentPattern, text, ratDocument ->
-  def contentMatcher = contentPattern.matcher(text);
-  if (contentMatcher.find()) {
-    def contentStartPos = contentMatcher.start();
-    def commentMatcher = commentPattern.matcher(text);
-    while (commentMatcher.find()) {
-      if (isLicense(commentMatcher, ratDocument)) {
-        if (commentMatcher.start() < contentStartPos) {
-          break; // This file is all good, so break loop: license header precedes 'description' definition
-        } else {
-          reportViolation(f, description+' declaration precedes license header');
-        }
-      }
-    }
-  }
-}
-
-def checkMockitoAssume = { f, text ->
-  if (text.contains("mockito") && !text.contains("assumeWorkingMockito()")) {
-    reportViolation(f, 'File uses Mockito but has no assumeWorkingMockito() call');
-  }
-}
-
-def checkForUnescapedSymbolSubstitutions = { f, text ->
-  def inCodeBlock = false;
-  def underSourceHeader = false;
-  def lineNumber = 0;
-  singleLineSplitter.split(text).each {
-    ++lineNumber;
-    if (underSourceHeader) { // This line is either a single source line, or the boundary of a code block
-      inCodeBlock = blockBoundaryPattern.matcher(it).matches();
-      if ( ! blockTitlePattern.matcher(it).matches()) {
-        underSourceHeader = false;
-      }
-    } else {
-      if (inCodeBlock) {
-        inCodeBlock = ! blockBoundaryPattern.matcher(it).matches();
-      } else {
-        underSourceHeader = sourceHeaderPattern.matcher(it).lookingAt();
-        if ( ! underSourceHeader) {
-          def unescapedSymbolMatcher = unescapedSymbolPattern.matcher(it);
-          if (unescapedSymbolMatcher.find()) {
-            reportViolation(f, 'Unescaped symbol "' + unescapedSymbolMatcher.group(1) + '" on line #' + lineNumber);
-          }
-        }
-      }
-    }
-  }
-}
-
-ant.fileScanner{
-  fileset(dir: baseDir){
-    extensions.each{
-      include(name: 'lucene/**/*.' + it)
-      include(name: 'solr/**/*.' + it)
-      include(name: 'dev-tools/**/*.' + it)
-      include(name: '*.' + it)
-    }
-    // TODO: For now we don't scan txt / md files, so we
-    // check licenses in top-level folders separately:
-    include(name: '*.txt')
-    include(name: '*/*.txt')
-    include(name: '*.md')
-    include(name: '*/*.md')
-    // excludes:
-    exclude(name: '**/build/**')
-    exclude(name: '**/dist/**')
-    exclude(name: 'lucene/benchmark/work/**')
-    exclude(name: 'lucene/benchmark/temp/**')
-    exclude(name: '**/CheckLoggingConfiguration.java')
-    exclude(name: 'lucene/tools/src/groovy/check-source-patterns.groovy') // ourselves :-)
-    exclude(name: 'solr/core/src/test/org/apache/hadoop/**')
-  }
-}.each{ f ->
-  task.log('Scanning file: ' + f, Project.MSG_VERBOSE);
-  def text = f.getText('UTF-8');
-  invalidPatterns.each{ pattern,name ->
-    if (pattern.matcher(text).find()) {
-      reportViolation(f, name);
-    }
-  }
-  def javadocsMatcher = javadocsPattern.matcher(text);
-  def ratDocument = new FileDocument(f);
-  while (javadocsMatcher.find()) {
-    if (isLicense(javadocsMatcher, ratDocument)) {
-      reportViolation(f, String.format(Locale.ENGLISH, 'javadoc-style license header [%s]',
-        ratDocument.getMetaData().value(MetaData.RAT_URL_LICENSE_FAMILY_NAME)));
-    }
-  }
-  if (f.name.endsWith('.java')) {
-    if (text.contains('org.slf4j.LoggerFactory')) {
-      if (!validLoggerPattern.matcher(text).find()) {
-        reportViolation(f, 'invalid logging pattern [not private static final, uses static class name]');
-      }
-      if (!validLoggerNamePattern.matcher(text).find()) {
-        reportViolation(f, 'invalid logger name [log, uses static class name, not specialized logger]')
-      }
-    }
-    // make sure that SPI names of all tokenizers/charfilters/tokenfilters are documented
-    if (!f.name.contains("Test") && !f.name.contains("Mock") && !text.contains("abstract class") &&
-        !f.name.equals("TokenizerFactory.java") && !f.name.equals("CharFilterFactory.java") && !f.name.equals("TokenFilterFactory.java") &&
-        (f.name.contains("TokenizerFactory") && text.contains("extends TokenizerFactory") ||
-            f.name.contains("CharFilterFactory") && text.contains("extends CharFilterFactory") ||
-            f.name.contains("FilterFactory") && text.contains("extends TokenFilterFactory"))) {
-      if (!validSPINameJavadocTag.matcher(text).find()) {
-        reportViolation(f, 'invalid spi name documentation')
-      }
-    }
-    checkLicenseHeaderPrecedes(f, 'package', packagePattern, javaCommentPattern, text, ratDocument);
-    if (f.name.contains("Test")) {
-      checkMockitoAssume(f, text);
-    }
-
-    if (f.path.substring(baseDirLen).contains("solr/")
-        && f.name.equals("SolrTestCase.java") == false
-        && f.name.equals("TestXmlQParser.java") == false) {
-      if (extendsLuceneTestCasePattern.matcher(text).find()) {
-        reportViolation(f, "Solr test cases should extend SolrTestCase rather than LuceneTestCase");
-      }
-    }
-    invalidJavaOnlyPatterns.each { pattern,name ->
-      if (pattern.matcher(text).find()) {
-        reportViolation(f, name);
-      }
-    }
-  }
-  if (f.name.endsWith('.xml') || f.name.endsWith('.xml.template')) {
-    checkLicenseHeaderPrecedes(f, '<tag>', xmlTagPattern, xmlCommentPattern, text, ratDocument);
-  }
-  if (f.name.endsWith('.adoc')) {
-    checkForUnescapedSymbolSubstitutions(f, text);
-  }
-};
-
-if (found) {
-  throw new BuildException(String.format(Locale.ENGLISH, 'Found %d violations in source files (%s).',
-    found, violations.join(', ')));
-}
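The `validLoggerPattern`/`validLoggerNamePattern` checks above encode a single compliant shape for SLF4J logger declarations. For reference, the form the script accepts (this is the exact expression the regexes quote, so it is grounded in the script itself):

```java
import java.lang.invoke.MethodHandles;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class LoggingConvention {
  // private static final, named "log", and no hardcoded class literal:
  // MethodHandles.lookup().lookupClass() survives copy/paste between classes.
  private static final Logger log =
      LoggerFactory.getLogger(MethodHandles.lookup().lookupClass());

  public static void main(String[] args) {
    log.info("logger named after the enclosing class");
  }
}
```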
diff --git a/lucene/tools/src/groovy/check-working-copy.groovy b/lucene/tools/src/groovy/check-working-copy.groovy
deleted file mode 100644
index 079a18b..0000000
--- a/lucene/tools/src/groovy/check-working-copy.groovy
+++ /dev/null
@@ -1,61 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-/** Task script that is called by Ant's build.xml file:
- * Checks GIT working copy for unversioned or modified files.
- */
-
-import org.apache.tools.ant.BuildException;
-import org.apache.tools.ant.Project;
-import org.eclipse.jgit.api.Git;
-import org.eclipse.jgit.api.Status;
-import org.eclipse.jgit.lib.Repository;
-import org.eclipse.jgit.storage.file.FileRepositoryBuilder;
-import org.eclipse.jgit.errors.*;
-
-def setProjectPropertyFromSet = { prop, set ->
-  if (set) {
-    properties[prop] = '* ' + set.join(properties['line.separator'] + '* ');
-  }
-};
-
-try {
-  task.log('Initializing working copy...', Project.MSG_INFO);
-  final Repository repository = new FileRepositoryBuilder()
-    .setWorkTree(project.getBaseDir())
-    .setMustExist(true)
-    .build();
-
-  task.log('Checking working copy status...', Project.MSG_INFO);
-  final Status status = new Git(repository).status().call();
-  if (!status.isClean()) {
-    final SortedSet unversioned = new TreeSet(), modified = new TreeSet();
-    status.properties.each{ prop, val ->
-      if (val instanceof Set) {
-        if (prop in ['untracked', 'untrackedFolders', 'missing']) {
-          unversioned.addAll(val);
-        } else if (prop != 'ignoredNotInIndex') {
-          modified.addAll(val);
-        }
-      }
-    };
-    setProjectPropertyFromSet('wc.unversioned.files', unversioned);
-    setProjectPropertyFromSet('wc.modified.files', modified);
-  }
-} catch (RepositoryNotFoundException | NoWorkTreeException | NotSupportedException e) {
-  task.log('WARNING: Development directory is not a valid GIT checkout! Disabling checks...', Project.MSG_WARN);
-}
diff --git a/lucene/tools/src/groovy/install-markdown-filter.groovy b/lucene/tools/src/groovy/install-markdown-filter.groovy
deleted file mode 100644
index bd5ceb3..0000000
--- a/lucene/tools/src/groovy/install-markdown-filter.groovy
+++ /dev/null
@@ -1,61 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-/** Task script that is called by Ant's common-build.xml file:
- * Installs markdown filter into Ant.
- */
-
-import org.apache.tools.ant.AntTypeDefinition;
-import org.apache.tools.ant.ComponentHelper;
-import org.apache.tools.ant.filters.TokenFilter.ChainableReaderFilter;
-import com.vladsch.flexmark.util.ast.Document;
-import com.vladsch.flexmark.ast.Heading;
-import com.vladsch.flexmark.html.HtmlRenderer;
-import com.vladsch.flexmark.parser.Parser;
-import com.vladsch.flexmark.parser.ParserEmulationProfile;
-import com.vladsch.flexmark.util.html.Escaping;
-import com.vladsch.flexmark.util.options.MutableDataSet;
-import com.vladsch.flexmark.ext.abbreviation.AbbreviationExtension;
-import com.vladsch.flexmark.ext.autolink.AutolinkExtension;
-
-public final class MarkdownFilter extends ChainableReaderFilter {
-  @Override
-  public String filter(String markdownSource) {
-    MutableDataSet options = new MutableDataSet();
-    options.setFrom(ParserEmulationProfile.MARKDOWN);
-    options.set(Parser.EXTENSIONS, [ AbbreviationExtension.create(), AutolinkExtension.create() ]);
-    options.set(HtmlRenderer.RENDER_HEADER_ID, true);
-    options.set(HtmlRenderer.MAX_TRAILING_BLANK_LINES, 0);
-    Document parsed = Parser.builder(options).build().parse(markdownSource);
-
-    StringBuilder html = new StringBuilder('<html>\n<head>\n');
-    CharSequence title = parsed.getFirstChildAny(Heading.class)?.getText();          
-    if (title != null) {
-      html.append('<title>').append(Escaping.escapeHtml(title, false)).append('</title>\n');
-    }
-    html.append('<meta http-equiv="Content-Type" content="text/html; charset=UTF-8">\n')
-      .append('</head>\n<body>\n');
-    HtmlRenderer.builder(options).build().render(parsed, html);
-    html.append('</body>\n</html>\n');
-    return html;
-  }
-}
-
-AntTypeDefinition t = new AntTypeDefinition();
-t.setName('markdownfilter');
-t.setClass(MarkdownFilter.class);
-ComponentHelper.getComponentHelper(project).addDataTypeDefinition(t);
diff --git a/lucene/tools/src/groovy/run-beaster.groovy b/lucene/tools/src/groovy/run-beaster.groovy
deleted file mode 100644
index 057c147..0000000
--- a/lucene/tools/src/groovy/run-beaster.groovy
+++ /dev/null
@@ -1,121 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-/** Task script that is called by Ant's common-build.xml file:
- * Runs test beaster.
- */
-
-import org.apache.tools.ant.BuildException;
-import org.apache.tools.ant.BuildLogger;
-import org.apache.tools.ant.Project;
-
-
-static boolean logFailOutput(Object task, String outdir) {
-  def logFile = new File(outdir, "tests-failures.txt");
-  if (logFile.exists()) {
-    logFile.eachLine("UTF-8", { line ->
-      task.log(line, Project.MSG_ERR);
-    });
-  }
-}
-
-int iters = (properties['beast.iters'] ?: '1') as int;
-if (iters <= 1) {
-  throw new BuildException("Please give -Dbeast.iters with an int value > 1.");
-}
-
-def antcall = project.createTask('antcallback');
-
-def junitOutDir = properties["junit.output.dir"];
-
-def failed = false;
-
-(1..iters).each { i ->
-
-  def outdir = junitOutDir + "/" + i;
-  task.log('Beast round ' + i + " results: " + outdir, Project.MSG_INFO);
-  
-  try {
-    // disable verbose build logging:
-    project.buildListeners.each { listener ->
-      if (listener instanceof BuildLogger) {
-        listener.messageOutputLevel = Project.MSG_WARN;
-      }
-    };
-    
-    new File(outdir).mkdirs();
-    
-    properties["junit.output.dir"] = outdir;
-    
-    antcall.setReturn("tests.failed");
-    antcall.setTarget("-test");
-    antcall.setInheritAll(true);
-    antcall.setInheritRefs(true);
-    
-    antcall.with {
-
-      createParam().with {
-        name = "tests.isbeasting";
-        value = "true";
-      };
-      createParam().with {
-        name = "tests.timeoutSuite";
-        value = "900000";
-      };
-      createParam().with {
-        name = "junit.output.dir";
-        value = outdir;
-      };
-
-    };
-    
-    properties["junit.output.dir"] = outdir;
-
-    antcall.execute();
-
-    def antcallResult = project.properties.'tests.failed' as boolean;
-
-    if (antcallResult) {
-      failed = true;
-      logFailOutput(task, outdir)
-    }
-    
-  } catch (BuildException be) {
-    task.log(be.getMessage(), Project.MSG_ERR);
-    logFailOutput(task, outdir)
-    throw be;
-  } finally {
-    // restore build logging (unfortunately there is no way to get the original logging level; it is a write-only property):
-    project.buildListeners.each { listener ->
-      if (listener instanceof BuildLogger) {
-        listener.messageOutputLevel = Project.MSG_INFO;
-      }
-    };
-  }
-};
-
-// restore junit output dir
-properties["junit.output.dir"] = junitOutDir;
-
-
-if (failed) {
-  task.log('Beasting finished with failure.', Project.MSG_INFO);
-  throw new BuildException("Beasting Failed!");
-} else {
-  task.log('Beasting finished successfully.', Project.MSG_INFO);
-}
-
diff --git a/lucene/tools/src/groovy/run-maven-build.groovy b/lucene/tools/src/groovy/run-maven-build.groovy
deleted file mode 100644
index e241837..0000000
--- a/lucene/tools/src/groovy/run-maven-build.groovy
+++ /dev/null
@@ -1,50 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-/** Task script that is called by Ant's build.xml file:
- * Runs maven build from within Ant after creating POMs.
- */
-
-import groovy.xml.NamespaceBuilder;
-import org.apache.tools.ant.Project;
-
-def userHome = properties['user.home'], commonDir = properties['common.dir'];
-def propPrefix = '-mvn.inject.'; int propPrefixLen = propPrefix.length();
-
-def subProject = project.createSubProject();
-project.copyUserProperties(subProject);
-subProject.initProperties();
-new AntBuilder(subProject).sequential{
-  property(file: userHome+'/lucene.build.properties', prefix: propPrefix);
-  property(file: userHome+'/build.properties', prefix: propPrefix);
-  property(file: commonDir+'/build.properties', prefix: propPrefix);
-};
-
-def cmdlineProps = subProject.properties
-  .findAll{ k, v -> k.startsWith(propPrefix) }
-  .collectEntries{ k, v -> [k.substring(propPrefixLen), v] };
-cmdlineProps << project.userProperties.findAll{ k, v -> !k.startsWith('ant.') };
-
-def artifact = NamespaceBuilder.newInstance(ant, 'antlib:org.apache.maven.artifact.ant');
-
-task.log('Running Maven with props: ' + cmdlineProps.toString(), Project.MSG_INFO);
-artifact.mvn(pom: properties['maven-build-dir']+'/pom.xml', mavenVersion: properties['maven-version'], failonerror: true, fork: true) {
-  sysproperty(key: 'maven.multiModuleProjectDirectory', file: properties['maven-build-dir'])
-  cmdlineProps.each{ k, v -> arg(value: '-D' + k + '=' + v) };
-  arg(value: '-fae');
-  arg(value: 'install');
-};
diff --git a/lucene/tools/src/java/lucene-solr.antlib.xml b/lucene/tools/src/java/lucene-solr.antlib.xml
deleted file mode 100644
index 94a102d..0000000
--- a/lucene/tools/src/java/lucene-solr.antlib.xml
+++ /dev/null
@@ -1,27 +0,0 @@
-<!--
-    Licensed to the Apache Software Foundation (ASF) under one or more
-    contributor license agreements.  See the NOTICE file distributed with
-    this work for additional information regarding copyright ownership.
-    The ASF licenses this file to You under the Apache License, Version 2.0
-    the "License"); you may not use this file except in compliance with
-    the License.  You may obtain a copy of the License at
-
-        http://www.apache.org/licenses/LICENSE-2.0
-
-    Unless required by applicable law or agreed to in writing, software
-    distributed under the License is distributed on an "AS IS" BASIS,
-    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-    See the License for the specific language governing permissions and
-    limitations under the License.
- -->
-<antlib>
-    <taskdef 
-        name="licenses" 
-        classname="org.apache.lucene.validation.LicenseCheckTask" />
-    <taskdef
-        name="libversions"
-        classname="org.apache.lucene.validation.LibVersionsCheckTask" />
-    <taskdef
-        name="mvndeps"
-        classname="org.apache.lucene.dependencies.GetMavenDependenciesTask" />
-</antlib> 
diff --git a/lucene/tools/src/java/org/apache/lucene/dependencies/GetMavenDependenciesTask.java b/lucene/tools/src/java/org/apache/lucene/dependencies/GetMavenDependenciesTask.java
deleted file mode 100644
index 570016c..0000000
--- a/lucene/tools/src/java/org/apache/lucene/dependencies/GetMavenDependenciesTask.java
+++ /dev/null
@@ -1,920 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.lucene.dependencies;
-
-import org.apache.tools.ant.BuildException;
-import org.apache.tools.ant.Project;
-import org.apache.tools.ant.Task;
-import org.apache.tools.ant.types.Resource;
-import org.apache.tools.ant.types.ResourceCollection;
-import org.apache.tools.ant.types.resources.FileResource;
-import org.apache.tools.ant.types.resources.Resources;
-import org.w3c.dom.Document;
-import org.w3c.dom.Element;
-import org.w3c.dom.NodeList;
-import org.xml.sax.SAXException;
-
-import java.io.File;
-import java.io.FileInputStream;
-import java.io.FileNotFoundException;
-import java.io.FileOutputStream;
-import java.io.IOException;
-import java.io.InputStream;
-import java.io.InputStreamReader;
-import java.io.OutputStreamWriter;
-import java.io.Reader;
-import java.io.Writer;
-import java.nio.charset.StandardCharsets;
-import java.util.ArrayList;
-import java.util.Arrays;
-import java.util.Collection;
-import java.util.Comparator;
-import java.util.HashMap;
-import java.util.HashSet;
-import java.util.Iterator;
-import java.util.List;
-import java.util.Map;
-import java.util.Properties;
-import java.util.Set;
-import java.util.SortedMap;
-import java.util.SortedSet;
-import java.util.TreeMap;
-import java.util.TreeSet;
-import java.util.function.Consumer;
-import java.util.regex.Matcher;
-import java.util.regex.Pattern;
-
-import javax.xml.parsers.DocumentBuilder;
-import javax.xml.parsers.DocumentBuilderFactory;
-import javax.xml.parsers.ParserConfigurationException;
-import javax.xml.xpath.XPath;
-import javax.xml.xpath.XPathConstants;
-import javax.xml.xpath.XPathExpressionException;
-import javax.xml.xpath.XPathFactory;
-
-/**
- * An Ant task to generate a properties file containing maven dependency
- * declarations, used to filter the maven POMs when copying them to
- * maven-build/ via 'ant get-maven-poms', or to lucene/build/poms/
- * via the '-filter-maven-poms' target, which is called from the
- * 'generate-maven-artifacts' target.
- */
-public class GetMavenDependenciesTask extends Task {
-  private static final Pattern PROPERTY_PREFIX_FROM_IVY_XML_FILE_PATTERN = Pattern.compile
-      ("[/\\\\](lucene|solr)[/\\\\](?:(?:contrib|(analysis)|(example)|(server))[/\\\\])?([^/\\\\]+)[/\\\\]ivy\\.xml");
-  private static final Pattern COORDINATE_KEY_PATTERN = Pattern.compile("/([^/]+)/([^/]+)");
-  private static final Pattern MODULE_DEPENDENCIES_COORDINATE_KEY_PATTERN
-      = Pattern.compile("(.*?)(\\.test)?\\.dependencies");
-  // lucene/build/core/classes/java
-  private static final Pattern COMPILATION_OUTPUT_DIRECTORY_PATTERN 
-      = Pattern.compile("(lucene|solr)/build/(?:contrib/)?(.*)/classes/(?:java|test)");
-  private static final String UNWANTED_INTERNAL_DEPENDENCIES
-      = "/(?:test-)?lib/|test-framework/classes/java|/test-files|/resources";
-  private static final Pattern SHARED_EXTERNAL_DEPENDENCIES_PATTERN
-      = Pattern.compile("((?:solr|lucene)/(?!test-framework).*)/((?:test-)?)lib/");
-
-  private static final String DEPENDENCY_MANAGEMENT_PROPERTY = "lucene.solr.dependency.management";
-  private static final String IVY_USER_DIR_PROPERTY = "ivy.default.ivy.user.dir";
-  private static final Properties allProperties = new Properties();
-  private static final Set<String> modulesWithSeparateCompileAndTestPOMs = new HashSet<>();
-
-  private static final Set<String> globalOptionalExternalDependencies = new HashSet<>();
-  private static final Map<String,Set<String>> perModuleOptionalExternalDependencies = new HashMap<>();
-  private static final Set<String> modulesWithTransitiveDependencies = new HashSet<>();
-  static {
-    // Add modules here that have split compile and test POMs
-    // - they need compile-scope deps to also be test-scope deps.
-    modulesWithSeparateCompileAndTestPOMs.addAll
-        (Arrays.asList("lucene-core", "lucene-codecs", "solr-core", "solr-solrj"));
-
-    // Add external dependencies here that should be optional for all modules
-    // (i.e., not invoke Maven's transitive dependency mechanism).
-    // Format is "groupId:artifactId"
-    globalOptionalExternalDependencies.addAll(Arrays.asList
-        ("org.slf4j:jul-to-slf4j", "org.slf4j:slf4j-log4j12"));
-
-    // Add modules here that should NOT have their dependencies
-    // excluded in the grandparent POM's dependencyManagement section,
-    // thus enabling their dependencies to be transitive.
-    modulesWithTransitiveDependencies.addAll(Arrays.asList("lucene-test-framework"));
-  }
-
-  private final XPath xpath = XPathFactory.newInstance().newXPath();
-  private final SortedMap<String,SortedSet<String>> internalCompileScopeDependencies
-      = new TreeMap<>();
-  private final Set<String> nonJarDependencies = new HashSet<>();
-  private final Map<String,Set<String>> dependencyClassifiers = new HashMap<>();
-  private final Map<String,Set<String>> interModuleExternalCompileScopeDependencies = new HashMap<>();
-  private final Map<String,Set<String>> interModuleExternalTestScopeDependencies = new HashMap<>();
-  private final Map<String,SortedSet<ExternalDependency>> allExternalDependencies
-     = new HashMap<>();
-  private final DocumentBuilder documentBuilder;
-  private File ivyCacheDir;
-  private Pattern internalJarPattern;
-  private Map<String,String> ivyModuleInfo;
-
-
-  /**
-   * All ivy.xml files to get external dependencies from.
-   */
-  private Resources ivyXmlResources = new Resources();
-
-  /**
-   * Centralized Ivy versions properties file
-   */
-  private File centralizedVersionsFile;
-
-  /**
-   * Module dependencies properties file, generated by task -append-module-dependencies-properties.
-   */
-  private File moduleDependenciesPropertiesFile;
-
-  /**
-   * Where all properties are written, to be used to filter POM templates when copying them.
-   */
-  private File mavenDependenciesFiltersFile;
-
-  /**
-   * A logging level associated with verbose logging.
-   */
-  private int verboseLevel = Project.MSG_VERBOSE;
-
-  /**
-   * Adds a set of ivy.xml resources to check.
-   */
-  public void add(ResourceCollection rc) {
-    ivyXmlResources.add(rc);
-  }
-
-  public void setVerbose(boolean verbose) {
-    verboseLevel = (verbose ? Project.MSG_VERBOSE : Project.MSG_INFO);
-  }
-
-  public void setCentralizedVersionsFile(File file) {
-    centralizedVersionsFile = file;
-  }
-
-  public void setModuleDependenciesPropertiesFile(File file) {
-    moduleDependenciesPropertiesFile = file;
-  }
-  
-  public void setMavenDependenciesFiltersFile(File file) {
-    mavenDependenciesFiltersFile = file;
-  }
-
-  public GetMavenDependenciesTask() {
-    try {
-      documentBuilder = DocumentBuilderFactory.newInstance().newDocumentBuilder();
-    } catch (ParserConfigurationException e) {
-      throw new BuildException(e);
-    }
-  }
-  
-  /**
-   * Collect dependency information from Ant build.xml and ivy.xml files
-   * and from ivy-versions.properties, then write out an Ant filters file
-   * to be used when copying POMs.
-   */
-  @Override
-  public void execute() throws BuildException {
-    // Local:   lucene/build/analysis/common/lucene-analyzers-common-5.0-SNAPSHOT.jar
-    // Jenkins: lucene/build/analysis/common/lucene-analyzers-common-5.0-2013-10-31_18-52-24.jar
-    // Also support any custom version, which won't necessarily conform to any predefined pattern.
-    internalJarPattern = Pattern.compile(".*(lucene|solr)([^/]*?)-"
-        + Pattern.quote(getProject().getProperty("version")) + "\\.jar");
-
-    ivyModuleInfo = getIvyModuleInfo(ivyXmlResources, documentBuilder, xpath);
-
-    setInternalDependencyProperties();            // side-effect: all modules' internal deps are recorded
-    setExternalDependencyProperties();            // side-effect: all modules' external deps are recorded
-    setGrandparentDependencyManagementProperty(); // uses deps recorded in above two methods
-    writeFiltersFile();
-  }
-
-  /**
-   * Write out an Ant filters file to be used when copying POMs.
-   */
-  private void writeFiltersFile() {
-    Writer writer = null;
-    try {
-      FileOutputStream outputStream = new FileOutputStream(mavenDependenciesFiltersFile);
-      writer = new OutputStreamWriter(outputStream, StandardCharsets.ISO_8859_1);
-      allProperties.store(writer, null);
-    } catch (FileNotFoundException e) {
-      throw new BuildException("Can't find file: '" + mavenDependenciesFiltersFile.getPath() + "'", e);
-    } catch (IOException e) {
-      throw new BuildException("Exception writing out '" + mavenDependenciesFiltersFile.getPath() + "'", e);
-    } finally {
-      if (null != writer) {
-        try {
-          writer.close();
-        } catch (IOException e) {
-          // ignore
-        }
-      }
-    }
-  }
-
-  /**
-   * Visits all ivy.xml files and collects module and organisation attributes into a map.
-   */
-  private static Map<String,String> getIvyModuleInfo(Resources ivyXmlResources,
-      DocumentBuilder documentBuilder, XPath xpath) {
-    Map<String,String> ivyInfoModuleToOrganisation = new HashMap<String,String>();
-    traverseIvyXmlResources(ivyXmlResources, new Consumer<File>() {
-      @Override
-      public void accept(File f) {
-        try {
-          Document document = documentBuilder.parse(f);
-          {
-            String infoPath = "/ivy-module/info";
-            NodeList infos = (NodeList)xpath.evaluate(infoPath, document, XPathConstants.NODESET);
-            for (int infoNum = 0 ; infoNum < infos.getLength() ; ++infoNum) {
-              Element infoElement = (Element)infos.item(infoNum);
-              String infoOrg = infoElement.getAttribute("organisation");
-              String infoOrgSuffix = infoOrg.substring(infoOrg.lastIndexOf('.')+1);
-              String infoModule = infoElement.getAttribute("module");
-              String module = infoOrgSuffix+"-"+infoModule;
-              ivyInfoModuleToOrganisation.put(module, infoOrg);
-            }
-          }
-        } catch (XPathExpressionException | IOException | SAXException e) {
-          throw new RuntimeException(e);
-        }
-      }
-    });
-    return ivyInfoModuleToOrganisation;
-  }
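-  // For example, an ivy.xml with organisation="org.apache.lucene" and module="core"
-  // yields the map entry "lucene-core" -> "org.apache.lucene": the key is the last
-  // organisation segment, a hyphen, then the module name.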
-
-  /**
-   * Collects external dependencies from each ivy.xml file and sets
-   * external dependency properties to be inserted into modules' POMs. 
-   */
-  private void setExternalDependencyProperties() {
-    traverseIvyXmlResources(ivyXmlResources, new Consumer<File>() {
-      @Override
-      public void accept(File f) {
-        try {
-          collectExternalDependenciesFromIvyXmlFile(f);
-        } catch (XPathExpressionException | IOException | SAXException e) {
-          throw new RuntimeException(e);
-        }
-      }
-    });
-    addSharedExternalDependencies();
-    setExternalDependencyXmlProperties();
-  }
-
-  private static void traverseIvyXmlResources(Resources ivyXmlResources, Consumer<File> ivyXmlFileConsumer) {
-    @SuppressWarnings("unchecked")
-    Iterator<Resource> iter = (Iterator<Resource>)ivyXmlResources.iterator();
-    while (iter.hasNext()) {
-      final Resource resource = iter.next();
-      if ( ! resource.isExists()) {
-        throw new BuildException("Resource does not exist: " + resource.getName());
-      }
-      if ( ! (resource instanceof FileResource)) {
-        throw new BuildException("Only filesystem resources are supported: "
-            + resource.getName() + ", was: " + resource.getClass().getName());
-      }
-
-      File ivyXmlFile = ((FileResource)resource).getFile();
-      try {
-        ivyXmlFileConsumer.accept(ivyXmlFile);
-      } catch (BuildException e) {
-        throw e;
-      } catch (Exception e) {
-        throw new BuildException("Exception reading file " + ivyXmlFile.getPath() + ": " + e, e);
-      }
-    }
-  }
-
-  /**
-   * For each module that includes other modules' external dependencies via
-   * including all files under their ".../lib/" dirs in their (test.)classpath,
-   * add the other modules' dependencies to its set of external dependencies. 
-   */
-  private void addSharedExternalDependencies() {
-    // Delay adding shared compile-scope dependencies until after all have been processed,
-    // so dependency sharing is limited to a depth of one.
-    Map<String,SortedSet<ExternalDependency>> sharedDependencies = new HashMap<>();
-    for (Map.Entry<String, Set<String>> entry : interModuleExternalCompileScopeDependencies.entrySet()) {
-      TreeSet<ExternalDependency> deps = new TreeSet<>();
-      sharedDependencies.put(entry.getKey(), deps);
-      Set<String> moduleDependencies = entry.getValue();
-      if (null != moduleDependencies) {
-        for (String otherArtifactId : moduleDependencies) {
-          SortedSet<ExternalDependency> otherExtDeps = allExternalDependencies.get(otherArtifactId); 
-          if (null != otherExtDeps) {
-            for (ExternalDependency otherDep : otherExtDeps) {
-              if ( ! otherDep.isTestDependency) {
-                deps.add(otherDep);
-              }
-            }
-          }
-        }
-      }
-    }
-    for (Map.Entry<String, Set<String>> entry : interModuleExternalTestScopeDependencies.entrySet()) {
-      String module = entry.getKey();
-      SortedSet<ExternalDependency> deps = sharedDependencies.get(module);
-      if (null == deps) {
-        deps = new TreeSet<>();
-        sharedDependencies.put(module, deps);
-      }
-      Set<String> moduleDependencies = entry.getValue();
-      if (null != moduleDependencies) {
-        for (String otherArtifactId : moduleDependencies) {
-          int testScopePos = otherArtifactId.indexOf(":test");
-          boolean isTestScope = false;
-          if (-1 != testScopePos) {
-            otherArtifactId = otherArtifactId.substring(0, testScopePos);
-            isTestScope = true;
-          }
-          SortedSet<ExternalDependency> otherExtDeps = allExternalDependencies.get(otherArtifactId);
-          if (null != otherExtDeps) {
-            for (ExternalDependency otherDep : otherExtDeps) {
-              if (otherDep.isTestDependency == isTestScope) {
-                if (  ! deps.contains(otherDep)
-                   && (  null == allExternalDependencies.get(module)
-                      || ! allExternalDependencies.get(module).contains(otherDep))) {
-                  // Add test-scope clone only if it's not already a compile-scope dependency. 
-                  ExternalDependency otherDepTestScope = new ExternalDependency
-                      (otherDep.groupId, otherDep.artifactId, otherDep.classifier, true, otherDep.isOptional);
-                  deps.add(otherDepTestScope);
-                }
-              }
-            }
-          }
-        }
-      }
-    }
-    for (Map.Entry<String, SortedSet<ExternalDependency>> entry : sharedDependencies.entrySet()) {
-      String module = entry.getKey();
-      SortedSet<ExternalDependency> deps = allExternalDependencies.get(module);
-      if (null == deps) {
-        deps = new TreeSet<>();
-        allExternalDependencies.put(module, deps);
-      }
-      for (ExternalDependency dep : entry.getValue()) {
-        String dependencyCoordinate = dep.groupId + ":" + dep.artifactId;
-        if (globalOptionalExternalDependencies.contains(dependencyCoordinate)
-            || (perModuleOptionalExternalDependencies.containsKey(module)
-                && perModuleOptionalExternalDependencies.get(module).contains(dependencyCoordinate))) {
-          // make a copy of the dep and set optional=true
-          dep = new ExternalDependency(dep.groupId, dep.artifactId, dep.classifier, dep.isTestDependency, true);
-        }
-        deps.add(dep);
-      }
-    }
-  }
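-  // For example, because solr-core's classpath pulls in solrj's lib/ directory,
-  // solrj's non-test external dependencies are copied into solr-core's set here,
-  // one level deep only, per the comment at the top of this method.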
-
-  /**
-   * For each module, sets a compile-scope and a test-scope property
-   * with values that contain the appropriate &lt;dependency&gt;
-   * snippets.
-   */
-  private void setExternalDependencyXmlProperties() {
-    for (String module : internalCompileScopeDependencies.keySet()) { // get full module list
-      StringBuilder compileScopeBuilder = new StringBuilder();
-      StringBuilder testScopeBuilder = new StringBuilder();
-      SortedSet<ExternalDependency> extDeps = allExternalDependencies.get(module);
-      if (null != extDeps) {
-        for (ExternalDependency dep : extDeps) {
-          StringBuilder builder = dep.isTestDependency ? testScopeBuilder : compileScopeBuilder;
-          appendDependencyXml(builder, dep.groupId, dep.artifactId, "    ", null, 
-                              dep.isTestDependency, dep.isOptional, dep.classifier, null);
-          // Test POMs for solrj, solr-core, lucene-codecs and lucene-core modules
-          // need to include all compile-scope dependencies as test-scope dependencies
-          // since we've turned off transitive dependency resolution.
-          if ( ! dep.isTestDependency && modulesWithSeparateCompileAndTestPOMs.contains(module)) {
-            appendDependencyXml(testScopeBuilder, dep.groupId, dep.artifactId, "    ", null,
-                                true, dep.isOptional, dep.classifier, null);
-          }
-        }
-      }
-      if (compileScopeBuilder.length() > 0) {
-        compileScopeBuilder.setLength(compileScopeBuilder.length() - 1); // drop trailing newline
-      }
-      if (testScopeBuilder.length() > 0) {
-        testScopeBuilder.setLength(testScopeBuilder.length() - 1); // drop trailing newline
-      }
-      allProperties.setProperty(module + ".external.dependencies", compileScopeBuilder.toString());
-      allProperties.setProperty(module + ".external.test.dependencies", testScopeBuilder.toString());
-    }
-  }
-
-  /**
-   * Sets the property to be inserted into the grandparent POM's 
-   * &lt;dependencyManagement&gt; section.
-   */
-  private void setGrandparentDependencyManagementProperty() {
-    StringBuilder builder = new StringBuilder();
-    appendAllInternalDependencies(builder);
-    Map<String,String> versionsMap = new HashMap<>();
-    appendAllExternalDependencies(builder, versionsMap);
-    builder.setLength(builder.length() - 1); // drop trailing newline
-    allProperties.setProperty(DEPENDENCY_MANAGEMENT_PROPERTY, builder.toString());
-    for (Map.Entry<String,String> entry : versionsMap.entrySet()) {
-      allProperties.setProperty(entry.getKey(), entry.getValue());
-    }
-  }
-
-  /**
-   * For each artifact in the project, append a dependency with version
-   * ${project.version} to the grandparent POM's &lt;dependencyManagement&gt;
-   * section.  An &lt;exclusion&gt; is added for each of the artifact's
-   * dependencies.
-   */
-  private void appendAllInternalDependencies(StringBuilder builder) {
-    for (Map.Entry<String, SortedSet<String>> entry : internalCompileScopeDependencies.entrySet()) {
-      String artifactId = entry.getKey();
-      List<String> exclusions = new ArrayList<>(entry.getValue());
-      SortedSet<ExternalDependency> extDeps = allExternalDependencies.get(artifactId);
-      if (null != extDeps) {
-        for (ExternalDependency externalDependency : extDeps) {
-          if ( ! externalDependency.isTestDependency && ! externalDependency.isOptional) {
-            exclusions.add(externalDependency.groupId + ':' + externalDependency.artifactId);
-          }
-        }
-      }
-      String groupId = ivyModuleInfo.get(artifactId);
-      appendDependencyXml(builder, groupId, artifactId, "      ", "${project.version}", false, false, null, exclusions);
-    }
-  }
-
-  /**
-   * Returns the Ivy cache dir: the "cache" subdirectory of either the
-   * ${ivy.default.ivy.user.dir} property, or, if that's not set, of the
-   * default ~/.ivy2/ directory.
-   */
-  private File getIvyCacheDir() {
-    String ivyUserDirName = getProject().getUserProperty(IVY_USER_DIR_PROPERTY);
-    if (null == ivyUserDirName) {
-      ivyUserDirName = getProject().getProperty(IVY_USER_DIR_PROPERTY);
-      if (null == ivyUserDirName) {
-        ivyUserDirName = System.getProperty("user.home") + System.getProperty("file.separator") + ".ivy2";
-      }
-    }
-    File ivyUserDir = new File(ivyUserDirName);
-    if ( ! ivyUserDir.exists()) {
-      throw new BuildException("Ivy user dir does not exist: '" + ivyUserDir.getPath() + "'");
-    }
-    File dir = new File(ivyUserDir, "cache");
-    if ( ! dir.exists()) {
-      // use the local dir here; the ivyCacheDir field has not been assigned yet
-      throw new BuildException("Ivy cache dir does not exist: '" + dir.getPath() + "'");
-    }
-    return dir;
-  }
-
-  /**
-   * Append each dependency listed in the centralized Ivy versions file
-   * to the grandparent POM's &lt;dependencyManagement&gt; section.  
-   * An &lt;exclusion&gt; is added for each of the artifact's dependencies,
-   * which are collected from the artifact's ivy.xml from the Ivy cache.
-   * 
-   * Also add a version property for each dependency.
-   */
-  private void appendAllExternalDependencies(StringBuilder dependenciesBuilder, Map<String,String> versionsMap) {
-    log("Loading centralized ivy versions from: " + centralizedVersionsFile, verboseLevel);
-    ivyCacheDir = getIvyCacheDir();
-    Properties versions = new InterpolatedProperties();
-    try (InputStream inputStream = new FileInputStream(centralizedVersionsFile);
-         Reader reader = new InputStreamReader(inputStream, StandardCharsets.UTF_8)) {
-      versions.load(reader);
-    } catch (IOException e) {
-      throw new BuildException("Exception reading centralized versions file " + centralizedVersionsFile.getPath(), e);
-    } 
-    SortedSet<Map.Entry<?,?>> sortedEntries = new TreeSet<>(new Comparator<Map.Entry<?,?>>() {
-      @Override public int compare(Map.Entry<?,?> o1, Map.Entry<?,?> o2) {
-        return ((String)o1.getKey()).compareTo((String)o2.getKey());
-      }
-    });
-    sortedEntries.addAll(versions.entrySet());
-    for (Map.Entry<?,?> entry : sortedEntries) {
-      String key = (String)entry.getKey();
-      Matcher matcher = COORDINATE_KEY_PATTERN.matcher(key);
-      if (matcher.lookingAt()) {
-        String groupId = matcher.group(1);
-        String artifactId = matcher.group(2);
-        String coordinate = groupId + ':' + artifactId;
-        String version = (String)entry.getValue();
-        versionsMap.put(coordinate + ".version", version);
-        if ( ! nonJarDependencies.contains(coordinate)) {
-          Set<String> classifiers = dependencyClassifiers.get(coordinate);
-          if (null != classifiers) {
-            for (String classifier : classifiers) {
-              Collection<String> exclusions = getTransitiveDependenciesFromIvyCache(groupId, artifactId, version);
-              appendDependencyXml
-                  (dependenciesBuilder, groupId, artifactId, "      ", version, false, false, classifier, exclusions);
-            }
-          }
-        }
-      }
-    }
-  }
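-  // For example (illustrative entry), a centralized-versions line such as
-  //   /com.google.guava/guava = 25.1-jre
-  // produces the version property "com.google.guava:guava.version" -> "25.1-jre"
-  // plus, for each known classifier, a <dependency> snippet with that version.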
-
-  /**
-   * Collect transitive compile-scope dependencies for the given artifact's
-   * ivy.xml from the Ivy cache, using the default ivy pattern 
-   * "[organisation]/[module]/ivy-[revision].xml".  See 
-   * <a href="http://ant.apache.org/ivy/history/latest-milestone/settings/caches.html"
-   * >the Ivy cache documentation</a>.
-   */
-  private Collection<String> getTransitiveDependenciesFromIvyCache
-  (String groupId, String artifactId, String version) {
-    SortedSet<String> transitiveDependencies = new TreeSet<>();
-    //                                      E.g. ~/.ivy2/cache/xerces/xercesImpl/ivy-2.9.1.xml
-    File ivyXmlFile = new File(new File(new File(ivyCacheDir, groupId), artifactId), "ivy-" + version + ".xml");
-    if ( ! ivyXmlFile.exists()) {
-      throw new BuildException("File not found: " + ivyXmlFile.getPath());
-    }
-    try {
-      Document document = documentBuilder.parse(ivyXmlFile);
-      String dependencyPath = "/ivy-module/dependencies/dependency"
-                            + "[   not(starts-with(@conf,'test->'))"
-                            + "and not(starts-with(@conf,'provided->'))"
-                            + "and not(starts-with(@conf,'optional->'))]";
-      NodeList dependencies = (NodeList)xpath.evaluate(dependencyPath, document, XPathConstants.NODESET);
-      for (int i = 0 ; i < dependencies.getLength() ; ++i) {
-        Element dependency = (Element)dependencies.item(i);
-        transitiveDependencies.add(dependency.getAttribute("org") + ':' + dependency.getAttribute("name"));
-      }
-    } catch (Exception e) {
-      throw new BuildException( "Exception collecting transitive dependencies for " 
-                              + groupId + ':' + artifactId + ':' + version + " from "
-                              + ivyXmlFile.getAbsolutePath(), e);
-    }
-    return transitiveDependencies;
-  }
-
-  /**
-   * Sets the internal dependencies compile and test properties to be inserted
-   * into modules' POMs.
-   *
-   * Also collects shared external dependencies,
-   * e.g. solr-core wants all of solrj's external dependencies.
-   */
-  private void setInternalDependencyProperties() {
-    log("Loading module dependencies from: " + moduleDependenciesPropertiesFile, verboseLevel);
-    Properties moduleDependencies = new Properties();
-    try (InputStream inputStream = new FileInputStream(moduleDependenciesPropertiesFile);
-         Reader reader = new InputStreamReader(inputStream, StandardCharsets.UTF_8)) {
-      moduleDependencies.load(reader);
-    } catch (FileNotFoundException e) {
-      throw new BuildException("Properties file does not exist: " + moduleDependenciesPropertiesFile.getPath());
-    } catch (IOException e) {
-      throw new BuildException("Exception reading properties file " + moduleDependenciesPropertiesFile.getPath(), e);
-    }
-    Map<String,SortedSet<String>> testScopeDependencies = new HashMap<>();
-    Map<String, String> testScopePropertyKeys = new HashMap<>();
-    for (Map.Entry<?,?> entry : moduleDependencies.entrySet()) {
-      String newPropertyKey = (String)entry.getKey();
-      StringBuilder newPropertyValue = new StringBuilder();
-      String value = (String)entry.getValue();
-      Matcher matcher = MODULE_DEPENDENCIES_COORDINATE_KEY_PATTERN.matcher(newPropertyKey);
-      if ( ! matcher.matches()) {
-        throw new BuildException("Malformed module dependencies property key: '" + newPropertyKey + "'");
-      }
-      String antProjectName = matcher.group(1);
-      boolean isTest = null != matcher.group(2);
-      String artifactName = antProjectToArtifactName(antProjectName);
-      newPropertyKey = artifactName + (isTest ? ".internal.test" : ".internal") + ".dependencies"; // Add ".internal"
-      if (isTest) {
-        testScopePropertyKeys.put(artifactName, newPropertyKey);
-      }
-      if (null == value || value.isEmpty()) {
-        allProperties.setProperty(newPropertyKey, "");
-        Map<String,SortedSet<String>> scopedDependencies
-            = isTest ? testScopeDependencies : internalCompileScopeDependencies;
-        scopedDependencies.put(artifactName, new TreeSet<String>());
-      } else {
-        // Lucene analysis modules' build dirs do not include hyphens, but Solr contribs' build dirs do
-        String origModuleDir = antProjectName.replace("analyzers-", "analysis/");
-        // Exclude the module's own build output, in addition to UNWANTED_INTERNAL_DEPENDENCIES
-        Pattern unwantedInternalDependencies = Pattern.compile
-            ("(?:lucene/build/|solr/build/(?:contrib/)?)" + origModuleDir + "/" // require dir separator 
-             + "|" + UNWANTED_INTERNAL_DEPENDENCIES);
-        SortedSet<String> sortedDeps = new TreeSet<>();
-        for (String dependency : value.split(",")) {
-          matcher = SHARED_EXTERNAL_DEPENDENCIES_PATTERN.matcher(dependency);
-          if (matcher.find()) {
-            String otherArtifactName = matcher.group(1);
-            boolean isTestScope = null != matcher.group(2) && matcher.group(2).length() > 0;
-            otherArtifactName = otherArtifactName.replace('/', '-');
-            otherArtifactName = otherArtifactName.replace("lucene-analysis", "lucene-analyzers");
-            otherArtifactName = otherArtifactName.replace("solr-contrib-solr-", "solr-");
-            otherArtifactName = otherArtifactName.replace("solr-contrib-", "solr-");
-            if ( ! otherArtifactName.equals(artifactName)) {
-              Map<String,Set<String>> sharedDeps
-                  = isTest ? interModuleExternalTestScopeDependencies : interModuleExternalCompileScopeDependencies;
-              Set<String> sharedSet = sharedDeps.get(artifactName);
-              if (null == sharedSet) {
-                sharedSet = new HashSet<>();
-                sharedDeps.put(artifactName, sharedSet);
-              }
-              if (isTestScope) {
-                otherArtifactName += ":test";
-              }
-              sharedSet.add(otherArtifactName);
-            }
-          }
-          matcher = unwantedInternalDependencies.matcher(dependency);
-          if (matcher.find()) {
-            continue;  // skip external (/(test-)lib/), and non-jar and unwanted (self) internal deps
-          }
-          String artifactId = dependencyToArtifactId(newPropertyKey, dependency);
-          String groupId = ivyModuleInfo.get(artifactId);
-          String coordinate = groupId + ':' + artifactId;
-          sortedDeps.add(coordinate);
-        }
-        if (isTest) {  // Don't set test-scope properties until all compile-scope deps have been seen
-          testScopeDependencies.put(artifactName, sortedDeps);
-        } else {
-          internalCompileScopeDependencies.put(artifactName, sortedDeps);
-          for (String dependency : sortedDeps) {
-            int splitPos = dependency.indexOf(':');
-            String groupId = dependency.substring(0, splitPos);
-            String artifactId = dependency.substring(splitPos + 1);
-            appendDependencyXml(newPropertyValue, groupId, artifactId, "    ", null, false, false, null, null);
-          }
-          if (newPropertyValue.length() > 0) {
-            newPropertyValue.setLength(newPropertyValue.length() - 1); // drop trailing newline
-          }
-          allProperties.setProperty(newPropertyKey, newPropertyValue.toString());
-        }
-      }
-    }
-    // Now that all compile-scope dependencies have been seen, include only those test-scope
-    // dependencies that are not also compile-scope dependencies.
-    for (Map.Entry<String,SortedSet<String>> entry : testScopeDependencies.entrySet()) {
-      String module = entry.getKey();
-      SortedSet<String> testDeps = entry.getValue();
-      SortedSet<String> compileDeps = internalCompileScopeDependencies.get(module);
-      if (null == compileDeps) {
-        throw new BuildException("Can't find compile scope dependencies for module " + module);
-      }
-      StringBuilder newPropertyValue = new StringBuilder();
-      for (String dependency : testDeps) {
-        // modules with separate compile-scope and test-scope POMs need their compile-scope deps
-        // included in their test-scope deps.
-        if (modulesWithSeparateCompileAndTestPOMs.contains(module) || ! compileDeps.contains(dependency)) {
-          int splitPos = dependency.indexOf(':');
-          String groupId = dependency.substring(0, splitPos);
-          String artifactId = dependency.substring(splitPos + 1);
-          appendDependencyXml(newPropertyValue, groupId, artifactId, "    ", null, true, false, null, null);
-        }
-      }
-      if (newPropertyValue.length() > 0) {
-        newPropertyValue.setLength(newPropertyValue.length() - 1); // drop trailing newline
-      }
-      allProperties.setProperty(testScopePropertyKeys.get(module), newPropertyValue.toString());
-    }
-  }
-
-  /**
-   * Converts either a compile output directory or an internal jar
-   * dependency, taken from an Ant (test.)classpath, into an artifactId
-   */
-  private String dependencyToArtifactId(String newPropertyKey, String dependency) {
-    StringBuilder artifactId = new StringBuilder();
-    Matcher matcher = COMPILATION_OUTPUT_DIRECTORY_PATTERN.matcher(dependency);
-    if (matcher.matches()) {
-      // Pattern.compile("(lucene|solr)/build/(.*)/classes/java");
-      String artifact = matcher.group(2);
-      artifact = artifact.replace('/', '-');
-      artifact = artifact.replaceAll("(?<!solr-)analysis-", "analyzers-");
-      if ("lucene".equals(matcher.group(1))) {
-        artifactId.append("lucene-");
-      }
-      artifactId.append(artifact);
-    } else {
-      matcher = internalJarPattern.matcher(dependency);
-      if (matcher.matches()) {
-        // internalJarPattern is /.*(lucene|solr)([^/]*?)-<version>\.jar/,
-        // where <version> is the value of the Ant "version" property
-        artifactId.append(matcher.group(1));
-        artifactId.append(matcher.group(2));
-      } else {
-        throw new BuildException
-            ("Malformed module dependency from '" + newPropertyKey + "': '" + dependency + "'");
-      }
-    }
-    return artifactId.toString();
-  }
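-  // For example, the compile output directory "lucene/build/analysis/common/classes/java"
-  // maps to artifactId "lucene-analyzers-common", and an internal jar named
-  // "lucene-core-<version>.jar" maps to artifactId "lucene-core".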
-
-  /**
-   * Convert Ant project names to artifact names: prepend "lucene-"
-   * to Lucene project names
-   */
-  private String antProjectToArtifactName(String origModule) {
-    String module = origModule;
-    if ( ! origModule.startsWith("solr-")) { // lucene modules names don't have "lucene-" prepended
-      module = "lucene-" + module;
-    }
-    return module;
-  }
-
-  /**
-   * Collect external dependencies from the given ivy.xml file, constructing
-   * property values containing &lt;dependency&gt; snippets, which will be
-   * filtered (substituted) when copying the POM for the module corresponding
-   * to the given ivy.xml file.
-   */
-  private void collectExternalDependenciesFromIvyXmlFile(File ivyXmlFile)
-      throws XPathExpressionException, IOException, SAXException {
-    String module = getModuleName(ivyXmlFile);
-    log("Collecting external dependencies from: " + ivyXmlFile.getPath(), verboseLevel);
-    Document document = documentBuilder.parse(ivyXmlFile);
-    // Exclude the 'start' configuration in solr/server/ivy.xml
-    String dependencyPath = "/ivy-module/dependencies/dependency[not(starts-with(@conf,'start'))]";
-    NodeList dependencies = (NodeList)xpath.evaluate(dependencyPath, document, XPathConstants.NODESET);
-    for (int depNum = 0 ; depNum < dependencies.getLength() ; ++depNum) {
-      Element dependency = (Element)dependencies.item(depNum);
-      String groupId = dependency.getAttribute("org");
-      String artifactId = dependency.getAttribute("name");
-      String dependencyCoordinate = groupId + ':' + artifactId;
-      Set<String> classifiers = dependencyClassifiers.get(dependencyCoordinate);
-      if (null == classifiers) {
-        classifiers = new HashSet<>();
-        dependencyClassifiers.put(dependencyCoordinate, classifiers);
-      }
-      String conf = dependency.getAttribute("conf");
-      boolean confContainsTest = conf.contains("test");
-      boolean isOptional = globalOptionalExternalDependencies.contains(dependencyCoordinate)
-          || ( perModuleOptionalExternalDependencies.containsKey(module)
-              && perModuleOptionalExternalDependencies.get(module).contains(dependencyCoordinate));
-      SortedSet<ExternalDependency> deps = allExternalDependencies.get(module);
-      if (null == deps) {
-        deps = new TreeSet<>();
-        allExternalDependencies.put(module, deps);
-      }
-      NodeList artifacts = null;
-      if (dependency.hasChildNodes()) {
-        artifacts = (NodeList)xpath.evaluate("artifact", dependency, XPathConstants.NODESET);
-      }
-      if (null != artifacts && artifacts.getLength() > 0) {
-        for (int artifactNum = 0 ; artifactNum < artifacts.getLength() ; ++artifactNum) {
-          Element artifact = (Element)artifacts.item(artifactNum);
-          String type = artifact.getAttribute("type");
-          String ext = artifact.getAttribute("ext");
-          // When conf contains BOTH "test" and "compile", and type != "test", this is NOT a test dependency
-          boolean isTestDependency = confContainsTest && (type.equals("test") || ! conf.contains("compile"));
-          if ((type.isEmpty() && ext.isEmpty()) || type.equals("jar") || ext.equals("jar")) {
-            String classifier = artifact.getAttribute("maven:classifier");
-            if (classifier.isEmpty()) {
-              classifier = null;
-            }
-            classifiers.add(classifier);
-            deps.add(new ExternalDependency(groupId, artifactId, classifier, isTestDependency, isOptional));
-          } else { // not a jar
-            nonJarDependencies.add(dependencyCoordinate);
-          }
-        }
-      } else {
-        classifiers.add(null);
-        deps.add(new ExternalDependency(groupId, artifactId, null, confContainsTest, isOptional));
-      }
-    }
-  }
-
-  /**
-   * Stores information about an external dependency
-   */
-  private static class ExternalDependency implements Comparable<ExternalDependency> {
-    String groupId;
-    String artifactId;
-    boolean isTestDependency;
-    boolean isOptional;
-    String classifier;
-    
-    public ExternalDependency
-        (String groupId, String artifactId, String classifier, boolean isTestDependency, boolean isOptional) {
-      this.groupId = groupId;
-      this.artifactId = artifactId;
-      this.classifier = classifier;
-      this.isTestDependency = isTestDependency;
-      this.isOptional = isOptional;
-    }
-    
-    @Override
-    public boolean equals(Object o) {
-      if ( ! (o instanceof ExternalDependency)) {
-        return false;
-      }
-      ExternalDependency other = (ExternalDependency)o;
-      return groupId.equals(other.groupId)
-          && artifactId.equals(other.artifactId)
-          && isTestDependency == other.isTestDependency
-          && isOptional == other.isOptional
-          // classifier may be null (see collectExternalDependenciesFromIvyXmlFile)
-          && (null == classifier ? null == other.classifier : classifier.equals(other.classifier));
-    }
-    
-    @Override
-    public int hashCode() {
-      return groupId.hashCode() * 31
-          + artifactId.hashCode() * 31
-          + (isTestDependency ? 31 : 0)
-          + (isOptional ? 31 : 0)
-          + (null == classifier ? 0 : classifier.hashCode()); // null-safe: classifier is nullable
-    }
-
-    @Override
-    public int compareTo(ExternalDependency other) {
-      int comparison = groupId.compareTo(other.groupId);
-      if (0 != comparison) {
-        return comparison;
-      }
-      comparison = artifactId.compareTo(other.artifactId);
-      if (0 != comparison) {
-        return comparison;
-      }
-      if (null == classifier) {
-        if (null != other.classifier) {
-          return -1;
-        }
-      } else if (null == other.classifier) { // this classifier is non-null, other's is null
-        return 1;
-      } else {                               // neither classifier is null
-        if (0 != (comparison = classifier.compareTo(other.classifier))) {
-          return comparison;
-        }
-      }
-      // test and optional don't matter in this sort
-      return 0;
-    }
-  }
-  
-  /**
-   * Extract module name from ivy.xml path.
-   */
-  private String getModuleName(File ivyXmlFile) {
-    String path = ivyXmlFile.getAbsolutePath();
-    Matcher matcher = PROPERTY_PREFIX_FROM_IVY_XML_FILE_PATTERN.matcher(path);
-    if ( ! matcher.find()) {
-      throw new BuildException("Can't get module name from ivy.xml path: " + path);
-    }
-    StringBuilder builder = new StringBuilder();
-    builder.append(matcher.group(1));
-    if (null != matcher.group(2)) { // "lucene/analysis/..."
-      builder.append("-analyzers");
-    } else if (null != matcher.group(3)) { // "solr/example/..."
-      builder.append("-example");
-    } else if (null != matcher.group(4)) { // "solr/server/..."
-      builder.append("-server");
-    }
-    builder.append('-');
-    builder.append(matcher.group(5));
-    return builder.toString().replace("solr-solr-", "solr-");
-  }
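-  // Illustration (assuming the pattern above captures the top-level dir, the
-  // optional analysis/example/server segment, and the leaf module dir): a path
-  // like "lucene/analysis/common/ivy.xml" yields "lucene-analyzers-common".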
-
-  /**
-   * Appends a &lt;dependency&gt; snippet to the given builder.
-   */
-  private void appendDependencyXml(StringBuilder builder, String groupId, String artifactId, 
-                                   String indent, String version, boolean isTestDependency, 
-                                   boolean isOptional, String classifier, Collection<String> exclusions) {
-    builder.append(indent).append("<dependency>\n");
-    builder.append(indent).append("  <groupId>").append(groupId).append("</groupId>\n");
-    builder.append(indent).append("  <artifactId>").append(artifactId).append("</artifactId>\n");
-    if (null != version) {
-      builder.append(indent).append("  <version>").append(version).append("</version>\n");
-    }
-    if (isTestDependency) {
-      builder.append(indent).append("  <scope>test</scope>\n");
-    }
-    if (isOptional) {
-      builder.append(indent).append("  <optional>true</optional>\n");
-    }
-    if (null != classifier) {
-      builder.append(indent).append("  <classifier>").append(classifier).append("</classifier>\n");
-    }
-    if ( ! modulesWithTransitiveDependencies.contains(artifactId) && null != exclusions && ! exclusions.isEmpty()) {
-      builder.append(indent).append("  <exclusions>\n");
-      for (String dependency : exclusions) {
-        int splitPos = dependency.indexOf(':');
-        String excludedGroupId = dependency.substring(0, splitPos);
-        String excludedArtifactId = dependency.substring(splitPos + 1);
-        builder.append(indent).append("    <exclusion>\n");
-        builder.append(indent).append("      <groupId>").append(excludedGroupId).append("</groupId>\n");
-        builder.append(indent).append("      <artifactId>").append(excludedArtifactId).append("</artifactId>\n");
-        builder.append(indent).append("    </exclusion>\n");
-      }
-      builder.append(indent).append("  </exclusions>\n");
-    }
-    builder.append(indent).append("</dependency>\n");
-  }
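-  // For illustration, a test-scope call with one exclusion emits roughly
-  // (hypothetical coordinates):
-  //
-  //   <dependency>
-  //     <groupId>org.example</groupId>
-  //     <artifactId>example-artifact</artifactId>
-  //     <scope>test</scope>
-  //     <exclusions>
-  //       <exclusion>
-  //         <groupId>org.excluded</groupId>
-  //         <artifactId>excluded-artifact</artifactId>
-  //       </exclusion>
-  //     </exclusions>
-  //   </dependency>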
-}
diff --git a/lucene/tools/src/java/org/apache/lucene/dependencies/InterpolatedProperties.java b/lucene/tools/src/java/org/apache/lucene/dependencies/InterpolatedProperties.java
deleted file mode 100644
index 073db7b..0000000
--- a/lucene/tools/src/java/org/apache/lucene/dependencies/InterpolatedProperties.java
+++ /dev/null
@@ -1,162 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.lucene.dependencies;
-
-import java.io.IOException;
-import java.io.InputStream;
-import java.io.Reader;
-import java.util.Arrays;
-import java.util.Enumeration;
-import java.util.HashSet;
-import java.util.LinkedHashMap;
-import java.util.List;
-import java.util.Map;
-import java.util.Properties;
-import java.util.Set;
-import java.util.regex.Matcher;
-import java.util.regex.Pattern;
-import java.util.stream.Collectors;
-
-/**
- * Parse a properties file, performing recursive Ant-like
- * property value interpolation, and return the resulting Properties.
- */
-public class InterpolatedProperties extends Properties {
-  private static final Pattern PROPERTY_REFERENCE_PATTERN = Pattern.compile("\\$\\{(?<name>[^}]+)\\}");
-
-  /**
-   * Not supported; use {@link #load(Reader)} instead.
-   *
-   * @throws UnsupportedOperationException always
-   */
-  @Override
-  public void load(InputStream inStream) throws IOException {
-    throw new UnsupportedOperationException("InterpolatedProperties.load(InputStream) is not supported.");
-  }
-
-  /**
-   * Loads the properties file via {@link Properties#load(Reader)},
-   * then performs recursive Ant-like property value interpolation.
-   */
-  @Override
-  public void load(Reader reader) throws IOException {
-    Properties p = new Properties();
-    p.load(reader);
-
-    LinkedHashMap<String, String> props = new LinkedHashMap<>();
-    Enumeration<?> e = p.propertyNames();
-    while (e.hasMoreElements()) {
-      String key = (String) e.nextElement();
-      props.put(key, p.getProperty(key));
-    }
-
-    resolve(props).forEach((k, v) -> this.setProperty(k, v));
-  }
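-  // Illustrative example (hypothetical keys): given a properties file containing
-  //   version.base = 9.0.0
-  //   version.suffix = SNAPSHOT
-  //   version = ${version.base}-${version.suffix}
-  // load(Reader) stores version=9.0.0-SNAPSHOT, resolving references recursively.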
-
-  private static Map<String,String> resolve(Map<String,String> props) {
-    LinkedHashMap<String, String> resolved = new LinkedHashMap<>();
-    HashSet<String> recursive = new HashSet<>();
-    props.forEach((k, v) -> {
-      resolve(props, resolved, recursive, k, v);
-    });
-    return resolved;
-  }
-
-  private static String resolve(Map<String,String> props,
-                               LinkedHashMap<String, String> resolved,
-                               Set<String> recursive,
-                               String key,
-                               String value) {
-    if (value == null) {
-      throw new IllegalArgumentException("Missing replaced property key: " + key);
-    }
-
-    if (recursive.contains(key)) {
-      throw new IllegalArgumentException("Circular recursive property resolution: " + recursive);
-    }
-
-    if (!resolved.containsKey(key)) {
-      recursive.add(key);
-      StringBuffer buffer = new StringBuffer();
-      Matcher matcher = PROPERTY_REFERENCE_PATTERN.matcher(value);
-      while (matcher.find()) {
-        String referenced = matcher.group("name");
-        String concrete = resolve(props, resolved, recursive, referenced, props.get(referenced));
-        matcher.appendReplacement(buffer, Matcher.quoteReplacement(concrete));
-      }
-      matcher.appendTail(buffer);
-      resolved.put(key, buffer.toString());
-      recursive.remove(key);
-    }
-    assert resolved.containsKey(key); // the interpolated value may legitimately differ from the raw value
-    return resolved.get(key);
-  }
-
-  public static void main(String [] args) {
-    {
-      Map<String, String> props = new LinkedHashMap<>();
-      props.put("a", "${b}");
-      props.put("b", "${c}");
-      props.put("c", "foo");
-      props.put("d", "${a}/${b}/${c}");
-      assertEquals(resolve(props), "a=foo", "b=foo", "c=foo", "d=foo/foo/foo");
-    }
-
-    {
-      Map<String, String> props = new LinkedHashMap<>();
-      props.put("a", "foo");
-      props.put("b", "${a}");
-      assertEquals(resolve(props), "a=foo", "b=foo");
-    }
-
-    {
-      Map<String, String> props = new LinkedHashMap<>();
-      props.put("a", "${b}");
-      props.put("b", "${c}");
-      props.put("c", "${a}");
-      try {
-        resolve(props);
-      } catch (IllegalArgumentException e) {
-        // Expected, circular reference.
-        if (!e.getMessage().contains("Circular recursive")) {
-          throw new AssertionError();
-        }
-      }
-    }
-
-    {
-      Map<String, String> props = new LinkedHashMap<>();
-      props.put("a", "${b}");
-      try {
-        resolve(props);
-      } catch (IllegalArgumentException e) {
-        // Expected, no referenced value.
-        if (!e.getMessage().contains("Missing replaced")) {
-          throw new AssertionError();
-        }
-      }
-    }
-  }
-
-  private static void assertEquals(Map<String,String> resolved, String... keyValuePairs) {
-    List<String> result = resolved.entrySet().stream().sorted((a, b) -> a.getKey().compareTo(b.getKey()))
-        .map(e -> e.getKey() + "=" + e.getValue())
-        .collect(Collectors.toList());
-    if (!result.equals(Arrays.asList(keyValuePairs))) {
-      throw new AssertionError("Mismatch: \n" + result + "\nExpected: " + Arrays.asList(keyValuePairs));
-    }
-  }
-}
diff --git a/lucene/tools/src/java/org/apache/lucene/validation/LibVersionsCheckTask.java b/lucene/tools/src/java/org/apache/lucene/validation/LibVersionsCheckTask.java
deleted file mode 100644
index 2b89703..0000000
--- a/lucene/tools/src/java/org/apache/lucene/validation/LibVersionsCheckTask.java
+++ /dev/null
@@ -1,903 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.lucene.validation;
-
-import org.apache.ivy.Ivy;
-import org.apache.ivy.core.LogOptions;
-import org.apache.ivy.core.report.ResolveReport;
-import org.apache.ivy.core.resolve.ResolveOptions;
-import org.apache.ivy.core.settings.IvySettings;
-import org.apache.ivy.plugins.conflict.NoConflictManager;
-import org.apache.lucene.dependencies.InterpolatedProperties;
-import org.apache.lucene.validation.ivyde.IvyNodeElement;
-import org.apache.lucene.validation.ivyde.IvyNodeElementAdapter;
-import org.apache.tools.ant.BuildException;
-import org.apache.tools.ant.Project;
-import org.apache.tools.ant.Task;
-import org.apache.tools.ant.types.Resource;
-import org.apache.tools.ant.types.ResourceCollection;
-import org.apache.tools.ant.types.resources.FileResource;
-import org.apache.tools.ant.types.resources.Resources;
-import org.xml.sax.Attributes;
-import org.xml.sax.InputSource;
-import org.xml.sax.SAXException;
-import org.xml.sax.SAXNotRecognizedException;
-import org.xml.sax.SAXNotSupportedException;
-import org.xml.sax.helpers.DefaultHandler;
-
-import javax.xml.XMLConstants;
-import javax.xml.parsers.ParserConfigurationException;
-import javax.xml.parsers.SAXParser;
-import javax.xml.parsers.SAXParserFactory;
-import javax.xml.transform.Transformer;
-import javax.xml.transform.TransformerException;
-import javax.xml.transform.TransformerFactory;
-import javax.xml.transform.stream.StreamResult;
-import javax.xml.transform.stream.StreamSource;
-
-import java.io.BufferedReader;
-import java.io.File;
-import java.io.FileInputStream;
-import java.io.FileOutputStream;
-import java.io.IOException;
-import java.io.InputStream;
-import java.io.InputStreamReader;
-import java.io.OutputStreamWriter;
-import java.io.Reader;
-import java.io.StringWriter;
-import java.io.Writer;
-import java.nio.charset.StandardCharsets;
-import java.text.ParseException;
-import java.util.Arrays;
-import java.util.Comparator;
-import java.util.HashMap;
-import java.util.HashSet;
-import java.util.Iterator;
-import java.util.LinkedHashMap;
-import java.util.List;
-import java.util.Locale;
-import java.util.Map;
-import java.util.Set;
-import java.util.Stack;
-import java.util.regex.Matcher;
-import java.util.regex.Pattern;
-
-/**
- * An Ant task to verify that the '/org/name' keys in ivy-versions.properties
- * are sorted lexically and are neither duplicates nor orphans, and that all
- * dependencies in all ivy.xml files use rev="${/org/name}" format.
- */
-public class LibVersionsCheckTask extends Task {
-
-  private static final String IVY_XML_FILENAME = "ivy.xml";
-  private static final Pattern COORDINATE_KEY_PATTERN = Pattern.compile("(/([^/ \t\f]+)/([^=:/ \t\f]+))");
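-  // Keys have the form "/org/name", e.g. (illustrative) "/org.apache.ant/ant",
-  // where group(2) is the organisation and group(3) is the module name.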
-  private static final Pattern BLANK_OR_COMMENT_LINE_PATTERN = Pattern.compile("[ \t\f]*(?:[#!].*)?");
-  private static final Pattern TRAILING_BACKSLASH_PATTERN = Pattern.compile("[^\\\\]*(\\\\+)$");
-  private static final Pattern LEADING_WHITESPACE_PATTERN = Pattern.compile("[ \t\f]+(.*)");
-  private static final Pattern WHITESPACE_GOODSTUFF_WHITESPACE_BACKSLASH_PATTERN
-      = Pattern.compile("[ \t\f]*(.*?)(?:(?<!\\\\)[ \t\f]*)?\\\\");
-  private static final Pattern TRAILING_WHITESPACE_BACKSLASH_PATTERN
-      = Pattern.compile("(.*?)(?:(?<!\\\\)[ \t\f]*)?\\\\");
-  private static final Pattern MODULE_NAME_PATTERN = Pattern.compile("\\smodule\\s*=\\s*[\"']([^\"']+)[\"']");
-  private static final Pattern MODULE_DIRECTORY_PATTERN 
-      = Pattern.compile(".*[/\\\\]((?:lucene|solr)[/\\\\].*)[/\\\\].*");
-  private static final SAXParserFactory SAX_PARSER_FACTORY = SAXParserFactory.newDefaultInstance();
-  static {
-    try {
-      SAX_PARSER_FACTORY.setFeature(XMLConstants.FEATURE_SECURE_PROCESSING, true);
-    } catch (SAXNotRecognizedException | SAXNotSupportedException | ParserConfigurationException e) {
-      throw new Error(e);
-    }
-  }
-  private Ivy ivy;
-
-  /**
-   * All ivy.xml files to check.
-   */
-  private Resources ivyXmlResources = new Resources();
-
-  /**
-   * Centralized Ivy versions properties file: ivy-versions.properties
-   */
-  private File centralizedVersionsFile;
-
-  /**
-   * Centralized Ivy ignore conflicts file: ivy-ignore-conflicts.properties
-   */
-  private File ignoreConflictsFile;
-
-  /**
-   * Ivy settings file: top-level-ivy-settings.xml
-   */
-  private File topLevelIvySettingsFile;
-
-  /**
-   * Location of common build dir: lucene/build/
-   */
-  private File commonBuildDir;
-
-  /**
-   * Location of ivy cache resolution directory.
-   */
-  private File ivyResolutionCacheDir;
-  
-  /**
-   * Artifact lock strategy that Ivy should use.
-   */
-  private String ivyLockStrategy;
-  
-  /**
-   * A logging level associated with verbose logging.
-   */
-  private int verboseLevel = Project.MSG_VERBOSE;
-  
-  /**
-   * All /org/name keys found in ivy-versions.properties,
-   * mapped to info about direct dependence and what would
-   * be conflicting indirect dependencies if Lucene/Solr
-   * were to use transitive dependencies.
-   */
-  private Map<String,Dependency> directDependencies = new LinkedHashMap<>();
-
-  /**
-   * All /org/name keys found in ivy-ignore-conflicts.properties,
-   * mapped to the set of indirect dependency versions that will
-   * be ignored, i.e. not trigger a conflict.
-   */
-  private Map<String,HashSet<String>> ignoreConflictVersions = new HashMap<>();
-
-  private static class Dependency {
-    String org;
-    String name;
-    String directVersion;
-    String latestVersion;
-    boolean directlyReferenced = false;
-    LinkedHashMap<IvyNodeElement,Set<String>> conflictLocations = new LinkedHashMap<>(); // dependency path -> moduleNames
-    
-    Dependency(String org, String name, String directVersion) {
-      this.org = org;
-      this.name = name;
-      this.directVersion = directVersion;
-    }
-  }
-  
-  /**
-   * Adds a set of ivy.xml resources to check.
-   */
-  public void add(ResourceCollection rc) {
-    ivyXmlResources.add(rc);
-  }
-
-  public void setVerbose(boolean verbose) {
-    verboseLevel = (verbose ? Project.MSG_INFO : Project.MSG_VERBOSE);
-  }
-
-  public void setCentralizedVersionsFile(File file) {
-    centralizedVersionsFile = file;
-  }
-
-  public void setTopLevelIvySettingsFile(File file) {
-    topLevelIvySettingsFile = file;
-  }
-
-  public void setIvyResolutionCacheDir(File dir) {
-    ivyResolutionCacheDir = dir;
-  }
-  
-  public void setIvyLockStrategy(String strategy) {
-    this.ivyLockStrategy = strategy;
-  }
-
-  public void setCommonBuildDir(File file) {
-    commonBuildDir = file;
-  }
-  
-  public void setIgnoreConflictsFile(File file) {
-    ignoreConflictsFile = file;
-  }
-
-  /**
-   * Execute the task.
-   */
-  @Override
-  public void execute() throws BuildException {
-    log("Starting scan.", verboseLevel);
-    long start = System.currentTimeMillis();
-
-    setupIvy();
-
-    int numErrors = 0;
-    if ( ! verifySortedCoordinatesPropertiesFile(centralizedVersionsFile)) {
-      ++numErrors;
-    }
-    if ( ! verifySortedCoordinatesPropertiesFile(ignoreConflictsFile)) {
-      ++numErrors;
-    }
-    collectDirectDependencies();
-    if ( ! collectVersionConflictsToIgnore()) {
-      ++numErrors;
-    }
-
-    int numChecked = 0;
-
-    @SuppressWarnings("unchecked")
-    Iterator<Resource> iter = (Iterator<Resource>)ivyXmlResources.iterator();
-    while (iter.hasNext()) {
-      final Resource resource = iter.next();
-      if ( ! resource.isExists()) {
-        throw new BuildException("Resource does not exist: " + resource.getName());
-      }
-      if ( ! (resource instanceof FileResource)) {
-        throw new BuildException("Only filesystem resources are supported: " 
-            + resource.getName() + ", was: " + resource.getClass().getName());
-      }
-
-      File ivyXmlFile = ((FileResource)resource).getFile();
-      try {
-        if ( ! checkIvyXmlFile(ivyXmlFile)) {
-          ++numErrors;
-        }
-        if ( ! resolveTransitively(ivyXmlFile)) {
-          ++numErrors;
-        }
-        if ( ! findLatestConflictVersions()) {
-          ++numErrors;
-        }
-      } catch (Exception e) {
-        throw new BuildException("Exception reading file " + ivyXmlFile.getPath() + " - " + e.toString(), e);
-      }
-      ++numChecked;
-    }
-
-    log("Checking for orphans in " + centralizedVersionsFile.getName(), verboseLevel);
-    for (Map.Entry<String,Dependency> entry : directDependencies.entrySet()) {
-      String coordinateKey = entry.getKey();
-      if ( ! entry.getValue().directlyReferenced) {
-        log("ORPHAN coordinate key '" + coordinateKey + "' in " + centralizedVersionsFile.getName()
-            + " is not found in any " + IVY_XML_FILENAME + " file.",
-            Project.MSG_ERR);
-        ++numErrors;
-      }
-    }
-
-    int numConflicts = emitConflicts();
-
-    int messageLevel = numErrors > 0 ? Project.MSG_ERR : Project.MSG_INFO;
-    log("Checked that " + centralizedVersionsFile.getName() + " and " + ignoreConflictsFile.getName()
-        + " have lexically sorted '/org/name' keys and no duplicates or orphans.",
-        messageLevel);
-    log("Scanned " + numChecked + " " + IVY_XML_FILENAME + " files for rev=\"${/org/name}\" format.",
-        messageLevel);
-    log("Found " + numConflicts + " indirect dependency version conflicts.");
-    log(String.format(Locale.ROOT, "Completed in %.2fs., %d error(s).",
-                      (System.currentTimeMillis() - start) / 1000.0, numErrors),
-        messageLevel);
-
-    if (numConflicts > 0 || numErrors > 0) {
-      throw new BuildException("Lib versions check failed. Check the logs.");
-    }
-  }
-
-  private boolean findLatestConflictVersions() {
-    boolean success = true;
-    StringBuilder latestIvyXml = new StringBuilder();
-    latestIvyXml.append("<ivy-module version=\"2.0\">\n");
-    latestIvyXml.append("  <info organisation=\"org.apache.lucene\" module=\"core-tools-find-latest-revision\"/>\n");
-    latestIvyXml.append("  <configurations>\n");
-    latestIvyXml.append("    <conf name=\"default\" transitive=\"false\"/>\n");
-    latestIvyXml.append("  </configurations>\n");
-    latestIvyXml.append("  <dependencies>\n");
-    for (Map.Entry<String, Dependency> directDependency : directDependencies.entrySet()) {
-      Dependency dependency = directDependency.getValue();
-      if (dependency.conflictLocations.isEmpty()) {
-        continue;
-      }
-      latestIvyXml.append("    <dependency org=\"");
-      latestIvyXml.append(dependency.org);
-      latestIvyXml.append("\" name=\"");
-      latestIvyXml.append(dependency.name);
-      latestIvyXml.append("\" rev=\"latest.release\" conf=\"default->*\"/>\n");
-    }
-    latestIvyXml.append("  </dependencies>\n");
-    latestIvyXml.append("</ivy-module>\n");
-    File buildDir = new File(commonBuildDir, "ivy-transitive-resolve");
-    if ( ! buildDir.exists() && ! buildDir.mkdirs()) {
-      throw new BuildException("Could not create temp directory " + buildDir.getPath());
-    }
-    File findLatestIvyXmlFile = new File(buildDir, "find.latest.conflicts.ivy.xml");
-    try {
-      try (Writer writer = new OutputStreamWriter(new FileOutputStream(findLatestIvyXmlFile), StandardCharsets.UTF_8)) {
-        writer.write(latestIvyXml.toString());
-      }
-      ResolveOptions options = new ResolveOptions();
-      options.setDownload(false);           // Download only module descriptors, not artifacts
-      options.setTransitive(false);         // Resolve only direct dependencies
-      options.setUseCacheOnly(false);       // Download the internet!
-      options.setOutputReport(false);       // Don't print to the console
-      options.setLog(LogOptions.LOG_QUIET); // Don't log to the console
-      options.setConfs(new String[] {"*"}); // Resolve all configurations
-      ResolveReport resolveReport = ivy.resolve(findLatestIvyXmlFile.toURI().toURL(), options);
-      IvyNodeElement root = IvyNodeElementAdapter.adapt(resolveReport);
-      for (IvyNodeElement element : root.getDependencies()) {
-        String coordinate = "/" + element.getOrganization() + "/" + element.getName();
-        Dependency dependency = directDependencies.get(coordinate);
-        if (null == dependency) {
-          log("ERROR: the following coordinate key does not appear in "
-              + centralizedVersionsFile.getName() + ": " + coordinate, Project.MSG_ERR);
-          success = false;
-        } else {
-          dependency.latestVersion = element.getRevision();
-        }
-      }
-    } catch (IOException e) {
-      log("Exception writing to " + findLatestIvyXmlFile.getPath() + ": " + e.toString(), Project.MSG_ERR);
-      success = false;
-    } catch (ParseException e) {
-      log("Exception parsing filename " + findLatestIvyXmlFile.getPath() + ": " + e.toString(), Project.MSG_ERR);
-      success = false;
-    }
-    return success;
-  }
-
-  /**
-   * Collects indirect dependency version conflicts to ignore 
-   * in ivy-ignore-conflicts.properties, and also checks for orphans
-   * (coordinates not included in ivy-versions.properties).
-   * 
-   * Returns true if no orphans are found.
-   */
-  private boolean collectVersionConflictsToIgnore() {
-    log("Checking for orphans in " + ignoreConflictsFile.getName(), verboseLevel);
-    boolean orphansFound = false;
-    InterpolatedProperties properties = new InterpolatedProperties();
-    try (InputStream inputStream = new FileInputStream(ignoreConflictsFile);
-         Reader reader = new InputStreamReader(inputStream, StandardCharsets.UTF_8)) {
-      properties.load(reader);
-    } catch (IOException e) {
-      throw new BuildException("Exception reading " + ignoreConflictsFile + ": " + e.toString(), e);
-    }
-    for (Object obj : properties.keySet()) {
-      String coordinate = (String)obj;
-      if (COORDINATE_KEY_PATTERN.matcher(coordinate).matches()) {
-        if ( ! directDependencies.containsKey(coordinate)) {
-          orphansFound = true;
-          log("ORPHAN coordinate key '" + coordinate + "' in " + ignoreConflictsFile.getName()
-                  + " is not found in " + centralizedVersionsFile.getName(),
-              Project.MSG_ERR);
-        } else {
-          String versionsToIgnore = properties.getProperty(coordinate);
-          List<String> ignore = Arrays.asList(versionsToIgnore.trim().split("\\s*,\\s*|\\s+"));
-          ignoreConflictVersions.put(coordinate, new HashSet<>(ignore));
-        }
-      }
-    }
-    return ! orphansFound;
-  }
-
-  private void collectDirectDependencies() {
-    InterpolatedProperties properties = new InterpolatedProperties();
-    try (InputStream inputStream = new FileInputStream(centralizedVersionsFile);
-         Reader reader = new InputStreamReader(inputStream, StandardCharsets.UTF_8)) {
-      properties.load(reader);
-    } catch (IOException e) {
-      throw new BuildException("Exception reading " + centralizedVersionsFile + ": " + e.toString(), e);
-    }
-    for (Object obj : properties.keySet()) {
-      String coordinate = (String)obj;
-      Matcher matcher = COORDINATE_KEY_PATTERN.matcher(coordinate);
-      if (matcher.matches()) {
-        String org = matcher.group(2);
-        String name = matcher.group(3);
-        String directVersion = properties.getProperty(coordinate);
-        Dependency dependency = new Dependency(org, name, directVersion);
-        directDependencies.put(coordinate, dependency);
-      }
-    }
-  }
-
-  /**
-   * Transitively resolves all dependencies in the given ivy.xml file,
-   * looking for indirect dependencies with versions that conflict
-   * with those of direct dependencies.  Dependency conflict when a
-   * direct dependency's version is older than that of an indirect
-   * dependency with the same /org/name.
-   * 
-   * Returns true if no version conflicts are found and no resolution
-   * errors occurred, false otherwise.
-   */
-  private boolean resolveTransitively(File ivyXmlFile) {
-    boolean success = true;
-
-    ResolveOptions options = new ResolveOptions();
-    options.setDownload(false);           // Download only module descriptors, not artifacts
-    options.setTransitive(true);          // Resolve transitively, if not already specified in the ivy.xml file
-    options.setUseCacheOnly(false);       // Download the internet!
-    options.setOutputReport(false);       // Don't print to the console
-    options.setLog(LogOptions.LOG_QUIET); // Don't log to the console
-    options.setConfs(new String[] {"*"}); // Resolve all configurations
-
-    // Rewrite the ivy.xml, replacing all 'transitive="false"' with 'transitive="true"'
-    // The Ivy API is file-based, so we have to write the result to the filesystem.
-    String moduleName = "unknown";
-    String ivyXmlContent = xmlToString(ivyXmlFile);
-    Matcher matcher = MODULE_NAME_PATTERN.matcher(ivyXmlContent);
-    if (matcher.find()) {
-      moduleName = matcher.group(1);
-    }
-    ivyXmlContent = ivyXmlContent.replaceAll("\\btransitive\\s*=\\s*[\"']false[\"']", "transitive=\"true\"");
-    File transitiveIvyXmlFile = null;
-    try {
-      File buildDir = new File(commonBuildDir, "ivy-transitive-resolve");
-      if ( ! buildDir.exists() && ! buildDir.mkdirs()) {
-        throw new BuildException("Could not create temp directory " + buildDir.getPath());
-      }
-      matcher = MODULE_DIRECTORY_PATTERN.matcher(ivyXmlFile.getCanonicalPath());
-      if ( ! matcher.matches()) {
-        throw new BuildException("Unknown ivy.xml module directory: " + ivyXmlFile.getCanonicalPath());
-      }
-      String moduleDirPrefix = matcher.group(1).replaceAll("[/\\\\]", ".");
-      transitiveIvyXmlFile = new File(buildDir, "transitive." + moduleDirPrefix + ".ivy.xml");
-      try (Writer writer = new OutputStreamWriter(new FileOutputStream(transitiveIvyXmlFile), StandardCharsets.UTF_8)) {
-        writer.write(ivyXmlContent);
-      }
-      ResolveReport resolveReport = ivy.resolve(transitiveIvyXmlFile.toURI().toURL(), options);
-      IvyNodeElement root = IvyNodeElementAdapter.adapt(resolveReport);
-      for (IvyNodeElement directDependency : root.getDependencies()) {
-        String coordinate = "/" + directDependency.getOrganization() + "/" + directDependency.getName();
-        Dependency dependency = directDependencies.get(coordinate);
-        if (null == dependency) {
-          log("ERROR: the following coordinate key does not appear in " 
-              + centralizedVersionsFile.getName() + ": " + coordinate);
-          success = false;
-        } else {
-          dependency.directlyReferenced = true;
-          if (collectConflicts(directDependency, directDependency, moduleName)) {
-            success = false;
-          }
-        }
-      }
-    } catch (ParseException | IOException e) {
-      if (null != transitiveIvyXmlFile) {
-        log("Exception reading " + transitiveIvyXmlFile.getPath() + ": " + e.toString());
-      }
-      success = false;
-    }
-    return success;
-  }
-
-  /**
-   * Recursively finds indirect dependencies that have a version conflict with a direct dependency.
-   * Returns true if one or more conflicts are found, false otherwise.
-   */
-  private boolean collectConflicts(IvyNodeElement root, IvyNodeElement parent, String moduleName) {
-    boolean conflicts = false;
-    for (IvyNodeElement child : parent.getDependencies()) {
-      String coordinate = "/" + child.getOrganization() + "/" + child.getName();
-      Dependency dependency = directDependencies.get(coordinate);
-      if (null != dependency) { // Ignore this indirect dependency if it's not also a direct dependency
-        String indirectVersion = child.getRevision();
-        if (isConflict(coordinate, dependency.directVersion, indirectVersion)) {
-          conflicts = true;
-          Set<String> moduleNames = dependency.conflictLocations.get(root);
-          if (null == moduleNames) {
-            moduleNames = new HashSet<>();
-            dependency.conflictLocations.put(root, moduleNames);
-          }
-          moduleNames.add(moduleName);
-        }
-        conflicts |= collectConflicts(root, child, moduleName);
-      }
-    }
-    return conflicts;
-  }
-
-  /**
-   * Copy-pasted from Ivy's 
-   * org.apache.ivy.plugins.latest.LatestRevisionStrategy
-   * with minor modifications
-   */
-  private static final Map<String,Integer> SPECIAL_MEANINGS;
-  static {
-    SPECIAL_MEANINGS = new HashMap<>();
-    SPECIAL_MEANINGS.put("dev", -1);
-    SPECIAL_MEANINGS.put("rc", 1);
-    SPECIAL_MEANINGS.put("final", 2);
-  }
-
-  /**
-   * Copy-pasted from Ivy's 
-   * org.apache.ivy.plugins.latest.LatestRevisionStrategy.MridComparator
-   * with minor modifications
-   */
-  private static class LatestVersionComparator implements Comparator<String> {
-    @Override
-    public int compare(String rev1, String rev2) {
-      rev1 = rev1.replaceAll("([a-zA-Z])(\\d)", "$1.$2");
-      rev1 = rev1.replaceAll("(\\d)([a-zA-Z])", "$1.$2");
-      rev2 = rev2.replaceAll("([a-zA-Z])(\\d)", "$1.$2");
-      rev2 = rev2.replaceAll("(\\d)([a-zA-Z])", "$1.$2");
-
-      String[] parts1 = rev1.split("[-._+]");
-      String[] parts2 = rev2.split("[-._+]");
-
-      int i = 0;
-      for (; i < parts1.length && i < parts2.length; i++) {
-        if (parts1[i].equals(parts2[i])) {
-          continue;
-        }
-        boolean is1Number = isNumber(parts1[i]);
-        boolean is2Number = isNumber(parts2[i]);
-        if (is1Number && !is2Number) {
-          return 1;
-        }
-        if (is2Number && !is1Number) {
-          return -1;
-        }
-        if (is1Number && is2Number) {
-          return Long.valueOf(parts1[i]).compareTo(Long.valueOf(parts2[i]));
-        }
-        // both are strings, we compare them taking into account special meaning
-        Integer sm1 = SPECIAL_MEANINGS.get(parts1[i].toLowerCase(Locale.ROOT));
-        Integer sm2 = SPECIAL_MEANINGS.get(parts2[i].toLowerCase(Locale.ROOT));
-        if (sm1 != null) {
-          sm2 = sm2 == null ? 0 : sm2;
-          return sm1.compareTo(sm2);
-        }
-        if (sm2 != null) {
-          return Integer.valueOf(0).compareTo(sm2);
-        }
-        return parts1[i].compareTo(parts2[i]);
-      }
-      if (i < parts1.length) {
-        return isNumber(parts1[i]) ? 1 : -1;
-      }
-      if (i < parts2.length) {
-        return isNumber(parts2[i]) ? -1 : 1;
-      }
-      return 0;
-    }
-
-    private static final Pattern IS_NUMBER = Pattern.compile("\\d+");
-    private static boolean isNumber(String str) {
-      return IS_NUMBER.matcher(str).matches();
-    }
-  }
-  private static LatestVersionComparator LATEST_VERSION_COMPARATOR = new LatestVersionComparator();
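-
-  // Illustrative comparator behavior (a hedged sketch; the revision strings are hypothetical):
-  //
-  //   LATEST_VERSION_COMPARATOR.compare("1.0", "1.1")          < 0  // 1.1 is later
-  //   LATEST_VERSION_COMPARATOR.compare("1.0", "1.0-rc1")      > 0  // a release sorts after its rc
-  //   LATEST_VERSION_COMPARATOR.compare("1.0-dev1", "1.0-rc1") < 0  // "dev" < "rc" per SPECIAL_MEANINGS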
-
-  /**
-   * Returns true if directVersion is less than indirectVersion, and 
-   * coordinate=indirectVersion is not present in ivy-ignore-conflicts.properties. 
-   */
-  private boolean isConflict(String coordinate, String directVersion, String indirectVersion) {
-    boolean isConflict = LATEST_VERSION_COMPARATOR.compare(directVersion, indirectVersion) < 0;
-    if (isConflict) {
-      Set<String> ignoredVersions = ignoreConflictVersions.get(coordinate);
-      if (null != ignoredVersions && ignoredVersions.contains(indirectVersion)) {
-        isConflict = false;
-      }
-    }
-    return isConflict;
-  }
-
-  /**
-   * Returns the number of direct dependencies in conflict with indirect
-   * dependencies.
-   */
-  private int emitConflicts() {
-    int conflicts = 0;
-    StringBuilder builder = new StringBuilder();
-    for (Map.Entry<String,Dependency> directDependency : directDependencies.entrySet()) {
-      String coordinate = directDependency.getKey();
-      Set<Map.Entry<IvyNodeElement,Set<String>>> entrySet
-          = directDependency.getValue().conflictLocations.entrySet();
-      if (entrySet.isEmpty()) {
-        continue;
-      }
-      ++conflicts;
-      Map.Entry<IvyNodeElement,Set<String>> first = entrySet.iterator().next();
-      int notPrinted = entrySet.size() - 1;
-      builder.append("VERSION CONFLICT: transitive dependency in module(s) ");
-      boolean isFirst = true;
-      for (String moduleName : first.getValue()) {
-        if (isFirst) {
-          isFirst = false;
-        } else {
-          builder.append(", ");
-        }
-        builder.append(moduleName);
-      }
-      builder.append(":\n");
-      IvyNodeElement element = first.getKey();
-      builder.append('/').append(element.getOrganization()).append('/').append(element.getName())
-             .append('=').append(element.getRevision()).append('\n');
-      emitConflict(builder, coordinate, first.getKey(), 1);
-        
-      if (notPrinted > 0) {
-        builder.append("... and ").append(notPrinted).append(" more\n");
-      }
-      builder.append("\n");
-    }
-    if (builder.length() > 0) {
-      log(builder.toString());
-    }
-    return conflicts;
-  }
-  
-  private boolean emitConflict(StringBuilder builder, String conflictCoordinate, IvyNodeElement parent, int depth) {
-    for (IvyNodeElement child : parent.getDependencies()) {
-      String indirectCoordinate = "/" + child.getOrganization() + "/" + child.getName();
-      if (conflictCoordinate.equals(indirectCoordinate)) {
-        Dependency dependency = directDependencies.get(conflictCoordinate);
-        String directVersion = dependency.directVersion;
-        if (isConflict(conflictCoordinate, directVersion, child.getRevision())) {
-          for (int i = 0 ; i < depth - 1 ; ++i) {
-            builder.append("    ");
-          }
-          builder.append("+-- ");
-          builder.append(indirectCoordinate).append("=").append(child.getRevision());
-          builder.append(" <<< Conflict (direct=").append(directVersion);
-          builder.append(", latest=").append(dependency.latestVersion).append(")\n");
-          return true;
-        }
-      } else if (hasConflicts(conflictCoordinate, child)) {
-        for (int i = 0 ; i < depth -1 ; ++i) {
-          builder.append("    ");
-        }
-        builder.append("+-- ");
-        builder.append(indirectCoordinate).append("=").append(child.getRevision()).append("\n");
-        if (emitConflict(builder, conflictCoordinate, child, depth + 1)) {
-          return true;
-        }
-      }
-    }
-    return false;
-  }
-  
-  private boolean hasConflicts(String conflictCoordinate, IvyNodeElement parent) {
-    // the element itself will never be in conflict, since its coordinate is different
-    for (IvyNodeElement child : parent.getDependencies()) {
-      String indirectCoordinate = "/" + child.getOrganization() + "/" + child.getName();
-      if (conflictCoordinate.equals(indirectCoordinate)) {
-        Dependency dependency = directDependencies.get(conflictCoordinate);
-        if (isConflict(conflictCoordinate, dependency.directVersion, child.getRevision())) {
-          return true;
-        }
-      } else if (hasConflicts(conflictCoordinate, child)) {
-        return true;
-      }
-    }
-    return false;
-  }
-
-  private String xmlToString(File ivyXmlFile) {
-    StringWriter writer = new StringWriter();
-    try {
-      StreamSource inputSource = new StreamSource(new FileInputStream(ivyXmlFile.getPath()));
-      Transformer serializer = TransformerFactory.newInstance().newTransformer();
-      serializer.transform(inputSource, new StreamResult(writer));
-    } catch (TransformerException | IOException e) {
-      throw new BuildException("Exception reading " + ivyXmlFile.getPath() + ": " + e.toString(), e);
-    }
-    return writer.toString();
-  }
-
-  private void setupIvy() {
-    IvySettings ivySettings = new IvySettings();
-    try {
-      ivySettings.setVariable("common.build.dir", commonBuildDir.getAbsolutePath());
-      ivySettings.setVariable("ivy.exclude.types", "source|javadoc");
-      ivySettings.setVariable("ivy.resolution-cache.dir", ivyResolutionCacheDir.getAbsolutePath());
-      ivySettings.setVariable("ivy.lock-strategy", ivyLockStrategy);
-      ivySettings.setVariable("ivysettings.xml", getProject().getProperty("ivysettings.xml")); // nested settings file
-      ivySettings.setBaseDir(commonBuildDir);
-      ivySettings.setDefaultConflictManager(new NoConflictManager());
-      ivy = Ivy.newInstance(ivySettings);
-      ivy.configure(topLevelIvySettingsFile);
-    } catch (Exception e) {
-      throw new BuildException("Exception reading " + topLevelIvySettingsFile.getPath() + ": " + e.toString(), e);
-    }
-  }
-
-  /**
-   * Returns true if the "/org/name" coordinate keys in the given
-   * properties file are lexically sorted and contain no duplicates.
-   */
-  private boolean verifySortedCoordinatesPropertiesFile(File coordinatePropertiesFile) {
-    log("Checking for lexically sorted non-duplicated '/org/name' keys in: " + coordinatePropertiesFile, verboseLevel);
-    boolean success = true;
-    String line = null;
-    String currentKey = null;
-    String previousKey = null;
-    try (InputStream stream = new FileInputStream(coordinatePropertiesFile);
-         Reader reader = new InputStreamReader(stream, StandardCharsets.ISO_8859_1);
-         BufferedReader bufferedReader = new BufferedReader(reader)) {
-      while (null != (line = readLogicalPropertiesLine(bufferedReader))) {
-        final Matcher keyMatcher = COORDINATE_KEY_PATTERN.matcher(line);
-        if ( ! keyMatcher.lookingAt()) {
-          continue; // Ignore keys that don't look like "/org/name"
-        }
-        currentKey = keyMatcher.group(1);
-        if (null != previousKey) {
-          int comparison = currentKey.compareTo(previousKey);
-          if (0 == comparison) {
-            log("DUPLICATE coordinate key '" + currentKey + "' in " + coordinatePropertiesFile.getName(),
-                Project.MSG_ERR);
-            success = false;
-          } else if (comparison < 0) {
-            log("OUT-OF-ORDER coordinate key '" + currentKey + "' in " + coordinatePropertiesFile.getName(),
-                Project.MSG_ERR);
-            success = false;
-          }
-        }
-        previousKey = currentKey;
-      }
-    } catch (IOException e) {
-      throw new BuildException("Exception reading " + coordinatePropertiesFile.getPath() + ": " + e.toString(), e);
-    }
-    return success;
-  }
-
-  /**
-   * Builds up logical {@link java.util.Properties} lines, composed of one non-blank,
-   * non-comment initial line, either:
-   * 
-   * 1. without a non-escaped trailing backslash; or
-   * 2. with a non-escaped trailing backslash, followed by
-   *    zero or more lines with a non-escaped trailing backslash, followed by
-   *    one or more lines without a non-escaped trailing backslash
-   *
-   * All leading non-escaped whitespace and trailing non-escaped whitespace +
-   * non-escaped backslash are trimmed from each line before concatenating.
-   * 
-   * After composing the logical line, escaped characters are un-escaped.
-   * 
-   * null is returned if there are no lines left to read. 
-   */
-  private String readLogicalPropertiesLine(BufferedReader reader) throws IOException {
-    final StringBuilder logicalLine = new StringBuilder();
-    String line;
-    do {
-      line = reader.readLine();
-      if (null == line) { 
-        return null;
-      }
-    } while (BLANK_OR_COMMENT_LINE_PATTERN.matcher(line).matches());
-
-    Matcher backslashMatcher = TRAILING_BACKSLASH_PATTERN.matcher(line); 
-    // Check for a non-escaped backslash
-    if (backslashMatcher.find() && 1 == (backslashMatcher.group(1).length() % 2)) {
-      final Matcher firstLineMatcher = TRAILING_WHITESPACE_BACKSLASH_PATTERN.matcher(line);
-      if (firstLineMatcher.matches()) {
-        logicalLine.append(firstLineMatcher.group(1)); // trim trailing backslash and any preceding whitespace
-      }
-      line = reader.readLine();
-      while (null != line
-             && (backslashMatcher = TRAILING_BACKSLASH_PATTERN.matcher(line)).find()
-             && 1 == (backslashMatcher.group(1).length() % 2)) {
-        // Trim leading whitespace, the trailing backslash and any preceding whitespace
-        final Matcher goodStuffMatcher = WHITESPACE_GOODSTUFF_WHITESPACE_BACKSLASH_PATTERN.matcher(line);
-        if (goodStuffMatcher.matches()) {
-          logicalLine.append(goodStuffMatcher.group(1));
-        }
-        line = reader.readLine();
-      }
-      if (null != line) {
-        // line can't have a non-escaped trailing backslash
-        final Matcher leadingWhitespaceMatcher = LEADING_WHITESPACE_PATTERN.matcher(line);
-        if (leadingWhitespaceMatcher.matches()) {
-          line = leadingWhitespaceMatcher.group(1); // trim leading whitespace
-        }
-        logicalLine.append(line);
-      }
-    } else {
-      logicalLine.append(line);
-    }
-    // trim non-escaped leading whitespace
-    final Matcher leadingWhitespaceMatcher = LEADING_WHITESPACE_PATTERN.matcher(logicalLine);
-    final CharSequence leadingWhitespaceStripped = leadingWhitespaceMatcher.matches()
-                                                 ? leadingWhitespaceMatcher.group(1)
-                                                 : logicalLine;
-
-    // unescape all chars in the logical line
-    StringBuilder output = new StringBuilder();
-    final int numChars = leadingWhitespaceStripped.length();
-    for (int pos = 0 ; pos < numChars - 1 ; ++pos) {
-      char ch = leadingWhitespaceStripped.charAt(pos);
-      if (ch == '\\') {
-        ch = leadingWhitespaceStripped.charAt(++pos);
-      }
-      output.append(ch);
-    }
-    if (numChars > 0) {
-      output.append(leadingWhitespaceStripped.charAt(numChars - 1));
-    }
-
-    return output.toString();
-  }
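-
-  // Example of the continuation handling above (hypothetical key): the input
-  //
-  //   /org.example/widget = 1.1, \
-  //                         1.2
-  //
-  // is returned as the single logical line "/org.example/widget = 1.1,1.2";
-  // the trailing backslash, the whitespace around it, and the continuation
-  // line's leading whitespace are all trimmed before concatenation.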
-
-  /**
-   * Check a single ivy.xml file for dependencies' versions in rev="${/org/name}"
-   * format.  Returns false if problems are found, true otherwise.
-   */
-  private boolean checkIvyXmlFile(File ivyXmlFile)
-      throws ParserConfigurationException, SAXException, IOException {
-    log("Scanning: " + ivyXmlFile.getPath(), verboseLevel);
-    SAXParser xmlReader = SAX_PARSER_FACTORY.newSAXParser();
-    DependencyRevChecker revChecker = new DependencyRevChecker(ivyXmlFile); 
-    xmlReader.parse(new InputSource(ivyXmlFile.getAbsolutePath()), revChecker);
-    return ! revChecker.fail;
-  }
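-
-  // The convention enforced below: every <dependency> must take its version
-  // from ivy-versions.properties via an Ivy variable, e.g. (hypothetical
-  // coordinates):
-  //
-  //   <dependency org="org.example" name="widget" rev="${/org.example/widget}"/>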
-
-  private class DependencyRevChecker extends DefaultHandler {
-    private final File ivyXmlFile;
-    private final Stack<String> tags = new Stack<>();
-    
-    public boolean fail = false;
-
-    public DependencyRevChecker(File ivyXmlFile) {
-      this.ivyXmlFile = ivyXmlFile;
-    }
-
-    @Override
-    public void startElement(String uri, String localName, String qName, Attributes attributes) throws SAXException {
-      if (localName.equals("dependency") && insideDependenciesTag()) {
-        String org = attributes.getValue("org");
-        boolean foundAllAttributes = true;
-        if (null == org) {
-          log("MISSING 'org' attribute on <dependency> in " + ivyXmlFile.getPath(), Project.MSG_ERR);
-          fail = true;
-          foundAllAttributes = false;
-        }
-        String name = attributes.getValue("name");
-        if (null == name) {
-          log("MISSING 'name' attribute on <dependency> in " + ivyXmlFile.getPath(), Project.MSG_ERR);
-          fail = true;
-          foundAllAttributes = false;
-        }
-        String rev = attributes.getValue("rev");
-        if (null == rev) {
-          log("MISSING 'rev' attribute on <dependency> in " + ivyXmlFile.getPath(), Project.MSG_ERR);
-          fail = true;
-          foundAllAttributes = false;
-        }
-        if (foundAllAttributes) {
-          String coordinateKey = "/" + org + '/' + name;
-          String expectedRev = "${" + coordinateKey + '}';
-          if ( ! rev.equals(expectedRev)) {
-            log("BAD <dependency> 'rev' attribute value '" + rev + "' - expected '" + expectedRev + "'"
-                + " in " + ivyXmlFile.getPath(), Project.MSG_ERR);
-            fail = true;
-          }
-          if ( ! directDependencies.containsKey(coordinateKey)) {
-            log("MISSING key '" + coordinateKey + "' in " + centralizedVersionsFile.getPath(), Project.MSG_ERR);
-            fail = true;
-          }
-        }
-      }
-      tags.push(localName);
-    }
-
-    @Override
-    public void endElement (String uri, String localName, String qName) throws SAXException {
-      tags.pop();
-    }
-
-    private boolean insideDependenciesTag() {
-      return tags.size() == 2 && tags.get(0).equals("ivy-module") && tags.get(1).equals("dependencies");
-    }
-  }
-}
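-
-// A hedged sketch of the two properties files this task cross-checks (the
-// entries are hypothetical; keys must match COORDINATE_KEY_PATTERN):
-//
-//   ivy-versions.properties:          /org.example/widget = 1.1
-//   ivy-ignore-conflicts.properties:  /org.example/widget = 1.2, 1.3-rc1
-//
-// A key present only in the ignore file is flagged as an ORPHAN; the listed
-// versions suppress conflict reports against newer indirect dependencies.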
diff --git a/lucene/tools/src/java/org/apache/lucene/validation/LicenseCheckTask.java b/lucene/tools/src/java/org/apache/lucene/validation/LicenseCheckTask.java
deleted file mode 100644
index 96ef01d..0000000
--- a/lucene/tools/src/java/org/apache/lucene/validation/LicenseCheckTask.java
+++ /dev/null
@@ -1,352 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.lucene.validation;
-
-import java.io.File;
-import java.io.FileInputStream;
-import java.io.BufferedReader;
-import java.io.InputStreamReader;
-import java.io.IOException;
-import java.nio.charset.StandardCharsets;
-import java.util.ArrayList;
-import java.util.Iterator;
-import java.util.LinkedHashMap;
-import java.util.List;
-import java.util.Locale;
-import java.util.Map;
-import java.util.regex.Matcher;
-import java.util.regex.Pattern;
-import java.util.regex.PatternSyntaxException;
-import java.security.DigestInputStream;
-import java.security.MessageDigest;
-import java.security.NoSuchAlgorithmException;
-
-import org.apache.tools.ant.BuildException;
-import org.apache.tools.ant.Project;
-import org.apache.tools.ant.Task;
-import org.apache.tools.ant.types.Mapper;
-import org.apache.tools.ant.types.Resource;
-import org.apache.tools.ant.types.ResourceCollection;
-import org.apache.tools.ant.types.resources.FileResource;
-import org.apache.tools.ant.types.resources.Resources;
-import org.apache.tools.ant.util.FileNameMapper;
-
-/**
- * An Ant task that verifies that JAR files have associated <code>LICENSE</code>,
- * <code>NOTICE</code>, and <code>sha1</code> files. 
- */
-public class LicenseCheckTask extends Task {
-
-  public final static String CHECKSUM_TYPE = "sha1";
-  private static final int CHECKSUM_BUFFER_SIZE = 8 * 1024;
-  private static final int CHECKSUM_BYTE_MASK = 0xFF;
-  private static final String FAILURE_MESSAGE = "License check failed. Check the logs.\n"
-      + "If you recently modified ivy-versions.properties or any module's ivy.xml,\n"
-      + "make sure you run \"ant clean-jars jar-checksums\" before running precommit.";
-
-  private Pattern skipRegexChecksum;
-  private boolean skipSnapshotsChecksum;
-  private boolean skipChecksum;
-  
-  /**
-   * All JAR files to check.
-   */
-  private Resources jarResources = new Resources();
-  
-  /**
-   * Directory containing licenses
-   */
-  private File licenseDirectory;
-
-  /**
-   * License file mapper.
-   */
-  private FileNameMapper licenseMapper;
-
-  /**
-   * A logging level associated with verbose logging.
-   */
-  private int verboseLevel = Project.MSG_VERBOSE;
-  
-  /**
-   * Failure flag.
-   */
-  private boolean failures;
-
-  /**
-   * Adds a set of JAR resources to check.
-   */
-  public void add(ResourceCollection rc) {
-    jarResources.add(rc);
-  }
-  
-  /**
-   * Adds a license mapper.
-   */
-  public void addConfiguredLicenseMapper(Mapper mapper) {
-    if (licenseMapper != null) {
-      throw new BuildException("Only one license mapper is allowed.");
-    }
-    this.licenseMapper = mapper.getImplementation();
-  }
-
-  public void setVerbose(boolean verbose) {
-    verboseLevel = (verbose ? Project.MSG_INFO : Project.MSG_VERBOSE);
-  }
-  
-  public void setLicenseDirectory(File file) {
-    licenseDirectory = file;
-  }
-  
-  public void setSkipSnapshotsChecksum(boolean skipSnapshotsChecksum) {
-    this.skipSnapshotsChecksum = skipSnapshotsChecksum;
-  }
-  
-  public void setSkipChecksum(boolean skipChecksum) {
-    this.skipChecksum = skipChecksum;
-  }
-
-  public void setSkipRegexChecksum(String skipRegexChecksum) {
-    try {
-      if (skipRegexChecksum != null && skipRegexChecksum.length() > 0) {
-        this.skipRegexChecksum = Pattern.compile(skipRegexChecksum);
-      }
-    } catch (PatternSyntaxException e) {
-      throw new BuildException("Unable to compile skipRegexChecksum pattern.  Reason: "
-          + e.getMessage() + " " + skipRegexChecksum, e);
-    }
-  }
-
-  /**
-   * Execute the task.
-   */
-  @Override
-  public void execute() throws BuildException {
-    if (licenseMapper == null) {
-      throw new BuildException("Expected an embedded <licenseMapper>.");
-    }
-
-    if (skipChecksum) {
-      log("Skipping checksum verification for dependencies", Project.MSG_INFO);
-    } else {
-      if (skipSnapshotsChecksum) {
-        log("Skipping checksum for SNAPSHOT dependencies", Project.MSG_INFO);
-      }
-
-      if (skipRegexChecksum != null) {
-        log("Skipping checksum for dependencies matching regex: " + skipRegexChecksum.pattern(),
-            Project.MSG_INFO);
-      }
-    }
-
-    jarResources.setProject(getProject());
-    processJars();
-
-    if (failures) {
-      throw new BuildException(FAILURE_MESSAGE);
-    }
-  }
-
-  /**
-   * Process all JARs.
-   */
-  private void processJars() {
-    log("Starting scan.", verboseLevel);
-    long start = System.currentTimeMillis();
-
-    @SuppressWarnings("unchecked")
-    Iterator<Resource> iter = (Iterator<Resource>) jarResources.iterator();
-    int checked = 0;
-    int errors = 0;
-    while (iter.hasNext()) {
-      final Resource r = iter.next();
-      if (!r.isExists()) { 
-        throw new BuildException("JAR resource does not exist: " + r.getName());
-      }
-      if (!(r instanceof FileResource)) {
-        throw new BuildException("Only filesystem resource are supported: " + r.getName()
-            + ", was: " + r.getClass().getName());
-      }
-
-      File jarFile = ((FileResource) r).getFile();
-      if (! checkJarFile(jarFile) ) {
-        errors++;
-      }
-      checked++;
-    }
-
-    log(String.format(Locale.ROOT, 
-        "Scanned %d JAR file(s) for licenses (in %.2fs.), %d error(s).",
-        checked, (System.currentTimeMillis() - start) / 1000.0, errors),
-        errors > 0 ? Project.MSG_ERR : Project.MSG_INFO);
-  }
-
-  /**
-   * Check a single JAR file.
-   */
-  private boolean checkJarFile(File jarFile) {
-    log("Scanning: " + jarFile.getPath(), verboseLevel);
-    
-    if (!skipChecksum) {
-      boolean skipDueToSnapshot = skipSnapshotsChecksum && jarFile.getName().contains("-SNAPSHOT");
-      if (!skipDueToSnapshot && !matchesRegexChecksum(jarFile, skipRegexChecksum)) {
-        // validate the jar matches against our expected hash
-        final File checksumFile = new File(licenseDirectory, jarFile.getName()
-            + "." + CHECKSUM_TYPE);
-        if (!(checksumFile.exists() && checksumFile.canRead())) {
-          log("MISSING " + CHECKSUM_TYPE + " checksum file for: "
-              + jarFile.getPath(), Project.MSG_ERR);
-          log("EXPECTED " + CHECKSUM_TYPE + " checksum file : "
-              + checksumFile.getPath(), Project.MSG_ERR);
-          this.failures = true;
-          return false;
-        } else {
-          final String expectedChecksum = readChecksumFile(checksumFile);
-          try {
-            final MessageDigest md = MessageDigest.getInstance(CHECKSUM_TYPE);
-            byte[] buf = new byte[CHECKSUM_BUFFER_SIZE];
-            try {
-              FileInputStream fis = new FileInputStream(jarFile);
-              try {
-                DigestInputStream dis = new DigestInputStream(fis, md);
-                try {
-                  while (dis.read(buf, 0, CHECKSUM_BUFFER_SIZE) != -1) {
-                    // NOOP
-                  }
-                } finally {
-                  dis.close();
-                }
-              } finally {
-                fis.close();
-              }
-            } catch (IOException ioe) {
-              throw new BuildException("IO error computing checksum of file: "
-                  + jarFile, ioe);
-            }
-            final byte[] checksumBytes = md.digest();
-            final String checksum = createChecksumString(checksumBytes);
-            if (!checksum.equals(expectedChecksum)) {
-              log("CHECKSUM FAILED for " + jarFile.getPath() + " (expected: \""
-                  + expectedChecksum + "\" was: \"" + checksum + "\")",
-                  Project.MSG_ERR);
-              this.failures = true;
-              return false;
-            }
-            
-          } catch (NoSuchAlgorithmException ae) {
-            throw new BuildException("Digest type " + CHECKSUM_TYPE
-                + " not supported by your JVM", ae);
-          }
-        }
-      } else if (skipDueToSnapshot) {
-        log("Skipping jar because it is a SNAPSHOT : "
-            + jarFile.getAbsolutePath(), Project.MSG_INFO);
-      } else {
-        log("Skipping jar because it matches regex pattern: "
-            + jarFile.getAbsolutePath() + " pattern: " + skipRegexChecksum.pattern(), Project.MSG_INFO);
-      }
-    }
-    
-    // Get the expected license path base from the mapper and search for license files.
-    Map<File, LicenseType> foundLicenses = new LinkedHashMap<>();
-    List<File> expectedLocations = new ArrayList<>();
-outer:
-    for (String mappedPath : licenseMapper.mapFileName(jarFile.getName())) {
-      for (LicenseType licenseType : LicenseType.values()) {
-        File licensePath = new File(licenseDirectory, mappedPath + licenseType.licenseFileSuffix());
-        if (licensePath.exists()) {
-          foundLicenses.put(licensePath, licenseType);
-          log(" FOUND " + licenseType.name() + " license at " + licensePath.getPath(), 
-              verboseLevel);
-          // We could continue scanning here to detect duplicate associations?
-          break outer;
-        } else {
-          expectedLocations.add(licensePath);
-        }
-      }
-    }
-
-    // Check for NOTICE files.
-    for (Map.Entry<File, LicenseType> e : foundLicenses.entrySet()) {
-      LicenseType license = e.getValue();
-      String licensePath = e.getKey().getName();
-      String baseName = licensePath.substring(
-          0, licensePath.length() - license.licenseFileSuffix().length());
-      File noticeFile = new File(licenseDirectory, baseName + license.noticeFileSuffix());
-
-      if (noticeFile.exists()) {
-        log(" FOUND NOTICE file at " + noticeFile.getAbsolutePath(), verboseLevel);
-      } else {
-        if (license.isNoticeRequired()) {
-            this.failures = true;
-            log("MISSING NOTICE for the license file:\n  "
-                + licensePath + "\n  Expected location below:\n  "
-                + noticeFile.getAbsolutePath(), Project.MSG_ERR);
-        }
-      }
-    }
-
-    // In case there is something missing, complain.
-    if (foundLicenses.isEmpty()) {
-      this.failures = true;
-      StringBuilder message = new StringBuilder();
-      message.append("MISSING LICENSE for the following file:\n  ").append(jarFile.getAbsolutePath()).append("\n  Expected locations below:\n");
-      for (File location : expectedLocations) {
-        message.append("  => ").append(location.getAbsolutePath()).append("\n");
-      }
-      log(message.toString(), Project.MSG_ERR);
-      return false;
-    }
-
-    return true;
-  }
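-
-  // For a hypothetical dependency jar "widget-1.2.jar", the logic above and
-  // below looks for these sidecar files in the license directory (the mapped
-  // base name "widget" comes from the configured licenseMapper):
-  //
-  //   widget-1.2.jar.sha1     expected checksum, read from its first line
-  //   widget-LICENSE-ASL.txt  license text; the suffix is -LICENSE-<TYPE>.txt
-  //   widget-NOTICE.txt       required when the license type needs a NOTICE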
-
-  private static final String createChecksumString(byte[] digest) {
-    StringBuilder checksum = new StringBuilder();
-    for (int i = 0; i < digest.length; i++) {
-      checksum.append(String.format(Locale.ROOT, "%02x", 
-                                    CHECKSUM_BYTE_MASK & digest[i]));
-    }
-    return checksum.toString();
-  }
-  private static final String readChecksumFile(File f) {
-    BufferedReader reader = null;
-    try {
-      reader = new BufferedReader(new InputStreamReader
-                                  (new FileInputStream(f), StandardCharsets.UTF_8));
-      try {
-        String checksum = reader.readLine();
-        if (null == checksum || 0 == checksum.length()) {
-          throw new BuildException("Failed to find checksum in file: " + f);
-        }
-        return checksum;
-      } finally {
-        reader.close();
-      }
-    } catch (IOException e) {
-      throw new BuildException("IO error reading checksum file: " + f, e);
-    }
-  }
-
-  private static final boolean matchesRegexChecksum(File jarFile, Pattern skipRegexChecksum) {
-    if (skipRegexChecksum == null) {
-      return false;
-    }
-    Matcher m = skipRegexChecksum.matcher(jarFile.getName());
-    return m.matches();
-  }
-}
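-
-// A minimal standalone sketch of the same checksum computation (the file name
-// is hypothetical; "SHA-1" is the standard algorithm name behind the task's
-// "sha1" alias):
-//
-//   MessageDigest md = MessageDigest.getInstance("SHA-1");
-//   try (DigestInputStream dis =
-//            new DigestInputStream(new FileInputStream("widget-1.2.jar"), md)) {
-//     byte[] buf = new byte[8 * 1024];
-//     while (dis.read(buf, 0, buf.length) != -1) {
-//       // reading drives the digest; the bytes themselves are unused
-//     }
-//   }
-//   StringBuilder hex = new StringBuilder();
-//   for (byte b : md.digest()) {
-//     hex.append(String.format(Locale.ROOT, "%02x", 0xFF & b));
-//   }
-//   // hex.toString() matches the lowercase hex format compared by checkJarFile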
diff --git a/lucene/tools/src/java/org/apache/lucene/validation/LicenseType.java b/lucene/tools/src/java/org/apache/lucene/validation/LicenseType.java
deleted file mode 100644
index 2359382..0000000
--- a/lucene/tools/src/java/org/apache/lucene/validation/LicenseType.java
+++ /dev/null
@@ -1,75 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.lucene.validation;
-
-
-/**
- * A list of accepted licenses.  See also http://www.apache.org/legal/3party.html
- *
- **/
-public enum LicenseType {
-  ASL("Apache Software License 2.0", true),
-  BSD("Berkeley Software Distribution", true),
-  BSD_LIKE("BSD like license", true),//BSD like just means someone has taken the BSD license and put in their name, copyright, or it's a very similar license.
-  CDDL("Common Development and Distribution License", false),
-  CPL("Common Public License", true),
-  EPL("Eclipse Public License Version 1.0", false),
-  MIT("Massachusetts Institute of Tech. License", false),
-  MPL("Mozilla Public License", false), //NOT SURE on the required notice
-  PD("Public Domain", false),
-  //SUNBCLA("Sun Binary Code License Agreement"),
-  SUN("Sun Open Source License", false),
-  COMPOUND("Compound license (see NOTICE).", true),
-  FAKE("FAKE license - not needed", false);
-
-  private String display;
-  private boolean noticeRequired;
-
-  LicenseType(String display, boolean noticeRequired) {
-    this.display = display;
-    this.noticeRequired = noticeRequired;
-  }
-
-  public boolean isNoticeRequired() {
-    return noticeRequired;
-  }
-
-  public String getDisplay() {
-    return display;
-  }
-
-  public String toString() {
-    return "LicenseType{" +
-            "display='" + display + '\'' +
-            '}';
-  }
-
-  /**
-   * Expected license file suffix for a given license type.
-   */
-  public String licenseFileSuffix() {
-    return "-LICENSE-" + this.name() + ".txt";
-  }
-
-  /**
-   * Expected notice file suffix for a given license type.
-   */
-  public String noticeFileSuffix() {
-    return "-NOTICE.txt";
-  }
-}
-
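-
-// For example, LicenseType.ASL.licenseFileSuffix() returns "-LICENSE-ASL.txt"
-// and LicenseType.ASL.noticeFileSuffix() returns "-NOTICE.txt"; an ASL
-// dependency mapped to the hypothetical base name "widget" therefore needs
-// both widget-LICENSE-ASL.txt and widget-NOTICE.txt.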
diff --git a/lucene/tools/src/java/org/apache/lucene/validation/ivyde/IvyNodeElement.java b/lucene/tools/src/java/org/apache/lucene/validation/ivyde/IvyNodeElement.java
deleted file mode 100644
index 287527e..0000000
--- a/lucene/tools/src/java/org/apache/lucene/validation/ivyde/IvyNodeElement.java
+++ /dev/null
@@ -1,178 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.lucene.validation.ivyde;
-
-import org.apache.ivy.core.module.id.ModuleRevisionId;
-
-import java.util.Collection;
-import java.util.HashMap;
-import java.util.HashSet;
-import java.util.Iterator;
-import java.util.Map;
-
-/**
- * Assists in the further separation of concerns between the view and the Ivy resolve report. The view looks at the
- * IvyNode in a unique way that can lead to expensive operations if we do not achieve this separation.
- * 
- * This class is copied from org/apache/ivyde/eclipse/resolvevisualizer/model/IvyNodeElement.java at 
- * https://svn.apache.org/repos/asf/ant/ivy/ivyde/trunk/org.apache.ivyde.eclipse.resolvevisualizer/src/
- *
- * Changes include: uncommenting generics and converting to diamond operators where appropriate;
- * removing unnecessary casts; removing javadoc tags with no description; and adding a hashCode() implementation.
- */
-public class IvyNodeElement {
-  private ModuleRevisionId moduleRevisionId;
-  private boolean evicted = false;
-  private int depth = Integer.MAX_VALUE / 10;
-  private Collection<IvyNodeElement> dependencies = new HashSet<>();
-  private Collection<IvyNodeElement> callers = new HashSet<>();
-  private Collection<IvyNodeElement> conflicts = new HashSet<>();
-
-  /**
-   * The caller configurations that caused this node to be reached in the resolution, grouped by caller.
-   */
-  private Map<IvyNodeElement,String[]>callerConfigurationMap = new HashMap<>();
-
-  /**
-   * We try to avoid building the list of this node's deep dependencies by storing them in this cache by depth level.
-   */
-  private IvyNodeElement[] deepDependencyCache;
-  
-  @Override
-  public boolean equals(Object obj) {
-    if (obj instanceof IvyNodeElement) {
-      IvyNodeElement elem = (IvyNodeElement) obj;
-      if (elem.getOrganization().equals(getOrganization()) && elem.getName().equals(getName())
-          && elem.getRevision().equals(getRevision()))
-        return true;
-    }
-    return false;
-  }
-  
-  @Override
-  public int hashCode() {
-    int result = 1;
-    result = result * 31 + (null == getOrganization() ? 0 : getOrganization().hashCode());
-    result = result * 31 + (null == getName() ? 0 : getName().hashCode());
-    result = result * 31 + (null == getRevision() ? 0 : getRevision().hashCode());
-    return result;
-  }
-
-  public IvyNodeElement[] getDependencies() {
-    return dependencies.toArray(new IvyNodeElement[dependencies.size()]);
-  }
-
-  /**
-   * Recursive dependency retrieval
-   *
-   * @return The array of nodes that represents a node's immediate and transitive dependencies down to an arbitrary
-   *         depth.
-   */
-  public IvyNodeElement[] getDeepDependencies() {
-    if (deepDependencyCache == null) {
-      Collection<IvyNodeElement> deepDependencies = getDeepDependencies(this);
-      deepDependencyCache = deepDependencies.toArray(new IvyNodeElement[deepDependencies.size()]);
-    }
-    return deepDependencyCache;
-  }
-
-  /**
-   * Recursive dependency retrieval
-   */
-  private Collection<IvyNodeElement> getDeepDependencies(IvyNodeElement node) {
-    Collection<IvyNodeElement> deepDependencies = new HashSet<>();
-    deepDependencies.add(node);
-
-    IvyNodeElement[] directDependencies = node.getDependencies();
-    for (int i = 0; i < directDependencies.length; i++) {
-      deepDependencies.addAll(getDeepDependencies(directDependencies[i]));
-    }
-
-    return deepDependencies;
-  }
-
-  /**
-   * @return An array of configurations by which this module was resolved
-   */
-  public String[] getCallerConfigurations(IvyNodeElement caller) {
-    return callerConfigurationMap.get(caller);
-  }
-
-  public void setCallerConfigurations(IvyNodeElement caller, String[] configurations) {
-    callerConfigurationMap.put(caller, configurations);
-  }
-
-  public String getOrganization() {
-    return moduleRevisionId.getOrganisation();
-  }
-
-  public String getName() {
-    return moduleRevisionId.getName();
-  }
-
-  public String getRevision() {
-    return moduleRevisionId.getRevision();
-  }
-
-  public boolean isEvicted() {
-    return evicted;
-  }
-
-  public void setEvicted(boolean evicted) {
-    this.evicted = evicted;
-  }
-
-  public int getDepth() {
-    return depth;
-  }
-
-  /**
-   * Set this node's depth and recursively update the node's children to be relative to the new value.
-   */
-  public void setDepth(int depth) {
-    this.depth = depth;
-    for (Iterator<IvyNodeElement> iter = dependencies.iterator(); iter.hasNext();) {
-      IvyNodeElement dependency = iter.next();
-      dependency.setDepth(depth + 1);
-    }
-  }
-
-  public IvyNodeElement[] getConflicts() {
-    return conflicts.toArray(new IvyNodeElement[conflicts.size()]);
-  }
-
-  public void setConflicts(Collection<IvyNodeElement> conflicts) {
-    this.conflicts = conflicts;
-  }
-
-  public ModuleRevisionId getModuleRevisionId() {
-    return moduleRevisionId;
-  }
-
-  public void setModuleRevisionId(ModuleRevisionId moduleRevisionId) {
-    this.moduleRevisionId = moduleRevisionId;
-  }
-
-  public void addCaller(IvyNodeElement caller) {
-    callers.add(caller);
-    caller.dependencies.add(this);
-  }
-
-  public IvyNodeElement[] getCallers() {
-    return callers.toArray(new IvyNodeElement[callers.size()]);
-  }
-}
\ No newline at end of file
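-
-// A hedged usage sketch (the module ids are hypothetical; a ModuleRevisionId
-// must be set before use because equals() and hashCode() dereference it):
-//
-//   IvyNodeElement parent = new IvyNodeElement();
-//   parent.setModuleRevisionId(ModuleRevisionId.newInstance("org.a", "a", "1.0"));
-//   IvyNodeElement child = new IvyNodeElement();
-//   child.setModuleRevisionId(ModuleRevisionId.newInstance("org.b", "b", "2.0"));
-//   child.addCaller(parent);
-//   // parent.getDependencies() now contains child, and
-//   // child.getCallers() now contains parent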
diff --git a/lucene/tools/src/java/org/apache/lucene/validation/ivyde/IvyNodeElementAdapter.java b/lucene/tools/src/java/org/apache/lucene/validation/ivyde/IvyNodeElementAdapter.java
deleted file mode 100644
index c754dd4..0000000
--- a/lucene/tools/src/java/org/apache/lucene/validation/ivyde/IvyNodeElementAdapter.java
+++ /dev/null
@@ -1,135 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.lucene.validation.ivyde;
-
-import org.apache.ivy.core.module.id.ModuleId;
-import org.apache.ivy.core.module.id.ModuleRevisionId;
-import org.apache.ivy.core.report.ResolveReport;
-import org.apache.ivy.core.resolve.IvyNode;
-import org.apache.ivy.core.resolve.IvyNodeCallers;
-
-import java.util.Arrays;
-import java.util.Collection;
-import java.util.HashMap;
-import java.util.HashSet;
-import java.util.Iterator;
-import java.util.List;
-import java.util.Map;
-
-/**
- * This class is copied from org/apache/ivyde/eclipse/resolvevisualizer/model/IvyNodeElementAdapter.java at 
- * https://svn.apache.org/repos/asf/ant/ivy/ivyde/trunk/org.apache.ivyde.eclipse.resolvevisualizer/src/
- * 
- * Changes include: uncommenting generics and converting to diamond operators where appropriate;
- * removing unnecessary casts; and removing javadoc tags with no description.
- */
-public class IvyNodeElementAdapter {
-  /**
-   * Adapt all dependencies and evictions from the ResolveReport.
-   * @return the root node adapted from the ResolveReport
-   */
-  public static IvyNodeElement adapt(ResolveReport report) {
-    Map<ModuleRevisionId,IvyNodeElement> resolvedNodes = new HashMap<>();
-
-    IvyNodeElement root = new IvyNodeElement();
-    root.setModuleRevisionId(report.getModuleDescriptor().getModuleRevisionId());
-    resolvedNodes.put(report.getModuleDescriptor().getModuleRevisionId(), root);
-
-    @SuppressWarnings("unchecked") List<IvyNode> dependencies = report.getDependencies();
-
-    // First pass - build the map of resolved nodes by revision id
-    for (Iterator<IvyNode> iter = dependencies.iterator(); iter.hasNext();) {
-      IvyNode node = iter.next();
-      if (node.getAllEvictingNodes() != null) {
-        // Nodes that are evicted as a result of conf inheritance still appear
-        // as dependencies, but with eviction data. They also appear as evictions.
-        // We map them as evictions rather than dependencies.
-        continue;
-      }
-      IvyNodeElement nodeElement = new IvyNodeElement();
-      nodeElement.setModuleRevisionId(node.getResolvedId());
-      resolvedNodes.put(node.getResolvedId(), nodeElement);
-    }
-
-    // Second pass - establish relationships between the resolved nodes
-    for (Iterator<IvyNode> iter = dependencies.iterator(); iter.hasNext();) {
-      IvyNode node = iter.next();
-      if (node.getAllEvictingNodes() != null) {
-        continue; // see note above
-      }
-
-      IvyNodeElement nodeElement = resolvedNodes.get(node.getResolvedId());
-      IvyNodeCallers.Caller[] callers = node.getAllRealCallers();
-      for (int i = 0; i < callers.length; i++) {
-        IvyNodeElement caller = resolvedNodes.get(callers[i].getModuleRevisionId());
-        if (caller != null) {
-          nodeElement.addCaller(caller);
-          nodeElement.setCallerConfigurations(caller, callers[i].getCallerConfigurations());
-        }
-      }
-    }
-
-    IvyNode[] evictions = report.getEvictedNodes();
-    for (int i = 0; i < evictions.length; i++) {
-      IvyNode eviction = evictions[i];
-      IvyNodeElement evictionElement = new IvyNodeElement();
-      evictionElement.setModuleRevisionId(eviction.getResolvedId());
-      evictionElement.setEvicted(true);
-
-      IvyNodeCallers.Caller[] callers = eviction.getAllCallers();
-      for (int j = 0; j < callers.length; j++) {
-        IvyNodeElement caller = resolvedNodes.get(callers[j].getModuleRevisionId());
-        if (caller != null) {
-          evictionElement.addCaller(caller);
-          evictionElement.setCallerConfigurations(caller, callers[j].getCallerConfigurations());
-        }
-      }
-    }
-
-    // Recursively set depth starting at root
-    root.setDepth(0);
-    findConflictsBeneathNode(root);
-
-    return root;
-  }
-
-  /**
-   * Derives configuration conflicts that exist between node and all of its descendant dependencies.
-   */
-  private static void findConflictsBeneathNode(IvyNodeElement node) {
-    // Derive conflicts
-    Map<ModuleId,Collection<IvyNodeElement>> moduleRevisionMap = new HashMap<>();
-    IvyNodeElement[] deepDependencies = node.getDeepDependencies();
-    for (int i = 0; i < deepDependencies.length; i++) {
-      if (deepDependencies[i].isEvicted())
-        continue;
-
-      ModuleId moduleId = deepDependencies[i].getModuleRevisionId().getModuleId();
-      if (moduleRevisionMap.containsKey(moduleId)) {
-        Collection<IvyNodeElement> conflicts = moduleRevisionMap.get(moduleId);
-        conflicts.add(deepDependencies[i]);
-        for (Iterator<IvyNodeElement> iter = conflicts.iterator(); iter.hasNext();) {
-          IvyNodeElement conflict = iter.next();
-          conflict.setConflicts(conflicts);
-        }
-      } else {
-        List<IvyNodeElement> immutableMatchingSet = Arrays.asList(deepDependencies[i]);
-        moduleRevisionMap.put(moduleId, new HashSet<>(immutableMatchingSet));
-      }
-    }
-  }
-}
\ No newline at end of file
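-
-// A hedged sketch of how the library version checker consumes this adapter
-// (the ResolveReport comes from an Ivy resolve call):
-//
-//   IvyNodeElement root = IvyNodeElementAdapter.adapt(resolveReport);
-//   for (IvyNodeElement dep : root.getDependencies()) {
-//     System.out.println("/" + dep.getOrganization() + "/" + dep.getName()
-//         + "=" + dep.getRevision());
-//   }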
diff --git a/lucene/top-level-ivy-settings.xml b/lucene/top-level-ivy-settings.xml
deleted file mode 100644
index 175cf4d..0000000
--- a/lucene/top-level-ivy-settings.xml
+++ /dev/null
@@ -1,24 +0,0 @@
-<!--
-   Licensed to the Apache Software Foundation (ASF) under one
-   or more contributor license agreements.  See the NOTICE file
-   distributed with this work for additional information
-   regarding copyright ownership.  The ASF licenses this file
-   to you under the Apache License, Version 2.0 (the
-   "License"); you may not use this file except in compliance
-   with the License.  You may obtain a copy of the License at
-
-     http://www.apache.org/licenses/LICENSE-2.0
-
-   Unless required by applicable law or agreed to in writing,
-   software distributed under the License is distributed on an
-   "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-   KIND, either express or implied.  See the License for the
-   specific language governing permissions and limitations
-   under the License.    
--->
-<ivysettings>
-  <!-- Load ivy-versions.properties as Ivy variables. -->
-  <properties file="${ivy.settings.dir}/ivy-versions.properties" override="false"/>
-  <!-- Include the Ivy settings file pointed to by the "ivysettings.xml" property. -->
-  <include file="${ivysettings.xml}"/>
-</ivysettings>
diff --git a/lucene/version.properties b/lucene/version.properties
deleted file mode 100644
index a00b086..0000000
--- a/lucene/version.properties
+++ /dev/null
@@ -1,10 +0,0 @@
-# This file contains some version properties as used by various build files.
-
-# RELEASE MANAGER must change this file after creating a release and
-# enter new base version (format "x.y.z", no prefix/appendix): 
-version.base=9.0.0
-
-# Other version property defaults, don't change:
-version.suffix=SNAPSHOT
-version=${version.base}-${version.suffix}
-spec.version=${version.base}
diff --git a/settings.gradle b/settings.gradle
index fb85047..fdf46af 100644
--- a/settings.gradle
+++ b/settings.gradle
@@ -53,8 +53,6 @@
 include "solr:core"
 include "solr:server"
 include "solr:contrib:analysis-extras"
-include "solr:contrib:dataimporthandler"
-include "solr:contrib:dataimporthandler-extras"
 include "solr:contrib:analytics"
 include "solr:contrib:clustering"
 include "solr:contrib:extraction"
diff --git a/solr/.gitignore b/solr/.gitignore
index 421dbcb..a0d8aa8 100644
--- a/solr/.gitignore
+++ b/solr/.gitignore
@@ -2,8 +2,6 @@
 
 /bin/*.pid
 
-/contrib/dataimporthandler/test-lib/
-
 /core/test-lib/
 
 /example/start.jar
@@ -15,9 +13,6 @@
 /example/solr/zoo_data
 /example/work/*
 /example/exampledocs/post.jar
-/example/example-DIH/**/data
-/example/example-DIH/**/dataimport.properties
-/example/example-DIH/solr/mail/lib/*.jar
 
 /package
 
diff --git a/solr/CHANGES.txt b/solr/CHANGES.txt
index b46ccfe..f41a84b 100644
--- a/solr/CHANGES.txt
+++ b/solr/CHANGES.txt
@@ -15,6 +15,9 @@
 inconsistent with other replica placement strategies. Other relevant placement strategies
 should be used instead, such as autoscaling policy or rules-based placement.
 
+* SOLR-14654: Plugins can no longer be loaded using the "runtimeLib=true" option. Use the package
+  manager to load and manage plugins instead.
+
 New Features
 ---------------------
 * SOLR-14440: Introduce new Certificate Authentication Plugin to load Principal from certificate subject. (Mike Drob)
@@ -35,10 +38,18 @@
 
 * SOLR-10814: Add short-name feature to RuleBasedAuthz plugin (Mike Drob, Hrishikesh Gadre)
 
+* SOLR-7683 Introduce support to identify Solr internal request types (Atri Sharma, Hrishikesh Gadre)
+
+* SOLR-13528 Rate Limiting in Solr (Atri Sharma, Mike Drob)
+
+* SOLR-14615: CPU Utilization Based Circuit Breaker (Atri Sharma)
+
 Other Changes
 ----------------------
 * SOLR-14656: Autoscaling framework removed (Ishan Chattopadhyaya, noble, Ilan Ginzburg)
 
+* SOLR-14616: CDCR support removed (Ishan Chattopadhyaya)
+
 * LUCENE-9391: Upgrade HPPC to 0.8.2. (Haoyu Zhai)
 
 * SOLR-10288: Remove non-minified JavaScript from the webapp. (Erik Hatcher, marcussorealheis)
@@ -100,6 +111,16 @@
 
 * SOLR-14244: Remove ReplicaInfo. (ab)
 
+* SOLR-14654: Remove plugin loading from .system collection (for 9.0) (noble)
+
+* SOLR-14702: All references to "master" and "slave" replaced with "leader" and "follower" (MarcusSorealheis, 
+  Erick Erickson, Tomás Fernández Löbbe)
+
+* LUCENE-9433: Remove Ant support from trunk (Erick Erickson, Uwe Schindler et al.)
+
+* SOLR-14783: Remove Data Import Handler (DIH), previously deprecated (Alexandre Rafalovitch)
+
+
 Bug Fixes
 ---------------------
 * SOLR-14546: Fix for a relatively hard to hit issue in OverseerTaskProcessor that could lead to out of order execution
@@ -116,6 +137,13 @@
 
 * SOLR-14681: Introduce ability to delete .jar stored in the Package Store. (MarcusSorealheis , Mike Drob)
 
+* SOLR-14604: Add the ability to uninstall a package from with the Package CLI. (MarcusSorealheis)
+
+* SOLR-14582: Expose IWC.setMaxCommitMergeWaitMillis in Solr's index config. This is an expert config option that can be
+  set when using a custom MergePolicy (doesn't have any effect on the default MP) (Tomás Fernández Löbbe)
+
+* SOLR-13751: Add BooleanSimilarityFactory class. (Andy Webb via Christine Poerschke)
+
 Improvements
 ---------------------
 
@@ -137,6 +165,10 @@
 * SOLR-14651: The MetricsHistoryHandler can more completely disable itself when you tell it to.
   Also, it now shuts down more thoroughly. (David Smiley)
 
+* SOLR-14722: Track timeAllowed from earlier in the request lifecycle. (David Smiley)
+
+* SOLR-13438: On deleting a collection, its config set will also be deleted iff it has been auto-created, and not used by any other collection (Anderson Dorow)
+
 Optimizations
 ---------------------
 
@@ -156,6 +188,28 @@
 * SOLR-14516: Fix NPE in JSON response writer(wt=json) with /get when writing non-stored, non-indexed docvalue field
   from an uncommitted document (noble, Ishan Chattopadhyaya, Munendra S N)
 
+* SOLR-14657: Improve error handling in IndexReader related metrics that were causing scary ERROR logging
+  if metrics were requested while Solr was in the process of closing/re-opening a new IndexReader. (hossman)
+
+* SOLR-14748: Fix incorrect auth/SSL startup logging (Jason Gerlowski)
+
+* SOLR-14751: Zookeeper Admin screen not working for old ZK versions (janhoy)
+
+* SOLR-14677: Improve DIH termination logic to close all DataSources, EntityProcessors (Jason Gerlowski)
+
+* SOLR-14703: Fix edismax replacement of all whitespace characters with spaces (Yuriy Koval via Jason Gerlowski)
+
+* SOLR-14700: Avoid NullPointerException in TupleStream.getShards() when streamContext is null.
+  (Mads Bondo Dydensborg, Christine Poerschke, Mike Drob)
+
+* SOLR-14752: Fix error in Zookeeper status when Prometheus plugin is enabled in ZK (Philipp Trulson via janhoy)
+
+* SOLR-14774: HealthCheckHandler is no longer an implicit SolrCore handler and can be configured from solr.xml
+  (Tomás Fernández Löbbe)
+
+* SOLR-14714: Solr.cmd in windows loads the incorrect jetty module when using java>=9 (Endika Posadas via
+  Erick Erickson)
+
 Other Changes
 ---------------------
 
@@ -173,11 +227,27 @@
 
 * SOLR-11868: Deprecate CloudSolrClient.setIdField, use information from Zookeeper (Erick Erickson)
 
+* SOLR-14641: PeerSync, remove canHandleVersionRanges check (Cao Manh Dat)
+
+* SOLR-14731: Rename @SolrSingleThreaded to @SolrThreadUnsafe, mark DistribPackageStore with the annotation
+  (marcussorealheis)
+
+==================  8.6.2 ==================
+
+Consult the LUCENE_CHANGES.txt file for additional, low level, changes in this release.
+
+Bug Fixes
+---------------------
+* SOLR-14751: Zookeeper Admin screen not working for old ZK versions (janhoy)
+
 ==================  8.6.1 ==================
 
 Bug Fixes
 ---------------------
 
+* SOLR-14665, SOLR-14706: Revert SOLR-12845's addition of a default autoscaling cluster policy, due to
+  performance issues. (Ishan Chattopadhyaya, Houston Putman, Gus Heck, ab)
+
 * SOLR-14671: Parsing dynamic ZK config sometimes causes NumberFormatException (janhoy)
 
 ==================  8.6.0 ==================
@@ -2951,7 +3021,7 @@
   start scripts will no longer attempt this nor move existing console or GC logs into logs/archived either (SOLR-12144).
 
 * SOLR-11673: Slave doesn't commit empty index when completely new index is detected on master during replication.
-  To return the previous behavior pass false to skipCommitOnMasterVersionZero in slave section of replication
+  To restore the previous behavior, pass false to skipCommitOnLeaderVersionZero in the slave section of the replication
   handler configuration, or pass it to the fetchindex command.
 
 * SOLR-11453: Configuring slowQueryThresholdMillis now logs slow requests to a separate file - solr_slow_requests.log.
@@ -3111,7 +3181,7 @@
 * SOLR-12155: Exception from UnInvertedField constructor puts threads into an infinite wait.
   (Andrey Kudryavtsev, Mikhail Khludnev)
 
-* SOLR-12201: TestReplicationHandler.doTestIndexFetchOnMasterRestart(): handle unexpected replication failures
+* SOLR-12201: TestReplicationHandler.doTestIndexFetchOnLeaderRestart(): handle unexpected replication failures
   (Steve Rowe)
 
 * SOLR-12190: Need to properly escape output in GraphMLResponseWriter. (yonik)
@@ -3747,7 +3817,7 @@
 * SOLR-12067: Increase autoAddReplicas default 30 second wait time to 120 seconds.
   (Varun Thacker, Mark Miller via shalin)
 
-* SOLR-12078: Fixed reproducable Failure in TestReplicationHandler.doTestIndexFetchOnMasterRestart that happened
+* SOLR-12078: Fixed reproducible failure in TestReplicationHandler.doTestIndexFetchOnLeaderRestart that happened
   due to using stale http connections. (Gus Heck, shalin)
 
 * SOLR-12099: Remove reopenReaders attribute from 'IndexConfig in SolrConfig' page in ref guide. (shalin)
diff --git a/solr/README.md b/solr/README.md
index 66e2fff..fd775e1 100644
--- a/solr/README.md
+++ b/solr/README.md
@@ -90,15 +90,14 @@
   bin/solr -e <EXAMPLE> where <EXAMPLE> is one of:
 
     cloud        : SolrCloud example
-    dih          : Data Import Handler (rdbms, mail, atom, tika)
     schemaless   : Schema-less example (schema is inferred from data during indexing)
     techproducts : Kitchen sink example providing comprehensive examples of Solr features
 ```
 
-For instance, if you want to run the Solr Data Import Handler example, do:
+For instance, if you want to run the SolrCloud example, do:
 
 ```
-  bin/solr -e dih
+  bin/solr -e cloud
 ```
 
 Indexing Documents
@@ -142,8 +141,7 @@
 
 example/
   Contains example documents and an alternative Solr home
-  directory containing examples of how to use the Data Import Handler,
-  see example/example-DIH/README.md for more information.
+  directory containing various examples.
 
 dist/solr-<component>-XX.jar
   The Apache Solr libraries.  To compile Apache Solr Plugins,
@@ -163,30 +161,22 @@
    folder included on your command path. To test this, issue a "java -version" command
    from your shell (command prompt) and verify that the Java version is 11 or later.
 
-2. Download the Apache Ant binary distribution (1.8.2+) from
-   http://ant.apache.org/  You will need Ant installed and the $ANT_HOME/bin (Windows:
-   %ANT_HOME%\bin) folder included on your command path. To test this, issue a
-   "ant -version" command from your shell (command prompt) and verify that Ant is
-   available.
-
-   You will also need to install Apache Ivy binary distribution (2.2.0) from
-   http://ant.apache.org/ivy/ and place ivy-2.2.0.jar file in ~/.ant/lib -- if you skip
-   this step, the Solr build system will offer to do it for you.
-
-3. Download the Apache Solr distribution, linked from the above web site.
+2. Download the Apache Solr distribution, linked from the above web site.
    Unzip the distribution to a folder of your choice, e.g. C:\solr or ~/solr
    Alternatively, you can obtain a copy of the latest Apache Solr source code
    directly from the Git repository:
 
-     https://lucene.apache.org/solr/versioncontrol.html
+     https://lucene.apache.org/solr/community.html#version-control
 
-4. Navigate to the "solr" folder and issue an "ant" command to see the available options
-   for building, testing, and packaging Solr.
+3. Navigate to the root of your source tree and issue the `./gradlew tasks`
+   command to see the available options for building, testing, and packaging Solr.
 
-   NOTE:
-   To see Solr in action, you may want to use the "ant server" command to build
-   and package Solr into the server directory. See also server/README.md.
-
+   `./gradlew assemble` will create a Solr executable. Change directory ("cd")
+   to "./solr/packaging/build/solr-9.0.0-SNAPSHOT" and run the bin/solr script
+   to start Solr.
+
+   NOTE: `gradlew` is the "Gradle Wrapper" and will automatically download and
+   start using the correct version of Gradle.
 
 Export control
 -------------------------------------------------
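As a quick end-to-end illustration of the Gradle workflow the README changes above describe (a minimal sketch; the snapshot directory name depends on the checked-out version, and `bin/solr status` is just one way to verify the node came up):

```
# build the runnable Solr package with the Gradle wrapper
./gradlew assemble
# the packaging output contains a normal Solr install layout
cd solr/packaging/build/solr-9.0.0-SNAPSHOT
bin/solr start
bin/solr status
```
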
diff --git a/solr/bin/solr b/solr/bin/solr
index 7e3cf0c6..6ef2a29 100755
--- a/solr/bin/solr
+++ b/solr/bin/solr
@@ -386,7 +386,6 @@
     echo "  -e <example>  Name of the example to run; available examples:"
     echo "      cloud:         SolrCloud example"
     echo "      techproducts:  Comprehensive example illustrating many of Solr's core capabilities"
-    echo "      dih:           Data Import Handler"
     echo "      schemaless:    Schema-less example"
     echo ""
     echo "  -a            Additional parameters to pass to the JVM when starting Solr, such as to setup"
@@ -559,7 +558,7 @@
     echo "                             ...solr/server/solr/configsets' then the configs will be copied from/to"
     echo "                             that directory. Otherwise it is interpreted as a simple local path."
     echo ""
-    echo "         cp copies files or folders to/from Zookeeper or Zokeeper -> Zookeeper"
+    echo "         cp copies files or folders to/from Zookeeper or Zookeeper -> Zookeeper"
     echo "             -r    Recursively copy <src> to <dst>. Command will fail if <src> has children and "
     echo "                        -r is not specified. Optional"
     echo ""
diff --git a/solr/bin/solr.cmd b/solr/bin/solr.cmd
index 8fb5e7a..5908090 100755
--- a/solr/bin/solr.cmd
+++ b/solr/bin/solr.cmd
@@ -42,6 +42,31 @@
 
 set "DEFAULT_SERVER_DIR=%SOLR_TIP%\server"
 
+
+REM Verify Java is available
+IF DEFINED SOLR_JAVA_HOME set "JAVA_HOME=%SOLR_JAVA_HOME%"
+REM Try to detect JAVA_HOME from the registry
+IF NOT DEFINED JAVA_HOME (
+  FOR /F "skip=2 tokens=2*" %%A IN ('REG QUERY "HKLM\Software\JavaSoft\Java Runtime Environment" /v CurrentVersion') DO set CurVer=%%B
+  FOR /F "skip=2 tokens=2*" %%A IN ('REG QUERY "HKLM\Software\JavaSoft\Java Runtime Environment\!CurVer!" /v JavaHome') DO (
+    set "JAVA_HOME=%%B"
+  )
+)
+IF NOT DEFINED JAVA_HOME goto need_java_home
+set JAVA_HOME=%JAVA_HOME:"=%
+IF %JAVA_HOME:~-1%==\ SET JAVA_HOME=%JAVA_HOME:~0,-1%
+IF NOT EXIST "%JAVA_HOME%\bin\java.exe" (
+  set "SCRIPT_ERROR=java.exe not found in %JAVA_HOME%\bin. Please set JAVA_HOME to a valid JRE / JDK directory."
+  goto err
+)
+set "JAVA=%JAVA_HOME%\bin\java"
+CALL :resolve_java_info
+IF !JAVA_MAJOR_VERSION! LSS 8 (
+  set "SCRIPT_ERROR=Java 1.8 or later is required to run Solr. Current Java version is: !JAVA_VERSION_INFO! (detected major: !JAVA_MAJOR_VERSION!)"
+  goto err
+)
+
+
 REM Select HTTP OR HTTPS related configurations
 set SOLR_URL_SCHEME=http
 set "SOLR_JETTY_CONFIG=--module=http"
@@ -185,29 +210,6 @@
   set "SOLR_JETTY_HOST=127.0.0.1"
 )
 
-REM Verify Java is available
-IF DEFINED SOLR_JAVA_HOME set "JAVA_HOME=%SOLR_JAVA_HOME%"
-REM Try to detect JAVA_HOME from the registry
-IF NOT DEFINED JAVA_HOME (
-  FOR /F "skip=2 tokens=2*" %%A IN ('REG QUERY "HKLM\Software\JavaSoft\Java Runtime Environment" /v CurrentVersion') DO set CurVer=%%B
-  FOR /F "skip=2 tokens=2*" %%A IN ('REG QUERY "HKLM\Software\JavaSoft\Java Runtime Environment\!CurVer!" /v JavaHome') DO (
-    set "JAVA_HOME=%%B"
-  )
-)
-IF NOT DEFINED JAVA_HOME goto need_java_home
-set JAVA_HOME=%JAVA_HOME:"=%
-IF %JAVA_HOME:~-1%==\ SET JAVA_HOME=%JAVA_HOME:~0,-1%
-IF NOT EXIST "%JAVA_HOME%\bin\java.exe" (
-  set "SCRIPT_ERROR=java.exe not found in %JAVA_HOME%\bin. Please set JAVA_HOME to a valid JRE / JDK directory."
-  goto err
-)
-set "JAVA=%JAVA_HOME%\bin\java"
-CALL :resolve_java_info
-IF !JAVA_MAJOR_VERSION! LSS 8 (
-  set "SCRIPT_ERROR=Java 1.8 or later is required to run Solr. Current Java version is: !JAVA_VERSION_INFO! (detected major: !JAVA_MAJOR_VERSION!)"
-  goto err
-)
-
 set FIRST_ARG=%1
 
 IF [%1]==[] goto usage
@@ -360,7 +362,6 @@
 @echo   -e example    Name of the example to run; available examples:
 @echo       cloud:          SolrCloud example
 @echo       techproducts:   Comprehensive example illustrating many of Solr's core capabilities
-@echo       dih:            Data Import Handler
 @echo       schemaless:     Schema-less example
 @echo.
 @echo   -a opts       Additional parameters to pass to the JVM when starting Solr, such as to setup
@@ -407,13 +408,13 @@
 
 :healthcheck_usage
 @echo.
-@echo Usage: solr healthcheck [-c collection] [-z zkHost] [-V] 
+@echo Usage: solr healthcheck [-c collection] [-z zkHost] [-V]
 @echo.
 @echo Can be run from remote (non-Solr^) hosts, as long as a proper ZooKeeper connection is provided
 @echo.
 @echo   -c collection  Collection to run healthcheck against.
 @echo.
-@echo   -z zkHost      Zookeeper connection string; unnecessary if ZK_HOST is defined in solr.in.cmd; 
+@echo   -z zkHost      Zookeeper connection string; unnecessary if ZK_HOST is defined in solr.in.cmd;
 @echo                    otherwise, default is localhost:9983
 @echo.
 @echo   -V             Enable more verbose output
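An illustrative invocation of the healthcheck command documented above (the collection name and ZooKeeper host are examples; on Windows the script is bin\solr.cmd):

```
bin/solr healthcheck -c techproducts -z localhost:9983 -V
```
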
@@ -551,7 +552,7 @@
 echo                             ...solr/server/solr/configsets' then the configs will be copied from/to
 echo                             that directory. Otherwise it is interpreted as a simple local path.
 echo.
-echo         cp copies files or folders to/from Zookeeper or Zokeeper -^> Zookeeper
+echo         cp copies files or folders to/from Zookeeper or Zookeeper -^> Zookeeper
 echo             -r              Recursively copy ^<src^> to ^<dst^>. Command will fail if ^<src^> has children and
 echo                             -r is not specified. Optional
 echo.
@@ -559,8 +560,8 @@
 echo                              NOTE: ^<src^> and ^<dest^> may both be Zookeeper resources prefixed by 'zk:'
 echo             When ^<src^> is a zk resource, ^<dest^> may be '.'
 echo             If ^<dest^> ends with '/', then ^<dest^> will be a local folder or parent znode and the last
-echo             element of the ^<src^> path will be appended unless ^<src^> also ends in a slash. 
-echo             ^<dest^> may be zk:, which may be useful when using the cp -r form to backup/restore 
+echo             element of the ^<src^> path will be appended unless ^<src^> also ends in a slash.
+echo             ^<dest^> may be zk:, which may be useful when using the cp -r form to backup/restore
 echo             the entire zk state.
 echo             You must enclose local paths that end in a wildcard in quotes or just
 echo             end the local path in a slash. That is,
@@ -2047,8 +2048,10 @@
   REM Remove surrounding quotes
   set JAVA_VERSION_INFO=!JAVA_VERSION_INFO:"=!
 
+  echo "java version info is !JAVA_VERSION_INFO!"
   REM Extract the major Java version, e.g. 7, 8, 9, 10 ...
   for /f "tokens=1,2 delims=._-" %%a in ("!JAVA_VERSION_INFO!") do (
+    echo "Extracted major version is %%a"
     if %%a GEQ 9 (
       set JAVA_MAJOR_VERSION=%%a
     ) else (
diff --git a/solr/build.xml b/solr/build.xml
deleted file mode 100644
index c3cf6bc..0000000
--- a/solr/build.xml
+++ /dev/null
@@ -1,813 +0,0 @@
-<?xml version="1.0"?>
-<!--
- Licensed to the Apache Software Foundation (ASF) under one or more
- contributor license agreements.  See the NOTICE file distributed with
- this work for additional information regarding copyright ownership.
- The ASF licenses this file to You under the Apache License, Version 2.0
- (the "License"); you may not use this file except in compliance with
- the License.  You may obtain a copy of the License at
-
-     http://www.apache.org/licenses/LICENSE-2.0
-
- Unless required by applicable law or agreed to in writing, software
- distributed under the License is distributed on an "AS IS" BASIS,
- WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- See the License for the specific language governing permissions and
- limitations under the License.
--->
-<project name="solr" default="usage" 
-         xmlns:jacoco="antlib:org.jacoco.ant"
-         xmlns:ivy="antlib:org.apache.ivy.ant">
-
-  <description>Solr</description>
-  
-  <target name="usage" description="Prints out instructions">
-    <echo message="Welcome to the Solr project!" />
-    <echo message="Use 'ant server' to create the Solr server." />
-    <echo message="Use 'bin/solr' to run the Solr after it is created." />
-    <echo message="And for developers:"/>
-    <echo message="Use 'ant clean' to clean compiled files." />
-    <echo message="Use 'ant compile' to compile the source code." />
-    <echo message="Use 'ant dist' to build the project JAR files." />
-    <echo message="Use 'ant documentation' to build documentation." />
-    <echo message="Use 'ant generate-maven-artifacts' to generate maven artifacts." />
-    <echo message="Use 'ant package' to generate zip, tgz for distribution." />
-    <echo message="Use 'ant test' to run unit tests." />
-  </target>
-
-  <import file="common-build.xml"/>
-  
-  <!-- ========================================================================= -->
-  <!-- ============================== USER TASKS =============================== -->
-  <!-- ========================================================================= -->
-
-  <target name="server" depends="dist-contrib"
-          description="Creates a Solr server">
-    <ant dir="webapp" target="dist" inheritall="false">
-      <propertyset refid="uptodate.and.compiled.properties"/>
-    </ant>
-    <jar destfile="${example}/exampledocs/post.jar">
-      <fileset dir="${dest}/solr-core/classes/java">
-        <include name="org/apache/solr/util/CLIO.class"/>
-        <include name="org/apache/solr/util/SimplePostTool*.class"/>
-        <include name="org/apache/solr/util/RTimer.class"/>
-        <include name="org/apache/solr/util/RTimer$*.class"/>
-      </fileset>
-      <manifest>
-        <attribute name="Main-Class" value="org.apache.solr.util.SimplePostTool"/>
-      </manifest>
-    </jar>
-    <echo>See ${common-solr.dir}/README.md for how to run the Solr server.</echo>
-  </target>
-  
-  <target name="run-example" depends="server"
-          description="Run Solr interactively, via Jetty.  -Dexample.debug=true to enable JVM debugger">
-    <property name="example.solr.home" location="${server.dir}/solr"/>
-    <property name="example.debug.suspend" value="n"/>
-    <property name="example.jetty.port" value="8983"/>
-    <condition property="example.jvm.line" value="-Xdebug -Xrunjdwp:transport=dt_socket,server=y,suspend=${example.debug.suspend},address=5005">
-      <isset property="example.debug"/>
-    </condition>
-    <property name="example.jvm.line" value=""/>
-    <property name="example.heap.size" value="512M"/>
-    <java jar="${server.dir}/start.jar" fork="true" dir="${server.dir}" maxmemory="${example.heap.size}">
-      <jvmarg line="${example.jvm.line}"/>
-      <arg value="--module=http"/>
-      <sysproperty key="solr.solr.home" file="${example.solr.home}"/>
-      <sysproperty key="jetty.port" value="${example.jetty.port}"/>
-      <sysproperty key="jetty.home" value="${server.dir}"/>
-    </java>
-  </target>
- 
-  <!-- setup proxy for download tasks -->
-  <condition property="proxy.specified">
-    <or>
-      <isset property="proxy.host"/>
-      <isset property="proxy.port"/>
-      <isset property="proxy.user"/>
-    </or>
-  </condition>
- 
-  <target name="proxy.setup" if="proxy.specified">
-    <setproxy proxyhost="${proxy.host}" proxyport="${proxy.port}" proxyuser="${proxy.user}" proxypassword="${proxy.password}"/>
-  </target>
-
-  <!-- ========================================================================= -->
-  <!-- ========================== BUILD/TEST TASKS ============================= -->
-  <!-- ========================================================================= -->
-
-  <!-- solr/test-framework is excluded from compilation -->
-  <target name="compile" description="Compile the source code."
-          depends="compile-core, compile-contrib"/>
-
-  <target name="test" description="Validate, then run core, solrj, and contrib unit tests."
-          depends="-init-totals, test-core, test-contrib, -check-totals"/>
-  <target name="test-nocompile">
-    <fail message="Target 'test-nocompile' will not run recursively.  First change directory to the module you want to test."/>
-  </target>
-
-  <target name="jacoco" description="Generates JaCoCo code coverage reports." depends="-jacoco-install">
-    <!-- run jacoco for each module -->
-    <ant dir="${common-solr.dir}/core" target="jacoco" inheritAll="false">
-      <propertyset refid="uptodate.and.compiled.properties"/>
-    </ant>
-    <ant dir="solrj" target="jacoco" inheritAll="false">
-      <propertyset refid="uptodate.and.compiled.properties"/>
-    </ant>
-    <contrib-crawl target="jacoco" failonerror="false"/>
-
-    <!-- produce aggregate report -->
-    <property name="jacoco.output.dir" location="${jacoco.report.dir}/solr-all"/>
-    <!-- try to clean output dir to prevent any confusion -->
-    <delete dir="${jacoco.output.dir}" failonerror="false"/>
-    <mkdir dir="${jacoco.output.dir}"/>
-
-    <jacoco:report>
-      <executiondata>
-        <fileset dir="${common-solr.dir}/build" includes="**/jacoco.db"/>
-      </executiondata>
-      <structure name="${Name} aggregate JaCoCo coverage report">
-        <classfiles>
-          <fileset dir="${common-solr.dir}/build">
-             <include name="**/classes/java/**/*.class"/>
-             <exclude name="solr-test-framework/**"/>
-          </fileset>
-        </classfiles>
-        <!-- TODO: trying to specify source files could maybe work, but would
-             double the size of the reports -->
-      </structure>
-      <html destdir="${jacoco.output.dir}" footer="Copyright ${year} Apache Software Foundation.  All Rights Reserved."/>
-    </jacoco:report>
-  </target>
-
-  <!-- "-clover.load" is *not* a useless dependency. do not remove -->
-  <target name="test-core" description="Runs the core and solrj unit tests."
-          depends="-clover.load, test-solr-core, test-solrj"/>
-  <target name="pitest" description="Validate, then run core, solrj, and contrib unit tests."
-          depends="pitest-core, pitest-contrib"/>
-  <target name="beast">
-    <fail message="The Beast only works inside of individual modules"/>
-  </target>
-  <target name="compile-test" description="Compile core, solrj, and contrib unit tests, and solr-test-framework."
-          depends="compile-solr-test-framework, compile-test-solr-core, compile-test-solrj, compile-test-contrib"/>
-  <target name="javadocs" description="Calls javadocs-all, javadocs-solrj, and javadocs-test-framework"
-          depends="define-lucene-javadoc-url,javadocs-solr-core,javadocs-solrj,javadocs-test-framework,javadocs-contrib"/>
-  <target name="documentation" description="Generate all documentation"
-          depends="javadocs,changes-to-html,process-webpages">
-    <ant dir="solr-ref-guide" target="bare-bones-html-validation" inheritall="false">
-      <propertyset refid="uptodate.and.compiled.properties"/>
-      <property name="local.javadocs" value="true" />
-    </ant>
-  </target>
-  <target name="compile-core" depends="compile-solr-core" unless="solr.core.compiled"/>
-
-  <target name="documentation-online" description="Generate a link to the online documentation"
-      depends="define-solr-javadoc-url">
-    <xslt in="${ant.file}" out="${javadoc-online.dir}/index.html" style="site/online-link.xsl" force="true">
-      <outputproperty name="method" value="html"/>
-      <outputproperty name="version" value="4.0"/>
-      <outputproperty name="encoding" value="UTF-8"/>
-      <outputproperty name="indent" value="yes"/>
-      <param name="version" expression="${version}"/>
-      <param name="solrJavadocUrl" expression="${solr.javadoc.url}"/>
-    </xslt>
-    <copy todir="${javadoc-online.dir}">
-      <fileset dir="site/assets" includes="**/solr.svg"/>
-    </copy>
-  </target>
-
-  <target name="process-webpages" depends="define-lucene-javadoc-url,resolve-markdown">
-    <makeurl property="process-webpages.buildfiles" separator="|">
-      <fileset dir="." includes="core/build.xml,test-framework/build.xml,solrj/build.xml,contrib/**/build.xml"/>
-    </makeurl>
-
-    <loadresource property="doc-solr-guide-version-path">
-      <propertyresource name="version"/>
-      <filterchain>
-        <tokenfilter>
-          <filetokenizer/>
-          <replaceregex pattern="^(\d+)\.(\d+).*" replace="\1_\2"/>
-        </tokenfilter>
-      </filterchain>
-    </loadresource>
-    <!--
-      The XSL input file is ignored completely, but XSL expects one to be given,
-      so we pass ourself (${ant.file}) here. The list of module build.xmls is given
-      via string parameter, that must be splitted by the XSL at '|'.
-    --> 
-    <xslt in="${ant.file}" out="${javadoc.dir}/index.html" style="site/index.xsl" force="true">
-      <outputproperty name="method" value="html"/>
-      <outputproperty name="version" value="4.0"/>
-      <outputproperty name="encoding" value="UTF-8"/>
-      <outputproperty name="indent" value="yes"/>
-      <param name="buildfiles" expression="${process-webpages.buildfiles}"/>
-      <param name="version" expression="${version}"/>
-      <param name="luceneJavadocUrl" expression="${lucene.javadoc.url}"/>
-      <param name="solrGuideVersion" expression="${doc-solr-guide-version-path}"/>
-    </xslt>
-    
-    <markdown todir="${javadoc.dir}">
-      <fileset dir="site" includes="**/*.md" excludes="**/*.template.md"/>
-      <globmapper from="*.md" to="*.html"/>
-    </markdown>
-
-    <copy todir="${javadoc.dir}">
-      <fileset dir="site/assets" />
-    </copy>
-  </target>
-
-  <target name="jar" depends="jar-core,jar-solrj,jar-solr-test-framework,jar-contrib"
-          description="Jar solr core, solrj, solr-test-framework, and all contribs"/>
-
-  <target name="jar-src" 
-          description="Create source jars for solr core, solrj, solr-test-framework, and all contribs">
-    <ant dir="core" target="jar-src" inheritAll="false"/>
-    <ant dir="solrj" target="jar-src" inheritAll="false"/>
-    <ant dir="test-framework" target="jar-src" inheritAll="false"/>
-    <contrib-crawl target="jar-src"/>
-  </target>
-
-  <!-- Solr core targets -->
-  <target name="test-solr-core" description="Test solr core">
-    <ant dir="core" target="test" inheritAll="false">
-      <propertyset refid="uptodate.and.compiled.properties"/>
-    </ant>
-  </target>
-
-  <target name="jar-core">
-    <ant dir="${common-solr.dir}/core" target="jar-core" inheritAll="false">
-      <propertyset refid="uptodate.and.compiled.properties"/>
-    </ant>
-  </target>
-  
-  <!-- Solrj targets -->
-  <target name="test-solrj" description="Test java client">
-    <ant dir="solrj" target="test" inheritAll="false">
-      <propertyset refid="uptodate.and.compiled.properties"/>
-    </ant>
-  </target>
-
-  <!-- Solr contrib targets -->
-  <target name="test-contrib" description="Run contrib unit tests.">
-    <contrib-crawl target="test"/>
-  </target>
-
-  <!-- Pitest targets -->
-  <target name="pitest-core" description="PiTest solr core">
-    <ant dir="core" target="pitest" inheritAll="false">
-      <propertyset refid="uptodate.and.compiled.properties"/>
-    </ant>
-  </target>
-
-  <target name="pitest-solrj" description="PiTest java client">
-    <ant dir="solrj" target="pitest" inheritAll="false">
-      <propertyset refid="uptodate.and.compiled.properties"/>
-    </ant>
-  </target>
-
-  <target name="pitest-contrib" description="Run contrib PiTests.">
-    <contrib-crawl target="pitest" failonerror="false"/>
-  </target>
-  
-  <!-- test-framework targets -->
-  <target name="javadocs-test-framework">
-    <ant dir="test-framework" target="javadocs" inheritAll="false">
-      <propertyset refid="uptodate.and.compiled.properties"/>
-    </ant>
-  </target>
-    
-  <!-- Validation (license/notice/api checks). -->
-  <target name="validate" depends="check-example-lucene-match-version,check-licenses,rat-sources,check-forbidden-apis" description="Validate stuff." />
-
-  <target name="check-example-lucene-match-version">
-    <property name="configsets.dir" value="${server.dir}/solr/configsets"/>
-    <!-- validates all configset solrconfig files-->
-    <fail message="Some example solrconfig.xml files under ${configsets.dir} do not refer to the correct luceneMatchVersion: ${tests.luceneMatchVersion}">
-      <condition>
-        <resourcecount when="greater" count="0">
-          <fileset dir="${configsets.dir}" includes="**/solrconfig.xml">
-            <not>
-              <contains text="&lt;luceneMatchVersion&gt;${tests.luceneMatchVersion}&lt;" casesensitive="no"/>
-            </not>
-          </fileset>
-        </resourcecount>
-      </condition>
-    </fail>
-    <!-- validates remaining example solrconfig files-->
-    <fail message="Some example solrconfig.xml files under ${example} do not refer to the correct luceneMatchVersion: ${tests.luceneMatchVersion}">
-      <condition>
-        <resourcecount when="greater" count="0">
-          <fileset dir="${example}" includes="**/solrconfig.xml">
-            <not>
-              <contains text="&lt;luceneMatchVersion&gt;${tests.luceneMatchVersion}&lt;" casesensitive="no"/>
-            </not>
-          </fileset>
-        </resourcecount>
-      </condition>
-    </fail>
-    <!-- Count the immediate sub-directories of the configsets dir to ensure all sub-dirs have a solrconfig.xml -->
-    <resourcecount property="count.subdirs">
-      <dirset dir="${configsets.dir}" includes="*"/>
-    </resourcecount>
-    <!-- Ensure there is at least one sub-directory -->
-    <fail message="No sub-directories found under ${configsets.dir}">
-      <condition>
-        <equals arg1="${count.subdirs}" arg2="0"/>
-      </condition>
-    </fail>
-    <fail message="At least one sub-directory under ${configsets.dir} does not have a solrconfig.xml file">
-      <condition>
-        <resourcecount when="ne" count="${count.subdirs}">
-          <fileset dir="${configsets.dir}" includes="**/solrconfig.xml"/>
-        </resourcecount>
-      </condition>
-    </fail>
-  </target>
-
-  <target name="check-licenses" depends="compile-tools,resolve,load-custom-tasks" description="Validate license stuff.">
-    <property name="skipRegexChecksum" value=""/>
-    <license-check-macro dir="${basedir}" licensedir="${common-solr.dir}/licenses">
-      <additional-excludes>
-        <exclude name="example/exampledocs/post.jar" />
-        <exclude name="server/solr-webapp/**" />
-        <exclude name="package/**"/>
-      </additional-excludes>
-      <additional-filters>
-        <replaceregex pattern="^jetty([^/]+)$" replace="jetty" flags="gi" />
-        <!-- start.jar comes from jetty, .jar already stripped by checker defaults --> 
-        <replaceregex pattern="^start$" replace="jetty" flags="gi" />
-        <replaceregex pattern="slf4j-([^/]+)$" replace="slf4j" flags="gi" />
-        <replaceregex pattern="(bcmail|bcprov)-([^/]+)$" replace="\1" flags="gi" />
-      </additional-filters>
-    </license-check-macro>
-  </target>
-  
-  <target name="check-forbidden-apis" depends="-install-forbidden-apis" description="Check forbidden API calls in compiled class files.">
-    <subant target="check-forbidden-apis" inheritall="false" >
-      <propertyset refid="uptodate.and.compiled.properties"/>
-      <fileset dir="core" includes="build.xml"/>
-      <fileset dir="solrj" includes="build.xml"/>
-      <fileset dir="test-framework" includes="build.xml"/>
-    </subant>
-    <contrib-crawl target="check-forbidden-apis"/>
-  </target>
-
-  <!-- rat sources -->
-  <!-- rat-sources-typedef is *not* a useless dependency. do not remove -->
-  <target name="rat-sources" depends="rat-sources-typedef,common.rat-sources">
-    <subant target="rat-sources" inheritall="false" >
-      <propertyset refid="uptodate.and.compiled.properties"/>
-      <fileset dir="core" includes="build.xml"/>
-      <fileset dir="solrj" includes="build.xml"/>
-      <fileset dir="test-framework" includes="build.xml"/>
-      <fileset dir="webapp" includes="build.xml"/>
-    </subant>
-    <contrib-crawl target="rat-sources"/>
-  </target>
-  
-  <!-- Clean targets -->
-  <target name="clean" description="Cleans compiled files and other temporary artifacts.">
-    <delete dir="build" />
-    <delete dir="dist" />
-    <delete dir="package" />
-    <delete dir="server/solr/lib" />
-    <delete dir="example/solr/lib" />
-    <delete dir="example/cloud" />
-    <delete dir="example/techproducts" />
-    <delete dir="example/schemaless" />
-    <delete includeemptydirs="true">
-      <fileset dir="bin">
-        <include name="*.pid" />
-      </fileset>
-      <fileset dir="example">
-        <include name="**/data/**/*" />
-        <exclude name="**/.gitignore" />
-      </fileset>
-      <fileset dir="server">
-        <include name="**/data/**/*" />
-        <include name="solr/zoo_data/" />
-        <include name="start.jar" />
-        <include name="logs/*" />
-        <include name="webapps" />
-        <include name="solr-webapp/**/*" />
-        <exclude name="**/.gitignore" />
-      </fileset>      
-    </delete>
-  </target>
-  
-  <target name="clean-dest"
-          description="Cleans out build/ but leaves build/docs/, dist/ and package/ alone.  This allows us to run nightly and clover together in Hudson">
-    <delete includeemptydirs="true" >
-      <fileset dir="build">
-        <exclude name="docs/"/>
-      </fileset>
-    </delete>
-  </target>
-
-  <!-- ========================================================================= -->
-  <!-- ===================== DISTRIBUTION-RELATED TASKS ======================== -->
-  <!-- ========================================================================= -->
- 
-  <target name="dist"
-          description="Creates the Solr distribution files."
-          depends="dist-solrj, dist-core, dist-test-framework, dist-contrib" />
- 
-  <target name="dist-test-framework" depends="init-dist"
-          description="Creates the Solr test-framework JAR.">
-    <ant dir="test-framework" target="dist" inheritall="false">
-      <propertyset refid="uptodate.and.compiled.properties"/>
-    </ant>
-  </target>
-  
-  <target name="dist-contrib" depends="init-dist"
-          description="Make the contribs ready for distribution">
-    <contrib-crawl target="dist"/>
-  </target>
-  
-  <target name="prepare-release-no-sign" depends="clean, package, generate-maven-artifacts"/>
-  <target name="prepare-release" depends="prepare-release-no-sign, sign-artifacts"/>
- 
-  <!-- make a distribution -->
-  <target name="package" depends="package-src-tgz,create-package,documentation,-dist-changes"/>
-
-  <!-- copy changes/ to the release folder -->
-  <target name="-dist-changes">
-   <copy todir="${package.dir}/changes">
-     <fileset dir="build/docs/changes"/>
-   </copy>
-  </target>
-
-  <!-- Makes a tarball of the source.    -->
-  <!-- Copies NOTICE.txt and LICENSE.txt from solr/ to the root level. -->
-  <target name="package-src-tgz" depends="init-dist"
-          description="Packages the Solr Source Distribution">
-    <property name="source.package.file"
-              location="${package.dir}/${fullnamever}-src.tgz"/>
-    <delete file="${source.package.file}" failonerror="false" />
-    <export-source source.dir=".."/>
-
-    <!-- Exclude javadoc package-list files under licenses incompatible with the ASL -->
-    <delete dir="${src.export.dir}/lucene/tools/javadoc/java8"/>
-
-    <build-changes changes.src.file="${src.export.dir}/solr/CHANGES.txt"
-                   changes.target.dir="${src.export.dir}/solr/docs/changes"
-                   changes.product="solr"/>
-
-    <tar destfile="${source.package.file}" compression="gzip" longfile="gnu">
-      <tarfileset dir="${src.export.dir}/lucene"
-                  includes="CHANGES.txt"
-                  fullpath="${fullnamever}/solr/LUCENE_CHANGES.txt" />
-      <tarfileset dir="${src.export.dir}"
-                  prefix="${fullnamever}"
-                  excludes="solr/example/**/*.sh solr/example/**/bin/ solr/scripts/** gradle/wrapper/gradle-wrapper.jar"/>
-      <tarfileset dir="${src.export.dir}"
-                  prefix="${fullnamever}"
-                  filemode="755"
-                  includes="solr/example/**/*.sh solr/example/**/bin/ solr/scripts/**"/>
-      <tarfileset dir="${src.export.dir}/solr" prefix="${fullnamever}"
-                  includes="NOTICE.txt,LICENSE.txt"/>
-    </tar>
-    <make-checksums file="${source.package.file}"/>
-  </target>
- 
-  <target name="package-local-src-tgz"
-          description="Packages the Solr and Lucene sources from the local working copy">
-    <mkdir dir="${common-solr.dir}/build"/>
-    <property name="source.package.file"
-              value="${common-solr.dir}/build/${fullnamever}-src.tgz"/>
-    <delete file="${source.package.file}" failonerror="false" />
-
-    <!-- includes/excludes requires a relative path -->
-    <property name="dist.rel" location="${dist}" relative="yes"/>
-    <property name="package.dir.rel" location="${package.dir}" relative="yes"/>
-
-    <tar destfile="${source.package.file}" compression="gzip" longfile="gnu">
-      <tarfileset dir=".." prefix="${fullnamever}" includes="*.txt *.xml dev-tools/" />
-      <tarfileset dir="." prefix="${fullnamever}" includes="LICENSE.txt NOTICE.txt"/>
-      <tarfileset dir="." prefix="${fullnamever}/solr"
-                  excludes="build/** ${package.dir.rel}/** ${dist.rel}/**
-                            example/lib/**
-                            **/*.jar 
-                            lib/README.committers.txt **/data/ **/logs/*
-                            **/*.sh **/bin/ scripts/ 
-                            .idea/ **/*.iml **/pom.xml" />
-      <tarfileset dir="." prefix="${fullnamever}/solr"
-                  includes="core/src/test-files/solr/lib/classes/empty-file-main-lib.txt" />
-      <tarfileset dir="." filemode="755" prefix="${fullnamever}/solr"
-                  includes="**/*.sh **/bin/ scripts/"
-                  excludes="build/**"/>
-      <tarfileset dir="../lucene" prefix="${fullnamever}/lucene">
-        <patternset refid="lucene.local.src.package.patterns"/>
-      </tarfileset>
-    </tar>
-  </target>
-
-  <target name="create-package"
-          description="Packages the Solr Binary Distribution">
-    <antcall inheritall="true">
-      <param name="called.from.create-package" value="true"/>
-      <propertyset refid="uptodate.and.compiled.properties"/>
-      <target name="init-dist"/>
-      <target name="dist"/>
-      <target name="server"/>
-      <target name="documentation-online"/>
-    </antcall>
-    <mkdir dir="${dest}/${fullnamever}"/>
-    <delete includeemptydirs="true">
-      <fileset dir="${dest}/${fullnamever}" includes="**/*"/>
-    </delete>
- 
-    <delete file="${package.dir}/${fullnamever}.tgz" failonerror="false" />
-    <delete file="${package.dir}/${fullnamever}.zip" failonerror="false" />
- 
-    <mkdir dir="${dest}/contrib-lucene-libs-to-package"/>
-    <delete dir="${dest}/contrib-lucene-libs-to-package" includes="**/*"/>
-    <contrib-crawl target="add-lucene-libs-to-package"/>
- 
-    <tar destfile="${package.dir}/${fullnamever}.tgz" compression="gzip" longfile="gnu">
-      <tarfileset dir="../lucene"
-                  includes="CHANGES.txt"
-                  fullpath="${fullnamever}/LUCENE_CHANGES.txt" />
-      <tarfileset dir="."
-                  prefix="${fullnamever}"
-                  includes="LICENSE.txt NOTICE.txt CHANGES.txt README.md site/SYSTEM_REQUIREMENTS.md
-                            bin/** server/** example/** contrib/**/lib/** contrib/**/conf/** contrib/**/README.md
-                            licenses/**"
-                  excludes="licenses/README.committers.txt **/data/ **/logs/* 
-                            **/classes/ **/*.sh **/ivy.xml **/build.xml
-                            **/bin/ **/*.iml **/*.ipr **/*.iws **/pom.xml 
-                            **/*pom.xml.template server/etc/test/ contrib/**/src/" />
-      <tarfileset dir="${dest}/contrib-lucene-libs-to-package"
-                  prefix="${fullnamever}"
-                  includes="**" />
-      <tarfileset dir="."
-                  filemode="755"
-                  prefix="${fullnamever}"
-                  includes="bin/** server/**/*.sh example/**/*.sh example/**/bin/ contrib/**/bin/**"
-                  excludes="server/etc/test/**" />
-      <tarfileset dir="."
-                  prefix="${fullnamever}"
-                  includes="dist/*.jar
-                            dist/solrj-lib/*
-                            dist/test-framework/**"
-                  excludes="**/*.tgz **/*.zip **/*.md5 **/*src*.jar **/*docs*.jar **/*.sha1 **/*.sha512" />
-      <tarfileset dir="${javadoc-online.dir}"
-                  prefix="${fullnamever}/docs" />
-    </tar>
-    <make-checksums file="${package.dir}/${fullnamever}.tgz"/>
- 
-    <untar compression="gzip" src="${package.dir}/${fullnamever}.tgz" dest="${dest}"/>
- 
-    <!--
-        This is a list of text file patterns to convert to CRLF line-ending style.
-        Shell scripts and files included in shell scripts should not be converted.
-        NB: The line-ending conversion process will mangle non-UTF8-encoded files.
-       -->
-    <fixcrlf srcdir="${dest}/${fullnamever}"
-             encoding="UTF-8"
-             eol="crlf"
-             includes="**/*.alg **/*.cfg **/*.cgi **/*.cpp **/*.css **/*.csv **/*.dtd
-                        **/*.erb **/*.fcgi **/.htaccess **/*.htm **/*.html **/*.incl
-                        **/*.java **/*.javacc **/*.jflex **/*.jflex-macro **/*.jj
-                        **/*.js **/*.json **/*.jsp **/*LICENSE **/package-list **/*.pl
-                        **/*.pom **/*pom.xml.template **/*.properties **/*.py
-                        **/*.rake **/Rakefile **/*.rb **/*.rbbi **/README* **/*.rhtml
-                        **/*.rslp **/*.rxml **/*.script **/*.svg **/*.tsv **/*.txt
-                        **/UPGRADING **/USAGE **/*.uxf **/*.vm **/*.xcat **/*.xml
-                        **/*.xsl **/*.xslt **/*.yml"
-             excludes="**/stopwordsWrongEncoding.txt **/gb18030-example.xml"
-        />
- 
-    <zip destfile="${package.dir}/${fullnamever}.zip">
-      <zipfileset dir="${dest}/${fullnamever}"
-                  prefix="${fullnamever}"
-                  excludes="**/*.sh **/bin/ src/scripts/" />
-      <zipfileset dir="${dest}/${fullnamever}"
-                  prefix="${fullnamever}"
-                  includes="**/*.sh **/bin/ src/scripts/"
-                  filemode="755" />
-    </zip>
-    <make-checksums file="${package.dir}/${fullnamever}.zip"/>
-  </target>
-
-  <target name="changes-to-html" depends="define-lucene-javadoc-url">
-    <build-changes changes.product="solr"/>
-  </target>
- 
-  <target name="sign-artifacts">
-    <sign-artifacts-macro artifacts.dir="${package.dir}"/>
-  </target>
- 
-  <target name="resolve" depends="resolve-example,resolve-server">
-     <sequential>
-     <ant dir="core" target="resolve" inheritall="false">
-       <propertyset refid="uptodate.and.compiled.properties"/>
-     </ant>
-     <ant dir="solrj" target="resolve" inheritall="false">
-        <propertyset refid="uptodate.and.compiled.properties"/>
-     </ant>
-     <ant dir="test-framework" target="resolve" inheritall="false">
-       <propertyset refid="uptodate.and.compiled.properties"/>
-     </ant>
-     <ant dir="server" target="resolve" inheritall="false">
-       <propertyset refid="uptodate.and.compiled.properties"/>
-     </ant>
-     <ant dir="solr-ref-guide" target="resolve" inheritall="false">
-       <propertyset refid="uptodate.and.compiled.properties"/>
-     </ant>
-     <contrib-crawl target="resolve"/>
-    </sequential>
-  </target>
-
-  <target name="documentation-lint" depends="-ecj-javadoc-lint,-documentation-lint-unsupported" if="documentation-lint.supported"
-          description="Validates the generated documentation (HTML errors, broken links,...)">
-    <!-- we use antcall here, otherwise ANT will run all dependent targets: -->
-    <antcall target="-documentation-lint"/>
-  </target>
-
-  <!-- TODO: does solr have any other docs we should check? -->
-  <!-- TODO: also integrate checkJavaDocs.py, which does more checks -->
-  <target name="-documentation-lint" depends="documentation">
-    <echo message="Checking for broken links..."/>
-    <check-broken-links dir="${javadoc.dir}"/>
-    <echo message="Checking for malformed docs..."/>
-    <!-- TODO: add missing docs for all classes and bump this to level=class -->
-    <check-missing-javadocs dir="${javadoc.dir}" level="package"/>
-  </target>
-
-  <target name="-ecj-javadoc-lint" depends="compile,compile-test,jar-test-framework,-ecj-javadoc-lint-unsupported,-ecj-resolve" if="ecj-javadoc-lint.supported">
-    <subant target="-ecj-javadoc-lint" failonerror="true" inheritall="false">
-      <propertyset refid="uptodate.and.compiled.properties"/>
-      <fileset dir="core" includes="build.xml"/>
-      <fileset dir="solrj" includes="build.xml"/>
-      <fileset dir="test-framework" includes="build.xml"/>
-    </subant>
-    <contrib-crawl target="-ecj-javadoc-lint"/>
-  </target>
-
-  <!-- define-lucene-javadoc-url is *not* a useless dependencies. Do not remove! -->
-  <target name="-dist-maven" depends="install-maven-tasks,define-lucene-javadoc-url">
-    <sequential>
-      <m2-deploy pom.xml="${filtered.pom.templates.dir}/solr/pom.xml"/>    <!-- Solr parent POM -->
-      <subant target="-dist-maven" inheritall="false" >
-        <propertyset refid="uptodate.and.compiled.properties"/>
-        <fileset dir="core" includes="build.xml"/>
-        <fileset dir="solrj" includes="build.xml"/>
-        <fileset dir="test-framework" includes="build.xml"/>
-        <fileset dir="webapp" includes="build.xml"/>
-      </subant>
-      <contrib-crawl target="-dist-maven"/>
-    </sequential>
-  </target>
-
-  <target name="-install-to-maven-local-repo" depends="install-maven-tasks">
-    <sequential>
-      <m2-install pom.xml="${filtered.pom.templates.dir}/solr/pom.xml"/>    <!-- Solr parent POM -->
-      <subant target="-install-to-maven-local-repo" inheritall="false">
-        <propertyset refid="uptodate.and.compiled.properties"/>
-        <fileset dir="core" includes="build.xml"/>
-        <fileset dir="solrj" includes="build.xml"/>
-        <fileset dir="test-framework" includes="build.xml"/>
-        <fileset dir="webapp" includes="build.xml"/>
-      </subant>
-      <contrib-crawl target="-install-to-maven-local-repo"/>
-    </sequential>
-  </target>
-
-  <target name="generate-maven-artifacts" depends="-unpack-solr-tgz">
-    <ant dir=".." target="resolve" inheritall="false"/>
-    <antcall target="-filter-pom-templates" inheritall="false"/>
-    <antcall target="-dist-maven" inheritall="false">
-      <propertyset refid="uptodate.and.compiled.properties"/>
-    </antcall>
-  </target>
- 
-  <target name="-validate-maven-dependencies" depends="compile-tools, install-maven-tasks, load-custom-tasks">
-    <sequential>
-      <subant target="-validate-maven-dependencies" failonerror="true" inheritall="false">
-        <propertyset refid="uptodate.and.compiled.properties"/>
-        <fileset dir="core" includes="build.xml"/>
-        <fileset dir="solrj" includes="build.xml"/>
-        <fileset dir="test-framework" includes="build.xml"/>
-        <fileset dir="webapp" includes="build.xml"/>
-      </subant>
-      <contrib-crawl target="-validate-maven-dependencies"/>
-    </sequential>
-  </target>
-   
-  <!-- ========================================================================= -->
-  <!-- ========================= COMMITTERS' HELPERS =========================== -->
-  <!-- ========================================================================= -->
-  
-  <property name="analysis-common.res.dir"  value="../lucene/analysis/common/src/resources/org/apache/lucene/analysis"/>
-  <property name="analysis-kuromoji.res.dir"  value="../lucene/analysis/kuromoji/src/resources/org/apache/lucene/analysis"/>
-  <property name="analysis.conf.dest" value="${example}/solr/collection1/conf/lang"/>
-
-  <target name="sync-analyzers"
-          description="Committers' Helper: synchronizes analysis resources (e.g. stoplists) to the example">
-    <!-- arabic -->
-    <copy verbose="true" file="${analysis-common.res.dir}/ar/stopwords.txt"
-                         tofile="${analysis.conf.dest}/stopwords_ar.txt"/>
-    <!-- bulgarian -->
-    <copy verbose="true" file="${analysis-common.res.dir}/bg/stopwords.txt"
-                         tofile="${analysis.conf.dest}/stopwords_bg.txt"/>
-    <!-- catalan -->
-    <copy verbose="true" file="${analysis-common.res.dir}/ca/stopwords.txt"
-                         tofile="${analysis.conf.dest}/stopwords_ca.txt"/>
-    <!-- kurdish -->
-    <copy verbose="true" file="${analysis-common.res.dir}/ckb/stopwords.txt"
-                         tofile="${analysis.conf.dest}/stopwords_ckb.txt"/>
-    <!-- czech -->
-    <copy verbose="true" file="${analysis-common.res.dir}/cz/stopwords.txt"
-                         tofile="${analysis.conf.dest}/stopwords_cz.txt"/>
-    <!-- danish -->
-    <copy verbose="true" file="${analysis-common.res.dir}/snowball/danish_stop.txt"
-                         tofile="${analysis.conf.dest}/stopwords_da.txt"/>
-    <!-- german -->
-    <copy verbose="true" file="${analysis-common.res.dir}/snowball/german_stop.txt"
-                         tofile="${analysis.conf.dest}/stopwords_de.txt"/>
-    <!-- greek -->
-    <copy verbose="true" file="${analysis-common.res.dir}/el/stopwords.txt"
-                         tofile="${analysis.conf.dest}/stopwords_el.txt"/>
-    <!-- spanish -->
-    <copy verbose="true" file="${analysis-common.res.dir}/snowball/spanish_stop.txt"
-                         tofile="${analysis.conf.dest}/stopwords_es.txt"/>
-    <!-- basque -->
-    <copy verbose="true" file="${analysis-common.res.dir}/eu/stopwords.txt"
-                         tofile="${analysis.conf.dest}/stopwords_eu.txt"/>
-    <!-- persian -->
-    <copy verbose="true" file="${analysis-common.res.dir}/fa/stopwords.txt"
-                         tofile="${analysis.conf.dest}/stopwords_fa.txt"/>
-    <!-- finnish -->
-    <copy verbose="true" file="${analysis-common.res.dir}/snowball/finnish_stop.txt"
-                         tofile="${analysis.conf.dest}/stopwords_fi.txt"/>
-    <!-- french -->
-    <copy verbose="true" file="${analysis-common.res.dir}/snowball/french_stop.txt"
-                         tofile="${analysis.conf.dest}/stopwords_fr.txt"/>
-        <!-- irish -->
-    <copy verbose="true" file="${analysis-common.res.dir}/ga/stopwords.txt"
-                         tofile="${analysis.conf.dest}/stopwords_ga.txt"/>
-    <!-- galician -->
-    <copy verbose="true" file="${analysis-common.res.dir}/gl/stopwords.txt"
-                         tofile="${analysis.conf.dest}/stopwords_gl.txt"/>
-    <!-- hindi -->
-    <copy verbose="true" file="${analysis-common.res.dir}/hi/stopwords.txt"
-                         tofile="${analysis.conf.dest}/stopwords_hi.txt"/>
-    <!-- hungarian -->
-    <copy verbose="true" file="${analysis-common.res.dir}/snowball/hungarian_stop.txt"
-                         tofile="${analysis.conf.dest}/stopwords_hu.txt"/>
-    <!-- armenian -->
-    <copy verbose="true" file="${analysis-common.res.dir}/hy/stopwords.txt"
-                         tofile="${analysis.conf.dest}/stopwords_hy.txt"/>
-    <!-- indonesian -->
-    <copy verbose="true" file="${analysis-common.res.dir}/id/stopwords.txt"
-                         tofile="${analysis.conf.dest}/stopwords_id.txt"/>
-    <!-- italian -->
-    <copy verbose="true" file="${analysis-common.res.dir}/snowball/italian_stop.txt"
-                         tofile="${analysis.conf.dest}/stopwords_it.txt"/>
-    <!-- japanese -->
-    <copy verbose="true" file="${analysis-kuromoji.res.dir}/ja/stopwords.txt"
-                         tofile="${analysis.conf.dest}/stopwords_ja.txt"/>
-    <copy verbose="true" file="${analysis-kuromoji.res.dir}/ja/stoptags.txt"
-                         tofile="${analysis.conf.dest}/stoptags_ja.txt"/>
-    <!-- latvian -->
-    <copy verbose="true" file="${analysis-common.res.dir}/lv/stopwords.txt"
-                         tofile="${analysis.conf.dest}/stopwords_lv.txt"/>
-    <!-- dutch -->
-    <copy verbose="true" file="${analysis-common.res.dir}/snowball/dutch_stop.txt"
-                         tofile="${analysis.conf.dest}/stopwords_nl.txt"/>
-    <!-- norwegian -->
-    <copy verbose="true" file="${analysis-common.res.dir}/snowball/norwegian_stop.txt"
-                         tofile="${analysis.conf.dest}/stopwords_no.txt"/>
-    <!-- portuguese -->
-    <copy verbose="true" file="${analysis-common.res.dir}/snowball/portuguese_stop.txt"
-                         tofile="${analysis.conf.dest}/stopwords_pt.txt"/>
-    <!-- romanian -->
-    <copy verbose="true" file="${analysis-common.res.dir}/ro/stopwords.txt"
-                         tofile="${analysis.conf.dest}/stopwords_ro.txt"/>
-    <!-- russian -->
-    <copy verbose="true" file="${analysis-common.res.dir}/snowball/russian_stop.txt"
-                         tofile="${analysis.conf.dest}/stopwords_ru.txt"/>
-    <!-- swedish -->
-    <copy verbose="true" file="${analysis-common.res.dir}/snowball/swedish_stop.txt"
-                         tofile="${analysis.conf.dest}/stopwords_sv.txt"/>
-    <!-- thai -->
-    <copy verbose="true" file="${analysis-common.res.dir}/th/stopwords.txt"
-                         tofile="${analysis.conf.dest}/stopwords_th.txt"/>
-    <!-- turkish -->
-    <copy verbose="true" file="${analysis-common.res.dir}/tr/stopwords.txt"
-                         tofile="${analysis.conf.dest}/stopwords_tr.txt"/>
-  </target>
-
-  <target name="jar-checksums" depends="resolve">
-    <jar-checksum-macro srcdir="${common-solr.dir}" dstdir="${common-solr.dir}/licenses"/>
-  </target>
-
-  <target name="-append-module-dependencies-properties">
-    <ant dir="core" target="-append-module-dependencies-properties" inheritAll="false"/>
-    <ant dir="solrj" target="-append-module-dependencies-properties" inheritAll="false"/>
-    <ant dir="test-framework" target="-append-module-dependencies-properties" inheritAll="false"/>
-    <contrib-crawl target="-append-module-dependencies-properties"/>
-  </target>
-
-</project>
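With solr/build.xml removed, here is a rough mapping from the most common Ant targets above to their Gradle counterparts (the `assemble`/`test`/`check`/`clean` tasks are standard Gradle lifecycle tasks, but the exact correspondence is an assumption; confirm with `./gradlew tasks`):

```
./gradlew assemble   # roughly `ant server` / `ant dist`: builds the runnable package
./gradlew test       # roughly `ant test`: runs the unit tests
./gradlew check      # roughly `ant precommit` / `ant validate`: validation plus tests
./gradlew clean      # roughly `ant clean`: removes build output
```
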
diff --git a/solr/common-build.xml b/solr/common-build.xml
deleted file mode 100644
index c76944d..0000000
--- a/solr/common-build.xml
+++ /dev/null
@@ -1,551 +0,0 @@
-<!--
- Licensed to the Apache Software Foundation (ASF) under one or more
- contributor license agreements.  See the NOTICE file distributed with
- this work for additional information regarding copyright ownership.
- The ASF licenses this file to You under the Apache License, Version 2.0
- (the "License"); you may not use this file except in compliance with
- the License.  You may obtain a copy of the License at
-
-     http://www.apache.org/licenses/LICENSE-2.0
-
- Unless required by applicable law or agreed to in writing, software
- distributed under the License is distributed on an "AS IS" BASIS,
- WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- See the License for the specific language governing permissions and
- limitations under the License.
--->
-
-<project name="common-solr" default="default" xmlns:rsel="antlib:org.apache.tools.ant.types.resources.selectors">
-  <description>
-    This file is designed for importing into a main build file, and not intended
-    for standalone use.
-  </description>
-
-  <dirname file="${ant.file.common-solr}" property="common-solr.dir"/>
-
-  <property name="Name" value="Solr" />
-  
-  <!-- solr uses Java 11 -->
-  <property name="javac.release" value="11"/>
-  <property name="javac.args" value="-Xlint:-deprecation"/>
-  <property name="javac.profile.args" value=""/>
-
-  <property name="dest" location="${common-solr.dir}/build" />
-  <property name="build.dir" location="${dest}/${ant.project.name}"/>
-  <property name="jacoco.report.dir" location="${dest}/jacoco"/>
-  <property name="dist" location="${common-solr.dir}/dist"/>
-  <property name="package.dir" location="${common-solr.dir}/package"/>
-  <property name="maven.dist.dir" location="${package.dir}/maven"/>
-  <property name="lucene-libs" location="${dest}/lucene-libs" />
-  <property name="tests.userdir" location="src/test-files"/>
-  <property name="tests.policy" location="${common-solr.dir}/server/etc/security.policy"/>
-  <property name="server.dir" location="${common-solr.dir}/server" />
-  <property name="example" location="${common-solr.dir}/example" />
-  <property name="javadoc.dir" location="${dest}/docs"/>
-  <property name="javadoc-online.dir" location="${dest}/docs-online"/>
-  <property name="tests.cleanthreads.sysprop" value="perClass"/>
-
-  <property name="changes.target.dir" location="${dest}/docs/changes"/>
-  <property name="license.dir" location="${common-solr.dir}/licenses"/>
-
-  <property name="solr.tgz.unpack.dir" location="${common-solr.dir}/build/solr.tgz.unpacked"/>
-  <property name="dist.jar.dir.prefix" value="${solr.tgz.unpack.dir}/solr"/>
-  <property name="dist.jar.dir.suffix" value="dist"/>
-
-  <import file="${common-solr.dir}/../lucene/module-build.xml"/>
-
-  <property name="solr.tgz.file" location="${common-solr.dir}/package/solr-${version}.tgz"/>
-  <available file="${solr.tgz.file}" property="solr.tgz.exists"/>
-  <available type="dir" file="${solr.tgz.unpack.dir}" property="solr.tgz.unpack.dir.exists"/>
-  <target name="-ensure-solr-tgz-exists" unless="solr.tgz.exists">
-    <ant dir="${common-solr.dir}" target="create-package" inheritall="false">
-      <propertyset refid="uptodate.and.compiled.properties"/>
-    </ant>
-  </target>
-  <target name="-unpack-solr-tgz" unless="${solr.tgz.unpack.dir.exists}">
-    <antcall target="-ensure-solr-tgz-exists">
-      <propertyset refid="uptodate.and.compiled.properties"/>
-    </antcall>
-    <mkdir dir="${solr.tgz.unpack.dir}"/>
-    <untar compression="gzip" src="${solr.tgz.file}" dest="${solr.tgz.unpack.dir}">
-      <patternset refid="patternset.lucene.solr.jars"/>
-    </untar>
-  </target>
-
-  <!-- backwards compatibility with existing targets/tasks; TODO: remove this! -->
-  <property name="fullnamever" value="${final.name}"/>
-
-  <path id="additional.dependencies">
-    <fileset dir="${common-solr.dir}/core/lib" excludes="${common.classpath.excludes}"/>
-    <fileset dir="${common-solr.dir}/solrj/lib" excludes="${common.classpath.excludes}"/>
-    <fileset dir="${common-solr.dir}/server/lib" excludes="${common.classpath.excludes}"/>
-    <fileset dir="${common-solr.dir}/example/example-DIH/solr/db/lib" excludes="${common.classpath.excludes}"/>
-    <fileset dir="lib" excludes="${common.classpath.excludes}" erroronmissingdir="false"/>
-  </path>
-
-  <path id="solr.lucene.libs">
-    <!-- List of jars that will be used as the foundation for both
-         the base classpath, as well as copied into the lucene-libs dir
-         in the release.
-    -->
-    <!-- NOTE: lucene-core is explicitly not included because of the
-         base.classpath (compilation & tests are done directly against
-         the class files w/o needing to build the jar)
-    -->
-    <pathelement location="${analyzers-common.jar}"/>
-    <pathelement location="${analyzers-kuromoji.jar}"/>
-    <pathelement location="${analyzers-nori.jar}"/>
-    <pathelement location="${analyzers-phonetic.jar}"/>
-    <pathelement location="${codecs.jar}"/>
-    <pathelement location="${backward-codecs.jar}"/>
-    <pathelement location="${highlighter.jar}"/>
-    <pathelement location="${memory.jar}"/>
-    <pathelement location="${misc.jar}"/>
-    <pathelement location="${spatial-extras.jar}"/>
-    <pathelement location="${spatial3d.jar}"/>
-    <pathelement location="${expressions.jar}"/>
-    <pathelement location="${suggest.jar}"/>
-    <pathelement location="${grouping.jar}"/>
-    <pathelement location="${queries.jar}"/>
-    <pathelement location="${queryparser.jar}"/>
-    <pathelement location="${join.jar}"/>
-    <pathelement location="${sandbox.jar}"/>
-    <pathelement location="${classification.jar}"/>
-  </path>
-
-  <path id="solr.base.classpath">
-    <pathelement location="${common-solr.dir}/build/solr-solrj/classes/java"/>
-    <pathelement location="${common-solr.dir}/build/solr-core/classes/java"/>
-    <path refid="solr.lucene.libs" />
-    <path refid="additional.dependencies"/>
-    <path refid="base.classpath"/>
-  </path>
-
-  <path id="classpath" refid="solr.base.classpath"/>
-
-  <path id="solr.test.base.classpath">
-    <pathelement path="${common-solr.dir}/build/solr-test-framework/classes/java"/>
-    <fileset dir="${common-solr.dir}/test-framework/lib">
-      <include name="*.jar"/>
-      <exclude name="junit-*.jar" />
-      <exclude name="randomizedtesting-runner-*.jar" />
-      <exclude name="ant*.jar" />
-    </fileset>
-    <pathelement path="src/test-files"/>
-    <path refid="test.base.classpath"/>
-  </path>
-
-  <path id="test.classpath" refid="solr.test.base.classpath"/>
-
-  <macrodef name="solr-contrib-uptodate">
-    <attribute name="name"/>
-    <attribute name="property" default="@{name}.uptodate"/>
-    <attribute name="classpath.property" default="@{name}.jar"/>
-    <!-- set jarfile only if the target jar file has no generic name -->
-    <attribute name="jarfile" default="${common-solr.dir}/build/contrib/solr-@{name}/solr-@{name}-${version}.jar"/>
-    <sequential>
-      <!--<echo message="Checking '@{jarfile}' against source folder '${common.dir}/contrib/@{name}/src/java'"/>-->
-      <property name="@{classpath.property}" location="@{jarfile}"/>
-      <uptodate property="@{property}" targetfile="@{jarfile}">
-        <srcfiles dir="${common-solr.dir}/contrib/@{name}/src/java" includes="**/*.java"/>
-      </uptodate>
-    </sequential>
-  </macrodef>
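
For context, this macro hand-rolls the up-to-date check that Gradle's
incremental build performs natively. A minimal sketch in the Gradle Groovy
DSL (not part of this patch; the task and archive names are hypothetical):

    // Gradle tracks declared inputs/outputs itself, so the task is skipped
    // automatically when its sources have not changed.
    tasks.register('contribJar', Jar) {
        archiveBaseName = 'solr-mycontrib'          // hypothetical module name
        from sourceSets.main.output                 // tracked as task inputs
        destinationDirectory = layout.buildDirectory.dir('libs')
    }
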
-
-  <target name="validate" depends="compile-tools">
-  </target>
-
-  <target name="init-dist" depends="resolve-groovy">
-    <mkdir dir="${build.dir}"/>
-    <mkdir dir="${package.dir}"/>
-    <mkdir dir="${dist}"/>
-    <mkdir dir="${maven.dist.dir}"/>
-  </target>
-
-  <target name="prep-lucene-jars"
-          depends="resolve-groovy,
-                   jar-lucene-core, jar-backward-codecs, jar-analyzers-phonetic, jar-analyzers-kuromoji, jar-analyzers-nori, jar-codecs, jar-expressions, jar-suggest, jar-highlighter, jar-memory,
-                   jar-misc, jar-spatial-extras, jar-spatial3d, jar-grouping, jar-queries, jar-queryparser, jar-join, jar-sandbox, jar-classification">
-      <property name="solr.deps.compiled" value="true"/>
-  </target>
-
-  <target name="lucene-jars-to-solr"
-          depends="-lucene-jars-to-solr-not-for-package,-lucene-jars-to-solr-package"/>
-
-  <target name="-lucene-jars-to-solr-not-for-package" unless="called.from.create-package">
-    <sequential>
-      <antcall target="prep-lucene-jars" inheritall="true"/>
-      <property name="solr.deps.compiled" value="true"/>
-      <copy todir="${lucene-libs}" preservelastmodified="true" flatten="true" failonerror="true" overwrite="true">
-        <path refid="solr.lucene.libs" />
-        <!-- NOTE: lucene-core is not part of "solr.lucene.libs" (see note above), so it is copied explicitly here. -->
-        <fileset file="${lucene-core.jar}" />
-      </copy>
-    </sequential>
-  </target>
-
-  <target name="-lucene-jars-to-solr-package" if="called.from.create-package">
-    <sequential>
-      <antcall target="-unpack-lucene-tgz" inheritall="true"/>
-      <pathconvert property="relative.solr.lucene.libs" pathsep=",">
-        <path refid="solr.lucene.libs"/>
-        <fileset file="${lucene-core.jar}"/>
-        <globmapper from="${common.build.dir}/*" to="*" handledirsep="true"/>
-      </pathconvert>
-      <mkdir dir="${lucene-libs}"/>
-      <copy todir="${lucene-libs}" preservelastmodified="true" flatten="true" failonerror="true" overwrite="true">
-        <fileset dir="${lucene.tgz.unpack.dir}/lucene-${version}" includes="${relative.solr.lucene.libs}"/>
-      </copy>
-    </sequential>
-  </target>
-
-  <!-- Shared core/solrj/test-framework/contrib targets -->
-
-  <macrodef name="solr-jarify" description="Builds a Solr JAR file">
-    <attribute name="basedir" default="${build.dir}/classes/java"/>
-    <attribute name="destfile" default="${build.dir}/${final.name}.jar"/>
-    <attribute name="title" default="Apache Solr Search Server: ${ant.project.name}"/>
-    <attribute name="excludes" default="**/pom.xml,**/*.iml"/>
-    <attribute name="metainf.source.dir" default="${common-solr.dir}"/>
-    <attribute name="implementation.title" default="org.apache.solr"/>
-    <attribute name="manifest.file" default="${manifest.file}"/>
-    <element name="solr-jarify-filesets" optional="true"/>
-    <element name="solr-jarify-additional-manifest-attributes" optional="true"/>
-    <sequential>
-      <jarify basedir="@{basedir}" destfile="@{destfile}"
-              title="@{title}" excludes="@{excludes}"
-              metainf.source.dir="@{metainf.source.dir}"
-              implementation.title="@{implementation.title}"
-              manifest.file="@{manifest.file}">
-        <filesets>
-          <solr-jarify-filesets />
-        </filesets>
-        <jarify-additional-manifest-attributes>
-          <solr-jarify-additional-manifest-attributes />
-        </jarify-additional-manifest-attributes>
-      </jarify>
-    </sequential>
-  </macrodef>
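
The macro above exists mainly to stamp manifest metadata. Assuming the Gradle
build carries the same information, a hedged equivalent (strings copied from
the macro defaults; the attribute mapping is a guess) might look like:

    tasks.named('jar', Jar) {
        manifest {
            attributes(
                'Implementation-Title': 'org.apache.solr',
                'Specification-Title': "Apache Solr Search Server: ${project.name}"
            )
        }
    }
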
-
-  <target name="jar-core" depends="compile-core">
-    <solr-jarify/>
-  </target>
-
-  <target name="compile-core" depends="prep-lucene-jars,resolve-example,resolve-server,common.compile-core"/>
-  <target name="compile-test" depends="compile-solr-test-framework,common.compile-test"/>
-
-  <target name="dist" depends="jar-core">
-    <copy file="${build.dir}/${fullnamever}.jar" todir="${dist}"/>
-  </target>
-
-  <property name="lucenedocs" location="${common.dir}/build/docs"/>
-
-  <!-- dependency to ensure all lucene javadocs are present -->
-  <target name="lucene-javadocs" depends="javadocs-lucene-core,javadocs-analyzers-common,javadocs-analyzers-icu,javadocs-analyzers-kuromoji,javadocs-analyzers-nori,javadocs-analyzers-phonetic,javadocs-analyzers-smartcn,javadocs-analyzers-morfologik,javadocs-analyzers-stempel,javadocs-backward-codecs,javadocs-codecs,javadocs-expressions,javadocs-suggest,javadocs-grouping,javadocs-queries,javadocs-queryparser,javadocs-highlighter,javadocs-memory,javadocs-misc,javadocs-spatial-extras,javadocs-join,javadocs-test-framework"/>
-
-  <!-- create javadocs for the current module -->
-  <target name="javadocs" depends="compile-core,define-lucene-javadoc-url,lucene-javadocs,javadocs-solr-core,check-javadocs-uptodate" unless="javadocs-uptodate-${name}">
-     <sequential>
-      <mkdir dir="${javadoc.dir}/${name}"/>
-      <solr-invoke-javadoc>
-        <solrsources>
-          <packageset dir="${src.dir}"/>
-        </solrsources>
-        <links>
-          <link href="../solr-solrj"/>
-          <link href="../solr-core"/>
-        </links>
-      </solr-invoke-javadoc>
-      <solr-jarify basedir="${javadoc.dir}/${name}" destfile="${build.dir}/${final.name}-javadoc.jar"/>
-     </sequential>
-  </target>
-
-  <target name="check-solr-core-javadocs-uptodate" unless="solr-core-javadocs.uptodate">
-    <uptodate property="solr-core-javadocs.uptodate" targetfile="${build.dir}/solr-core/solr-core-${version}-javadoc.jar">
-       <srcfiles dir="${common-solr.dir}/core/src/java" includes="**/*.java"/>
-    </uptodate>
-  </target>
-
-  <target name="check-solrj-javadocs-uptodate" unless="solrj-javadocs.uptodate">
-    <uptodate property="solrj-javadocs.uptodate" targetfile="${build.dir}/solr-solrj/solr-solrj-${version}-javadoc.jar">
-       <srcfiles dir="${common-solr.dir}/solrj/src/java" includes="**/*.java"/>
-    </uptodate>
-  </target>
-
-  <target name="javadocs-solr-core" depends="check-solr-core-javadocs-uptodate" unless="solr-core-javadocs.uptodate">
-    <ant dir="${common-solr.dir}/core" target="javadocs" inheritAll="false">
-      <propertyset refid="uptodate.and.compiled.properties"/>
-    </ant>
-    <property name="solr-core-javadocs.uptodate" value="true"/>
-  </target>
-
-  <target name="javadocs-solrj" depends="check-solrj-javadocs-uptodate" unless="solrj-javadocs.uptodate">
-    <ant dir="${common-solr.dir}/solrj" target="javadocs" inheritAll="false">
-      <propertyset refid="uptodate.and.compiled.properties"/>
-    </ant>
-    <property name="solrj-javadocs.uptodate" value="true"/>
-  </target>
-
-  <!-- macro to create solr javadocs with links to lucene. make sure calling task depends on lucene-javadocs -->
-  <macrodef name="solr-invoke-javadoc">
-      <element name="solrsources" optional="yes"/>
-      <element name="links" optional="yes"/>
-      <attribute name="destdir" default="${javadoc.dir}/${name}"/>
-      <attribute name="title" default="${Name} ${version} ${name} API"/>
-      <attribute name="overview" default="${src.dir}/overview.html"/>
-    <sequential>
-      <mkdir dir="@{destdir}"/>
-      <invoke-javadoc destdir="@{destdir}" title="@{title}" overview="@{overview}">
-        <sources>
-          <solrsources/>
-          <link offline="true" href="${lucene.javadoc.url}core" packagelistloc="${lucenedocs}/core"/>
-          <link offline="true" href="${lucene.javadoc.url}analyzers-common" packagelistloc="${lucenedocs}/analyzers-common"/>
-          <link offline="true" href="${lucene.javadoc.url}analyzers-icu" packagelistloc="${lucenedocs}/analyzers-icu"/>
-          <link offline="true" href="${lucene.javadoc.url}analyzers-kuromoji" packagelistloc="${lucenedocs}/analyzers-kuromoji"/>
-          <link offline="true" href="${lucene.javadoc.url}analyzers-nori" packagelistloc="${lucenedocs}/analyzers-nori"/>
-          <link offline="true" href="${lucene.javadoc.url}analyzers-morfologik" packagelistloc="${lucenedocs}/analyzers-morfologik"/>
-          <link offline="true" href="${lucene.javadoc.url}analyzers-phonetic" packagelistloc="${lucenedocs}/analyzers-phonetic"/>
-          <link offline="true" href="${lucene.javadoc.url}analyzers-smartcn" packagelistloc="${lucenedocs}/analyzers-smartcn"/>
-          <link offline="true" href="${lucene.javadoc.url}analyzers-stempel" packagelistloc="${lucenedocs}/analyzers-stempel"/>
-          <link offline="true" href="${lucene.javadoc.url}backward-codecs" packagelistloc="${lucenedocs}/backward-codecs"/>
-          <link offline="true" href="${lucene.javadoc.url}codecs" packagelistloc="${lucenedocs}/codecs"/>
-          <link offline="true" href="${lucene.javadoc.url}expressions" packagelistloc="${lucenedocs}/expressions"/>
-          <link offline="true" href="${lucene.javadoc.url}suggest" packagelistloc="${lucenedocs}/suggest"/>
-          <link offline="true" href="${lucene.javadoc.url}grouping" packagelistloc="${lucenedocs}/grouping"/>
-          <link offline="true" href="${lucene.javadoc.url}join" packagelistloc="${lucenedocs}/join"/>
-          <link offline="true" href="${lucene.javadoc.url}queries" packagelistloc="${lucenedocs}/queries"/>
-          <link offline="true" href="${lucene.javadoc.url}queryparser" packagelistloc="${lucenedocs}/queryparser"/>
-          <link offline="true" href="${lucene.javadoc.url}highlighter" packagelistloc="${lucenedocs}/highlighter"/>
-          <link offline="true" href="${lucene.javadoc.url}memory" packagelistloc="${lucenedocs}/memory"/>
-          <link offline="true" href="${lucene.javadoc.url}misc" packagelistloc="${lucenedocs}/misc"/>
-          <link offline="true" href="${lucene.javadoc.url}classification" packagelistloc="${lucenedocs}/classification"/>
-          <link offline="true" href="${lucene.javadoc.url}spatial-extras" packagelistloc="${lucenedocs}/spatial-extras"/>
-          <links/>
-          <link href=""/>
-        </sources>
-      </invoke-javadoc>
-    </sequential>
-  </macrodef>
-
-  <target name="define-lucene-javadoc-url" depends="resolve-groovy" unless="lucene.javadoc.url">
-    <property name="useLocalJavadocUrl" value=""/>
-    <groovy><![CDATA[
-      String url, version = properties['version'];
-      String useLocalJavadocUrl = properties['useLocalJavadocUrl'];
-      if (version != properties['version.base'] || Boolean.parseBoolean(useLocalJavadocUrl)) {
-        url = new File(properties['common.dir'], 'build' + File.separator + 'docs').toURI().toASCIIString();
-        if (!(url =~ /\/$/)) url += '/';
-      } else {
-        version = version.replace('.', '_');
-        url = 'https://lucene.apache.org/core/' + version + '/';
-      }
-      task.log('Using the following URL to refer to Lucene Javadocs: ' + url);
-      properties['lucene.javadoc.url'] = url;
-    ]]></groovy>
-  </target>
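
Concretely, this target points Javadoc links at a local build/docs URI for
snapshot builds (version != version.base) and at the release site otherwise.
A standalone Groovy illustration of the release-side derivation:

    def version = '9.0.0'    // example release version
    def url = 'https://lucene.apache.org/core/' + version.replace('.', '_') + '/'
    assert url == 'https://lucene.apache.org/core/9_0_0/'
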
-
-  <target name="define-solr-javadoc-url" depends="resolve-groovy" unless="solr.javadoc.url">
-    <groovy><![CDATA[
-      String url, version = properties['version'];
-      if (version != properties['version.base']) {
-        url = '';
-        task.log('Disabled Solr Javadocs online URL for packaging (custom build / SNAPSHOT version).');
-      } else {
-        version = version.replace('.', '_');
-        url = 'https://lucene.apache.org/solr/' + version + '/';
-        task.log('Using the following URL to refer to Solr Javadocs: ' + url);
-      }
-      properties['solr.javadoc.url'] = url;
-    ]]></groovy>
-  </target>
-
-  <target name="jar-src">
-    <sequential>
-      <mkdir dir="${build.dir}"/>
-      <solr-jarify basedir="${src.dir}" destfile="${build.dir}/${final.name}-src.jar">
-        <solr-jarify-filesets>
-          <fileset dir="${resources.dir}" erroronmissingdir="no"/>
-        </solr-jarify-filesets>
-      </solr-jarify>
-    </sequential>
-  </target>
-
-  <target name="-validate-maven-dependencies" depends="-validate-maven-dependencies.init">
-    <m2-validate-dependencies pom.xml="${maven.pom.xml}" licenseDirectory="${license.dir}">
-      <additional-filters>
-        <replaceregex pattern="jetty([^/]+)$" replace="jetty" flags="gi" />
-        <replaceregex pattern="slf4j-([^/]+)$" replace="slf4j" flags="gi" />
-        <replaceregex pattern="(bcmail|bcprov)-([^/]+)$" replace="\1" flags="gi" />
-      </additional-filters>
-      <excludes>
-        <rsel:or>
-          <rsel:name name="**/lucene-*-${maven.version.glob}.jar" handledirsep="true"/>
-          <rsel:name name="**/solr-*-${maven.version.glob}.jar" handledirsep="true"/>
-          <!-- TODO: figure out what is going on here with servlet-apis -->
-          <rsel:name name="**/*servlet*.jar" handledirsep="true"/>
-        </rsel:or>
-      </excludes>
-    </m2-validate-dependencies>
-  </target>
-
-  <!-- Solr core targets -->
-  <target name="compile-solr-core" description="Compile Solr core." unless="solr.core.compiled">
-    <ant dir="${common-solr.dir}/core" target="compile-core" inheritAll="false">
-      <propertyset refid="uptodate.and.compiled.properties"/>
-    </ant>
-    <property name="solr.core.compiled" value="true"/>
-  </target>
-  <target name="compile-test-solr-core" description="Compile solr core tests">
-    <ant dir="${common-solr.dir}/core" target="compile-test" inheritAll="false">
-      <propertyset refid="uptodate.and.compiled.properties"/>
-    </ant>
-    <property name="solr.core.compiled" value="true"/>
-  </target>
-  <target name="dist-core" depends="init-dist"
-          description="Creates the Solr JAR Distribution file.">
-    <ant dir="${common-solr.dir}/core" target="dist" inheritall="false">
-      <propertyset refid="uptodate.and.compiled.properties"/>
-    </ant>
-  </target>
-
-  <!-- Solrj targets -->
-  <target name="compile-solrj" description="Compile the java client." unless="solrj.compiled">
-    <ant dir="${common-solr.dir}/solrj" target="compile-core" inheritAll="false">
-      <propertyset refid="uptodate.and.compiled.properties"/>
-    </ant>
-    <property name="solrj.compiled" value="true"/>
-  </target>
-  <target name="compile-test-solrj" description="Compile java client tests">
-    <ant dir="${common-solr.dir}/solrj" target="compile-test" inheritAll="false">
-      <propertyset refid="uptodate.and.compiled.properties"/>
-    </ant>
-    <property name="solrj.compiled" value="true"/>
-  </target>
-  <target name="dist-solrj" depends="init-dist"
-          description="Creates the Solr-J JAR Distribution file.">
-    <ant dir="${common-solr.dir}/solrj" target="dist" inheritall="false">
-      <propertyset refid="uptodate.and.compiled.properties"/>
-    </ant>
-  </target>
-  <target name="jar-solrj" description="Jar Solr-J">
-    <ant dir="${common-solr.dir}/solrj" target="jar-core" inheritAll="false">
-      <propertyset refid="uptodate.and.compiled.properties"/>
-    </ant>
-  </target>
-
-  <!-- Solr test-framework targets -->
-  <target name="compile-solr-test-framework" description="Compile the Solr test-framework" unless="solr.test.framework.compiled">
-    <ant dir="${common-solr.dir}/test-framework" target="compile-core" inheritAll="false">
-      <propertyset refid="uptodate.and.compiled.properties"/>
-    </ant>
-    <property name="solr.core.compiled" value="true"/>
-    <property name="solr.test.framework.compiled" value="true"/>
-  </target>
-
-  <target name="jar-solr-test-framework" depends="compile-solr-test-framework">
-    <ant dir="${common-solr.dir}/test-framework" target="jar-core" inheritAll="false">
-      <propertyset refid="uptodate.and.compiled.properties"/>
-    </ant>
-  </target>
-
-  <!-- resolve dependencies in the example (relied upon by compile/tests) -->
-  <target name="resolve-example" unless="example.libs.uptodate">
-    <ant dir="${common-solr.dir}/example/example-DIH" target="resolve" inheritAll="false">
-      <propertyset refid="uptodate.and.compiled.properties"/>
-    </ant>
-    <property name="example.libs.uptodate" value="true"/>
-  </target>
-
-  <!-- resolve dependencies in the server directory (relied upon by compile/tests) -->
-  <target name="resolve-server" unless="server.libs.uptodate">
-    <ant dir="${common-solr.dir}/server" target="resolve" inheritAll="false">
-      <propertyset refid="uptodate.and.compiled.properties"/>
-    </ant>
-    <property name="server.libs.uptodate" value="true"/>
-  </target>
-
-  <macrodef name="contrib-crawl">
-    <attribute name="target" default=""/>
-    <attribute name="failonerror" default="true"/>
-    <sequential>
-      <subant target="@{target}" failonerror="@{failonerror}" inheritall="false">
-        <propertyset refid="uptodate.and.compiled.properties"/>
-        <fileset dir="." includes="contrib/*/build.xml"/>
-      </subant>
-    </sequential>
-  </macrodef>
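
contrib-crawl fans a single target out over every contrib/*/build.xml. A
hedged sketch of the Gradle analogue (task name hypothetical), wiring a
lifecycle task to the matching task in each contrib subproject:

    tasks.register('compileContrib') {
        dependsOn subprojects.findAll { it.path.startsWith(':solr:contrib:') }
                             .collect { "${it.path}:compileJava" }
    }
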
-
-  <target name="-compile-test-lucene-analysis">
-    <ant dir="${common.dir}/analysis" target="compile-test" inheritAll="false">
-      <propertyset refid="uptodate.and.compiled.properties"/>
-    </ant>
-  </target>
-
-  <target name="-compile-test-lucene-queryparser">
-    <ant dir="${common.dir}/queryparser" target="compile-test" inheritAll="false">
-      <propertyset refid="uptodate.and.compiled.properties"/>
-    </ant>
-  </target>
-
-  <target name="-compile-test-lucene-backward-codecs">
-    <ant dir="${common.dir}/backward-codecs" target="compile-test" inheritAll="false">
-      <propertyset refid="uptodate.and.compiled.properties"/>
-    </ant>
-  </target>
-
-  <!-- Solr contrib targets -->
-  <target name="-compile-analysis-extras">
-    <ant dir="${common-solr.dir}/contrib/analysis-extras" target="compile" inheritAll="false">
-      <propertyset refid="uptodate.and.compiled.properties"/>
-    </ant>
-  </target>
-
-  <target name="compile-contrib" description="Compile contrib modules">
-    <contrib-crawl target="compile-core"/>
-  </target>
-
-  <target name="compile-test-contrib" description="Compile contrib modules' tests">
-    <contrib-crawl target="compile-test"/>
-  </target>
-
-  <target name="javadocs-contrib" description="Compile contrib modules">
-    <contrib-crawl target="javadocs"/>
-  </target>
-
-  <target name="jar-contrib" description="Jar contrib modules">
-    <contrib-crawl target="jar-core"/>
-  </target>
-
-  <target name="contribs-add-to-webapp">
-    <mkdir dir="${dest}/web"/>
-    <delete dir="${dest}/web" includes="**/*" failonerror="false"/>
-    <contrib-crawl target="add-to-webapp"/>
-  </target>
-
-  <!-- Forbidden API Task, customizations for Solr -->
-  <target name="-check-forbidden-all" depends="-init-forbidden-apis,compile-core,compile-test">
-    <property prefix="ivyversions" file="${common.dir}/ivy-versions.properties"/><!-- for commons-io version -->
-    <forbidden-apis suppressAnnotation="**.SuppressForbidden" classpathref="forbidden-apis.allclasses.classpath" targetVersion="${javac.release}">
-      <signatures>
-        <bundled name="jdk-unsafe"/>
-        <bundled name="jdk-deprecated"/>
-        <bundled name="jdk-non-portable"/>
-        <bundled name="jdk-reflection"/>
-        <bundled name="commons-io-unsafe-${ivyversions./commons-io/commons-io}"/>
-        <fileset dir="${common.dir}/tools/forbiddenApis">
-          <include name="base.txt" />
-          <include name="servlet-api.txt" />
-          <include name="solr.txt" />
-        </fileset>
-      </signatures>
-      <fileset dir="${build.dir}/classes/java" excludes="${forbidden-base-excludes}"/>
-      <fileset dir="${build.dir}/classes/test" excludes="${forbidden-tests-excludes}" erroronmissingdir="false"/>
-    </forbidden-apis>
-  </target>
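
The Gradle side covers the same checks with the forbidden-apis plugin; a
hedged sketch (the plugin version is an example, property names per the
de.thetaphi.forbiddenapis plugin, signature file path copied from the
target above):

    plugins {
        id 'de.thetaphi.forbiddenapis' version '3.0'
    }
    forbiddenApis {
        suppressAnnotations = ['**.SuppressForbidden']
        bundledSignatures = ['jdk-unsafe', 'jdk-deprecated', 'jdk-non-portable', 'jdk-reflection']
        signaturesFiles = files('lucene/tools/forbiddenApis/solr.txt')
    }
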
-
-
-  <!-- hack for now to disable *all* Solr tests on Jenkins when "tests.disable-solr" property is set -->
-  <target name="test" unless="tests.disable-solr">
-    <antcall target="common.test" inheritrefs="true" inheritall="true"/>
-  </target>
-</project>
diff --git a/solr/contrib/analysis-extras/build.xml b/solr/contrib/analysis-extras/build.xml
deleted file mode 100644
index 68a88ad..0000000
--- a/solr/contrib/analysis-extras/build.xml
+++ /dev/null
@@ -1,92 +0,0 @@
-<?xml version="1.0"?>
-
-<!--
-    Licensed to the Apache Software Foundation (ASF) under one or more
-    contributor license agreements.  See the NOTICE file distributed with
-    this work for additional information regarding copyright ownership.
-    The ASF licenses this file to You under the Apache License, Version 2.0
-    the "License"); you may not use this file except in compliance with
-    the License.  You may obtain a copy of the License at
-
-        http://www.apache.org/licenses/LICENSE-2.0
-
-    Unless required by applicable law or agreed to in writing, software
-    distributed under the License is distributed on an "AS IS" BASIS,
-    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-    See the License for the specific language governing permissions and
-    limitations under the License.
- -->
-
-<project name="solr-analysis-extras" default="default" xmlns:ivy="antlib:org.apache.ivy.ant">
-
-  <description>
-    Additional analysis components
-  </description>
-
-  <import file="../contrib-build.xml"/>
-
-  <target name="compile-test" depends="-compile-test-lucene-analysis,common-solr.compile-test"/>
-
-  <path id="analysis.extras.lucene.libs">
-    <pathelement location="${analyzers-icu.jar}"/>
-    <!--
-      Although the smartcn, stempel, morfologik and opennlp jars are not dependencies of
-      code in the analysis-extras contrib, they must remain here in order to
-      populate the Solr distribution
-     -->
-    <pathelement location="${analyzers-smartcn.jar}"/>
-    <pathelement location="${analyzers-stempel.jar}"/>
-    <pathelement location="${analyzers-morfologik.jar}"/>
-    <pathelement location="${analyzers-opennlp.jar}"/>
-  </path>
-
-  <path id="classpath">
-    <fileset dir="lib" excludes="${common.classpath.excludes}"/>
-    <path refid="analysis.extras.lucene.libs" />
-    <path refid="solr.base.classpath"/>
-  </path>
-
-  <path id="test.classpath">
-    <path refid="solr.test.base.classpath"/>
-    <dirset dir="${common.dir}/build/analysis/">
-      <include name="**/classes/java"/>
-      <include name="**/classes/test"/>
-    </dirset>
-  </path>
-
-  <!--
-    Although the smartcn, stempel, morfologik and opennlp jars are not dependencies of
-    code in the analysis-extras contrib, they must remain here in order to
-    populate the Solr distribution
-   -->
-  <target name="module-jars-to-solr"
-          depends="-module-jars-to-solr-not-for-package,-module-jars-to-solr-package"/>
-  <target name="-module-jars-to-solr-not-for-package" unless="called.from.create-package">
-    <antcall inheritall="true">
-      <target name="jar-analyzers-icu"/>
-      <target name="jar-analyzers-smartcn"/>
-      <target name="jar-analyzers-stempel"/>
-      <target name="jar-analyzers-morfologik"/>
-      <target name="jar-analyzers-opennlp"/>
-    </antcall>
-    <property name="analyzers-icu.uptodate" value="true"/> <!-- compile-time dependency -->
-    <mkdir dir="${build.dir}/lucene-libs"/>
-    <copy todir="${build.dir}/lucene-libs" preservelastmodified="true" flatten="true" failonerror="true" overwrite="true">
-      <path refid="analysis.extras.lucene.libs" />
-    </copy>
-  </target>
-  <target name="-module-jars-to-solr-package" if="called.from.create-package">
-    <antcall target="-unpack-lucene-tgz" inheritall="true"/>
-    <pathconvert property="relative.analysis.extras.lucene.libs" pathsep=",">
-      <path refid="analysis.extras.lucene.libs"/>
-      <globmapper from="${common.build.dir}/*" to="*" handledirsep="true"/>
-    </pathconvert>
-    <mkdir dir="${build.dir}/lucene-libs"/>
-    <copy todir="${build.dir}/lucene-libs" preservelastmodified="true" flatten="true" failonerror="true" overwrite="true">
-      <fileset dir="${lucene.tgz.unpack.dir}/lucene-${version}" includes="${relative.analysis.extras.lucene.libs}"/>
-    </copy>
-  </target>
-
-  <target name="compile-core" depends="jar-analyzers-icu, jar-analyzers-opennlp, solr-contrib-build.compile-core"/>
-  <target name="dist" depends="module-jars-to-solr, common-solr.dist"/>
-</project>
diff --git a/solr/contrib/analysis-extras/ivy.xml b/solr/contrib/analysis-extras/ivy.xml
deleted file mode 100644
index a427903..0000000
--- a/solr/contrib/analysis-extras/ivy.xml
+++ /dev/null
@@ -1,41 +0,0 @@
-<!--
-   Licensed to the Apache Software Foundation (ASF) under one
-   or more contributor license agreements.  See the NOTICE file
-   distributed with this work for additional information
-   regarding copyright ownership.  The ASF licenses this file
-   to you under the Apache License, Version 2.0 (the
-   "License"); you may not use this file except in compliance
-   with the License.  You may obtain a copy of the License at
-
-     http://www.apache.org/licenses/LICENSE-2.0
-
-   Unless required by applicable law or agreed to in writing,
-   software distributed under the License is distributed on an
-   "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-   KIND, either express or implied.  See the License for the
-   specific language governing permissions and limitations
-   under the License.    
--->
-<ivy-module version="2.0">
-  <info organisation="org.apache.solr" module="analysis-extras"/>
-  <configurations defaultconfmapping="compile->master;test->master">
-    <conf name="compile" transitive="false"/>
-    <conf name="test" transitive="false"/>
-  </configurations>
-  <dependencies>
-    <dependency org="com.ibm.icu" name="icu4j" rev="${/com.ibm.icu/icu4j}" conf="compile"/>
-    <dependency org="org.apache.opennlp" name="opennlp-tools" rev="${/org.apache.opennlp/opennlp-tools}" conf="compile" />
-
-    <!--
-      Although the 3rd party jars are not dependencies of code in
-      the analysis-extras contrib, they must remain here in order to
-      populate the Solr distribution
-     -->
-    <dependency org="org.carrot2" name="morfologik-polish" rev="${/org.carrot2/morfologik-polish}" conf="compile"/>
-    <dependency org="org.carrot2" name="morfologik-fsa" rev="${/org.carrot2/morfologik-fsa}" conf="compile"/>
-    <dependency org="org.carrot2" name="morfologik-stemming" rev="${/org.carrot2/morfologik-stemming}" conf="compile"/>
-    <dependency org="ua.net.nlp"  name="morfologik-ukrainian-search" rev="${/ua.net.nlp/morfologik-ukrainian-search}" conf="compile"/>
-
-    <exclude org="*" ext="*" matcher="regexp" type="${ivy.exclude.types}"/>
-  </dependencies>
-</ivy-module>
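
The Gradle replacement declares the same coordinates; a hedged sketch
(versions omitted on the assumption that the Gradle build pins them
centrally):

    dependencies {
        implementation 'com.ibm.icu:icu4j'
        implementation 'org.apache.opennlp:opennlp-tools'
        // Not used by this contrib's code; retained only to populate the
        // Solr distribution, per the comment in the ivy.xml above.
        runtimeOnly 'org.carrot2:morfologik-polish'
        runtimeOnly 'ua.net.nlp:morfologik-ukrainian-search'
    }
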
diff --git a/solr/contrib/analysis-extras/src/java/org/apache/solr/update/processor/OpenNLPExtractNamedEntitiesUpdateProcessorFactory.java b/solr/contrib/analysis-extras/src/java/org/apache/solr/update/processor/OpenNLPExtractNamedEntitiesUpdateProcessorFactory.java
index 2fdbd01..575d908 100644
--- a/solr/contrib/analysis-extras/src/java/org/apache/solr/update/processor/OpenNLPExtractNamedEntitiesUpdateProcessorFactory.java
+++ b/solr/contrib/analysis-extras/src/java/org/apache/solr/update/processor/OpenNLPExtractNamedEntitiesUpdateProcessorFactory.java
@@ -73,14 +73,14 @@
  *     &lt;/analyzer&gt;
  *   &lt;/fieldType&gt;
  * </pre>
- * 
+ *
  * <p>See the <a href="http://opennlp.apache.org/models.html">OpenNLP website</a>
  * for information on downloading pre-trained models.</p>
  *
- * Note that in order to use model files larger than 1MB on SolrCloud, 
- * <a href="https://lucene.apache.org/solr/guide/setting-up-an-external-zookeeper-ensemble#increasing-zookeeper-s-1mb-file-size-limit"
+ * Note that in order to use model files larger than 1MB on SolrCloud,
+ * <a href="https://lucene.apache.org/solr/guide/setting-up-an-external-zookeeper-ensemble.html#increasing-the-file-size-limit"
  * >ZooKeeper server and client configuration is required</a>.
- * 
+ *
  * <p>
  * The <code>source</code> field(s) can be configured as either:
  * </p>
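
The relinked Ref Guide section concerns ZooKeeper's default 1MB znode limit,
governed by the jute.maxbuffer system property. A hedged illustration (the
10MB value is an example, not a recommendation); the property must agree on
the ZooKeeper servers and on Solr acting as the client:

    // Usually passed as -Djute.maxbuffer=... on the JVM command line;
    // shown programmatically here only for illustration.
    System.setProperty('jute.maxbuffer', String.valueOf(10 * 1024 * 1024))
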
diff --git a/solr/contrib/analytics/build.xml b/solr/contrib/analytics/build.xml
deleted file mode 100644
index 27da8d5..0000000
--- a/solr/contrib/analytics/build.xml
+++ /dev/null
@@ -1,28 +0,0 @@
-<?xml version="1.0"?>
-
-<!--
-    Licensed to the Apache Software Foundation (ASF) under one or more
-    contributor license agreements.  See the NOTICE file distributed with
-    this work for additional information regarding copyright ownership.
-    The ASF licenses this file to You under the Apache License, Version 2.0
-    the "License"); you may not use this file except in compliance with
-    the License.  You may obtain a copy of the License at
- 
-        http://www.apache.org/licenses/LICENSE-2.0
- 
-    Unless required by applicable law or agreed to in writing, software
-    distributed under the License is distributed on an "AS IS" BASIS,
-    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-    See the License for the specific language governing permissions and
-    limitations under the License.
- -->
-
-<project name="solr-analytics" default="default">
-
-  <description>
-    Analytics Package
-  </description>
-
-  <import file="../contrib-build.xml"/>
-
-</project>
diff --git a/solr/contrib/analytics/ivy.xml b/solr/contrib/analytics/ivy.xml
deleted file mode 100644
index 914f08e..0000000
--- a/solr/contrib/analytics/ivy.xml
+++ /dev/null
@@ -1,29 +0,0 @@
-<!--
-   Licensed to the Apache Software Foundation (ASF) under one
-   or more contributor license agreements.  See the NOTICE file
-   distributed with this work for additional information
-   regarding copyright ownership.  The ASF licenses this file
-   to you under the Apache License, Version 2.0 (the
-   "License"); you may not use this file except in compliance
-   with the License.  You may obtain a copy of the License at
-
-     http://www.apache.org/licenses/LICENSE-2.0
-
-   Unless required by applicable law or agreed to in writing,
-   software distributed under the License is distributed on an
-   "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-   KIND, either express or implied.  See the License for the
-   specific language governing permissions and limitations
-   under the License.    
--->
-<ivy-module version="2.0">
-  <info organisation="org.apache.solr" module="analytics"/>
-    <configurations defaultconfmapping="compile->master;test->master">
-      <conf name="compile" transitive="false"/> <!-- keep unused 'compile' configuration to allow build to succeed -->
-      <conf name="test" transitive="false"/>
-    </configurations>
-
-   <dependencies>
-     <exclude org="*" ext="*" matcher="regexp" type="${ivy.exclude.types}"/>
-   </dependencies>
-</ivy-module>
diff --git a/solr/contrib/analytics/src/java/org/apache/solr/handler/AnalyticsHandler.java b/solr/contrib/analytics/src/java/org/apache/solr/handler/AnalyticsHandler.java
index 9ffbaf4..ede5041 100644
--- a/solr/contrib/analytics/src/java/org/apache/solr/handler/AnalyticsHandler.java
+++ b/solr/contrib/analytics/src/java/org/apache/solr/handler/AnalyticsHandler.java
@@ -72,11 +72,8 @@
   }
 
   public void handleRequestBody(SolrQueryRequest req, SolrQueryResponse rsp) throws Exception {
-   
-    long timeAllowed = req.getParams().getLong(CommonParams.TIME_ALLOWED, -1L);
-    if (timeAllowed >= 0L) {
-      SolrQueryTimeoutImpl.set(timeAllowed);
-    }
+
+    SolrQueryTimeoutImpl.set(req);
     try {
       DocSet docs;
       try {
diff --git a/solr/contrib/clustering/build.xml b/solr/contrib/clustering/build.xml
deleted file mode 100644
index 7340a1f..0000000
--- a/solr/contrib/clustering/build.xml
+++ /dev/null
@@ -1,28 +0,0 @@
-<?xml version="1.0"?>
-
-<!--
-    Licensed to the Apache Software Foundation (ASF) under one or more
-    contributor license agreements.  See the NOTICE file distributed with
-    this work for additional information regarding copyright ownership.
-    The ASF licenses this file to You under the Apache License, Version 2.0
-    the "License"); you may not use this file except in compliance with
-    the License.  You may obtain a copy of the License at
- 
-        http://www.apache.org/licenses/LICENSE-2.0
- 
-    Unless required by applicable law or agreed to in writing, software
-    distributed under the License is distributed on an "AS IS" BASIS,
-    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-    See the License for the specific language governing permissions and
-    limitations under the License.
- -->
-
-<project name="solr-clustering" default="default">
-
-  <description>
-    Clustering Integration
-  </description>
-
-  <import file="../contrib-build.xml"/>
-
-</project>
diff --git a/solr/contrib/clustering/ivy.xml b/solr/contrib/clustering/ivy.xml
deleted file mode 100644
index 1de378c..0000000
--- a/solr/contrib/clustering/ivy.xml
+++ /dev/null
@@ -1,41 +0,0 @@
-<!--
-   Licensed to the Apache Software Foundation (ASF) under one
-   or more contributor license agreements.  See the NOTICE file
-   distributed with this work for additional information
-   regarding copyright ownership.  The ASF licenses this file
-   to you under the Apache License, Version 2.0 (the
-   "License"); you may not use this file except in compliance
-   with the License.  You may obtain a copy of the License at
-
-     http://www.apache.org/licenses/LICENSE-2.0
-
-   Unless required by applicable law or agreed to in writing,
-   software distributed under the License is distributed on an
-   "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-   KIND, either express or implied.  See the License for the
-   specific language governing permissions and limitations
-   under the License.    
--->
-<ivy-module version="2.0">
-  <info organisation="org.apache.solr" module="clustering"/>
-  <configurations defaultconfmapping="compile->master;test->master">
-    <conf name="compile" transitive="false"/>
-    <conf name="test" transitive="false"/>
-  </configurations>
-  <dependencies>
-    <dependency org="org.carrot2" name="carrot2-mini" rev="${/org.carrot2/carrot2-mini}" conf="compile"/>
-    <dependency org="org.carrot2.shaded" name="carrot2-guava" rev="${/org.carrot2.shaded/carrot2-guava}" conf="compile"/>
-    <dependency org="org.carrot2.attributes" name="attributes-binder" rev="${/org.carrot2.attributes/attributes-binder}" conf="compile"/>
-
-    <dependency org="com.carrotsearch.thirdparty" name="simple-xml-safe" rev="${/com.carrotsearch.thirdparty/simple-xml-safe}" conf="compile"/>
-
-    <dependency org="com.fasterxml.jackson.core" name="jackson-annotations"  rev="${/com.fasterxml.jackson.core/jackson-annotations}"   conf="compile"/>
-    <dependency org="com.fasterxml.jackson.core" name="jackson-databind"     rev="${/com.fasterxml.jackson.core/jackson-databind}"      conf="compile"/>
-
-    <!--
-    NOTE: There are dependencies that are part of core Solr server (jackson-core, HPPC, etc.).
-    -->
-
-    <exclude org="*" ext="*" matcher="regexp" type="${ivy.exclude.types}"/>
-  </dependencies>
-</ivy-module>
diff --git a/solr/contrib/contrib-build.xml b/solr/contrib/contrib-build.xml
deleted file mode 100644
index 6a3fb11..0000000
--- a/solr/contrib/contrib-build.xml
+++ /dev/null
@@ -1,57 +0,0 @@
-<?xml version="1.0"?>
-
-<!--
-    Licensed to the Apache Software Foundation (ASF) under one or more
-    contributor license agreements.  See the NOTICE file distributed with
-    this work for additional information regarding copyright ownership.
-    The ASF licenses this file to You under the Apache License, Version 2.0
-    the "License"); you may not use this file except in compliance with
-    the License.  You may obtain a copy of the License at
- 
-        http://www.apache.org/licenses/LICENSE-2.0
- 
-    Unless required by applicable law or agreed to in writing, software
-    distributed under the License is distributed on an "AS IS" BASIS,
-    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-    See the License for the specific language governing permissions and
-    limitations under the License.
- -->
-
-<project name="solr-contrib-build" xmlns:ivy="antlib:org.apache.ivy.ant">
-  <dirname file="${ant.file.solr-contrib-build}" property="solr-contrib.dir"/>
-  <property name="build.dir" location="${solr-contrib.dir}/../build/contrib/${ant.project.name}"/>
-
-  <property name="test.lib.dir" location="test-lib"/>
-
-  <import file="../common-build.xml"/>
-
-  <target name="compile-core" depends="compile-solr-core,compile-solrj,common-solr.compile-core"/>
-
-  <dirname file="${ant.file}" property="antfile.dir"/>
-
-  <available property="contrib.has.webapp" type="dir" file="${antfile.dir}/src/webapp" />
-  <target name="add-to-webapp" if="contrib.has.webapp">
-    <copy todir="${dest}/web" failonerror="false">
-      <fileset dir="${antfile.dir}/src/webapp"/>
-    </copy>
-  </target>
-
-  <available property="contrib.has.lucene-libs" type="dir" file="${build.dir}/lucene-libs"/>
-  <target name="add-lucene-libs-to-package" if="contrib.has.lucene-libs">
-    <pathconvert property="contrib.dir">
-      <path path="${antfile.dir}"/>
-      <flattenmapper/>
-    </pathconvert>
-    <copy todir="${dest}/contrib-lucene-libs-to-package/contrib/${contrib.dir}">
-      <fileset dir="${build.dir}" includes="lucene-libs/**"/>
-    </copy>
-  </target>
-
-  <target name="resolve" depends="ivy-availability-check,ivy-fail,ivy-configure">
-    <sequential>
-      <ivy:retrieve conf="compile" type="jar,bundle" sync="${ivy.sync}" log="download-only" symlink="${ivy.symlink}"/>
-      <ivy:retrieve conf="test" type="jar,bundle,test" sync="${ivy.sync}" log="download-only" symlink="${ivy.symlink}"
-                    pattern="${test.lib.dir}/[artifact]-[revision](-[classifier]).[ext]"/>
-    </sequential>
-  </target>
-</project>
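
The ivy:retrieve calls above materialize resolved jars into lib/ and
test-lib/. A hedged sketch of the same effect in Gradle (task name
hypothetical), syncing a configuration into the directory the Ant build
used:

    tasks.register('syncTestLibs', Sync) {
        from configurations.testRuntimeClasspath
        into layout.projectDirectory.dir('test-lib')   // mirrors test.lib.dir above
    }
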
diff --git a/solr/contrib/dataimporthandler-extras/build.gradle b/solr/contrib/dataimporthandler-extras/build.gradle
deleted file mode 100644
index fde00c3..0000000
--- a/solr/contrib/dataimporthandler-extras/build.gradle
+++ /dev/null
@@ -1,33 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-apply plugin: 'java-library'
-
-description = 'Data Import Handler Extras'
-
-dependencies {
-  implementation project(':solr:core')
-
-  implementation project(':solr:contrib:dataimporthandler')
-  implementation project(':solr:contrib:extraction')
-
-  implementation ('javax.activation:activation')
-  implementation ('com.sun.mail:javax.mail')
-  implementation ('com.sun.mail:gimap')
-
-  testImplementation project(':solr:test-framework')
-}
diff --git a/solr/contrib/dataimporthandler-extras/build.xml b/solr/contrib/dataimporthandler-extras/build.xml
deleted file mode 100644
index 5cdd838..0000000
--- a/solr/contrib/dataimporthandler-extras/build.xml
+++ /dev/null
@@ -1,96 +0,0 @@
-<?xml version="1.0"?>
-
-<!--
-    Licensed to the Apache Software Foundation (ASF) under one or more
-    contributor license agreements.  See the NOTICE file distributed with
-    this work for additional information regarding copyright ownership.
-    The ASF licenses this file to You under the Apache License, Version 2.0
-    the "License"); you may not use this file except in compliance with
-    the License.  You may obtain a copy of the License at
- 
-        http://www.apache.org/licenses/LICENSE-2.0
- 
-    Unless required by applicable law or agreed to in writing, software
-    distributed under the License is distributed on an "AS IS" BASIS,
-    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-    See the License for the specific language governing permissions and
-    limitations under the License.
- -->
-
-<project name="solr-dataimporthandler-extras" default="default">
-
-  <description>
-    Data Import Handler Extras
-  </description>
-  
-  <import file="../contrib-build.xml"/>
-
-  <solr-contrib-uptodate name="dataimporthandler" 
-                         property="solr-dataimporthandler.uptodate" 
-                         classpath.property="solr-dataimporthandler.jar"/>
-
-  <target name="compile-solr-dataimporthandler" unless="solr-dataimporthandler.uptodate">
-    <ant dir="${common-solr.dir}/contrib/dataimporthandler" target="compile-core" inheritAll="false">
-      <propertyset refid="uptodate.and.compiled.properties"/>
-    </ant>
-  </target>
-
-  <!-- 
-       This always runs because there is no up-to-date check for tests;
-       we should probably fix that. The same issue exists in the Lucene modules.
-   -->
-  <target name="compile-solr-dataimporthandler-tests">
-    <ant dir="${common-solr.dir}/contrib/dataimporthandler" target="compile-test" inheritAll="false">
-      <propertyset refid="uptodate.and.compiled.properties"/>
-    </ant>
-  </target>
-
-  <!-- we don't actually need to compile this module; we just want its libs -->
-  <target name="resolve-extraction-libs">
-    <ant dir="${common-solr.dir}/contrib/extraction" target="resolve" inheritAll="false">
-      <propertyset refid="uptodate.and.compiled.properties"/>
-    </ant>
-  </target>
-
-  <path id="classpath">
-    <pathelement location="${common-solr.dir}/build/contrib/solr-dataimporthandler/classes/java"/>
-    <fileset dir="${common-solr.dir}/contrib/dataimporthandler-extras/lib" excludes="${common.classpath.excludes}"/>
-    <fileset dir="${common-solr.dir}/contrib/extraction/lib" excludes="${common.classpath.excludes}"/>
-    <path refid="solr.base.classpath"/>
-  </path>
-
-  <path id="test.classpath">
-    <path refid="solr.test.base.classpath"/>
-    <pathelement location="${common-solr.dir}/build/contrib/solr-dataimporthandler/classes/test"/>
-    <path refid="classpath"/>
-  </path>
-
-  <!-- TODO: make this nicer like lucene? -->
-  <target name="javadocs" depends="compile-core,define-lucene-javadoc-url,lucene-javadocs,javadocs-solr-core,javadocs-dataimporthandler,check-javadocs-uptodate" unless="javadocs-uptodate-${name}">
-        <sequential>
-      <mkdir dir="${javadoc.dir}/${name}"/>
-      <solr-invoke-javadoc>
-        <solrsources>
-          <packageset dir="${src.dir}"/>
-        </solrsources>
-        <links>
-          <link href="../solr-solrj"/>
-          <link href="../solr-core"/>
-          <link href="../solr-dataimporthandler"/>
-        </links>
-      </solr-invoke-javadoc>
-      <solr-jarify basedir="${javadoc.dir}/${name}" destfile="${build.dir}/${final.name}-javadoc.jar"/>
-     </sequential>
-  </target>
-
-  <target name="javadocs-dataimporthandler">
-    <ant dir="${common-solr.dir}/contrib/dataimporthandler" target="javadocs" inheritAll="false">
-      <propertyset refid="uptodate.and.compiled.properties"/>
-    </ant>
-  </target>
-
-  <target name="-ecj-javadoc-lint-tests" depends="compile-solr-dataimporthandler-tests,common.-ecj-javadoc-lint-tests"/>
-
-  <target name="compile-core" depends="compile-solr-dataimporthandler,resolve-extraction-libs,solr-contrib-build.compile-core"/>
-  <target name="compile-test" depends="compile-solr-dataimporthandler-tests, common-solr.compile-test"/>
-</project>
diff --git a/solr/contrib/dataimporthandler-extras/ivy.xml b/solr/contrib/dataimporthandler-extras/ivy.xml
deleted file mode 100644
index 63968d8..0000000
--- a/solr/contrib/dataimporthandler-extras/ivy.xml
+++ /dev/null
@@ -1,40 +0,0 @@
-<!--
-   Licensed to the Apache Software Foundation (ASF) under one
-   or more contributor license agreements.  See the NOTICE file
-   distributed with this work for additional information
-   regarding copyright ownership.  The ASF licenses this file
-   to you under the Apache License, Version 2.0 (the
-   "License"); you may not use this file except in compliance
-   with the License.  You may obtain a copy of the License at
-
-     http://www.apache.org/licenses/LICENSE-2.0
-
-   Unless required by applicable law or agreed to in writing,
-   software distributed under the License is distributed on an
-   "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-   KIND, either express or implied.  See the License for the
-   specific language governing permissions and limitations
-   under the License.    
--->
-<ivy-module version="2.0">
-    <info organisation="org.apache.solr" module="dataimporthandler-extras"/>
-
-    <!--
-        NOTE: In order to reduce Jar duplication, the build.xml file for 
-        contrib/dataimporthandler-extras explicitly includes all deps from 
-        contrib/extraction rather than specifying them here.
-        
-        https://issues.apache.org/jira/browse/SOLR-3848
-    -->
-  <configurations defaultconfmapping="compile->master;test->master">
-    <conf name="compile" transitive="false"/>
-    <conf name="test" transitive="false"/>
-  </configurations>
-  <dependencies>
-    <dependency org="javax.activation" name="activation" rev="${/javax.activation/activation}" conf="compile"/>
-    <dependency org="com.sun.mail" name="javax.mail" rev="${/com.sun.mail/javax.mail}" conf="compile"/>
-    <dependency org="com.sun.mail" name="gimap" rev="${/com.sun.mail/gimap}" conf="compile"/>  
-
-    <exclude org="*" ext="*" matcher="regexp" type="${ivy.exclude.types}"/>
-  </dependencies>
-</ivy-module>
diff --git a/solr/contrib/dataimporthandler-extras/src/java/org/apache/solr/handler/dataimport/MailEntityProcessor.java b/solr/contrib/dataimporthandler-extras/src/java/org/apache/solr/handler/dataimport/MailEntityProcessor.java
deleted file mode 100644
index 6861ae3..0000000
--- a/solr/contrib/dataimporthandler-extras/src/java/org/apache/solr/handler/dataimport/MailEntityProcessor.java
+++ /dev/null
@@ -1,901 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.solr.handler.dataimport;
-
-import com.sun.mail.imap.IMAPMessage;
-
-import org.apache.solr.common.util.SuppressForbidden;
-import org.apache.solr.handler.dataimport.config.ConfigNameConstants;
-import org.apache.solr.util.RTimer;
-import org.apache.tika.Tika;
-import org.apache.tika.metadata.Metadata;
-import org.slf4j.Logger;
-import org.slf4j.LoggerFactory;
-
-import javax.mail.*;
-import javax.mail.internet.AddressException;
-import javax.mail.internet.ContentType;
-import javax.mail.internet.InternetAddress;
-import javax.mail.internet.MimeMessage;
-import javax.mail.search.*;
-
-import java.io.InputStream;
-import java.lang.invoke.MethodHandles;
-import java.text.ParseException;
-import java.text.SimpleDateFormat;
-import java.util.*;
-import java.util.function.Supplier;
-
-import com.sun.mail.gimap.GmailFolder;
-import com.sun.mail.gimap.GmailRawSearchTerm;
-
-/**
- * An EntityProcessor instance which can index emails along with their
- * attachments from POP3 or IMAP sources. Refer to <a
- * href="http://wiki.apache.org/solr/DataImportHandler"
- * >http://wiki.apache.org/solr/DataImportHandler</a> for more details. <b>This
- * API is experimental and subject to change</b>
- * 
- * @since solr 1.4
- */
-public class MailEntityProcessor extends EntityProcessorBase {
-  
-  private static final SimpleDateFormat sinceDateParser = 
-      new SimpleDateFormat("yyyy-MM-dd HH:mm:ss", Locale.ROOT);
-  private static final SimpleDateFormat afterFmt = 
-      new SimpleDateFormat("yyyy/MM/dd", Locale.ROOT);
-  private static final Logger log = LoggerFactory.getLogger(MethodHandles.lookup().lookupClass());
-  
-  public static interface CustomFilter {
-    public SearchTerm getCustomSearch(Folder folder);
-  }
-  
-  public void init(Context context) {
-    super.init(context);
-    // set attributes using XXX getXXXFromContext(attribute, defaultValue);
-    // applies variable resolver and return default if value is not found or null
-    // REQUIRED : connection and folder info
-    user = getStringFromContext("user", null);
-    password = getStringFromContext("password", null);
-    host = getStringFromContext("host", null);
-    protocol = getStringFromContext("protocol", null);
-    folderNames = getStringFromContext("folders", null);
-    // validate
-    if (host == null || protocol == null || user == null || password == null
-        || folderNames == null) throw new DataImportHandlerException(
-        DataImportHandlerException.SEVERE,
-        "'user|password|protocol|host|folders' are required attributes");
-    
-    // OPTIONAL : have defaults and are optional
-    recurse = getBoolFromContext("recurse", true);
-    
-    exclude.clear();
-    String excludes = getStringFromContext("exclude", "");
-    if (excludes != null && !excludes.trim().equals("")) {
-      exclude = Arrays.asList(excludes.split(","));
-    }
-    
-    include.clear();
-    String includes = getStringFromContext("include", "");
-    if (includes != null && !includes.trim().equals("")) {
-      include = Arrays.asList(includes.split(","));
-    }
-    batchSize = getIntFromContext("batchSize", 20);
-    customFilter = getStringFromContext("customFilter", "");
-    if (filters != null) filters.clear();
-    folderIter = null;
-    msgIter = null;
-            
-    String lastIndexTime = null;
-    String command = 
-        String.valueOf(context.getRequestParameters().get("command"));
-    if (!DataImporter.FULL_IMPORT_CMD.equals(command))
-      throw new IllegalArgumentException(this.getClass().getSimpleName()+
-          " only supports "+DataImporter.FULL_IMPORT_CMD);
-    
-    // Read the last_index_time out of the dataimport.properties if available
-    String cname = getStringFromContext("name", "mailimporter");
-    String varName = ConfigNameConstants.IMPORTER_NS_SHORT + "." + cname + "."
-        + DocBuilder.LAST_INDEX_TIME;
-    Object varValue = context.getVariableResolver().resolve(varName);
-    log.info("{}={}", varName, varValue);
-    
-    if (varValue != null && !"".equals(varValue) && 
-        !"".equals(getStringFromContext("fetchMailsSince", ""))) {
-
-      // need to check if varValue is the epoch, which we'll take to mean the
-      // initial value, in which case we should use fetchMailsSince instead
-      Date tmp = null;
-      try {
-        tmp = sinceDateParser.parse((String)varValue);
-        if (tmp.getTime() == 0) {
-          log.info("Ignoring initial value {} for {} in favor of fetchMailsSince config parameter"
-              , varValue, varName);
-          tmp = null; // don't use this value
-        }
-      } catch (ParseException e) {
-        // probably ok to ignore this since we have other options below
-        // as we're just trying to figure out if the date is 0
-        log.warn("Failed to parse {} from {} due to", varValue, varName, e);
-      }    
-      
-      if (tmp == null) {
-        // favor fetchMailsSince in this case because the value from
-        // dataimport.properties is the default/init value
-        varValue = getStringFromContext("fetchMailsSince", "");
-        log.info("fetchMailsSince={}", varValue);
-      }
-    }
-    
-    if (varValue == null || "".equals(varValue)) {
-      varName = ConfigNameConstants.IMPORTER_NS_SHORT + "."
-          + DocBuilder.LAST_INDEX_TIME;
-      varValue = context.getVariableResolver().resolve(varName);
-      log.info("{}={}", varName, varValue);
-    }
-      
-    if (varValue != null && varValue instanceof String) {
-      lastIndexTime = (String)varValue;
-      if (lastIndexTime != null && lastIndexTime.length() == 0)
-        lastIndexTime = null;
-    }
-            
-    if (lastIndexTime == null) 
-      lastIndexTime = getStringFromContext("fetchMailsSince", "");
-
-    log.info("Using lastIndexTime {} for mail import", lastIndexTime);
-    
-    this.fetchMailsSince = null;
-    if (lastIndexTime != null && lastIndexTime.length() > 0) {
-      try {
-        fetchMailsSince = sinceDateParser.parse(lastIndexTime);
-        log.info("Parsed fetchMailsSince={}", lastIndexTime);
-      } catch (ParseException e) {
-        throw new DataImportHandlerException(DataImportHandlerException.SEVERE,
-            "Invalid value for fetchMailSince: " + lastIndexTime, e);
-      }
-    }
-        
-    fetchSize = getIntFromContext("fetchSize", 32 * 1024);
-    cTimeout = getIntFromContext("connectTimeout", 30 * 1000);
-    rTimeout = getIntFromContext("readTimeout", 60 * 1000);
-    
-    String tmp = context.getEntityAttribute("includeOtherUserFolders");
-    includeOtherUserFolders = (tmp != null && Boolean.valueOf(tmp.trim()));
-    tmp = context.getEntityAttribute("includeSharedFolders");
-    includeSharedFolders = (tmp != null && Boolean.valueOf(tmp.trim()));
-    
-    setProcessAttachmentConfig();
-    includeContent = getBoolFromContext("includeContent", true);
-          
-    logConfig();
-  }
-  
-  private void setProcessAttachmentConfig() {
-    processAttachment = true;
-    String tbval = context.getEntityAttribute("processAttachments");
-    if (tbval == null) {
-      tbval = context.getEntityAttribute("processAttachement");
-      if (tbval != null) processAttachment = Boolean.valueOf(tbval);
-    } else processAttachment = Boolean.valueOf(tbval);
-  }
-    
-  @Override
-  public Map<String,Object> nextRow() {
-    Message mail = null;
-    Map<String,Object> row = null;
-    do {
-      // try until there is a valid document or the folders are exhausted;
-      // mail == null means end of processing
-      mail = getNextMail();   
-      
-      if (mail != null)
-        row = getDocumentFromMail(mail);
-      
-      if (row != null && row.get("folder") == null) 
-        row.put("folder", mail.getFolder().getFullName());
-      
-    } while (row == null && mail != null);
-    return row;
-  }
-  
-  private Message getNextMail() {
-    if (!connected) {
-      // this is needed to load the activation mail stuff correctly
-      // otherwise, the JavaMail multipart support doesn't get configured
-      // correctly, which leads to a class cast exception when processing
-      // multipart messages: IMAPInputStream cannot be cast to
-      // javax.mail.Multipart    
-      if (!withContextClassLoader(getClass().getClassLoader(), this::connectToMailBox)) {
-        return null;
-      }
-      connected = true;
-    }
-    if (folderIter == null) {
-      createFilters();
-      folderIter = new FolderIterator(mailbox);
-    }
-    // get next message from the folder
-    // if folder is exhausted get next folder
-    // loop till a valid mail or all folders exhausted.
-    while (msgIter == null || !msgIter.hasNext()) {
-      Folder next = folderIter.hasNext() ? folderIter.next() : null;
-      if (next == null) return null;
-      
-      msgIter = new MessageIterator(next, batchSize);
-    }
-    return msgIter.next();
-  }
-  
-  private Map<String,Object> getDocumentFromMail(Message mail) {
-    Map<String,Object> row = new HashMap<>();
-    try {
-      addPartToDocument(mail, row, true);
-      return row;
-    } catch (Exception e) {
-      log.error("Failed to convert message [{}] to document due to: {}"
-          , mail, e, e);
-      return null;
-    }
-  }
-  
-  @SuppressWarnings({"unchecked"})
-  public void addPartToDocument(Part part, Map<String,Object> row, boolean outerMost) throws Exception {
-    if (part instanceof Message) {
-      addEnvelopeToDocument(part, row);
-    }
-    
-    String ct = part.getContentType().toLowerCase(Locale.ROOT);
-    ContentType ctype = new ContentType(ct);
-    if (part.isMimeType("multipart/*")) {
-      Object content = part.getContent();
-      if (content instanceof Multipart) {
-        Multipart mp = (Multipart) content;
-        int count = mp.getCount();
-        if (part.isMimeType("multipart/alternative")) count = 1;
-        for (int i = 0; i < count; i++)
-          addPartToDocument(mp.getBodyPart(i), row, false);
-      } else {
-        log.warn("Multipart content is a not an instance of Multipart! Content is: {}"
-                + ". Typically, this is due to the Java Activation JAR being loaded by the wrong classloader."
-            , (content != null ? content.getClass().getName() : "null"));
-      }
-    } else if (part.isMimeType("message/rfc822")) {
-      addPartToDocument((Part) part.getContent(), row, false);
-    } else {
-      String disp = part.getDisposition();
-      if (includeContent
-          && !(disp != null && disp.equalsIgnoreCase(Part.ATTACHMENT))) {
-        InputStream is = part.getInputStream();
-        Metadata contentTypeHint = new Metadata();
-        contentTypeHint.set(Metadata.CONTENT_TYPE, ctype.getBaseType()
-            .toLowerCase(Locale.ENGLISH));
-        String content = (new Tika()).parseToString(is, contentTypeHint);
-        if (row.get(CONTENT) == null) row.put(CONTENT, new ArrayList<String>());
-        List<String> contents = (List<String>) row.get(CONTENT);
-        contents.add(content.trim());
-        row.put(CONTENT, contents);
-      }
-      if (!processAttachment || disp == null
-          || !disp.equalsIgnoreCase(Part.ATTACHMENT)) return;
-      InputStream is = part.getInputStream();
-      String fileName = part.getFileName();
-      Metadata contentTypeHint = new Metadata();
-      contentTypeHint.set(Metadata.CONTENT_TYPE, ctype.getBaseType()
-          .toLowerCase(Locale.ENGLISH));
-      String content = (new Tika()).parseToString(is, contentTypeHint);
-      if (content == null || content.trim().length() == 0) return;
-      
-      if (row.get(ATTACHMENT) == null) row.put(ATTACHMENT,
-          new ArrayList<String>());
-      List<String> contents = (List<String>) row.get(ATTACHMENT);
-      contents.add(content.trim());
-      row.put(ATTACHMENT, contents);
-      if (row.get(ATTACHMENT_NAMES) == null) row.put(ATTACHMENT_NAMES,
-          new ArrayList<String>());
-      List<String> names = (List<String>) row.get(ATTACHMENT_NAMES);
-      names.add(fileName);
-      row.put(ATTACHMENT_NAMES, names);
-    }
-  }
-  
-  private void addEnvelopeToDocument(Part part, Map<String,Object> row)
-      throws MessagingException {
-    MimeMessage mail = (MimeMessage) part;
-    Address[] addresses;
-    if ((addresses = mail.getFrom()) != null && addresses.length > 0)
-      row.put(FROM, addresses[0].toString());
-    
-    List<String> to = new ArrayList<>();
-    if ((addresses = mail.getRecipients(Message.RecipientType.TO)) != null)
-      addAddressToList(addresses, to);
-    if ((addresses = mail.getRecipients(Message.RecipientType.CC)) != null)
-      addAddressToList(addresses, to);
-    if ((addresses = mail.getRecipients(Message.RecipientType.BCC)) != null)
-      addAddressToList(addresses, to);
-    if (!to.isEmpty()) row.put(TO_CC_BCC, to);
-    
-    row.put(MESSAGE_ID, mail.getMessageID());
-    row.put(SUBJECT, mail.getSubject());
-    
-    Date d = mail.getSentDate();
-    if (d != null) {
-      row.put(SENT_DATE, d);
-    }
-    
-    List<String> flags = new ArrayList<>();
-    for (Flags.Flag flag : mail.getFlags().getSystemFlags()) {
-      if (flag == Flags.Flag.ANSWERED) flags.add(FLAG_ANSWERED);
-      else if (flag == Flags.Flag.DELETED) flags.add(FLAG_DELETED);
-      else if (flag == Flags.Flag.DRAFT) flags.add(FLAG_DRAFT);
-      else if (flag == Flags.Flag.FLAGGED) flags.add(FLAG_FLAGGED);
-      else if (flag == Flags.Flag.RECENT) flags.add(FLAG_RECENT);
-      else if (flag == Flags.Flag.SEEN) flags.add(FLAG_SEEN);
-    }
-    flags.addAll(Arrays.asList(mail.getFlags().getUserFlags()));
-    if (flags.size() == 0) flags.add(FLAG_NONE);
-    row.put(FLAGS, flags);
-    
-    String[] hdrs = mail.getHeader("X-Mailer");
-    if (hdrs != null) row.put(XMAILER, hdrs[0]);
-  }
-  
-  private void addAddressToList(Address[] addresses, List<String> to)
-      throws AddressException {
-    for (Address address : addresses) {
-      to.add(address.toString());
-      InternetAddress ia = (InternetAddress) address;
-      if (ia.isGroup()) {
-        InternetAddress[] group = ia.getGroup(false);
-        for (InternetAddress member : group)
-          to.add(member.toString());
-      }
-    }
-  }
-  
-  private boolean connectToMailBox() {
-    try {
-      Properties props = new Properties();
-      if (System.getProperty("mail.debug") != null) 
-        props.setProperty("mail.debug", System.getProperty("mail.debug"));
-      
-      if (("imap".equals(protocol) || "imaps".equals(protocol))
-          && "imap.gmail.com".equals(host)) {
-        log.info("Consider using 'gimaps' protocol instead of '{}' for enabling GMail specific extensions for {}"
-            , protocol, host);
-      }
-      
-      props.setProperty("mail.store.protocol", protocol);
-      
-      String imapPropPrefix = protocol.startsWith("gimap") ? "gimap" : "imap";
-      props.setProperty("mail." + imapPropPrefix + ".fetchsize", "" + fetchSize);
-      props.setProperty("mail." + imapPropPrefix + ".timeout", "" + rTimeout);
-      props.setProperty("mail." + imapPropPrefix + ".connectiontimeout", "" + cTimeout);
-      
-      int port = -1;
-      int colonAt = host.indexOf(":");
-      if (colonAt != -1) {
-        port = Integer.parseInt(host.substring(colonAt + 1));
-        host = host.substring(0, colonAt);
-      }
-      
-      Session session = Session.getDefaultInstance(props, null);
-      mailbox = session.getStore(protocol);
-      if (port != -1) {
-        mailbox.connect(host, port, user, password);
-      } else {
-        mailbox.connect(host, user, password);
-      }
-      log.info("Connected to {}'s mailbox on {}", user, host);
-      
-      return true;
-    } catch (MessagingException e) {      
-      String errMsg = String.format(Locale.ENGLISH,
-          "Failed to connect to %s server %s as user %s due to: %s", protocol,
-          host, user, e.toString());
-      log.error(errMsg, e);
-      throw new DataImportHandlerException(DataImportHandlerException.SEVERE,
-          errMsg, e);
-    }
-  }
-  
-  private void createFilters() {
-    if (fetchMailsSince != null) {
-      filters.add(new MailsSinceLastCheckFilter(fetchMailsSince));
-    }
-    if (customFilter != null && !customFilter.equals("")) {
-      try {
-        Class<?> cf = Class.forName(customFilter);
-        Object obj = cf.getConstructor().newInstance();
-        if (obj instanceof CustomFilter) {
-          filters.add((CustomFilter) obj);
-        }
-      } catch (Exception e) {
-        throw new DataImportHandlerException(DataImportHandlerException.SEVERE,
-            "Custom filter could not be created", e);
-      }
-    }
-  }
-  
-  private void logConfig() {
-    if (!log.isInfoEnabled()) return;
-    
-    String lineSep = System.getProperty("line.separator"); 
-    
-    StringBuilder config = new StringBuilder();
-    config.append("user : ").append(user).append(lineSep);
-    config
-        .append("pwd : ")
-        .append(
-            password != null && password.length() > 0 ? "<non-null>" : "<null>")
-        .append(lineSep);
-    config.append("protocol : ").append(protocol)
-        .append(lineSep);
-    config.append("host : ").append(host)
-        .append(lineSep);
-    config.append("folders : ").append(folderNames)
-        .append(lineSep);
-    config.append("recurse : ").append(recurse)
-        .append(lineSep);
-    config.append("exclude : ").append(exclude.toString())
-        .append(lineSep);
-    config.append("include : ").append(include.toString())
-        .append(lineSep);
-    config.append("batchSize : ").append(batchSize)
-        .append(lineSep);
-    config.append("fetchSize : ").append(fetchSize)
-        .append(lineSep);
-    config.append("read timeout : ").append(rTimeout)
-        .append(lineSep);
-    config.append("conection timeout : ").append(cTimeout)
-        .append(lineSep);
-    config.append("custom filter : ").append(customFilter)
-        .append(lineSep);
-    config.append("fetch mail since : ").append(fetchMailsSince)
-        .append(lineSep);
-    config.append("includeContent : ").append(includeContent)
-        .append(lineSep);
-    config.append("processAttachments : ").append(processAttachment)
-        .append(lineSep);
-    config.append("includeOtherUserFolders : ").append(includeOtherUserFolders)
-        .append(lineSep);
-    config.append("includeSharedFolders : ").append(includeSharedFolders)
-        .append(lineSep);
-    log.info("{}", config);
-  }
-  
-  class FolderIterator implements Iterator<Folder> {
-    private Store mailbox;
-    private List<String> topLevelFolders;
-    private List<Folder> folders = null;
-    private Folder lastFolder = null;
-    
-    public FolderIterator(Store mailBox) {
-      this.mailbox = mailBox;
-      folders = new ArrayList<>();
-      getTopLevelFolders(mailBox);
-      if (includeOtherUserFolders) getOtherUserFolders();
-      if (includeSharedFolders) getSharedFolders();
-    }
-    
-    public boolean hasNext() {
-      return !folders.isEmpty();
-    }
-    
-    public Folder next() {
-      try {
-        boolean hasMessages = false;
-        Folder next;
-        do {
-          if (lastFolder != null) {
-            lastFolder.close(false);
-            lastFolder = null;
-          }
-          if (folders.isEmpty()) {
-            mailbox.close();
-            return null;
-          }
-          next = folders.remove(0);
-          if (next != null) {
-            String fullName = next.getFullName();
-            if (!excludeFolder(fullName)) {
-              hasMessages = (next.getType() & Folder.HOLDS_MESSAGES) != 0;
-              next.open(Folder.READ_ONLY);
-              lastFolder = next;
-              log.info("Opened folder : {}", fullName);
-            }
-            if (recurse && ((next.getType() & Folder.HOLDS_FOLDERS) != 0)) {
-              Folder[] children = next.list();
-              log.info("Added its children to list  : ");
-              for (int i = children.length - 1; i >= 0; i--) {
-                folders.add(0, children[i]);
-                if (log.isInfoEnabled()) {
-                  log.info("child name : {}", children[i].getFullName());
-                }
-              }
-              if (children.length == 0) log.info("NO children : ");
-            }
-          }
-        } while (!hasMessages);
-        return next;
-      } catch (Exception e) {
-        log.warn("Failed to read folders due to: {}", e);
-        // throw new
-        // DataImportHandlerException(DataImportHandlerException.SEVERE,
-        // "Folder open failed", e);
-      }
-      return null;
-    }
-    
-    public void remove() {
-      throw new UnsupportedOperationException("It's read only mode...");
-    }
-    
-    private void getTopLevelFolders(Store mailBox) {
-      if (folderNames != null) topLevelFolders = Arrays.asList(folderNames
-          .split(","));
-      for (int i = 0; topLevelFolders != null && i < topLevelFolders.size(); i++) {
-        try {
-          folders.add(mailbox.getFolder(topLevelFolders.get(i)));
-        } catch (MessagingException e) {
-          // skip bad ones unless it's the last one and still no good folder
-          if (folders.isEmpty() && i == topLevelFolders.size() - 1) throw new DataImportHandlerException(
-              DataImportHandlerException.SEVERE, "Folder retrieval failed", e);
-        }
-      }
-      if (topLevelFolders == null || topLevelFolders.size() == 0) {
-        try {
-          folders.add(mailBox.getDefaultFolder());
-        } catch (MessagingException e) {
-          throw new DataImportHandlerException(
-              DataImportHandlerException.SEVERE, "Folder retrieval failed", e);
-        }
-      }
-    }
-    
-    private void getOtherUserFolders() {
-      try {
-        Folder[] ufldrs = mailbox.getUserNamespaces(null);
-        if (ufldrs != null) {
-          log.info("Found {} user namespace folders", ufldrs.length);
-          for (Folder ufldr : ufldrs)
-            folders.add(ufldr);
-        }
-      } catch (MessagingException me) {
-        log.warn("Messaging exception retrieving user namespaces: ", me);
-      }
-    }
-    
-    private void getSharedFolders() {
-      try {
-        Folder[] sfldrs = mailbox.getSharedNamespaces();
-        if (sfldrs != null) {
-          log.info("Found {} shared namespace folders", sfldrs.length);
-          for (Folder sfldr : sfldrs)
-            folders.add(sfldr);
-        }
-      } catch (MessagingException me) {
-        log.warn("Messaging exception retrieving shared namespaces: ", me);
-      }
-    }
-    
-    private boolean excludeFolder(String name) {
-      for (String s : exclude) {
-        if (name.matches(s)) return true;
-      }
-      for (String s : include) {
-        if (name.matches(s)) return false;
-      }
-      return include.size() > 0;
-    }
-  }
-  
-  class MessageIterator extends SearchTerm implements Iterator<Message> {
-    private Folder folder;
-    private Message[] messagesInCurBatch = null;
-    private int current = 0;
-    private int currentBatch = 0;
-    private int batchSize = 0;
-    private int totalInFolder = 0;
-    private boolean doBatching = true;
-    
-    public MessageIterator(Folder folder, int batchSize) {
-      super();
-      
-      try {
-        this.folder = folder;
-        this.batchSize = batchSize;
-        SearchTerm st = getSearchTerm();
-        
-        log.info("SearchTerm={}", st);
-        
-        if (st != null || folder instanceof GmailFolder) {
-          doBatching = false;
-          // Searching can still take a while even though we're only pulling
-          // envelopes, unless you're using the GMail server-side filter,
-          // which is fast
-          if (log.isInfoEnabled()) {
-            log.info("Searching folder {} for messages", folder.getName());
-          }
-          final RTimer searchTimer = new RTimer();
-
-          // If using GMail, speed up envelope processing with a server-side
-          // search for messages received on or after the fetch date (at
-          // midnight); this reduces the number of envelopes we must pull
-          // from the server before applying the precise DateTerm filter.
-          // GMail's server-side search has date granularity only, so the
-          // local filters are applied as well.
-
-          if (folder instanceof GmailFolder && fetchMailsSince != null) {
-            String afterCrit = "after:" + afterFmt.format(fetchMailsSince);
-            log.info("Added server-side gmail filter: {}", afterCrit);
-            Message[] afterMessages = folder.search(new GmailRawSearchTerm(
-                afterCrit));
-
-            if (log.isInfoEnabled()) {
-              log.info("GMail server-side filter found {} messages received {} in folder {}"
-                  , afterMessages.length, afterCrit, folder.getName());
-            }
-            
-            // now pass in the server-side filtered messages to the local filter
-            messagesInCurBatch = folder.search((st != null ? st : this), afterMessages);
-          } else {
-            messagesInCurBatch = folder.search(st);
-          }          
-          totalInFolder = messagesInCurBatch.length;
-          folder.fetch(messagesInCurBatch, fp);
-          current = 0;
-          if (log.isInfoEnabled()) {
-            log.info("Total messages : {}", totalInFolder);
-            log.info("Search criteria applied. Batching disabled. Took {} (ms)", searchTimer.getTime()); // logOk
-          }
-        } else {
-          totalInFolder = folder.getMessageCount();
-          log.info("Total messages : {}", totalInFolder);
-          getNextBatch(batchSize, folder);
-        }
-      } catch (MessagingException e) {
-        throw new DataImportHandlerException(DataImportHandlerException.SEVERE,
-            "Message retreival failed", e);
-      }
-    }
-    
-    private void getNextBatch(int batchSize, Folder folder)
-        throws MessagingException {
-      // after each batch invalidate cache
-      if (messagesInCurBatch != null) {
-        for (Message m : messagesInCurBatch) {
-          if (m instanceof IMAPMessage) ((IMAPMessage) m).invalidateHeaders();
-        }
-      }
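-      // JavaMail message numbers are 1-based: batch k covers messages
-      // [k * batchSize + 1, min((k + 1) * batchSize, totalInFolder)]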
-      int lastMsg = Math.min((currentBatch + 1) * batchSize, totalInFolder);
-      messagesInCurBatch = folder.getMessages(currentBatch * batchSize + 1,
-          lastMsg);
-      folder.fetch(messagesInCurBatch, fp);
-      current = 0;
-      currentBatch++;
-      log.info("Current Batch  : {}", currentBatch);
-      log.info("Messages in this batch  : {}", messagesInCurBatch.length);
-    }
-    
-    public boolean hasNext() {
-      boolean hasMore = current < messagesInCurBatch.length;
-      if (!hasMore && doBatching && currentBatch * batchSize < totalInFolder) {
-        // try next batch
-        try {
-          getNextBatch(batchSize, folder);
-          hasMore = current < messagesInCurBatch.length;
-        } catch (MessagingException e) {
-          throw new DataImportHandlerException(
-              DataImportHandlerException.SEVERE, "Message retreival failed", e);
-        }
-      }
-      return hasMore;
-    }
-    
-    public Message next() {
-      return hasNext() ? messagesInCurBatch[current++] : null;
-    }
-    
-    public void remove() {
-      throw new UnsupportedOperationException("It's read only mode...");
-    }
-    
-    private SearchTerm getSearchTerm() {
-      if (filters.size() == 0) return null;
-      if (filters.size() == 1) return filters.get(0).getCustomSearch(folder);
-      SearchTerm last = filters.get(0).getCustomSearch(folder);
-      for (int i = 1; i < filters.size(); i++) {
-        CustomFilter filter = filters.get(i);
-        SearchTerm st = filter.getCustomSearch(folder);
-        if (st != null) {
-          last = new AndTerm(last, st);
-        }
-      }
-      return last;
-    }
-    
-    @Override
-    public boolean match(Message message) {
-      // fallback term used when no filters are configured: accept everything
-      return true;
-    }
-  }
-
-  static class MailsSinceLastCheckFilter implements CustomFilter {
-    
-    private Date since;
-    
-    public MailsSinceLastCheckFilter(Date date) {
-      since = date;
-    }
-    
-    @SuppressWarnings("serial")
-    public SearchTerm getCustomSearch(final Folder folder) {
-      if (log.isInfoEnabled()) {
-        log.info("Building mail filter for messages in {} that occur after {}"
-            , folder.getName(), sinceDateParser.format(since));
-      }
-      return new DateTerm(ComparisonTerm.GE, since) {
-        private int matched = 0;
-        private int seen = 0;
-        
-        @Override
-        public boolean match(Message msg) {
-          boolean isMatch = false;
-          ++seen;
-          try {
-            Date msgDate = msg.getReceivedDate();
-            if (msgDate == null) msgDate = msg.getSentDate();
-            
-            if (msgDate != null && msgDate.getTime() >= since.getTime()) {
-              ++matched;
-              isMatch = true;
-            } else {
-              String msgDateStr = (msgDate != null) ? sinceDateParser.format(msgDate) : "null";
-              String sinceDateStr = (since != null) ? sinceDateParser.format(since) : "null";
-              if (log.isDebugEnabled()) {
-                log.debug("Message {} was received at [{}], since filter is [{}]"
-                    , msg.getSubject(), msgDateStr, sinceDateStr);
-              }
-            }
-          } catch (MessagingException e) {
-            log.warn("Failed to process message due to: {}", e, e);
-          }
-          
-          if (seen % 100 == 0) {
-            if (log.isInfoEnabled()) {
-              log.info("Matched {} of {} messages since: {}"
-                  , matched, seen, sinceDateParser.format(since));
-            }
-          }
-          
-          return isMatch;
-        }
-      };
-    }
-  }
-  
-  // user settings stored in member variables
-  private String user;
-  private String password;
-  private String host;
-  private String protocol;
-  
-  private String folderNames;
-  private List<String> exclude = new ArrayList<>();
-  private List<String> include = new ArrayList<>();
-  private boolean recurse;
-  
-  private int batchSize;
-  private int fetchSize;
-  private int cTimeout;
-  private int rTimeout;
-  
-  private Date fetchMailsSince;
-  private String customFilter;
-  
-  private boolean processAttachment = true;
-  private boolean includeContent = true;
-  private boolean includeOtherUserFolders = false;
-  private boolean includeSharedFolders = false;
-  
-  // holds the current state
-  private Store mailbox;
-  private boolean connected = false;
-  private FolderIterator folderIter;
-  private MessageIterator msgIter;
-  private List<CustomFilter> filters = new ArrayList<>();
-  private static FetchProfile fp = new FetchProfile();
-  
-  static {
-    fp.add(FetchProfile.Item.ENVELOPE);
-    fp.add(FetchProfile.Item.FLAGS);
-    fp.add("X-Mailer");
-  }
-  
-  // Fields To Index
-  // single valued
-  private static final String MESSAGE_ID = "messageId";
-  private static final String SUBJECT = "subject";
-  private static final String FROM = "from";
-  private static final String SENT_DATE = "sentDate";
-  private static final String XMAILER = "xMailer";
-  // multi valued
-  private static final String TO_CC_BCC = "allTo";
-  private static final String FLAGS = "flags";
-  private static final String CONTENT = "content";
-  private static final String ATTACHMENT = "attachment";
-  private static final String ATTACHMENT_NAMES = "attachmentNames";
-  // flag values
-  private static final String FLAG_NONE = "none";
-  private static final String FLAG_ANSWERED = "answered";
-  private static final String FLAG_DELETED = "deleted";
-  private static final String FLAG_DRAFT = "draft";
-  private static final String FLAG_FLAGGED = "flagged";
-  private static final String FLAG_RECENT = "recent";
-  private static final String FLAG_SEEN = "seen";
-  
-  private int getIntFromContext(String prop, int ifNull) {
-    int v = ifNull;
-    try {
-      String val = context.getEntityAttribute(prop);
-      if (val != null) {
-        val = context.replaceTokens(val);
-        v = Integer.parseInt(val);
-      }
-    } catch (NumberFormatException e) {
-      // do nothing
-    }
-    return v;
-  }
-  
-  private boolean getBoolFromContext(String prop, boolean ifNull) {
-    boolean v = ifNull;
-    String val = context.getEntityAttribute(prop);
-    if (val != null) {
-      val = context.replaceTokens(val);
-      v = Boolean.valueOf(val);
-    }
-    return v;
-  }
-  
-  private String getStringFromContext(String prop, String ifNull) {
-    String v = ifNull;
-    String val = context.getEntityAttribute(prop);
-    if (val != null) {
-      val = context.replaceTokens(val);
-      v = val;
-    }
-    return v;
-  }
-
-  @SuppressForbidden(reason = "Uses context class loader as a workaround to inject correct classloader to 3rd party libs")
-  private static <T> T withContextClassLoader(ClassLoader loader, Supplier<T> action) {
-    Thread ct = Thread.currentThread();
-    ClassLoader prev = ct.getContextClassLoader();
-    try {
-      ct.setContextClassLoader(loader);
-      return action.get();
-    } finally {
-      ct.setContextClassLoader(prev);
-    }
-  }
-  
-}
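For reference alongside the deleted MailsSinceLastCheckFilter above: the same "received on or after" cutoff can also be pushed to the server as a plain JavaMail search term. A minimal sketch, assuming javax.mail is on the classpath; the class and method names here are illustrative, not part of the original code:

```java
import java.util.Date;

import javax.mail.Folder;
import javax.mail.Message;
import javax.mail.MessagingException;
import javax.mail.search.ComparisonTerm;
import javax.mail.search.ReceivedDateTerm;

public class SinceSearchSketch {
  // Ask the server for messages received on or after the threshold.
  // IMAP SEARCH has day granularity only, which is why the processor
  // above still re-checks each message's exact timestamp locally.
  static Message[] mailsSince(Folder folder, Date since) throws MessagingException {
    return folder.search(new ReceivedDateTerm(ComparisonTerm.GE, since));
  }
}
```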
diff --git a/solr/contrib/dataimporthandler-extras/src/java/org/apache/solr/handler/dataimport/TikaEntityProcessor.java b/solr/contrib/dataimporthandler-extras/src/java/org/apache/solr/handler/dataimport/TikaEntityProcessor.java
deleted file mode 100644
index 78a53fa..0000000
--- a/solr/contrib/dataimporthandler-extras/src/java/org/apache/solr/handler/dataimport/TikaEntityProcessor.java
+++ /dev/null
@@ -1,253 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.solr.handler.dataimport;
-
-import org.apache.commons.io.IOUtils;
-import org.apache.tika.config.TikaConfig;
-import org.apache.tika.metadata.Metadata;
-import org.apache.tika.parser.AutoDetectParser;
-import org.apache.tika.parser.EmptyParser;
-import org.apache.tika.parser.ParseContext;
-import org.apache.tika.parser.Parser;
-import org.apache.tika.parser.html.HtmlMapper;
-import org.apache.tika.parser.html.IdentityHtmlMapper;
-import org.apache.tika.sax.BodyContentHandler;
-import org.apache.tika.sax.ContentHandlerDecorator;
-import org.apache.tika.sax.XHTMLContentHandler;
-import org.xml.sax.Attributes;
-import org.xml.sax.ContentHandler;
-import org.xml.sax.SAXException;
-import org.xml.sax.helpers.DefaultHandler;
-
-import javax.xml.transform.OutputKeys;
-import javax.xml.transform.TransformerConfigurationException;
-import javax.xml.transform.TransformerFactory;
-import javax.xml.transform.sax.SAXTransformerFactory;
-import javax.xml.transform.sax.TransformerHandler;
-import javax.xml.transform.stream.StreamResult;
-import java.io.File;
-import java.io.InputStream;
-import java.io.StringWriter;
-import java.io.Writer;
-import java.util.HashMap;
-import java.util.Locale;
-import java.util.Map;
-
-import static org.apache.solr.handler.dataimport.DataImportHandlerException.SEVERE;
-import static org.apache.solr.handler.dataimport.DataImportHandlerException.wrapAndThrow;
-import static org.apache.solr.handler.dataimport.DataImporter.COLUMN;
-import static org.apache.solr.handler.dataimport.XPathEntityProcessor.URL;
-/**
- * <p>An implementation of {@link EntityProcessor} which reads data from rich docs
- * using <a href="http://tika.apache.org/">Apache Tika</a>
- *
- * <p>To index latitude/longitude data that might
- * be extracted from a file's metadata, identify
- * the geo field for this information with this attribute:
- * <code>spatialMetadataField</code>
- *
- * @since solr 3.1
- */
-public class TikaEntityProcessor extends EntityProcessorBase {
-  private static final Parser EMPTY_PARSER = new EmptyParser();
-  private TikaConfig tikaConfig;
-  private String format = "text";
-  private boolean done = false;
-  private boolean extractEmbedded = false;
-  private String parser;
-  static final String AUTO_PARSER = "org.apache.tika.parser.AutoDetectParser";
-  private String htmlMapper;
-  private String spatialMetadataField;
-
-  @Override
-  public void init(Context context) {
-    super.init(context);
-    done = false;
-  }
-
-  @Override
-  protected void firstInit(Context context) {
-    super.firstInit(context);
-    // See similar code in ExtractingRequestHandler.inform
-    try {
-      String tikaConfigLoc = context.getResolvedEntityAttribute("tikaConfig");
-      if (tikaConfigLoc == null) {
-        ClassLoader classLoader = context.getSolrCore().getResourceLoader().getClassLoader();
-        try (InputStream is = classLoader.getResourceAsStream("solr-default-tika-config.xml")) {
-          tikaConfig = new TikaConfig(is);
-        }
-      } else {
-        File configFile = new File(tikaConfigLoc);
-        if (configFile.isAbsolute()) {
-          tikaConfig = new TikaConfig(configFile);
-        } else { // in conf/
-          try (InputStream is = context.getSolrCore().getResourceLoader().openResource(tikaConfigLoc)) {
-            tikaConfig = new TikaConfig(is);
-          }
-        }
-      }
-    } catch (Exception e) {
-      wrapAndThrow(SEVERE, e,"Unable to load Tika Config");
-    }
-
-    String extractEmbeddedString = context.getResolvedEntityAttribute("extractEmbedded");
-    if ("true".equals(extractEmbeddedString)) {
-      extractEmbedded = true;
-    }
-    format = context.getResolvedEntityAttribute("format");
-    if (format == null)
-      format = "text";
-    if (!"html".equals(format) && !"xml".equals(format) && !"text".equals(format) && !"none".equals(format))
-      throw new DataImportHandlerException(SEVERE, "'format' must be one of text|html|xml|none");
-
-    htmlMapper = context.getResolvedEntityAttribute("htmlMapper");
-    if (htmlMapper == null)
-      htmlMapper = "default";
-    if (!"default".equals(htmlMapper) && !"identity".equals(htmlMapper))
-      throw new DataImportHandlerException(SEVERE, "'htmlMapper', if present, must be 'default' or 'identity'");
-
-    parser = context.getResolvedEntityAttribute("parser");
-    if (parser == null) {
-      parser = AUTO_PARSER;
-    }
-
-    spatialMetadataField = context.getResolvedEntityAttribute("spatialMetadataField");
-  }
-
-  @Override
-  public Map<String, Object> nextRow() {
-    if (done) return null;
-    Map<String, Object> row = new HashMap<>();
-    @SuppressWarnings({"unchecked"})
-    DataSource<InputStream> dataSource = context.getDataSource();
-    InputStream is = dataSource.getData(context.getResolvedEntityAttribute(URL));
-    ContentHandler contentHandler = null;
-    Metadata metadata = new Metadata();
-    StringWriter sw = new StringWriter();
-    try {
-      if ("html".equals(format)) {
-        contentHandler = getHtmlHandler(sw);
-      } else if ("xml".equals(format)) {
-        contentHandler = getXmlContentHandler(sw);
-      } else if ("text".equals(format)) {
-        contentHandler = getTextContentHandler(sw);
-      } else if("none".equals(format)){
-        contentHandler = new DefaultHandler();        
-      }
-    } catch (TransformerConfigurationException e) {
-      wrapAndThrow(SEVERE, e, "Unable to create content handler");
-    }
-    Parser tikaParser;
-    if (parser.equals(AUTO_PARSER)) {
-      tikaParser = new AutoDetectParser(tikaConfig);
-    } else {
-      tikaParser = context.getSolrCore().getResourceLoader().newInstance(parser, Parser.class);
-    }
-    try {
-      ParseContext context = new ParseContext();
-      if ("identity".equals(htmlMapper)) {
-        context.set(HtmlMapper.class, IdentityHtmlMapper.INSTANCE);
-      }
-      if (extractEmbedded) {
-        context.set(Parser.class, tikaParser);
-      } else {
-        context.set(Parser.class, EMPTY_PARSER);
-      }
-      tikaParser.parse(is, contentHandler, metadata, context);
-    } catch (Exception e) {
-      if (SKIP.equals(onError)) {
-        throw new DataImportHandlerException(DataImportHandlerException.SKIP_ROW,
-            "Document skipped: " + e.getMessage());
-      }
-      wrapAndThrow(SEVERE, e, "Unable to read content");
-    } finally {
-      // close the stream even when parsing fails
-      IOUtils.closeQuietly(is);
-    }
-    for (Map<String, String> field : context.getAllEntityFields()) {
-      if (!"true".equals(field.get("meta"))) continue;
-      String col = field.get(COLUMN);
-      String s = metadata.get(col);
-      if (s != null) row.put(col, s);
-    }
-    if(!"none".equals(format) ) row.put("text", sw.toString());
-    tryToAddLatLon(metadata, row);
-    done = true;
-    return row;
-  }
-
-  private void tryToAddLatLon(Metadata metadata, Map<String, Object> row) {
-    if (spatialMetadataField == null) return;
-    String latString = metadata.get(Metadata.LATITUDE);
-    String lonString = metadata.get(Metadata.LONGITUDE);
-    if (latString != null && lonString != null) {
-      row.put(spatialMetadataField, String.format(Locale.ROOT, "%s,%s", latString, lonString));
-    }
-  }
-
-  private static ContentHandler getHtmlHandler(Writer writer)
-          throws TransformerConfigurationException {
-    SAXTransformerFactory factory = (SAXTransformerFactory)
-            TransformerFactory.newInstance();
-    TransformerHandler handler = factory.newTransformerHandler();
-    handler.getTransformer().setOutputProperty(OutputKeys.METHOD, "html");
-    handler.setResult(new StreamResult(writer));
-    return new ContentHandlerDecorator(handler) {
-      @Override
-      public void startElement(
-              String uri, String localName, String name, Attributes atts)
-              throws SAXException {
-        if (XHTMLContentHandler.XHTML.equals(uri)) {
-          uri = null;
-        }
-        if (!"head".equals(localName)) {
-          super.startElement(uri, localName, name, atts);
-        }
-      }
-
-      @Override
-      public void endElement(String uri, String localName, String name)
-              throws SAXException {
-        if (XHTMLContentHandler.XHTML.equals(uri)) {
-          uri = null;
-        }
-        if (!"head".equals(localName)) {
-          super.endElement(uri, localName, name);
-        }
-      }
-
-      @Override
-      public void startPrefixMapping(String prefix, String uri) {/*no op*/ }
-
-      @Override
-      public void endPrefixMapping(String prefix) {/*no op*/ }
-    };
-  }
-
-  private static ContentHandler getTextContentHandler(Writer writer) {
-    return new BodyContentHandler(writer);
-  }
-
-  private static ContentHandler getXmlContentHandler(Writer writer)
-          throws TransformerConfigurationException {
-    SAXTransformerFactory factory = (SAXTransformerFactory)
-            TransformerFactory.newInstance();
-    TransformerHandler handler = factory.newTransformerHandler();
-    handler.getTransformer().setOutputProperty(OutputKeys.METHOD, "xml");
-    handler.setResult(new StreamResult(writer));
-    return handler;
-  }
-
-}
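The core of the deleted nextRow() above is a standard Tika flow: pick a parser, stream the body through a ContentHandler, and collect metadata on the side. A minimal standalone sketch of that flow, assuming tika-core and tika-parsers are on the classpath; the class name and file-path argument are illustrative:

```java
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Paths;

import org.apache.tika.metadata.Metadata;
import org.apache.tika.parser.AutoDetectParser;
import org.apache.tika.parser.ParseContext;
import org.apache.tika.sax.BodyContentHandler;

public class TikaTextSketch {
  public static void main(String[] args) throws Exception {
    try (InputStream is = Files.newInputStream(Paths.get(args[0]))) {
      // -1 disables BodyContentHandler's default 100k character write limit
      BodyContentHandler handler = new BodyContentHandler(-1);
      Metadata metadata = new Metadata();
      new AutoDetectParser().parse(is, handler, metadata, new ParseContext());
      System.out.println(metadata.get("Content-Type")); // detected MIME type
      System.out.println(handler);                      // extracted body text
    }
  }
}
```

The entity processor's format switch ("text", "html", "xml", "none") simply swaps this body handler for one of the transformer-backed handlers shown above.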
diff --git a/solr/contrib/dataimporthandler-extras/src/java/org/apache/solr/handler/dataimport/package.html b/solr/contrib/dataimporthandler-extras/src/java/org/apache/solr/handler/dataimport/package.html
deleted file mode 100644
index 9a7f6f2..0000000
--- a/solr/contrib/dataimporthandler-extras/src/java/org/apache/solr/handler/dataimport/package.html
+++ /dev/null
@@ -1,23 +0,0 @@
-<!doctype html public "-//w3c//dtd html 4.0 transitional//en">
-<!--
- Licensed to the Apache Software Foundation (ASF) under one or more
- contributor license agreements.  See the NOTICE file distributed with
- this work for additional information regarding copyright ownership.
- The ASF licenses this file to You under the Apache License, Version 2.0
- (the "License"); you may not use this file except in compliance with
- the License.  You may obtain a copy of the License at
-
-     http://www.apache.org/licenses/LICENSE-2.0
-
- Unless required by applicable law or agreed to in writing, software
- distributed under the License is distributed on an "AS IS" BASIS,
- WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- See the License for the specific language governing permissions and
- limitations under the License.
--->
-<!-- not a package-info.java, because we already defined this package in core/ -->
-<html>
-<body>
-Plugins for <code>DataImportHandler</code> that have additional dependencies.
-</body>
-</html>
diff --git a/solr/contrib/dataimporthandler-extras/src/java/overview.html b/solr/contrib/dataimporthandler-extras/src/java/overview.html
deleted file mode 100644
index 5a55432..0000000
--- a/solr/contrib/dataimporthandler-extras/src/java/overview.html
+++ /dev/null
@@ -1,21 +0,0 @@
-<!--
- Licensed to the Apache Software Foundation (ASF) under one or more
- contributor license agreements.  See the NOTICE file distributed with
- this work for additional information regarding copyright ownership.
- The ASF licenses this file to You under the Apache License, Version 2.0
- (the "License"); you may not use this file except in compliance with
- the License.  You may obtain a copy of the License at
-
-     http://www.apache.org/licenses/LICENSE-2.0
-
- Unless required by applicable law or agreed to in writing, software
- distributed under the License is distributed on an "AS IS" BASIS,
- WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- See the License for the specific language governing permissions and
- limitations under the License.
--->
-<html>
-<body>
-Apache Solr Search Server: DataImportHandler Extras contrib. <b>This contrib module is deprecated as of 8.6</b>
-</body>
-</html>
diff --git a/solr/contrib/dataimporthandler-extras/src/resources/solr-default-tika-config.xml b/solr/contrib/dataimporthandler-extras/src/resources/solr-default-tika-config.xml
deleted file mode 100644
index b598d9e..0000000
--- a/solr/contrib/dataimporthandler-extras/src/resources/solr-default-tika-config.xml
+++ /dev/null
@@ -1,20 +0,0 @@
-<?xml version="1.0" encoding="UTF-8" standalone="no" ?>
-<!--
-  Licensed to the Apache Software Foundation (ASF) under one or more
-  contributor license agreements.  See the NOTICE file distributed with
-  this work for additional information regarding copyright ownership.
-  The ASF licenses this file to You under the Apache License, Version 2.0
-  (the "License"); you may not use this file except in compliance with
-  the License.  You may obtain a copy of the License at
-
-      http://www.apache.org/licenses/LICENSE-2.0
-
-  Unless required by applicable law or agreed to in writing, software
-  distributed under the License is distributed on an "AS IS" BASIS,
-  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-  See the License for the specific language governing permissions and
-  limitations under the License.
-  -->
-<properties>
-  <service-loader initializableProblemHandler="ignore"/>
-</properties>
\ No newline at end of file
diff --git a/solr/contrib/dataimporthandler-extras/src/test-files/dihextras/bad.doc b/solr/contrib/dataimporthandler-extras/src/test-files/dihextras/bad.doc
deleted file mode 100644
index 5944c24..0000000
--- a/solr/contrib/dataimporthandler-extras/src/test-files/dihextras/bad.doc
+++ /dev/null
Binary files differ
diff --git a/solr/contrib/dataimporthandler-extras/src/test-files/dihextras/solr-word.pdf b/solr/contrib/dataimporthandler-extras/src/test-files/dihextras/solr-word.pdf
deleted file mode 100644
index bd8b865..0000000
--- a/solr/contrib/dataimporthandler-extras/src/test-files/dihextras/solr-word.pdf
+++ /dev/null
Binary files differ
diff --git a/solr/contrib/dataimporthandler-extras/src/test-files/dihextras/solr/collection1/conf/dataimport-schema-no-unique-key.xml b/solr/contrib/dataimporthandler-extras/src/test-files/dihextras/solr/collection1/conf/dataimport-schema-no-unique-key.xml
deleted file mode 100644
index 793482a..0000000
--- a/solr/contrib/dataimporthandler-extras/src/test-files/dihextras/solr/collection1/conf/dataimport-schema-no-unique-key.xml
+++ /dev/null
@@ -1,205 +0,0 @@
-<?xml version="1.0" encoding="UTF-8" ?>
-<!--
- Licensed to the Apache Software Foundation (ASF) under one or more
- contributor license agreements.  See the NOTICE file distributed with
- this work for additional information regarding copyright ownership.
- The ASF licenses this file to You under the Apache License, Version 2.0
- (the "License"); you may not use this file except in compliance with
- the License.  You may obtain a copy of the License at
-
-     http://www.apache.org/licenses/LICENSE-2.0
-
- Unless required by applicable law or agreed to in writing, software
- distributed under the License is distributed on an "AS IS" BASIS,
- WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- See the License for the specific language governing permissions and
- limitations under the License.
--->
-
-<!--  
- This is the Solr schema file. This file should be named "schema.xml" and
- should be in the conf directory under the solr home
- (i.e. ./solr/conf/schema.xml by default) 
- or located where the classloader for the Solr webapp can find it.
-
- This example schema is the recommended starting point for users.
- It should be kept correct and concise, usable out-of-the-box.
-
- For more information, on how to customize this file, please see
- http://wiki.apache.org/solr/SchemaXml
--->
-
-<schema name="test" version="1.2">
-  <!-- attribute "name" is the name of this schema and is only used for display purposes.
-       Applications should change this to reflect the nature of the search collection.
-       version="1.1" is Solr's version number for the schema syntax and semantics.  It should
-       not normally be changed by applications.
-       1.0: multiValued attribute did not exist, all fields are multiValued by nature
-       1.1: multiValued attribute introduced, false by default -->
-
-
-  <!-- field type definitions. The "name" attribute is
-     just a label to be used by field definitions.  The "class"
-     attribute and any other attributes determine the real
-     behavior of the fieldType.
-       Class names starting with "solr" refer to java classes in the
-     org.apache.solr.analysis package.
-  -->
-
-  <!-- The StrField type is not analyzed, but indexed/stored verbatim.  
-     - StrField and TextField support an optional compressThreshold which
-     limits compression (if enabled in the derived fields) to values which
-     exceed a certain size (in characters).
-  -->
-  <fieldType name="string" class="solr.StrField" sortMissingLast="true" omitNorms="true"/>
-
-  <!-- boolean type: "true" or "false" -->
-  <fieldType name="boolean" class="solr.BoolField" sortMissingLast="true" omitNorms="true"/>
-
-  <!-- The optional sortMissingLast and sortMissingFirst attributes are
-       currently supported on types that are sorted internally as strings.
-     - If sortMissingLast="true", then a sort on this field will cause documents
-       without the field to come after documents with the field,
-       regardless of the requested sort order (asc or desc).
-     - If sortMissingFirst="true", then a sort on this field will cause documents
-       without the field to come before documents with the field,
-       regardless of the requested sort order.
-     - If sortMissingLast="false" and sortMissingFirst="false" (the default),
-       then default lucene sorting will be used which places docs without the
-       field first in an ascending sort and last in a descending sort.
-  -->
-
-
-  <!--
-    Default numeric field types. For faster range queries, consider the tint/tfloat/tlong/tdouble types.
-  -->
-  <fieldType name="int" class="${solr.tests.IntegerFieldType}" docValues="${solr.tests.numeric.dv}" precisionStep="0" positionIncrementGap="0"/>
-  <fieldType name="float" class="${solr.tests.FloatFieldType}" docValues="${solr.tests.numeric.dv}" precisionStep="0" positionIncrementGap="0"/>
-  <fieldType name="long" class="${solr.tests.LongFieldType}" docValues="${solr.tests.numeric.dv}" precisionStep="0" positionIncrementGap="0"/>
-  <fieldType name="double" class="${solr.tests.DoubleFieldType}" docValues="${solr.tests.numeric.dv}" precisionStep="0" positionIncrementGap="0"/>
-  <fieldType name="latLon" class="solr.LatLonType" subFieldType="double"/>
-
-
-  <!--
-   Numeric field types that index each value at various levels of precision
-   to accelerate range queries when the number of values between the range
-   endpoints is large. See the javadoc for NumericRangeQuery for internal
-   implementation details.
-
-   Smaller precisionStep values (specified in bits) will lead to more tokens
-   indexed per value, slightly larger index size, and faster range queries.
-   A precisionStep of 0 disables indexing at different precision levels.
-  -->
-  <fieldType name="tint" class="${solr.tests.IntegerFieldType}" docValues="${solr.tests.numeric.dv}" precisionStep="8" positionIncrementGap="0"/>
-  <fieldType name="tfloat" class="${solr.tests.FloatFieldType}" docValues="${solr.tests.numeric.dv}" precisionStep="8" positionIncrementGap="0"/>
-  <fieldType name="tlong" class="${solr.tests.LongFieldType}" docValues="${solr.tests.numeric.dv}" precisionStep="8" positionIncrementGap="0"/>
-  <fieldType name="tdouble" class="${solr.tests.DoubleFieldType}" docValues="${solr.tests.numeric.dv}" precisionStep="8" positionIncrementGap="0"/>
-
-
-  <!-- The format for this date field is of the form 1995-12-31T23:59:59Z, and
-       is a more restricted form of the canonical representation of dateTime
-       http://www.w3.org/TR/xmlschema-2/#dateTime    
-       The trailing "Z" designates UTC time and is mandatory.
-       Optional fractional seconds are allowed: 1995-12-31T23:59:59.999Z
-       All other components are mandatory.
-
-       Expressions can also be used to denote calculations that should be
-       performed relative to "NOW" to determine the value, ie...
-
-             NOW/HOUR
-                ... Round to the start of the current hour
-             NOW-1DAY
-                ... Exactly 1 day prior to now
-             NOW/DAY+6MONTHS+3DAYS
-                ... 6 months and 3 days in the future from the start of
-                    the current day
-                    
-       Consult the TrieDateField javadocs for more information.
-    -->
-  <fieldType name="date" class="${solr.tests.DateFieldType}" docValues="${solr.tests.numeric.dv}" sortMissingLast="true" omitNorms="true"/>
-
-
-  <!-- The "RandomSortField" is not used to store or search any
-       data.  You can declare fields of this type it in your schema
-       to generate psuedo-random orderings of your docs for sorting 
-       purposes.  The ordering is generated based on the field name 
-       and the version of the index, As long as the index version
-       remains unchanged, and the same field name is reused,
-       the ordering of the docs will be consistent.  
-       If you want differend psuedo-random orderings of documents,
-       for the same version of the index, use a dynamicField and
-       change the name
-   -->
-  <fieldType name="random" class="solr.RandomSortField" indexed="true"/>
-
-  <!-- solr.TextField allows the specification of custom text analyzers
-       specified as a tokenizer and a list of token filters. Different
-       analyzers may be specified for indexing and querying.
-
-       The optional positionIncrementGap puts space between multiple fields of
-       this type on the same document, with the purpose of preventing false phrase
-       matching across fields.
-
-       For more info on customizing your analyzer chain, please see
-       http://wiki.apache.org/solr/AnalyzersTokenizersTokenFilters
-   -->
-
-  <!-- One can also specify an existing Analyzer class that has a
-       default constructor via the class attribute on the analyzer element
-  <fieldType name="text_greek" class="solr.TextField">
-    <analyzer class="org.apache.lucene.analysis.el.GreekAnalyzer"/>
-  </fieldType>
-  -->
-
-  <!-- A text field that only splits on whitespace for exact matching of words -->
-  <fieldType name="text_ws" class="solr.TextField" positionIncrementGap="100">
-    <analyzer>
-      <tokenizer class="solr.MockTokenizerFactory"/>
-    </analyzer>
-  </fieldType>
-
-  <!-- A text field that uses WordDelimiterGraphFilter to enable splitting and matching of
-      words on case-change, alpha numeric boundaries, and non-alphanumeric chars,
-      so that a query of "wifi" or "wi fi" could match a document containing "Wi-Fi".
-      Synonyms and stopwords are customized by external files, and stemming is enabled.
-      Duplicate tokens at the same position (which may result from Stemmed Synonyms or
-      WordDelim parts) are removed.
-      -->
-  <fieldType name="text" class="solr.TextField" positionIncrementGap="100">
-    <analyzer type="index">
-      <tokenizer class="solr.MockTokenizerFactory"/>
-      <!-- in this example, we will only use synonyms at query time
-      <filter class="solr.SynonymGraphFilterFactory" synonyms="index_synonyms.txt" ignoreCase="true" expand="false"/>
-      -->
-      <!--<filter class="solr.StopFilterFactory" ignoreCase="true" words="stopwords.txt"/>-->
-      <filter class="solr.WordDelimiterGraphFilterFactory" generateWordParts="1" generateNumberParts="1" catenateWords="1"
-              catenateNumbers="1" catenateAll="0" splitOnCaseChange="1"/>
-      <filter class="solr.LowerCaseFilterFactory"/>
-      <!--<filter class="solr.KeywordMarkerFilterFactory" protected="protwords.txt"/>
-      <filter class="solr.PorterStemFilterFactory"/>-->
-      <filter class="solr.RemoveDuplicatesTokenFilterFactory"/>
-      <filter class="solr.FlattenGraphFilterFactory"/>
-    </analyzer>
-    <analyzer type="query">
-      <tokenizer class="solr.MockTokenizerFactory"/>
-      <!--<filter class="solr.SynonymGraphFilterFactory" synonyms="synonyms.txt" ignoreCase="true" expand="true"/>-->
-      <!--<filter class="solr.StopFilterFactory" ignoreCase="true" words="stopwords.txt"/>-->
-      <filter class="solr.WordDelimiterGraphFilterFactory" generateWordParts="1" generateNumberParts="1" catenateWords="0"
-              catenateNumbers="0" catenateAll="0" splitOnCaseChange="1"/>
-      <filter class="solr.LowerCaseFilterFactory"/>
-      <!--<filter class="solr.KeywordMarkerFilterFactory" protected="protwords.txt"/>
-      <filter class="solr.PorterStemFilterFactory"/>-->
-      <filter class="solr.RemoveDuplicatesTokenFilterFactory"/>
-    </analyzer>
-  </fieldType>
-  <!-- since fields of this type are by default not stored or indexed, any data added to 
-       them will be ignored outright 
-   -->
-  <fieldType name="ignored" stored="false" indexed="false" class="solr.StrField"/>
-
-  <field name="title" type="string" indexed="true" stored="true"/>
-  <field name="author" type="string" indexed="true" stored="true"/>
-  <field name="text" type="text" indexed="true" stored="true"/>
-  <field name="foo_i" type="int" indexed="true" stored="false"/>
-  <field name="home" type="latLon" indexed="true" stored="true"/>
-</schema>
diff --git a/solr/contrib/dataimporthandler-extras/src/test-files/dihextras/solr/collection1/conf/dataimport-solrconfig.xml b/solr/contrib/dataimporthandler-extras/src/test-files/dihextras/solr/collection1/conf/dataimport-solrconfig.xml
deleted file mode 100644
index f9f5304..0000000
--- a/solr/contrib/dataimporthandler-extras/src/test-files/dihextras/solr/collection1/conf/dataimport-solrconfig.xml
+++ /dev/null
@@ -1,277 +0,0 @@
-<?xml version="1.0" encoding="UTF-8" ?>
-<!--
- Licensed to the Apache Software Foundation (ASF) under one or more
- contributor license agreements.  See the NOTICE file distributed with
- this work for additional information regarding copyright ownership.
- The ASF licenses this file to You under the Apache License, Version 2.0
- (the "License"); you may not use this file except in compliance with
- the License.  You may obtain a copy of the License at
-
-     http://www.apache.org/licenses/LICENSE-2.0
-
- Unless required by applicable law or agreed to in writing, software
- distributed under the License is distributed on an "AS IS" BASIS,
- WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- See the License for the specific language governing permissions and
- limitations under the License.
--->
-
-<config>
-  <luceneMatchVersion>${tests.luceneMatchVersion:LATEST}</luceneMatchVersion>
-  <indexConfig>
-    <useCompoundFile>${useCompoundFile:false}</useCompoundFile>
-  </indexConfig>
-
-  <!-- Used to specify an alternate directory to hold all index data
-       other than the default ./data under the Solr home.
-       If replication is in use, this should match the replication configuration. -->
-       <dataDir>${solr.data.dir:}</dataDir>
-
-  <directoryFactory name="DirectoryFactory" class="${solr.directoryFactory:solr.RAMDirectoryFactory}"/>
-  <schemaFactory class="ClassicIndexSchemaFactory"/>
-
-  <!-- the default high-performance update handler -->
-  <updateHandler class="solr.DirectUpdateHandler2">
-
-    <!-- A prefix of "solr." for class names is an alias that
-         causes solr to search appropriate packages, including
-         org.apache.solr.(search|update|request|core|analysis)
-     -->
-
-    <!-- Limit the number of deletions Solr will buffer during doc updating.
-        
-        Setting this lower can help bound memory use during indexing.
-    -->
-    <maxPendingDeletes>100000</maxPendingDeletes>
-
-  </updateHandler>
-
-
-  <query>
-    <!-- Maximum number of clauses in a boolean query... can affect
-        range or prefix queries that expand to big boolean
-        queries.  An exception is thrown if exceeded.  -->
-    <maxBooleanClauses>${solr.max.booleanClauses:1024}</maxBooleanClauses>
-
-    
-    <!-- Cache used by SolrIndexSearcher for filters (DocSets),
-         unordered sets of *all* documents that match a query.
-         When a new searcher is opened, its caches may be prepopulated
-         or "autowarmed" using data from caches in the old searcher.
-         autowarmCount is the number of items to prepopulate.  For CaffeineCache,
-         the autowarmed items will be the most recently accessed items.
-       Parameters:
-         class - the SolrCache implementation (currently only CaffeineCache)
-         size - the maximum number of entries in the cache
-         initialSize - the initial capacity (number of entries) of
-           the cache.  (see java.util.HashMap)
-         autowarmCount - the number of entries to prepopulate from
-           an old cache.
-         -->
-    <filterCache
-      class="solr.CaffeineCache"
-      size="512"
-      initialSize="512"
-      autowarmCount="256"/>
-
-   <!-- queryResultCache caches results of searches - ordered lists of
-         document ids (DocList) based on a query, a sort, and the range
-         of documents requested.  -->
-    <queryResultCache
-      class="solr.CaffeineCache"
-      size="512"
-      initialSize="512"
-      autowarmCount="256"/>
-
-  <!-- documentCache caches Lucene Document objects (the stored fields for each document).
-       Since Lucene internal document ids are transient, this cache will not be autowarmed.  -->
-    <documentCache
-      class="solr.CaffeineCache"
-      size="512"
-      initialSize="512"
-      autowarmCount="0"/>
-
-    <!-- If true, stored fields that are not requested will be loaded lazily.
-
-    This can result in a significant speed improvement if the usual case is to
-    not load all stored fields, especially if the skipped fields are large compressed
-    text fields.
-    -->
-    <enableLazyFieldLoading>true</enableLazyFieldLoading>
-
-    <!-- Example of a generic cache.  These caches may be accessed by name
-         through SolrIndexSearcher.getCache(),cacheLookup(), and cacheInsert().
-         The purpose is to enable easy caching of user/application level data.
-         The regenerator argument should be specified as an implementation
-         of solr.search.CacheRegenerator if autowarming is desired.  -->
-    <!--
-    <cache name="myUserCache"
-      class="solr.CaffeineCache"
-      size="4096"
-      initialSize="1024"
-      autowarmCount="1024"
-      regenerator="org.mycompany.mypackage.MyRegenerator"
-      />
-    -->
-
-   <!-- An optimization that attempts to use a filter to satisfy a search.
-         If the requested sort does not include score, then the filterCache
-         will be checked for a filter matching the query. If found, the filter
-         will be used as the source of document ids, and then the sort will be
-         applied to that.
-    <useFilterForSortedQuery>true</useFilterForSortedQuery>
-   -->
-
-   <!-- An optimization for use with the queryResultCache.  When a search
-         is requested, a superset of the requested number of document ids
-         are collected.  For example, if a search for a particular query
-        requests matching documents 10 through 19, and queryResultWindowSize is 50,
-         then documents 0 through 49 will be collected and cached.  Any further
-         requests in that range can be satisfied via the cache.  -->
-    <queryResultWindowSize>50</queryResultWindowSize>
-    
-    <!-- Maximum number of documents to cache for any entry in the
-         queryResultCache. -->
-    <queryResultMaxDocsCached>200</queryResultMaxDocsCached>
-
-    <!-- a newSearcher event is fired whenever a new searcher is being prepared
-         and there is a current searcher handling requests (aka registered). -->
-    <!-- QuerySenderListener takes an array of NamedList and executes a
-         local query request for each NamedList in sequence. -->
-    <!--<listener event="newSearcher" class="solr.QuerySenderListener">-->
-      <!--<arr name="queries">-->
-        <!--<lst> <str name="q">solr</str> <str name="start">0</str> <str name="rows">10</str> </lst>-->
-        <!--<lst> <str name="q">rocks</str> <str name="start">0</str> <str name="rows">10</str> </lst>-->
-        <!--<lst><str name="q">static newSearcher warming query from solrconfig.xml</str></lst>-->
-      <!--</arr>-->
-    <!--</listener>-->
-
-    <!-- a firstSearcher event is fired whenever a new searcher is being
-         prepared but there is no current registered searcher to handle
-         requests or to gain autowarming data from. -->
-    <!--<listener event="firstSearcher" class="solr.QuerySenderListener">-->
-      <!--<arr name="queries">-->
-      <!--</arr>-->
-    <!--</listener>-->
-
-    <!-- If a search request comes in and there is no current registered searcher,
-         then immediately register the still warming searcher and use it.  If
-         "false" then all requests will block until the first searcher is done
-         warming. -->
-    <useColdSearcher>false</useColdSearcher>
-
-    <!-- Maximum number of searchers that may be warming in the background
-      concurrently.  An error is returned if this limit is exceeded. Recommend
-      1-2 for read-only slaves, higher for masters w/o cache warming. -->
-    <maxWarmingSearchers>4</maxWarmingSearchers>
-
-  </query>
-
-  <requestDispatcher>
-    <!--Make sure your system has some authentication before enabling remote streaming!
-    <requestParsers enableRemoteStreaming="false" multipartUploadLimitInKB="-1" />
-    -->
-        
-    <!-- Set HTTP caching related parameters (for proxy caches and clients).
-          
-         To get the behaviour of Solr 1.2 (ie: no caching related headers)
-         use the never304="true" option and do not specify a value for
-         <cacheControl>
-    -->
-    <httpCaching never304="true">
-    <!--httpCaching lastModifiedFrom="openTime"
-                 etagSeed="Solr"-->
-       <!-- lastModFrom="openTime" is the default, the Last-Modified value
-            (and validation against If-Modified-Since requests) will all be
-            relative to when the current Searcher was opened.
-            You can change it to lastModFrom="dirLastMod" if you want the
-            value to exactly correspond to when the physical index was last
-            modified.
-               
-            etagSeed="..." is an option you can change to force the ETag
-            header (and validation against If-None-Match requests) to be
-            different even if the index has not changed (ie: when making
-            significant changes to your config file)
-
-            lastModifiedFrom and etagSeed are both ignored if you use the
-            never304="true" option.
-       -->
-       <!-- If you include a <cacheControl> directive, it will be used to
-            generate a Cache-Control header, as well as an Expires header
-            if the value contains "max-age="
-               
-            By default, no Cache-Control header is generated.
-
-            You can use the <cacheControl> option even if you have set
-            never304="true"
-       -->
-       <!-- <cacheControl>max-age=30, public</cacheControl> -->
-    </httpCaching>
-  </requestDispatcher>
-  
-  <requestHandler name="/select" class="solr.SearchHandler">
-    <!-- default values for query parameters -->
-     <lst name="defaults">
-       <str name="echoParams">explicit</str>
-       <!-- 
-       <int name="rows">10</int>
-       <str name="fl">*</str>
-       <str name="version">2.1</str>
-        -->
-     </lst>
-  </requestHandler>
-  
-  <requestHandler name="/dataimport" class="org.apache.solr.handler.dataimport.DataImportHandler">
-  </requestHandler>
-    
-  <!--
-   
-   Search components are registered to SolrCore and used by Search Handlers
-   
-   By default, the following components are available:
-    
-   <searchComponent name="query"     class="org.apache.solr.handler.component.QueryComponent" />
-   <searchComponent name="facet"     class="org.apache.solr.handler.component.FacetComponent" />
-   <searchComponent name="mlt"       class="org.apache.solr.handler.component.MoreLikeThisComponent" />
-   <searchComponent name="highlight" class="org.apache.solr.handler.component.HighlightComponent" />
-   <searchComponent name="debug"     class="org.apache.solr.handler.component.DebugComponent" />
-  
-   If you register a searchComponent to one of the standard names, that will be used instead.
-  
-   -->
- 
-  <requestHandler name="/search" class="org.apache.solr.handler.component.SearchHandler">
-    <lst name="defaults">
-      <str name="echoParams">explicit</str>
-    </lst>
-    <!--
-    By default, this will register the following components:
-    
-    <arr name="components">
-      <str>query</str>
-      <str>facet</str>
-      <str>mlt</str>
-      <str>highlight</str>
-      <str>debug</str>
-    </arr>
-    
-    To insert handlers before or after the 'standard' components, use:
-    
-    <arr name="first-components">
-      <str>first</str>
-    </arr>
-    
-    <arr name="last-components">
-      <str>last</str>
-    </arr>
-    
-    -->
-  </requestHandler>
-  
-  <!-- config for the admin interface -->
-  <admin>
-    <defaultQuery>*:*</defaultQuery>
-  </admin>
-
-</config>
-
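The queryResultWindowSize setting removed above is easiest to see from the client side: asking for rows 10-19 makes Solr collect and cache rows 0-49, so later pages inside that window are served from the queryResultCache. A minimal SolrJ sketch of that access pattern (assuming a locally running core named "techproducts"; SolrJ itself is not part of this change):

import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.HttpSolrClient;

public class WindowedPaging {
  public static void main(String[] args) throws Exception {
    try (HttpSolrClient client =
             new HttpSolrClient.Builder("http://localhost:8983/solr/techproducts").build()) {
      SolrQuery q = new SolrQuery("*:*");
      // First page: with queryResultWindowSize=50, Solr gathers and caches docs 0..49.
      client.query(q.setStart(10).setRows(10));
      // Second page (20..29) falls inside the cached window, so it is a cache hit.
      client.query(q.setStart(20));
    }
  }
}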
diff --git a/solr/contrib/dataimporthandler-extras/src/test-files/dihextras/structured.html b/solr/contrib/dataimporthandler-extras/src/test-files/dihextras/structured.html
deleted file mode 100644
index 1037481..0000000
--- a/solr/contrib/dataimporthandler-extras/src/test-files/dihextras/structured.html
+++ /dev/null
@@ -1,29 +0,0 @@
-<!DOCTYPE html>
-<!--
-  Licensed to the Apache Software Foundation (ASF) under one or more
-  contributor license agreements.  See the NOTICE file distributed with
-  this work for additional information regarding copyright ownership.
-  The ASF licenses this file to You under the Apache License, Version 2.0
-  (the "License"); you may not use this file except in compliance with
-  the License.  You may obtain a copy of the License at
-
-      http://www.apache.org/licenses/LICENSE-2.0
-
-  Unless required by applicable law or agreed to in writing, software
-  distributed under the License is distributed on an "AS IS" BASIS,
-  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-  See the License for the specific language governing permissions and
-  limitations under the License.
-  -->
-
-<html>
-<head>
-    <title>Title in the header</title>
-</head>
-<body>
-<h1>H1 Header</h1>
-<div>Basic div</div>
-<div class="classAttribute">Div with attribute</div>
-</body>
-</html>
-
diff --git a/solr/contrib/dataimporthandler-extras/src/test-files/dihextras/test_jpeg.jpg b/solr/contrib/dataimporthandler-extras/src/test-files/dihextras/test_jpeg.jpg
deleted file mode 100644
index 10d1ebb..0000000
--- a/solr/contrib/dataimporthandler-extras/src/test-files/dihextras/test_jpeg.jpg
+++ /dev/null
Binary files differ
diff --git a/solr/contrib/dataimporthandler-extras/src/test-files/dihextras/test_recursive_embedded.docx b/solr/contrib/dataimporthandler-extras/src/test-files/dihextras/test_recursive_embedded.docx
deleted file mode 100644
index cd562cb..0000000
--- a/solr/contrib/dataimporthandler-extras/src/test-files/dihextras/test_recursive_embedded.docx
+++ /dev/null
Binary files differ
diff --git a/solr/contrib/dataimporthandler-extras/src/test-files/dihextras/test_vsdx.vsdx b/solr/contrib/dataimporthandler-extras/src/test-files/dihextras/test_vsdx.vsdx
deleted file mode 100644
index 659ecdd..0000000
--- a/solr/contrib/dataimporthandler-extras/src/test-files/dihextras/test_vsdx.vsdx
+++ /dev/null
Binary files differ
diff --git a/solr/contrib/dataimporthandler-extras/src/test/org/apache/solr/handler/dataimport/TestMailEntityProcessor.java b/solr/contrib/dataimporthandler-extras/src/test/org/apache/solr/handler/dataimport/TestMailEntityProcessor.java
deleted file mode 100644
index 027a8d7..0000000
--- a/solr/contrib/dataimporthandler-extras/src/test/org/apache/solr/handler/dataimport/TestMailEntityProcessor.java
+++ /dev/null
@@ -1,199 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.solr.handler.dataimport;
-
-import org.apache.solr.common.SolrInputDocument;
-import org.junit.Ignore;
-import org.junit.Test;
-
-import java.text.ParseException;
-import java.util.ArrayList;
-import java.util.HashMap;
-import java.util.List;
-import java.util.Map;
-
-// Test mailbox is like this: foldername(mailcount)
-// top1(2) -> child11(6)
-//         -> child12(0)
-// top2(2) -> child21(1)
-//                 -> grandchild211(2)
-//                 -> grandchild212(1)
-//         -> child22(2)
-
-/**
- * Test for MailEntityProcessor. The tests are marked as ignored because we'd need a mail server (real or mocked) for
- * these to work.
- *
- * TODO: Find a way to make the tests actually test code
- *
- *
- * @see org.apache.solr.handler.dataimport.MailEntityProcessor
- * @since solr 1.4
- */
-@Ignore("Needs a Mock Mail Server to work")
-public class TestMailEntityProcessor extends AbstractDataImportHandlerTestCase {
-
-  // Credentials
-  private static final String user = "user";
-  private static final String password = "password";
-  private static final String host = "host";
-  private static final String protocol = "imaps";
-
-  private static Map<String, String> paramMap = new HashMap<>();
-
-  @Test
-  @Ignore("Needs a Mock Mail Server to work")
-  public void testConnection() {
-    // also tests recurse = false and default settings
-    paramMap.put("folders", "top2");
-    paramMap.put("recurse", "false");
-    paramMap.put("processAttachement", "false");
-    DataImporter di = new DataImporter();
-    di.loadAndInit(getConfigFromMap(paramMap));
-    @SuppressWarnings({"unchecked"})
-    RequestInfo rp = new RequestInfo(null, createMap("command", "full-import"), null);
-    SolrWriterImpl swi = new SolrWriterImpl();
-    di.runCmd(rp, swi);
-    assertEquals("top2 did not return 2 messages", swi.docs.size(), 2);
-  }
-
-  @Test
-  @Ignore("Needs a Mock Mail Server to work")
-  public void testRecursion() {
-    paramMap.put("folders", "top2");
-    paramMap.put("recurse", "true");
-    paramMap.put("processAttachement", "false");
-    DataImporter di = new DataImporter();
-    di.loadAndInit(getConfigFromMap(paramMap));
-    @SuppressWarnings({"unchecked"})
-    RequestInfo rp = new RequestInfo(null, createMap("command", "full-import"), null);
-    SolrWriterImpl swi = new SolrWriterImpl();
-    di.runCmd(rp, swi);
-    assertEquals("top2 and its children did not return 8 messages", swi.docs.size(), 8);
-  }
-
-  @Test
-  @Ignore("Needs a Mock Mail Server to work")
-  public void testExclude() {
-    paramMap.put("folders", "top2");
-    paramMap.put("recurse", "true");
-    paramMap.put("processAttachement", "false");
-    paramMap.put("exclude", ".*grandchild.*");
-    DataImporter di = new DataImporter();
-    di.loadAndInit(getConfigFromMap(paramMap));
-    @SuppressWarnings({"unchecked"})
-    RequestInfo rp = new RequestInfo(null, createMap("command", "full-import"), null);
-    SolrWriterImpl swi = new SolrWriterImpl();
-    di.runCmd(rp, swi);
-    assertEquals("top2 and its direct children did not return 5 messages", swi.docs.size(), 5);
-  }
-
-  @Test
-  @Ignore("Needs a Mock Mail Server to work")
-  public void testInclude() {
-    paramMap.put("folders", "top2");
-    paramMap.put("recurse", "true");
-    paramMap.put("processAttachement", "false");
-    paramMap.put("include", ".*grandchild.*");
-    DataImporter di = new DataImporter();
-    di.loadAndInit(getConfigFromMap(paramMap));
-    @SuppressWarnings({"unchecked"})
-    RequestInfo rp = new RequestInfo(null, createMap("command", "full-import"), null);
-    SolrWriterImpl swi = new SolrWriterImpl();
-    di.runCmd(rp, swi);
-    assertEquals("top2 and its direct children did not return 3 messages", swi.docs.size(), 3);
-  }
-
-  @Test
-  @Ignore("Needs a Mock Mail Server to work")
-  public void testIncludeAndExclude() {
-    paramMap.put("folders", "top1,top2");
-    paramMap.put("recurse", "true");
-    paramMap.put("processAttachement", "false");
-    paramMap.put("exclude", ".*top1.*");
-    paramMap.put("include", ".*grandchild.*");
-    DataImporter di = new DataImporter();
-    di.loadAndInit(getConfigFromMap(paramMap));
-    @SuppressWarnings({"unchecked"})
-    RequestInfo rp = new RequestInfo(null, createMap("command", "full-import"), null);
-    SolrWriterImpl swi = new SolrWriterImpl();
-    di.runCmd(rp, swi);
-    assertEquals("top2 and its direct children did not return 3 messages", swi.docs.size(), 3);
-  }
-
-  @Test
-  @Ignore("Needs a Mock Mail Server to work")
-  @SuppressWarnings({"unchecked"})
-  public void testFetchTimeSince() throws ParseException {
-    paramMap.put("folders", "top1/child11");
-    paramMap.put("recurse", "true");
-    paramMap.put("processAttachement", "false");
-    paramMap.put("fetchMailsSince", "2008-12-26 00:00:00");
-    DataImporter di = new DataImporter();
-    di.loadAndInit(getConfigFromMap(paramMap));
-    RequestInfo rp = new RequestInfo(null, createMap("command", "full-import"), null);
-    SolrWriterImpl swi = new SolrWriterImpl();
-    di.runCmd(rp, swi);
-    assertEquals("top1/child11 did not return 3 messages", swi.docs.size(), 3);
-  }
-
-  private String getConfigFromMap(Map<String, String> params) {
-    String conf =
-            "<dataConfig>" +
-                    "<document>" +
-                    "<entity processor=\"org.apache.solr.handler.dataimport.MailEntityProcessor\" " +
-                    "someconfig" +
-                    "/>" +
-                    "</document>" +
-                    "</dataConfig>";
-    params.put("user", user);
-    params.put("password", password);
-    params.put("host", host);
-    params.put("protocol", protocol);
-    StringBuilder attribs = new StringBuilder("");
-    for (String key : params.keySet())
-      attribs.append(" ").append(key).append("=" + "\"").append(params.get(key)).append("\"");
-    attribs.append(" ");
-    return conf.replace("someconfig", attribs.toString());
-  }
-
-  static class SolrWriterImpl extends SolrWriter {
-    List<SolrInputDocument> docs = new ArrayList<>();
-    Boolean deleteAllCalled;
-    Boolean commitCalled;
-
-    public SolrWriterImpl() {
-      super(null, null);
-    }
-
-    @Override
-    public boolean upload(SolrInputDocument doc) {
-      return docs.add(doc);
-    }
-
-
-    @Override
-    public void doDeleteAll() {
-      deleteAllCalled = Boolean.TRUE;
-    }
-
-    @Override
-    public void commit(boolean b) {
-      commitCalled = Boolean.TRUE;
-    }
-  }
-}
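The expected counts in the assertions above follow directly from the mailbox layout sketched at the top of the file: recursing into top2 visits top2(2) + child21(1) + grandchild211(2) + grandchild212(1) + child22(2) = 8 messages; excluding .*grandchild.* drops the two grandchild folders, leaving 2 + 1 + 2 = 5; and including only .*grandchild.* keeps just grandchild211(2) + grandchild212(1) = 3.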
diff --git a/solr/contrib/dataimporthandler-extras/src/test/org/apache/solr/handler/dataimport/TestTikaEntityProcessor.java b/solr/contrib/dataimporthandler-extras/src/test/org/apache/solr/handler/dataimport/TestTikaEntityProcessor.java
deleted file mode 100644
index 05acfca..0000000
--- a/solr/contrib/dataimporthandler-extras/src/test/org/apache/solr/handler/dataimport/TestTikaEntityProcessor.java
+++ /dev/null
@@ -1,221 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.solr.handler.dataimport;
-
-import org.junit.BeforeClass;
-import org.junit.Test;
-
-import java.util.Locale;
-
-/**Testcase for TikaEntityProcessor
- *
- * @since solr 3.1
- */
-public class TestTikaEntityProcessor extends AbstractDataImportHandlerTestCase {
-  private String conf =
-  "<dataConfig>" +
-  "  <dataSource type=\"BinFileDataSource\"/>" +
-  "  <document>" +
-  "    <entity name=\"Tika\" processor=\"TikaEntityProcessor\" url=\"" + getFile("dihextras/solr-word.pdf").getAbsolutePath() + "\" >" +
-  "      <field column=\"Author\" meta=\"true\" name=\"author\"/>" +
-  "      <field column=\"title\" meta=\"true\" name=\"title\"/>" +
-  "      <field column=\"text\"/>" +
-  "     </entity>" +
-  "  </document>" +
-  "</dataConfig>";
-
-  private String skipOnErrConf =
-      "<dataConfig>" +
-          "  <dataSource type=\"BinFileDataSource\"/>" +
-          "  <document>" +
-          "    <entity name=\"Tika\" onError=\"skip\"  processor=\"TikaEntityProcessor\" url=\"" + getFile("dihextras/bad.doc").getAbsolutePath() + "\" >" +
-          "<field column=\"content\" name=\"text\"/>" +
-          " </entity>" +
-          " <entity name=\"Tika\" processor=\"TikaEntityProcessor\" url=\"" + getFile("dihextras/solr-word.pdf").getAbsolutePath() + "\" >" +
-          "      <field column=\"text\"/>" +
-          "</entity>" +
-          "  </document>" +
-          "</dataConfig>";
-
-  private String spatialConf =
-      "<dataConfig>" +
-          "  <dataSource type=\"BinFileDataSource\"/>" +
-          "  <document>" +
-          "    <entity name=\"Tika\" processor=\"TikaEntityProcessor\" url=\"" +
-          getFile("dihextras/test_jpeg.jpg").getAbsolutePath() + "\" spatialMetadataField=\"home\">" +
-          "      <field column=\"text\"/>" +
-          "     </entity>" +
-          "  </document>" +
-          "</dataConfig>";
-
-  private String vsdxConf =
-      "<dataConfig>" +
-          "  <dataSource type=\"BinFileDataSource\"/>" +
-          "  <document>" +
-          "    <entity name=\"Tika\" processor=\"TikaEntityProcessor\" url=\"" + getFile("dihextras/test_vsdx.vsdx").getAbsolutePath() + "\" >" +
-          "      <field column=\"text\"/>" +
-          "     </entity>" +
-          "  </document>" +
-          "</dataConfig>";
-
-  private String[] tests = {
-      "//*[@numFound='1']"
-      ,"//str[@name='author'][.='Grant Ingersoll']"
-      ,"//str[@name='title'][.='solr-word']"
-      ,"//str[@name='text']"
-  };
-
-  private String[] testsHTMLDefault = {
-      "//*[@numFound='1']"
-      , "//str[@name='text'][contains(.,'Basic div')]"
-      , "//str[@name='text'][contains(.,'<h1>')]"
-      , "//str[@name='text'][not(contains(.,'<div>'))]" //default mapper lower-cases elements as it maps
-      , "//str[@name='text'][not(contains(.,'<DIV>'))]"
-  };
-
-  private String[] testsHTMLIdentity = {
-      "//*[@numFound='1']"
-      , "//str[@name='text'][contains(.,'Basic div')]"
-      , "//str[@name='text'][contains(.,'<h1>')]"
-      , "//str[@name='text'][contains(.,'<div>')]"
-      , "//str[@name='text'][contains(.,'class=\"classAttribute\"')]" //attributes are lower-cased
-  };
-
-  private String[] testsSpatial = {
-      "//*[@numFound='1']"
-  };
-
-  private String[] testsEmbedded = {
-      "//*[@numFound='1']",
-      "//str[@name='text'][contains(.,'When in the Course')]"
-  };
-
-  private String[] testsIgnoreEmbedded = {
-      "//*[@numFound='1']",
-      "//str[@name='text'][not(contains(.,'When in the Course'))]"
-  };
-
-  private String[] testsVSDX = {
-      "//*[@numFound='1']",
-      "//str[@name='text'][contains(.,'Arrears')]"
-  };
-
-  @BeforeClass
-  public static void beforeClass() throws Exception {
-    assumeFalse("This test fails on UNIX with Turkish default locale (https://issues.apache.org/jira/browse/SOLR-6387)",
-        new Locale("tr").getLanguage().equals(Locale.getDefault().getLanguage()));
-    initCore("dataimport-solrconfig.xml", "dataimport-schema-no-unique-key.xml", getFile("dihextras/solr").getAbsolutePath());
-  }
-
-  @Test
-  public void testIndexingWithTikaEntityProcessor() throws Exception {
-    runFullImport(conf);
-    assertQ(req("*:*"), tests );
-  }
-
-  @Test
-  public void testSkip() throws Exception {
-    runFullImport(skipOnErrConf);
-    assertQ(req("*:*"), "//*[@numFound='1']");
-  }
-
-  @Test
-  public void testVSDX() throws Exception {
-    //this ensures that we've included the curvesapi dependency
-    //and that the ConnectsType class is bundled with poi-ooxml-schemas.
-    runFullImport(vsdxConf);
-    assertQ(req("*:*"), testsVSDX);
-  }
-
-  @Test
-  public void testTikaHTMLMapperEmpty() throws Exception {
-    runFullImport(getConfigHTML(null));
-    assertQ(req("*:*"), testsHTMLDefault);
-  }
-
-  @Test
-  public void testTikaHTMLMapperDefault() throws Exception {
-    runFullImport(getConfigHTML("default"));
-    assertQ(req("*:*"), testsHTMLDefault);
-  }
-
-  @Test
-  public void testTikaHTMLMapperIdentity() throws Exception {
-    runFullImport(getConfigHTML("identity"));
-    assertQ(req("*:*"), testsHTMLIdentity);
-  }
-
-  @Test
-  public void testTikaGeoMetadata() throws Exception {
-    runFullImport(spatialConf);
-    String pt = "38.97,-77.018";
-    Double distance = 5.0d;
-    assertQ(req("q", "*:* OR foo_i:" + random().nextInt(100), "fq",
-        "{!geofilt sfield=\"home\"}\"",
-        "pt", pt, "d", String.valueOf(distance)), testsSpatial);
-  }
-
-  private String getConfigHTML(String htmlMapper) {
-    return
-        "<dataConfig>" +
-            "  <dataSource type='BinFileDataSource'/>" +
-            "  <document>" +
-            "    <entity name='Tika' format='xml' processor='TikaEntityProcessor' " +
-            "       url='" + getFile("dihextras/structured.html").getAbsolutePath() + "' " +
-            ((htmlMapper == null) ? "" : (" htmlMapper='" + htmlMapper + "'")) + ">" +
-            "      <field column='text'/>" +
-            "     </entity>" +
-            "  </document>" +
-            "</dataConfig>";
-
-  }
-
-  @Test
-  public void testEmbeddedDocsLegacy() throws Exception {
-    //test legacy behavior: ignore embedded docs
-    runFullImport(conf);
-    assertQ(req("*:*"), testsIgnoreEmbedded);
-  }
-
-  @Test
-  public void testEmbeddedDocsTrue() throws Exception {
-    runFullImport(getConfigEmbedded(true));
-    assertQ(req("*:*"), testsEmbedded);
-  }
-
-  @Test
-  public void testEmbeddedDocsFalse() throws Exception {
-    runFullImport(getConfigEmbedded(false));
-    assertQ(req("*:*"), testsIgnoreEmbedded);
-  }
-
-  private String getConfigEmbedded(boolean extractEmbedded) {
-    return
-        "<dataConfig>" +
-            "  <dataSource type=\"BinFileDataSource\"/>" +
-            "  <document>" +
-            "    <entity name=\"Tika\" processor=\"TikaEntityProcessor\" url=\"" +
-                    getFile("dihextras/test_recursive_embedded.docx").getAbsolutePath() + "\" " +
-            "       extractEmbedded=\""+extractEmbedded+"\">" +
-            "      <field column=\"Author\" meta=\"true\" name=\"author\"/>" +
-            "      <field column=\"title\" meta=\"true\" name=\"title\"/>" +
-            "      <field column=\"text\"/>" +
-            "     </entity>" +
-            "  </document>" +
-            "</dataConfig>";
-  }
-}
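The Turkish-locale guard in beforeClass above comes down to default-locale case mapping: under tr, a plain toUpperCase() turns ASCII 'i' into dotted capital İ (U+0130), which breaks naive comparisons of element and field names. A tiny JDK-only illustration:

import java.util.Locale;

public class TurkishLocaleDemo {
  public static void main(String[] args) {
    // Locale-sensitive upper-casing: 'i' maps to dotted capital I under Turkish rules.
    System.out.println("title".toUpperCase(new Locale("tr"))); // TİTLE
    System.out.println("title".toUpperCase(Locale.ROOT));      // TITLE
  }
}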
diff --git a/solr/contrib/dataimporthandler/README.md b/solr/contrib/dataimporthandler/README.md
deleted file mode 100644
index 8dc9391..0000000
--- a/solr/contrib/dataimporthandler/README.md
+++ /dev/null
@@ -1,26 +0,0 @@
-Apache Solr - DataImportHandler
-================================
-
-Introduction
-------------
-DataImportHandler is a data import tool for Solr which makes importing data from databases, XML files and
-HTTP data sources quick and easy.
-
-Important Note
---------------
-Although Solr strives to be agnostic of the Locale where the server is
-running, some code paths in DataImportHandler are known to depend on the
-System default Locale, Timezone, or Charset.  It is recommended that when
-running Solr you set the following system properties:
-  -Duser.language=xx -Duser.country=YY -Duser.timezone=ZZZ
-
-where xx, YY, and ZZZ are consistent with any database server's configuration.
-
-Deprecation notice
-------------------
-This contrib module is deprecated as of v8.6, scheduled for removal in Solr 9.0.
-The reason is that DIH is no longer being maintained in a manner we feel is necessary in order to keep it
-healthy and secure. Also it was not designed to work with SolrCloud and does not meet current performance requirements.
-
-The project hopes that the community will take over maintenance of DIH as a 3rd party package (See SOLR-14066 for more details). Please reach out to us at the dev@ mailing list if you want to help.
-
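The three defaults the note above warns about are easy to inspect at runtime; a throwaway JDK-only check (illustration only, not part of DIH):

import java.nio.charset.Charset;
import java.util.Locale;
import java.util.TimeZone;

public class JvmDefaults {
  public static void main(String[] args) {
    // The system defaults some DIH code paths silently depend on.
    System.out.println("locale:   " + Locale.getDefault());
    System.out.println("timezone: " + TimeZone.getDefault().getID());
    System.out.println("charset:  " + Charset.defaultCharset());
  }
}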
diff --git a/solr/contrib/dataimporthandler/build.gradle b/solr/contrib/dataimporthandler/build.gradle
deleted file mode 100644
index 9286d43..0000000
--- a/solr/contrib/dataimporthandler/build.gradle
+++ /dev/null
@@ -1,34 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-
-apply plugin: 'java-library'
-
-description = 'Data Import Handler'
-
-dependencies {
-  implementation project(':solr:core')
-
-  testImplementation project(':solr:test-framework')
-
-  testImplementation('org.mockito:mockito-core', {
-    exclude group: "net.bytebuddy", module: "byte-buddy-agent"
-  })
-  testImplementation ('org.hsqldb:hsqldb')
-  testImplementation ('org.apache.derby:derby')
-  testImplementation ('org.objenesis:objenesis')
-}
diff --git a/solr/contrib/dataimporthandler/build.xml b/solr/contrib/dataimporthandler/build.xml
deleted file mode 100644
index a07e534..0000000
--- a/solr/contrib/dataimporthandler/build.xml
+++ /dev/null
@@ -1,32 +0,0 @@
-<?xml version="1.0"?>
-
-<!--
-    Licensed to the Apache Software Foundation (ASF) under one or more
-    contributor license agreements.  See the NOTICE file distributed with
-    this work for additional information regarding copyright ownership.
-    The ASF licenses this file to You under the Apache License, Version 2.0
-    (the "License"); you may not use this file except in compliance with
-    the License.  You may obtain a copy of the License at
- 
-        http://www.apache.org/licenses/LICENSE-2.0
- 
-    Unless required by applicable law or agreed to in writing, software
-    distributed under the License is distributed on an "AS IS" BASIS,
-    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-    See the License for the specific language governing permissions and
-    limitations under the License.
- -->
-
-<project name="solr-dataimporthandler" default="default">
-  
-  <description>
-    Data Import Handler
-  </description>
-
-  <import file="../contrib-build.xml"/>
-
-  <path id="test.classpath">
-    <path refid="solr.test.base.classpath"/>
-    <fileset dir="${test.lib.dir}" includes="*.jar"/>
-  </path>
-</project>
diff --git a/solr/contrib/dataimporthandler/ivy.xml b/solr/contrib/dataimporthandler/ivy.xml
deleted file mode 100644
index 67af77b..0000000
--- a/solr/contrib/dataimporthandler/ivy.xml
+++ /dev/null
@@ -1,34 +0,0 @@
-<!--
-   Licensed to the Apache Software Foundation (ASF) under one
-   or more contributor license agreements.  See the NOTICE file
-   distributed with this work for additional information
-   regarding copyright ownership.  The ASF licenses this file
-   to you under the Apache License, Version 2.0 (the
-   "License"); you may not use this file except in compliance
-   with the License.  You may obtain a copy of the License at
-
-     http://www.apache.org/licenses/LICENSE-2.0
-
-   Unless required by applicable law or agreed to in writing,
-   software distributed under the License is distributed on an
-   "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-   KIND, either express or implied.  See the License for the
-   specific language governing permissions and limitations
-   under the License.    
--->
-<ivy-module version="2.0">
-  <info organisation="org.apache.solr" module="dataimporthandler"/>
-  <configurations defaultconfmapping="compile->master;test->master">
-    <conf name="compile" transitive="false"/> <!-- keep unused 'compile' configuration to allow build to succeed -->
-    <conf name="test" transitive="false"/>
-  </configurations>
-  <dependencies>
-    <dependency org="org.hsqldb" name="hsqldb" rev="${/org.hsqldb/hsqldb}" conf="test"/>
-    <dependency org="org.apache.derby" name="derby" rev="${/org.apache.derby/derby}" conf="test"/>
-
-    <dependency org="org.mockito" name="mockito-core" rev="${/org.mockito/mockito-core}" conf="test"/>
-    <dependency org="net.bytebuddy" name="byte-buddy" rev="${/net.bytebuddy/byte-buddy}" conf="test"/>
-    <dependency org="org.objenesis" name="objenesis" rev="${/org.objenesis/objenesis}" conf="test"/>
-    <exclude org="*" ext="*" matcher="regexp" type="${ivy.exclude.types}"/> 
-  </dependencies>
-</ivy-module>
diff --git a/solr/contrib/dataimporthandler/src/java/org/apache/solr/handler/dataimport/BinContentStreamDataSource.java b/solr/contrib/dataimporthandler/src/java/org/apache/solr/handler/dataimport/BinContentStreamDataSource.java
deleted file mode 100644
index f4b1d7a..0000000
--- a/solr/contrib/dataimporthandler/src/java/org/apache/solr/handler/dataimport/BinContentStreamDataSource.java
+++ /dev/null
@@ -1,70 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.solr.handler.dataimport;
-
-import org.apache.solr.common.util.ContentStream;
-import static org.apache.solr.handler.dataimport.DataImportHandlerException.SEVERE;
-
-import java.io.InputStream;
-import java.io.IOException;
-import java.util.Properties;
-/**
- * <p> A data source implementation which can be used to read a binary stream from content streams. </p> <p> Refer to <a
- * href="http://wiki.apache.org/solr/DataImportHandler">http://wiki.apache.org/solr/DataImportHandler</a> for more
- * details. </p>
- * <p>
- * <b>This API is experimental and may change in the future.</b>
- *
- * @since solr 3.1
- */
-
-public class BinContentStreamDataSource extends DataSource<InputStream> {
-  private ContextImpl context;
-  private ContentStream contentStream;
-  private InputStream in;
-
-
-  @Override
-  public void init(Context context, Properties initProps) {
-    this.context = (ContextImpl) context;
-  }
-
-  @Override
-  public InputStream getData(String query) {
-     contentStream = context.getDocBuilder().getReqParams().getContentStream();
-    if (contentStream == null)
-      throw new DataImportHandlerException(SEVERE, "No stream available. The request has no body");
-    try {
-      return in = contentStream.getStream();
-    } catch (IOException e) {
-      DataImportHandlerException.wrapAndThrow(SEVERE, e);
-      return null;
-    }
-  }
-
-  @Override
-  public void close() {
-     if (contentStream != null) {
-      try {
-        if (in == null) in = contentStream.getStream();
-        in.close();
-      } catch (IOException e) {
-        /*no op*/
-      }
-    } 
-  }
-}
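Every DataSource in this module follows the contract shown here: init(Context, Properties) once per entity, getData(query) per request, close() when the entity is done. A hedged usage sketch against that contract (the Context instance would be supplied by DocBuilder during a real import, so this cannot run standalone):

import java.io.InputStream;
import java.util.Properties;
import org.apache.solr.handler.dataimport.BinContentStreamDataSource;
import org.apache.solr.handler.dataimport.Context;

public class DataSourceLifecycleSketch {
  // Illustration only: 'ctx' must come from a running import.
  static long drainPostedBody(Context ctx) throws Exception {
    BinContentStreamDataSource ds = new BinContentStreamDataSource();
    ds.init(ctx, new Properties());           // 1. init once per entity
    try (InputStream in = ds.getData(null)) { // 2. getData per request (query unused here)
      long n = 0;
      while (in.read() != -1) n++;
      return n;
    } finally {
      ds.close();                             // 3. close when the entity is done
    }
  }
}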
diff --git a/solr/contrib/dataimporthandler/src/java/org/apache/solr/handler/dataimport/BinFileDataSource.java b/solr/contrib/dataimporthandler/src/java/org/apache/solr/handler/dataimport/BinFileDataSource.java
deleted file mode 100644
index dc7a0f5..0000000
--- a/solr/contrib/dataimporthandler/src/java/org/apache/solr/handler/dataimport/BinFileDataSource.java
+++ /dev/null
@@ -1,64 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.solr.handler.dataimport;
-
-import static org.apache.solr.handler.dataimport.DataImportHandlerException.wrapAndThrow;
-import static org.apache.solr.handler.dataimport.DataImportHandlerException.SEVERE;
-
-import java.io.InputStream;
-import java.io.File;
-import java.io.FileInputStream;
-import java.io.FileNotFoundException;
-import java.util.Properties;
-/**
- * <p>
- * A DataSource which reads from local files
- * </p>
- * <p>
- * Refer to <a
- * href="http://wiki.apache.org/solr/DataImportHandler">http://wiki.apache.org/solr/DataImportHandler</a>
- * for more details.
- * </p>
- * <p>
- * <b>This API is experimental and may change in the future.</b>
- *
- * @since solr 3.1
- */
-
-public class BinFileDataSource extends DataSource<InputStream>{
-   protected String basePath;
-  @Override
-  public void init(Context context, Properties initProps) {
-     basePath = initProps.getProperty(FileDataSource.BASE_PATH);
-  }
-
-  @Override
-  public InputStream getData(String query) {
-    File f = FileDataSource.getFile(basePath,query);
-    try {
-      return new FileInputStream(f);
-    } catch (FileNotFoundException e) {
-      wrapAndThrow(SEVERE,e,"Unable to open file "+f.getAbsolutePath());
-      return null;
-    }
-  }
-
-  @Override
-  public void close() {
-
-  }
-}
diff --git a/solr/contrib/dataimporthandler/src/java/org/apache/solr/handler/dataimport/BinURLDataSource.java b/solr/contrib/dataimporthandler/src/java/org/apache/solr/handler/dataimport/BinURLDataSource.java
deleted file mode 100644
index 03a30ab..0000000
--- a/solr/contrib/dataimporthandler/src/java/org/apache/solr/handler/dataimport/BinURLDataSource.java
+++ /dev/null
@@ -1,104 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.solr.handler.dataimport;
-
-import static org.apache.solr.handler.dataimport.DataImportHandlerException.*;
-import static org.apache.solr.handler.dataimport.URLDataSource.*;
-import org.slf4j.Logger;
-import org.slf4j.LoggerFactory;
-
-import java.io.InputStream;
-import java.lang.invoke.MethodHandles;
-import java.net.URL;
-import java.net.URLConnection;
-import java.util.Properties;
-/**
- * <p> A data source implementation which can be used to read binary streams using HTTP. </p> <p> Refer to <a
- * href="http://wiki.apache.org/solr/DataImportHandler">http://wiki.apache.org/solr/DataImportHandler</a> for more
- * details. </p>
- * <p>
- * <b>This API is experimental and may change in the future.</b>
- *
- * @since solr 3.1
- */
-public class BinURLDataSource extends DataSource<InputStream>{
-  private static final Logger log = LoggerFactory.getLogger(MethodHandles.lookup().lookupClass());
-
-  private String baseUrl;
-  private int connectionTimeout = CONNECTION_TIMEOUT;
-
-  private int readTimeout = READ_TIMEOUT;
-
-  private Context context;
-
-  private Properties initProps;
-
-  public BinURLDataSource() { }
-
-  @Override
-  public void init(Context context, Properties initProps) {
-      this.context = context;
-    this.initProps = initProps;
-
-    baseUrl = getInitPropWithReplacements(BASE_URL);
-    String cTimeout = getInitPropWithReplacements(CONNECTION_TIMEOUT_FIELD_NAME);
-    String rTimeout = getInitPropWithReplacements(READ_TIMEOUT_FIELD_NAME);
-    if (cTimeout != null) {
-      try {
-        connectionTimeout = Integer.parseInt(cTimeout);
-      } catch (NumberFormatException e) {
-        log.warn("Invalid connection timeout: {}", cTimeout);
-      }
-    }
-    if (rTimeout != null) {
-      try {
-        readTimeout = Integer.parseInt(rTimeout);
-      } catch (NumberFormatException e) {
-        log.warn("Invalid read timeout: {}", rTimeout);
-      }
-    }
-  }
-
-  @Override
-  public InputStream getData(String query) {
-    URL url = null;
-    try {
-      if (URIMETHOD.matcher(query).find()) url = new URL(query);
-      else url = new URL(baseUrl + query);
-      log.debug("Accessing URL: {}", url);
-      URLConnection conn = url.openConnection();
-      conn.setConnectTimeout(connectionTimeout);
-      conn.setReadTimeout(readTimeout);
-      return conn.getInputStream();
-    } catch (Exception e) {
-      log.error("Exception thrown while getting data", e);
-      wrapAndThrow (SEVERE, e, "Exception in invoking url " + url);
-      return null;//unreachable
-    }
-  }
-
-  @Override
-  public void close() { }
-
-  private String getInitPropWithReplacements(String propertyName) {
-    final String expr = initProps.getProperty(propertyName);
-    if (expr == null) {
-      return null;
-    }
-    return context.replaceTokens(expr);
-  }
-}
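The properties read in init above map one-to-one onto dataSource attributes in data-config.xml. A minimal sketch in the Java-string style this module's tests use, assuming the conventional attribute names baseUrl, connectionTimeout, and readTimeout (the URL and entity are made up for illustration):

String conf =
    "<dataConfig>" +
    "  <dataSource type=\"BinURLDataSource\" baseUrl=\"http://example.com/files/\" " +
    "              connectionTimeout=\"5000\" readTimeout=\"10000\"/>" +
    "  <document>" +
    "    <entity name=\"bin\" url=\"report.pdf\" processor=\"TikaEntityProcessor\"/>" +
    "  </document>" +
    "</dataConfig>";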
diff --git a/solr/contrib/dataimporthandler/src/java/org/apache/solr/handler/dataimport/CachePropertyUtil.java b/solr/contrib/dataimporthandler/src/java/org/apache/solr/handler/dataimport/CachePropertyUtil.java
deleted file mode 100644
index 544761f..0000000
--- a/solr/contrib/dataimporthandler/src/java/org/apache/solr/handler/dataimport/CachePropertyUtil.java
+++ /dev/null
@@ -1,48 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.solr.handler.dataimport;
-
-public class CachePropertyUtil {
-  public static String getAttributeValueAsString(Context context, String attr) {
-    Object o = context.getSessionAttribute(attr, Context.SCOPE_ENTITY);
-    if (o == null) {
-      o = context.getResolvedEntityAttribute(attr);
-    }
-    if (o == null && context.getRequestParameters() != null) {
-      o = context.getRequestParameters().get(attr);
-    }
-    if (o == null) {
-      return null;
-    }
-    return o.toString();
-  }
-  
-  public static Object getAttributeValue(Context context, String attr) {
-    Object o = context.getSessionAttribute(attr, Context.SCOPE_ENTITY);
-    if (o == null) {
-      o = context.getResolvedEntityAttribute(attr);
-    }
-    if (o == null && context.getRequestParameters() != null) {
-      o = context.getRequestParameters().get(attr);
-    }
-    if (o == null) {
-      return null;
-    }
-    return o;
-  }
-  
-}
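Both helpers above apply the same three-step precedence, which is worth spelling out once. A sketch with a hypothetical attribute name "cacheImpl" ('context' would come from a running import; illustration only):

// Lookup order inside the helper:
//   1. context.getSessionAttribute("cacheImpl", Context.SCOPE_ENTITY)
//   2. context.getResolvedEntityAttribute("cacheImpl")
//   3. context.getRequestParameters().get("cacheImpl")
// The first non-null answer wins; the result is null only if all three miss.
String impl = CachePropertyUtil.getAttributeValueAsString(context, "cacheImpl");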
diff --git a/solr/contrib/dataimporthandler/src/java/org/apache/solr/handler/dataimport/ClobTransformer.java b/solr/contrib/dataimporthandler/src/java/org/apache/solr/handler/dataimport/ClobTransformer.java
deleted file mode 100644
index 2e9d93a0..0000000
--- a/solr/contrib/dataimporthandler/src/java/org/apache/solr/handler/dataimport/ClobTransformer.java
+++ /dev/null
@@ -1,85 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.solr.handler.dataimport;
-
-import static org.apache.solr.handler.dataimport.HTMLStripTransformer.TRUE;
-
-import java.io.IOException;
-import java.io.Reader;
-import java.sql.Clob;
-import java.util.ArrayList;
-import java.util.List;
-import java.util.Map;
-
-/**
- * {@link Transformer} instance which converts a {@link Clob} to a {@link String}.
- * <p>
- * Refer to <a href="http://wiki.apache.org/solr/DataImportHandler">http://wiki.apache.org/solr/DataImportHandler</a>
- * for more details.
- * <p>
- * <b>This API is experimental and subject to change</b>
- *
- * @since solr 1.4
- */
-public class ClobTransformer extends Transformer {
-  @Override
-  public Object transformRow(Map<String, Object> aRow, Context context) {
-    for (Map<String, String> map : context.getAllEntityFields()) {
-      if (!TRUE.equals(map.get(CLOB))) continue;
-      String column = map.get(DataImporter.COLUMN);
-      String srcCol = map.get(RegexTransformer.SRC_COL_NAME);
-      if (srcCol == null)
-        srcCol = column;
-      Object o = aRow.get(srcCol);
-      if (o instanceof List) {
-        @SuppressWarnings({"unchecked"})
-        List<Clob> inputs = (List<Clob>) o;
-        List<String> results = new ArrayList<>();
-        for (Object input : inputs) {
-          if (input instanceof Clob) {
-            Clob clob = (Clob) input;
-            results.add(readFromClob(clob));
-          }
-        }
-        aRow.put(column, results);
-      } else {
-        if (o instanceof Clob) {
-          Clob clob = (Clob) o;
-          aRow.put(column, readFromClob(clob));
-        }
-      }
-    }
-    return aRow;
-  }
-
-  private String readFromClob(Clob clob) {
-    Reader reader = FieldReaderDataSource.readCharStream(clob);
-    StringBuilder sb = new StringBuilder();
-    char[] buf = new char[1024];
-    int len;
-    try {
-      while ((len = reader.read(buf)) != -1) {
-        sb.append(buf, 0, len);
-      }
-    } catch (IOException e) {
-      DataImportHandlerException.wrapAndThrow(DataImportHandlerException.SEVERE, e);
-    }
-    return sb.toString();
-  }
-
-  public static final String CLOB = "clob";
-}
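Wiring the transformer up happens per field in data-config.xml with clob="true" (the attribute the CLOB constant above refers to). A minimal sketch in the Java-string configuration style used by this module's tests (driver, query, and column names are made up):

String conf =
    "<dataConfig>" +
    "  <dataSource driver=\"org.hsqldb.jdbcDriver\" url=\"jdbc:hsqldb:mem:test\" user=\"sa\"/>" +
    "  <document>" +
    "    <entity name=\"doc\" transformer=\"ClobTransformer\" query=\"select id, body from docs\">" +
    "      <field column=\"body\" clob=\"true\"/>" +
    "    </entity>" +
    "  </document>" +
    "</dataConfig>";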
diff --git a/solr/contrib/dataimporthandler/src/java/org/apache/solr/handler/dataimport/ConfigParseUtil.java b/solr/contrib/dataimporthandler/src/java/org/apache/solr/handler/dataimport/ConfigParseUtil.java
deleted file mode 100644
index 179df23..0000000
--- a/solr/contrib/dataimporthandler/src/java/org/apache/solr/handler/dataimport/ConfigParseUtil.java
+++ /dev/null
@@ -1,73 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.solr.handler.dataimport;
-
-import java.util.ArrayList;
-import java.util.HashMap;
-import java.util.List;
-
-import org.w3c.dom.Element;
-import org.w3c.dom.NamedNodeMap;
-import org.w3c.dom.Node;
-import org.w3c.dom.NodeList;
-
-public class ConfigParseUtil {
-  public static String getStringAttribute(Element e, String name, String def) {
-    String r = e.getAttribute(name);
-    if (r == null || "".equals(r.trim()))
-      r = def;
-    return r;
-  }
-
-  public static HashMap<String, String> getAllAttributes(Element e) {
-    HashMap<String, String> m = new HashMap<>();
-    NamedNodeMap nnm = e.getAttributes();
-    for (int i = 0; i < nnm.getLength(); i++) {
-      m.put(nnm.item(i).getNodeName(), nnm.item(i).getNodeValue());
-    }
-    return m;
-  }
-
-  public static String getText(Node elem, StringBuilder buffer) {
-    if (elem.getNodeType() != Node.CDATA_SECTION_NODE) {
-      NodeList childs = elem.getChildNodes();
-      for (int i = 0; i < childs.getLength(); i++) {
-        Node child = childs.item(i);
-        short childType = child.getNodeType();
-        if (childType != Node.COMMENT_NODE
-                && childType != Node.PROCESSING_INSTRUCTION_NODE) {
-          getText(child, buffer);
-        }
-      }
-    } else {
-      buffer.append(elem.getNodeValue());
-    }
-
-    return buffer.toString();
-  }
-
-  public static List<Element> getChildNodes(Element e, String byName) {
-    List<Element> result = new ArrayList<>();
-    NodeList l = e.getChildNodes();
-    for (int i = 0; i < l.getLength(); i++) {
-      if (e.equals(l.item(i).getParentNode())
-              && byName.equals(l.item(i).getNodeName()))
-        result.add((Element) l.item(i));
-    }
-    return result;
-  }
-}
diff --git a/solr/contrib/dataimporthandler/src/java/org/apache/solr/handler/dataimport/ContentStreamDataSource.java b/solr/contrib/dataimporthandler/src/java/org/apache/solr/handler/dataimport/ContentStreamDataSource.java
deleted file mode 100644
index 4482160..0000000
--- a/solr/contrib/dataimporthandler/src/java/org/apache/solr/handler/dataimport/ContentStreamDataSource.java
+++ /dev/null
@@ -1,69 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.solr.handler.dataimport;
-
-import org.apache.solr.common.util.ContentStream;
-import static org.apache.solr.handler.dataimport.DataImportHandlerException.SEVERE;
-
-import java.io.IOException;
-import java.io.Reader;
-import java.util.Properties;
-
-/**
- * A DataSource implementation which reads from the ContentStream of a POST request
- * <p>
- * Refer to <a href="http://wiki.apache.org/solr/DataImportHandler">http://wiki.apache.org/solr/DataImportHandler</a>
- * for more details.
- * <p>
- * <b>This API is experimental and may change in the future.</b>
- *
- * @since solr 1.4
- */
-public class ContentStreamDataSource extends DataSource<Reader> {
-  private ContextImpl context;
-  private ContentStream contentStream;
-  private Reader reader;
-
-  @Override
-  public void init(Context context, Properties initProps) {
-    this.context = (ContextImpl) context;
-  }
-
-  @Override
-  public Reader getData(String query) {
-    contentStream = context.getDocBuilder().getReqParams().getContentStream();
-    if (contentStream == null)
-      throw new DataImportHandlerException(SEVERE, "No stream available. The request has no body");
-    try {
-      return reader = contentStream.getReader();
-    } catch (IOException e) {
-      DataImportHandlerException.wrapAndThrow(SEVERE, e);
-      return null;
-    }
-  }
-
-  @Override
-  public void close() {
-    if (contentStream != null) {
-      try {
-        if (reader == null) reader = contentStream.getReader();
-        reader.close();
-      } catch (IOException e) {
-      }
-    }
-  }
-}
diff --git a/solr/contrib/dataimporthandler/src/java/org/apache/solr/handler/dataimport/Context.java b/solr/contrib/dataimporthandler/src/java/org/apache/solr/handler/dataimport/Context.java
deleted file mode 100644
index 70dbbcb..0000000
--- a/solr/contrib/dataimporthandler/src/java/org/apache/solr/handler/dataimport/Context.java
+++ /dev/null
@@ -1,221 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.solr.handler.dataimport;
-
-import org.apache.solr.core.SolrCore;
-
-import java.util.List;
-import java.util.Map;
-
-/**
- * <p>
- * This abstract class gives access to all available objects, so any
- * component implemented by a user has the full power of DataImportHandler
- * </p>
- * <p>
- * Refer to <a
- * href="http://wiki.apache.org/solr/DataImportHandler">http://wiki.apache.org/solr/DataImportHandler</a>
- * for more details.
- * </p>
- * <p>
- * <b>This API is experimental and subject to change</b>
- *
- * @since solr 1.3
- */
-public abstract class Context {
-  public static final String FULL_DUMP = "FULL_DUMP", DELTA_DUMP = "DELTA_DUMP", FIND_DELTA = "FIND_DELTA";
-
-  /**
-   * An object stored in entity scope is valid only for the current entity of the current document.
-   */
-  public static final String SCOPE_ENTITY = "entity";
-
-  /**
-   * An object stored in global scope is available for the current import only but across entities and documents.
-   */
-  public static final String SCOPE_GLOBAL = "global";
-
-  /**
-   * An object stored in document scope is available for the current document only but across entities.
-   */
-  public static final String SCOPE_DOC = "document";
-
-  /**
-   * An object stored in 'solrcore' scope is available across imports, entities and documents throughout the life of
-   * a solr core. A solr core unload or reload will destroy this data.
-   */
-  public static final String SCOPE_SOLR_CORE = "solrcore";
-
-  /**
-   * Get the value of any attribute put into this entity
-   *
-   * @param name name of the attribute eg: 'name'
-   * @return value of named attribute in entity
-   */
-  public abstract String getEntityAttribute(String name);
-
-  /**
-   * Get the value of any attribute put into this entity after resolving all variables found in the attribute value
-   * @param name name of the attribute
-   * @return value of the named attribute after resolving all variables
-   */
-  public abstract String getResolvedEntityAttribute(String name);
-
-  /**
-   * Returns all the fields put into an entity. Each item (which is a map) in
-   * the list corresponds to one field. Each map contains the attribute
-   * names and values of a field.
-   *
-   * @return all fields in an entity
-   */
-  public abstract List<Map<String, String>> getAllEntityFields();
-
-  /**
-   * Returns the VariableResolver used in this entity which can be used to
-   * resolve the tokens in ${&lt;namespace.name&gt;}
-   *
-   * @return a VariableResolver instance
-   * @see org.apache.solr.handler.dataimport.VariableResolver
-   */
-
-  public abstract VariableResolver getVariableResolver();
-
-  /**
-   * Gets the datasource instance defined for this entity. Do not close() this instance.
-   * Transformers should use the getDataSource(String name) method.
-   *
-   * @return the DataSource instance configured for the current entity
-   * @see org.apache.solr.handler.dataimport.DataSource
-   * @see #getDataSource(String)
-   */
-  @SuppressWarnings({"rawtypes"})
-  public abstract DataSource getDataSource();
-
-  /**
-   * Gets a new DataSource instance with a name. Ensure that you close() this after use
-   * because this is created just for this method call.
-   *
-   * @param name Name of the dataSource as defined in the dataSource tag
-   * @return a new DataSource instance
-   * @see org.apache.solr.handler.dataimport.DataSource
-   */
-  @SuppressWarnings({"rawtypes"})
-  public abstract DataSource getDataSource(String name);
-
-  /**
-   * Returns the instance of EntityProcessor used for this entity
-   *
-   * @return instance of EntityProcessor used for the current entity
-   * @see org.apache.solr.handler.dataimport.EntityProcessor
-   */
-  public abstract EntityProcessor getEntityProcessor();
-
-  /**
-   * Store a value under the given name and scope (entity, document, global)
-   *
-   * @param name  the key
-   * @param val   the value
-   * @param scope the scope in which the given key, value pair is to be stored
-   */
-  public abstract void setSessionAttribute(String name, Object val, String scope);
-
-  /**
-   * Get a value by name in the given scope (entity, document, global)
-   *
-   * @param name  the key
-   * @param scope the scope from which the value is to be retrieved
-   * @return the object stored in the given scope with the given key
-   */
-  public abstract Object getSessionAttribute(String name, String scope);
-
-  /**
-   * Get the context instance for the parent entity. Works only in a full dump.
-   * If the current entity is the root entity, null is returned.
-   *
-   * @return parent entity's Context
-   */
-  public abstract Context getParentContext();
-
-  /**
-   * The request parameters passed over HTTP for this command. The values in the
-   * map are either String (for single-valued parameters) or List&lt;String&gt; (for
-   * multi-valued parameters).
-   *
-   * @return the request parameters passed in the URL to initiate this process
-   */
-  public abstract Map<String, Object> getRequestParameters();
-
-  /**
-   * Returns whether the current entity is the root entity
-   *
-   * @return true if current entity is the root entity, false otherwise
-   */
-  public abstract boolean isRootEntity();
-
-  /**
-   * Returns the current process: FULL_DUMP, DELTA_DUMP, or FIND_DELTA
-   *
-   * @return the type of the current running process
-   */
-  public abstract String currentProcess();
-
-  /**
-   * Exposes the actual SolrCore to the components
-   *
-   * @return the core
-   */
-  public abstract SolrCore getSolrCore();
-
-  /**
-   * Makes available some basic running statistics such as "docCount",
-   * "deletedDocCount", "rowCount", "queryCount" and "skipDocCount"
-   *
-   * @return a Map containing running statistics of the current import
-   */
-  public abstract Map<String, Object> getStats();
-
-  /**
-   * Returns the text specified in the script tag in data-config.xml.
-   */
-  public abstract String getScript();
-
-  /**
-   * Returns the language of the script as specified in the script tag in data-config.xml
-   */
-  public abstract String getScriptLanguage();
-
-  /** Delete a document by id. */
-  public abstract void deleteDoc(String id);
-
-  /** Delete documents by query. */
-  public abstract void deleteDocByQuery(String query);
-
-  /** Use this directly to resolve a variable.
-   * @param var the variable name
-   * @return the resolved value
-   */
-  public abstract Object resolve(String var);
-
-  /** Resolve variables in a template.
-   *
-   * @return the string with variables resolved
-   */
-  public abstract String replaceTokens(String template);
-
-}
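
The scope constants and accessors above are the entire surface a user-written component codes against. As a minimal, hypothetical sketch (not part of this patch), a custom Transformer might use them like this; `countField` is an invented entity attribute, while `Transformer` and its `transformRow` hook are the real extension point from the same deleted package:

```java
package com.example;

import java.util.Map;

import org.apache.solr.handler.dataimport.Context;
import org.apache.solr.handler.dataimport.Transformer;

// Hypothetical example: numbers the rows of the current import by keeping
// a counter in the global session scope, which survives across entities
// and documents for the duration of a single import.
public class RowCountTransformer extends Transformer {
  @Override
  public Object transformRow(Map<String, Object> row, Context context) {
    // "countField" is an invented attribute, read from the <entity> tag.
    String field = context.getEntityAttribute("countField");
    Long n = (Long) context.getSessionAttribute("rowCount", Context.SCOPE_GLOBAL);
    n = (n == null) ? 1L : n + 1;
    context.setSessionAttribute("rowCount", n, Context.SCOPE_GLOBAL);
    if (field != null) {
      row.put(field, n);
    }
    return row;
  }
}
```

An entity-scoped attribute would instead reset with each entity and document, while SCOPE_SOLR_CORE would persist across imports until the core is unloaded or reloaded.
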
diff --git a/solr/contrib/dataimporthandler/src/java/org/apache/solr/handler/dataimport/ContextImpl.java b/solr/contrib/dataimporthandler/src/java/org/apache/solr/handler/dataimport/ContextImpl.java
deleted file mode 100644
index 3d9f386..0000000
--- a/solr/contrib/dataimporthandler/src/java/org/apache/solr/handler/dataimport/ContextImpl.java
+++ /dev/null
@@ -1,264 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.solr.handler.dataimport;
-import org.apache.solr.core.SolrCore;
-import org.apache.solr.handler.dataimport.config.Script;
-
-import java.util.Collections;
-import java.util.HashMap;
-import java.util.List;
-import java.util.Map;
-
-/**
- * <p>
- * An implementation of {@link Context}.
- * </p>
- * <b>This API is experimental and subject to change</b>
- *
- * @since solr 1.3
- */
-public class ContextImpl extends Context {
-  protected EntityProcessorWrapper epw;
-
-  private ContextImpl parent;
-
-  private VariableResolver resolver;
-
-  @SuppressWarnings({"rawtypes"})
-  private DataSource ds;
-
-  private String currProcess;
-
-  private Map<String, Object> requestParams;
-
-  private DataImporter dataImporter;
-
-  private Map<String, Object> entitySession, globalSession;
-
-  private Exception lastException = null;
-
-  DocBuilder.DocWrapper doc;
-
-  DocBuilder docBuilder;
-
-  public ContextImpl(EntityProcessorWrapper epw, VariableResolver resolver,
-                     @SuppressWarnings({"rawtypes"})DataSource ds, String currProcess,
-                     Map<String, Object> global, ContextImpl parentContext, DocBuilder docBuilder) {
-    this.epw = epw;
-    this.docBuilder = docBuilder;
-    this.resolver = resolver;
-    this.ds = ds;
-    this.currProcess = currProcess;
-    if (docBuilder != null) {
-      this.requestParams = docBuilder.getReqParams().getRawParams();
-      dataImporter = docBuilder.dataImporter;
-    }
-    globalSession = global;
-    parent = parentContext;
-  }
-
-  @Override
-  public String getEntityAttribute(String name) {
-    return epw==null || epw.getEntity() == null ? null : epw.getEntity().getAllAttributes().get(name);
-  }
-
-  @Override
-  public String getResolvedEntityAttribute(String name) {
-    return epw==null || epw.getEntity() == null ? null : resolver.replaceTokens(epw.getEntity().getAllAttributes().get(name));
-  }
-
-  @Override
-  public List<Map<String, String>> getAllEntityFields() {
-    return epw==null || epw.getEntity() == null ? Collections.emptyList() : epw.getEntity().getAllFieldsList();
-  }
-
-  @Override
-  public VariableResolver getVariableResolver() {
-    return resolver;
-  }
-
-  @Override
-  @SuppressWarnings({"rawtypes"})
-  public DataSource getDataSource() {
-    if (ds != null) return ds;
-    if (epw == null) { return null; }
-    if (epw.getDatasource() == null) {
-      epw.setDatasource(dataImporter.getDataSourceInstance(epw.getEntity(), epw.getEntity().getDataSourceName(), this));
-    }
-    if (epw.getDatasource() != null && docBuilder != null && docBuilder.verboseDebug &&
-             Context.FULL_DUMP.equals(currentProcess())) {
-      //debug is not yet implemented properly for deltas
-      epw.setDatasource(docBuilder.getDebugLogger().wrapDs(epw.getDatasource()));
-    }
-    return epw.getDatasource();
-  }
-
-  @Override
-  @SuppressWarnings({"rawtypes"})
-  public DataSource getDataSource(String name) {
-    return dataImporter.getDataSourceInstance(epw==null ? null : epw.getEntity(), name, this);
-  }
-
-  @Override
-  public boolean isRootEntity() {
-    return epw != null && epw.getEntity().isDocRoot();
-  }
-
-  @Override
-  public String currentProcess() {
-    return currProcess;
-  }
-
-  @Override
-  public Map<String, Object> getRequestParameters() {
-    return requestParams;
-  }
-
-  @Override
-  public EntityProcessor getEntityProcessor() {
-    return epw;
-  }
-
-  @Override
-  public void setSessionAttribute(String name, Object val, String scope) {
-    if(name == null) {
-      return;
-    }
-    if (Context.SCOPE_ENTITY.equals(scope)) {
-      if (entitySession == null) {
-        entitySession = new HashMap<>();
-      }
-      entitySession.put(name, val);
-    } else if (Context.SCOPE_GLOBAL.equals(scope)) {
-      if (globalSession != null) {
-        globalSession.put(name, val);
-      }
-    } else if (Context.SCOPE_DOC.equals(scope)) {
-      DocBuilder.DocWrapper doc = getDocument();
-      if (doc != null) {
-        doc.setSessionAttribute(name, val);
-      }
-    } else if (SCOPE_SOLR_CORE.equals(scope)){
-      if(dataImporter != null) {
-        dataImporter.putToCoreScopeSession(name, val);
-      }
-    }
-  }
-
-  @Override
-  public Object getSessionAttribute(String name, String scope) {
-    if (Context.SCOPE_ENTITY.equals(scope)) {
-      if (entitySession == null)
-        return null;
-      return entitySession.get(name);
-    } else if (Context.SCOPE_GLOBAL.equals(scope)) {
-      if (globalSession != null) {
-        return globalSession.get(name);
-      }
-    } else if (Context.SCOPE_DOC.equals(scope)) {
-      DocBuilder.DocWrapper doc = getDocument();      
-      return doc == null ? null: doc.getSessionAttribute(name);
-    } else if (SCOPE_SOLR_CORE.equals(scope)){
-       return dataImporter == null ? null : dataImporter.getFromCoreScopeSession(name);
-    }
-    return null;
-  }
-
-  @Override
-  public Context getParentContext() {
-    return parent;
-  }
-
-  private DocBuilder.DocWrapper getDocument() {
-    ContextImpl c = this;
-    while (true) {
-      if (c.doc != null)
-        return c.doc;
-      if (c.parent != null)
-        c = c.parent;
-      else
-        return null;
-    }
-  }
-
-  void setDoc(DocBuilder.DocWrapper docWrapper) {
-    this.doc = docWrapper;
-  }
-
-
-  @Override
-  public SolrCore getSolrCore() {
-    return dataImporter == null ? null : dataImporter.getCore();
-  }
-
-
-  @Override
-  public Map<String, Object> getStats() {
-    return docBuilder != null ? docBuilder.importStatistics.getStatsSnapshot() : Collections.<String, Object>emptyMap();
-  }
-
-  @Override
-  public String getScript() {
-    if (dataImporter != null) {
-      Script script = dataImporter.getConfig().getScript();
-      return script == null ? null : script.getText();
-    }
-    return null;
-  }
-  
-  @Override
-  public String getScriptLanguage() {
-    if (dataImporter != null) {
-      Script script = dataImporter.getConfig().getScript();
-      return script == null ? null : script.getLanguage();
-    }
-    return null;
-  }
-
-  @Override
-  public void deleteDoc(String id) {
-    if(docBuilder != null){
-      docBuilder.writer.deleteDoc(id);
-    }
-  }
-
-  @Override
-  public void deleteDocByQuery(String query) {
-    if(docBuilder != null){
-      docBuilder.writer.deleteByQuery(query);
-    } 
-  }
-
-  DocBuilder getDocBuilder(){
-    return docBuilder;
-  }
-  @Override
-  public Object resolve(String var) {
-    return resolver.resolve(var);
-  }
-
-  @Override
-  public String replaceTokens(String template) {
-    return resolver.replaceTokens(template);
-  }
-
-  public Exception getLastException() { return lastException; }
-
-  public void setLastException(Exception lastException) { this.lastException = lastException; }
-}
diff --git a/solr/contrib/dataimporthandler/src/java/org/apache/solr/handler/dataimport/DIHCache.java b/solr/contrib/dataimporthandler/src/java/org/apache/solr/handler/dataimport/DIHCache.java
deleted file mode 100644
index a67b3e4..0000000
--- a/solr/contrib/dataimporthandler/src/java/org/apache/solr/handler/dataimport/DIHCache.java
+++ /dev/null
@@ -1,103 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.solr.handler.dataimport;
-
-import java.util.Iterator;
-import java.util.Map;
-
-/**
- * <p>
- * A cache that allows a DIH entity's data to persist locally prior to being joined
- * to other data and/or indexed.
- * </p>
- * 
- * @lucene.experimental
- */
-public interface DIHCache extends Iterable<Map<String,Object>> {
-  
-  /**
-   * <p>
-   * Opens the cache using the specified properties. The {@link Context}
-   * includes any parameters needed by the cache impl. This must be called
-   * before any read/write operations are permitted.
-   * </p>
-   */
-  void open(Context context);
-  
-  /**
-   * <p>
-   * Releases resources used by this cache, if possible. The cache is flushed
-   * but not destroyed.
-   * </p>
-   */
-  void close();
-  
-  /**
-   * <p>
-   * Persists any pending data to the cache
-   * </p>
-   */
-  void flush();
-  
-  /**
-   * <p>
-   * Closes the cache, if open. Then removes all data, possibly removing the
-   * cache entirely from persistent storage.
-   * </p>
-   */
-  void destroy();
-  
-  /**
-   * <p>
-   * Adds a document. If a document already exists with the same key, both
-   * documents will exist in the cache, as the cache allows duplicate keys. To
-   * update a key's documents, first call delete(Object key).
-   * </p>
-   */
-  void add(Map<String, Object> rec);
-  
-  /**
-   * <p>
-   * Returns an iterator, allowing callers to iterate through the entire cache
-   * in key, then insertion, order.
-   * </p>
-   */
-  @Override
-  Iterator<Map<String,Object>> iterator();
-  
-  /**
-   * <p>
-   * Returns an iterator, allowing callers to iterate through all documents that
-   * match the given key in insertion order.
-   * </p>
-   */
-  Iterator<Map<String,Object>> iterator(Object key);
-  
-  /**
-   * <p>
-   * Delete all documents associated with the given key
-   * </p>
-   */
-  void delete(Object key);
-  
-  /**
-   * <p>
-   * Delete all data from the cache, leaving the empty cache intact.
-   * </p>
-   */
-  void deleteAll();
-  
-}
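
To make the interface contract concrete, here is a purely illustrative in-memory implementation (the cache DIH actually shipped for this role was SortedMapBackedCache). It assumes keys are mutually Comparable and takes the key field name from the entity's cacheKey attribute, i.e. DIHCacheSupport.CACHE_PRIMARY_KEY as defined later in this diff:

```java
package org.apache.solr.handler.dataimport;

import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;
import java.util.Map;
import java.util.TreeMap;

// Illustrative sketch only; keys must be Comparable for the TreeMap.
public class InMemoryDIHCache implements DIHCache {
  private String keyField;
  private TreeMap<Object, List<Map<String, Object>>> data;

  @Override
  public void open(Context context) {
    keyField = context.getEntityAttribute(DIHCacheSupport.CACHE_PRIMARY_KEY);
    data = new TreeMap<>();
  }

  @Override public void close() { /* nothing to release in memory */ }
  @Override public void flush() { /* nothing pending in memory */ }
  @Override public void destroy() { data = null; }

  @Override
  public void add(Map<String, Object> rec) {
    // Duplicate keys are allowed: append to the key's row list.
    data.computeIfAbsent(rec.get(keyField), k -> new ArrayList<>()).add(rec);
  }

  @Override
  public Iterator<Map<String, Object>> iterator() {
    // Key order first, then insertion order within a key, per the contract.
    List<Map<String, Object>> all = new ArrayList<>();
    data.values().forEach(all::addAll);
    return all.iterator();
  }

  @Override
  public Iterator<Map<String, Object>> iterator(Object key) {
    List<Map<String, Object>> rows = data.get(key);
    return rows == null ? null : rows.iterator();
  }

  @Override public void delete(Object key) { data.remove(key); }
  @Override public void deleteAll() { data.clear(); }
}
```
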
diff --git a/solr/contrib/dataimporthandler/src/java/org/apache/solr/handler/dataimport/DIHCacheSupport.java b/solr/contrib/dataimporthandler/src/java/org/apache/solr/handler/dataimport/DIHCacheSupport.java
deleted file mode 100644
index 2f3d957..0000000
--- a/solr/contrib/dataimporthandler/src/java/org/apache/solr/handler/dataimport/DIHCacheSupport.java
+++ /dev/null
@@ -1,279 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.solr.handler.dataimport;
-
-import static org.apache.solr.handler.dataimport.DataImportHandlerException.wrapAndThrow;
-
-import java.lang.invoke.MethodHandles;
-import java.lang.reflect.Constructor;
-import java.util.HashMap;
-import java.util.Iterator;
-import java.util.Map;
-
-import org.apache.solr.common.SolrException;
-import org.slf4j.Logger;
-import org.slf4j.LoggerFactory;
-
-public class DIHCacheSupport {
-  private static final Logger log = LoggerFactory.getLogger(MethodHandles.lookup().lookupClass());
-  private String cacheForeignKey;
-  private String cacheImplName;
-  private Map<String,DIHCache> queryVsCache = new HashMap<>();
-  private Map<String,Iterator<Map<String,Object>>> queryVsCacheIterator;
-  private Iterator<Map<String,Object>> dataSourceRowCache;
-  private boolean cacheDoKeyLookup;
-  
-  public DIHCacheSupport(Context context, String cacheImplName) {
-    this.cacheImplName = cacheImplName;
-    
-    Relation r = new Relation(context);
-    cacheDoKeyLookup = r.doKeyLookup;
-    String cacheKey = r.primaryKey;
-    cacheForeignKey = r.foreignKey;
-    
-    context.setSessionAttribute(DIHCacheSupport.CACHE_PRIMARY_KEY, cacheKey,
-        Context.SCOPE_ENTITY);
-    context.setSessionAttribute(DIHCacheSupport.CACHE_FOREIGN_KEY, cacheForeignKey,
-        Context.SCOPE_ENTITY);
-    context.setSessionAttribute(DIHCacheSupport.CACHE_DELETE_PRIOR_DATA,
-        "true", Context.SCOPE_ENTITY);
-    context.setSessionAttribute(DIHCacheSupport.CACHE_READ_ONLY, "false",
-        Context.SCOPE_ENTITY);
-  }
-  
-  static class Relation{
-    protected final boolean doKeyLookup;
-    protected final String foreignKey;
-    protected final String primaryKey;
-    
-    public Relation(Context context) {
-      String where = context.getEntityAttribute("where");
-      String cacheKey = context.getEntityAttribute(DIHCacheSupport.CACHE_PRIMARY_KEY);
-      String lookupKey = context.getEntityAttribute(DIHCacheSupport.CACHE_FOREIGN_KEY);
-      if (cacheKey != null && lookupKey == null) {
-        throw new DataImportHandlerException(DataImportHandlerException.SEVERE,
-            "'cacheKey' is specified for the entity "
-                + context.getEntityAttribute("name")
-                + " but 'cacheLookup' is missing");
-        
-      }
-      if (where == null && cacheKey == null) {
-        doKeyLookup = false;
-        primaryKey = null;
-        foreignKey = null;
-      } else {
-        if (where != null) {
-          String[] splits = where.split("=");
-          primaryKey = splits[0];
-          foreignKey = splits[1].trim();
-        } else {
-          primaryKey = cacheKey;
-          foreignKey = lookupKey;
-        }
-        doKeyLookup = true;
-      }
-    }
-
-    @Override
-    public String toString() {
-      return "Relation " + primaryKey + "=" + foreignKey;
-    }
-  }
-  
-  private DIHCache instantiateCache(Context context) {
-    DIHCache cache = null;
-    try {
-      @SuppressWarnings("unchecked")
-      Class<DIHCache> cacheClass = DocBuilder.loadClass(cacheImplName, context
-          .getSolrCore());
-      Constructor<DIHCache> constr = cacheClass.getConstructor();
-      cache = constr.newInstance();
-      cache.open(context);
-    } catch (Exception e) {
-      throw new DataImportHandlerException(DataImportHandlerException.SEVERE,
-          "Unable to load Cache implementation:" + cacheImplName, e);
-    }
-    return cache;
-  }
-  
-  public void initNewParent(Context context) {
-    dataSourceRowCache = null;
-    queryVsCacheIterator = new HashMap<>();
-    for (Map.Entry<String,DIHCache> entry : queryVsCache.entrySet()) {
-      queryVsCacheIterator.put(entry.getKey(), entry.getValue().iterator());
-    }
-  }
-  
-  public void destroyAll() {
-    if (queryVsCache != null) {
-      for (DIHCache cache : queryVsCache.values()) {
-        cache.destroy();
-      }
-    }
-    queryVsCache = null;
-    dataSourceRowCache = null;
-    cacheForeignKey = null;
-  }
-  
-  /**
-   * <p>
-   * Get all the rows from the datasource for the given query and cache them
-   * </p>
-   */
-  public void populateCache(String query,
-      Iterator<Map<String,Object>> rowIterator) {
-    Map<String,Object> aRow = null;
-    DIHCache cache = queryVsCache.get(query);
-    while ((aRow = getNextFromCache(query, rowIterator)) != null) {
-      cache.add(aRow);
-    }
-  }
-  
-  private Map<String,Object> getNextFromCache(String query,
-      Iterator<Map<String,Object>> rowIterator) {
-    try {
-      if (rowIterator == null) return null;
-      if (rowIterator.hasNext()) return rowIterator.next();
-      return null;
-    } catch (Exception e) {
-      SolrException.log(log, "getNextFromCache() failed for query '" + query
-          + "'", e);
-      wrapAndThrow(DataImportHandlerException.WARN, e);
-      return null;
-    }
-  }
-  
-  public Map<String,Object> getCacheData(Context context, String query,
-      Iterator<Map<String,Object>> rowIterator) {
-    if (cacheDoKeyLookup) {
-      return getIdCacheData(context, query, rowIterator);
-    } else {
-      return getSimpleCacheData(context, query, rowIterator);
-    }
-  }
-  
-  /**
-   * If the where clause is present, the cache maps each SQL query to a map of
-   * key vs. list of rows.
-   * 
-   * @param query
-   *          the query string for which cached data is to be returned
-   * 
-   * @return the cached row corresponding to the given query after all variables
-   *         have been resolved
-   */
-  protected Map<String,Object> getIdCacheData(Context context, String query,
-      Iterator<Map<String,Object>> rowIterator) {
-    Object key = context.resolve(cacheForeignKey);
-    if (key == null) {
-      throw new DataImportHandlerException(DataImportHandlerException.WARN,
-          "The cache lookup value : " + cacheForeignKey
-              + " is resolved to be null in the entity :"
-              + context.getEntityAttribute("name"));
-      
-    }
-    if (dataSourceRowCache == null) {
-      DIHCache cache = queryVsCache.get(query);
-      
-      if (cache == null) {        
-        cache = instantiateCache(context);        
-        queryVsCache.put(query, cache);        
-        populateCache(query, rowIterator);        
-      }
-      dataSourceRowCache = cache.iterator(key);
-    }    
-    return getFromRowCacheTransformed();
-  }
-  
-  /**
-   * If the where clause is not present, the cache is a map of query vs. list of rows.
-   * 
-   * @param query
-   *          string for which cached row is to be returned
-   * 
-   * @return the cached row corresponding to the given query
-   */
-  protected Map<String,Object> getSimpleCacheData(Context context,
-      String query, Iterator<Map<String,Object>> rowIterator) {
-    if (dataSourceRowCache == null) {      
-      DIHCache cache = queryVsCache.get(query);      
-      if (cache == null) {        
-        cache = instantiateCache(context);        
-        queryVsCache.put(query, cache);        
-        populateCache(query, rowIterator);        
-        queryVsCacheIterator.put(query, cache.iterator());        
-      }      
-      Iterator<Map<String,Object>> cacheIter = queryVsCacheIterator.get(query);      
-      dataSourceRowCache = cacheIter;
-    }
-    
-    return getFromRowCacheTransformed();
-  }
-  
-  protected Map<String,Object> getFromRowCacheTransformed() {
-    if (dataSourceRowCache == null || !dataSourceRowCache.hasNext()) {
-      dataSourceRowCache = null;
-      return null;
-    }
-    Map<String,Object> r = dataSourceRowCache.next();
-    return r;
-  }
-  
-  /**
-   * <p>
-   * Specify the class for the cache implementation
-   * </p>
-   */
-  public static final String CACHE_IMPL = "cacheImpl";
-
-  /**
-   * <p>
-   * If the cache supports persistent data, set to "true" to delete any prior
-   * persisted data before running the entity.
-   * </p>
-   */
-  public static final String CACHE_DELETE_PRIOR_DATA = "cacheDeletePriorData";
-  /**
-   * <p>
-   * Specify the Foreign Key from the parent entity to join on. Use if the cache
-   * is on a child entity.
-   * </p>
-   */
-  public static final String CACHE_FOREIGN_KEY = "cacheLookup";
-
-  /**
-   * <p>
-   * Specify the Primary Key field from this Entity with which to map the
-   * input records.
-   * </p>
-   */
-  public static final String CACHE_PRIMARY_KEY = "cacheKey";
-  /**
-   * <p>
-   * If true, a pre-existing cache is re-opened for read-only access.
-   * </p>
-   */
-  public static final String CACHE_READ_ONLY = "cacheReadOnly";
-}
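
How the class above is driven in practice, reduced to a hedged sketch: once per parent row the caller resets iteration state with initNewParent, then drains getCacheData until it returns null. The method and variable names here are invented, but the DIHCacheSupport API is the one defined above, and SortedMapBackedCache is the stock cache implementation name:

```java
import java.util.Iterator;
import java.util.Map;

import org.apache.solr.handler.dataimport.Context;
import org.apache.solr.handler.dataimport.DIHCacheSupport;

class CacheDrainSketch {
  // Illustrative only: the first call for a given query populates the
  // cache from the DataSource's row iterator; later parent rows are then
  // served from the cache without touching the DataSource again.
  static void drainThroughCache(Context ctx, String query,
                                Iterator<Map<String, Object>> rows) {
    DIHCacheSupport support = new DIHCacheSupport(ctx, "SortedMapBackedCache");
    support.initNewParent(ctx); // reset per-parent iteration state
    Map<String, Object> row;
    while ((row = support.getCacheData(ctx, query, rows)) != null) {
      // join `row` into the parent document here
    }
  }
}
```
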
diff --git a/solr/contrib/dataimporthandler/src/java/org/apache/solr/handler/dataimport/DIHLogLevels.java b/solr/contrib/dataimporthandler/src/java/org/apache/solr/handler/dataimport/DIHLogLevels.java
deleted file mode 100644
index 24732d1..0000000
--- a/solr/contrib/dataimporthandler/src/java/org/apache/solr/handler/dataimport/DIHLogLevels.java
+++ /dev/null
@@ -1,21 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.solr.handler.dataimport;
-
-public enum DIHLogLevels {
-  START_ENTITY, END_ENTITY, TRANSFORMED_ROW, ENTITY_META, PRE_TRANSFORMER_ROW, START_DOC, END_DOC, ENTITY_OUT, ROW_END, TRANSFORMER_EXCEPTION, ENTITY_EXCEPTION, DISABLE_LOGGING, ENABLE_LOGGING, NONE
-}
diff --git a/solr/contrib/dataimporthandler/src/java/org/apache/solr/handler/dataimport/DIHProperties.java b/solr/contrib/dataimporthandler/src/java/org/apache/solr/handler/dataimport/DIHProperties.java
deleted file mode 100644
index f51ef07..0000000
--- a/solr/contrib/dataimporthandler/src/java/org/apache/solr/handler/dataimport/DIHProperties.java
+++ /dev/null
@@ -1,45 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.solr.handler.dataimport;
-
-import java.util.Date;
-import java.util.Map;
-
-/**
- * Implementations write out properties about the last data import
- * for use by the next import.  ex: to persist the last import timestamp
- * so that future delta imports can know what needs to be updated.
- * 
- * @lucene.experimental
- */
-public abstract class DIHProperties {
-  
-  public abstract void init(DataImporter dataImporter, Map<String, String> initParams);
-  
-  public abstract boolean isWritable();
-  
-  public abstract void persist(Map<String, Object> props);
-  
-  public abstract Map<String, Object> readIndexerProperties();
-  
-  public abstract String convertDateToString(Date d);
-  
-  public Date getCurrentTimestamp() {
-    return new Date();
-  }
-  
-}
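
A hedged sketch of the contract above: an implementation that merely keeps the properties in memory. The implementation DIH actually shipped, SimplePropertiesWriter, persists them to conf/dataimport.properties so that variables such as ${dataimporter.last_index_time} survive restarts.

```java
package org.apache.solr.handler.dataimport;

import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.HashMap;
import java.util.Locale;
import java.util.Map;

// Illustrative only: last-import properties held in memory, so delta
// imports see them within one core lifetime but not across restarts.
public class InMemoryProperties extends DIHProperties {
  private final Map<String, Object> props = new HashMap<>();

  @Override
  public void init(DataImporter dataImporter, Map<String, String> initParams) {
    // nothing to set up for the in-memory case
  }

  @Override
  public boolean isWritable() {
    return true;
  }

  @Override
  public void persist(Map<String, Object> newProps) {
    props.putAll(newProps); // e.g. "last_index_time" for delta imports
  }

  @Override
  public Map<String, Object> readIndexerProperties() {
    return new HashMap<>(props);
  }

  @Override
  public String convertDateToString(Date d) {
    return new SimpleDateFormat("yyyy-MM-dd HH:mm:ss", Locale.ROOT).format(d);
  }
}
```
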
diff --git a/solr/contrib/dataimporthandler/src/java/org/apache/solr/handler/dataimport/DIHWriter.java b/solr/contrib/dataimporthandler/src/java/org/apache/solr/handler/dataimport/DIHWriter.java
deleted file mode 100644
index bdb988d..0000000
--- a/solr/contrib/dataimporthandler/src/java/org/apache/solr/handler/dataimport/DIHWriter.java
+++ /dev/null
@@ -1,99 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.solr.handler.dataimport;
-import java.util.Map;
-import java.util.Set;
-
-import org.apache.solr.common.SolrInputDocument;
-
-/**
- * @lucene.experimental
- *
- */
-public interface DIHWriter {
-
-  /**
-   * <p>
-   *  If this writer supports transactions or commit points, then commit any changes,
-   *  optionally optimizing the data for read/write performance
-   * </p>
-   */
-  public void commit(boolean optimize);
-
-  /**
-   * <p>
-   *  Release resources used by this writer.  After calling close, reads &amp; updates will throw exceptions.
-   * </p>
-   */
-  public void close();
-
-  /**
-   * <p>
-   *  If this writer supports transactions or commit points, then roll back any uncommitted changes.
-   * </p>
-   */
-  public void rollback();
-
-  /**
-   * <p>
-   *  Delete from the writer's underlying data store based on the passed-in writer-specific query. (Optional Operation)
-   * </p>
-   */
-  public void deleteByQuery(String q);
-
-  /**
-   * <p>
-   *  Delete everything from the writer's underlying data store
-   * </p>
-   */
-  public void doDeleteAll();
-
-  /**
-   * <p>
-   *  Delete from the writer's underlying data store based on the passed-in Primary Key
-   * </p>
-   */
-  public void deleteDoc(Object key);
-
-
-
-  /**
-   * <p>
-   *  Add a document to this writer's underlying data store.
-   * </p>
-   * @return true on success, false on failure
-   */
-  public boolean upload(SolrInputDocument doc);
-
-
-
-  /**
-   * <p>
-   *  Provide context information for this writer.  init() should be called before using the writer.
-   * </p>
-   */
-  public void init(Context context);
-
-
-  /**
-   * <p>
-   *  Specify the keys to be modified by a delta update (required by writers that can store duplicate keys)
-   * </p>
-   */
-  public void setDeltaKeys(Set<Map<String, Object>> deltaKeys);
-
-}
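
A custom writer can be selected per request through the writerImpl parameter, which DataImportHandler.getSolrWriter() (later in this diff) resolves reflectively through a (UpdateRequestProcessor, SolrQueryRequest) constructor. A hypothetical example that extends the default SolrWriter rather than implementing every method of the interface above:

```java
package com.example;

import org.apache.solr.common.SolrInputDocument;
import org.apache.solr.handler.dataimport.SolrWriter;
import org.apache.solr.request.SolrQueryRequest;
import org.apache.solr.update.processor.UpdateRequestProcessor;

// Hypothetical: counts successful uploads on top of the default writer.
// Selected with &writerImpl=com.example.CountingWriter on the request.
public class CountingWriter extends SolrWriter {
  private long uploaded;

  public CountingWriter(UpdateRequestProcessor processor, SolrQueryRequest req) {
    super(processor, req); // signature required by the reflective lookup
  }

  @Override
  public boolean upload(SolrInputDocument doc) {
    boolean ok = super.upload(doc);
    if (ok) {
      uploaded++;
    }
    return ok;
  }
}
```
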
diff --git a/solr/contrib/dataimporthandler/src/java/org/apache/solr/handler/dataimport/DIHWriterBase.java b/solr/contrib/dataimporthandler/src/java/org/apache/solr/handler/dataimport/DIHWriterBase.java
deleted file mode 100644
index 43e92c3..0000000
--- a/solr/contrib/dataimporthandler/src/java/org/apache/solr/handler/dataimport/DIHWriterBase.java
+++ /dev/null
@@ -1,44 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.solr.handler.dataimport;
-
-import java.util.HashSet;
-import java.util.Map;
-import java.util.Set;
-
-public abstract class DIHWriterBase implements DIHWriter {
-  protected String keyFieldName;
-  protected Set<Object> deltaKeys = null;
-  
-  @Override
-  public void setDeltaKeys(Set<Map<String,Object>> passedInDeltaKeys) {
-    deltaKeys = new HashSet<>();
-    for (Map<String,Object> aMap : passedInDeltaKeys) {
-      if (aMap.size() > 0) {
-        Object key = null;
-        if (keyFieldName != null) {
-          key = aMap.get(keyFieldName);
-        } else {
-          key = aMap.entrySet().iterator().next();
-        }
-        if (key != null) {
-          deltaKeys.add(key);
-        }
-      }
-    }
-  }
-}
diff --git a/solr/contrib/dataimporthandler/src/java/org/apache/solr/handler/dataimport/DataImportHandler.java b/solr/contrib/dataimporthandler/src/java/org/apache/solr/handler/dataimport/DataImportHandler.java
deleted file mode 100644
index 278de7d..0000000
--- a/solr/contrib/dataimporthandler/src/java/org/apache/solr/handler/dataimport/DataImportHandler.java
+++ /dev/null
@@ -1,318 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.solr.handler.dataimport;
-
-import java.lang.invoke.MethodHandles;
-import java.lang.reflect.Constructor;
-import java.util.Arrays;
-import java.util.HashMap;
-import java.util.Map;
-
-import org.apache.solr.common.SolrException;
-import org.apache.solr.common.SolrInputDocument;
-import org.apache.solr.common.params.CommonParams;
-import org.apache.solr.common.params.MapSolrParams;
-import org.apache.solr.common.params.ModifiableSolrParams;
-import org.apache.solr.common.params.SolrParams;
-import org.apache.solr.common.util.ContentStream;
-import org.apache.solr.common.util.ContentStreamBase;
-import org.apache.solr.common.util.NamedList;
-import org.apache.solr.common.util.StrUtils;
-import org.apache.solr.core.SolrCore;
-import org.apache.solr.core.SolrResourceLoader;
-import org.apache.solr.handler.RequestHandlerBase;
-import org.apache.solr.metrics.MetricsMap;
-import org.apache.solr.metrics.SolrMetricsContext;
-import org.apache.solr.request.SolrQueryRequest;
-import org.apache.solr.response.RawResponseWriter;
-import org.apache.solr.response.SolrQueryResponse;
-import org.apache.solr.update.processor.UpdateRequestProcessor;
-import org.apache.solr.update.processor.UpdateRequestProcessorChain;
-import org.apache.solr.util.plugin.SolrCoreAware;
-import org.slf4j.Logger;
-import org.slf4j.LoggerFactory;
-
-import static org.apache.solr.handler.dataimport.DataImporter.IMPORT_CMD;
-
-/**
- * <p>
- * Solr Request Handler for data import from databases and REST data sources.
- * </p>
- * <p>
- * It is configured in solrconfig.xml
- * </p>
- * <p>
- * Refer to <a
- * href="http://wiki.apache.org/solr/DataImportHandler">http://wiki.apache.org/solr/DataImportHandler</a>
- * for more details.
- * </p>
- * <p>
- * <b>This API is experimental and subject to change</b>
- *
- * @deprecated since 8.6
- * @since solr 1.3
- */
-@Deprecated(since = "8.6")
-public class DataImportHandler extends RequestHandlerBase implements
-        SolrCoreAware {
-
-  private static final Logger log = LoggerFactory.getLogger(MethodHandles.lookup().lookupClass());
-
-  private DataImporter importer;
-
-  private boolean debugEnabled = true;
-
-  private String myName = "dataimport";
-
-  private MetricsMap metrics;
-
-  private static final String PARAM_WRITER_IMPL = "writerImpl";
-  private static final String DEFAULT_WRITER_NAME = "SolrWriter";
-  static final String ENABLE_DIH_DATA_CONFIG_PARAM = "enable.dih.dataConfigParam";
-
-  final boolean dataConfigParam_enabled = Boolean.getBoolean(ENABLE_DIH_DATA_CONFIG_PARAM);
-
-  public DataImporter getImporter() {
-    return this.importer;
-  }
-
-  @Override
-  public void init(@SuppressWarnings({"rawtypes"})NamedList args) {
-    super.init(args);
-    Map<String,String> macro = new HashMap<>();
-    macro.put("expandMacros", "false");
-    defaults = SolrParams.wrapDefaults(defaults, new MapSolrParams(macro));
-    log.warn("Data Import Handler is deprecated as of Solr 8.6. See SOLR-14066 for more details.");
-  }
-
-  @Override
-  @SuppressWarnings("unchecked")
-  public void inform(SolrCore core) {
-    try {
-      String name = getPluginInfo().name;
-      if (name.startsWith("/")) {
-        myName = name.substring(1);
-      }
-      // some users may have '/' in the handler name. replace with '_'
-      myName = myName.replaceAll("/", "_");
-      debugEnabled = StrUtils.parseBool((String)initArgs.get(ENABLE_DEBUG), true);
-      importer = new DataImporter(core, myName);         
-    } catch (Exception e) {
-      log.error( DataImporter.MSG.LOAD_EXP, e);
-      throw new SolrException(SolrException.ErrorCode.SERVER_ERROR, DataImporter.MSG.LOAD_EXP, e);
-    }
-  }
-
-  @Override
-  @SuppressWarnings("unchecked")
-  public void handleRequestBody(SolrQueryRequest req, SolrQueryResponse rsp)
-          throws Exception {
-    rsp.setHttpCaching(false);
-    
-    //TODO: figure out why just the first one is OK...
-    ContentStream contentStream = null;
-    Iterable<ContentStream> streams = req.getContentStreams();
-    if(streams != null){
-      for (ContentStream stream : streams) {
-          contentStream = stream;
-          break;
-      }
-    }
-    SolrParams params = req.getParams();
-    @SuppressWarnings({"rawtypes"})
-    NamedList defaultParams = (NamedList) initArgs.get("defaults");
-    RequestInfo requestParams = new RequestInfo(req, getParamsMap(params), contentStream);
-    String command = requestParams.getCommand();
-    
-    if (DataImporter.SHOW_CONF_CMD.equals(command)) {    
-      String dataConfigFile = params.get("config");
-      String dataConfig = params.get("dataConfig"); // needn't check dataConfigParam_enabled; we don't execute it
-      if(dataConfigFile != null) {
-        dataConfig = SolrWriter.getResourceAsString(req.getCore().getResourceLoader().openResource(dataConfigFile));
-      }
-      if(dataConfig==null)  {
-        rsp.add("status", DataImporter.MSG.NO_CONFIG_FOUND);
-      } else {
-        // Modify incoming request params to add wt=raw
-        ModifiableSolrParams rawParams = new ModifiableSolrParams(req.getParams());
-        rawParams.set(CommonParams.WT, "raw");
-        req.setParams(rawParams);
-        ContentStreamBase content = new ContentStreamBase.StringStream(dataConfig);
-        rsp.add(RawResponseWriter.CONTENT, content);
-      }
-      return;
-    }
-
-    if (params.get("dataConfig") != null && !dataConfigParam_enabled) {
-      throw new SolrException(SolrException.ErrorCode.FORBIDDEN,
-          "Use of the dataConfig param (DIH debug mode) requires the system property " +
-              ENABLE_DIH_DATA_CONFIG_PARAM + " because it's a security risk.");
-    }
-
-    rsp.add("initArgs", initArgs);
-    String message = "";
-
-    if (command != null) {
-      rsp.add("command", command);
-    }
-    // If importer is still null
-    if (importer == null) {
-      rsp.add("status", DataImporter.MSG.NO_INIT);
-      return;
-    }
-
-    if (command != null && DataImporter.ABORT_CMD.equals(command)) {
-      importer.runCmd(requestParams, null);
-    } else if (importer.isBusy()) {
-      message = DataImporter.MSG.CMD_RUNNING;
-    } else if (command != null) {
-      if (DataImporter.FULL_IMPORT_CMD.equals(command)
-              || DataImporter.DELTA_IMPORT_CMD.equals(command) ||
-              IMPORT_CMD.equals(command)) {
-        importer.maybeReloadConfiguration(requestParams, defaultParams);
-        UpdateRequestProcessorChain processorChain =
-                req.getCore().getUpdateProcessorChain(params);
-        UpdateRequestProcessor processor = processorChain.createProcessor(req, rsp);
-        SolrResourceLoader loader = req.getCore().getResourceLoader();
-        DIHWriter sw = getSolrWriter(processor, loader, requestParams, req);
-        
-        if (requestParams.isDebug()) {
-          if (debugEnabled) {
-            // Synchronous request for the debug mode
-            importer.runCmd(requestParams, sw);
-            rsp.add("mode", "debug");
-            rsp.add("documents", requestParams.getDebugInfo().debugDocuments);
-            if (requestParams.getDebugInfo().debugVerboseOutput != null) {
-              rsp.add("verbose-output", requestParams.getDebugInfo().debugVerboseOutput);
-            }
-          } else {
-            message = DataImporter.MSG.DEBUG_NOT_ENABLED;
-          }
-        } else {
-          // Asynchronous request for normal mode
-          if(requestParams.getContentStream() == null && !requestParams.isSyncMode()){
-            importer.runAsync(requestParams, sw);
-          } else {
-            importer.runCmd(requestParams, sw);
-          }
-        }
-      } else if (DataImporter.RELOAD_CONF_CMD.equals(command)) { 
-        if(importer.maybeReloadConfiguration(requestParams, defaultParams)) {
-          message = DataImporter.MSG.CONFIG_RELOADED;
-        } else {
-          message = DataImporter.MSG.CONFIG_NOT_RELOADED;
-        }
-      }
-    }
-    rsp.add("status", importer.isBusy() ? "busy" : "idle");
-    rsp.add("importResponse", message);
-    rsp.add("statusMessages", importer.getStatusMessages());
-  }
-
-  /** The value is converted to a String or {@code List<String>} if multi-valued. */
-  private Map<String, Object> getParamsMap(SolrParams params) {
-    Map<String, Object> result = new HashMap<>();
-    for (Map.Entry<String, String[]> pair : params){
-        String s = pair.getKey();
-        String[] val = pair.getValue();
-        if (val == null || val.length < 1)
-          continue;
-        if (val.length == 1)
-          result.put(s, val[0]);
-        else
-          result.put(s, Arrays.asList(val));
-    }
-    return result;
-  }
-
-  private DIHWriter getSolrWriter(final UpdateRequestProcessor processor,
-      final SolrResourceLoader loader, final RequestInfo requestParams,
-      SolrQueryRequest req) {
-    SolrParams reqParams = req.getParams();
-    String writerClassStr = null;
-    if (reqParams != null && reqParams.get(PARAM_WRITER_IMPL) != null) {
-      writerClassStr = reqParams.get(PARAM_WRITER_IMPL);
-    }
-    DIHWriter writer;
-    if (writerClassStr != null
-        && !writerClassStr.equals(DEFAULT_WRITER_NAME)
-        && !writerClassStr.equals(DocBuilder.class.getPackage().getName() + "."
-            + DEFAULT_WRITER_NAME)) {
-      try {
-        @SuppressWarnings("unchecked")
-        Class<DIHWriter> writerClass = DocBuilder.loadClass(writerClassStr, req.getCore());
-        @SuppressWarnings({"rawtypes"})
-        Constructor<DIHWriter> cnstr = writerClass.getConstructor(new Class[] {
-            UpdateRequestProcessor.class, SolrQueryRequest.class});
-        return cnstr.newInstance((Object) processor, (Object) req);
-      } catch (Exception e) {
-        throw new DataImportHandlerException(DataImportHandlerException.SEVERE,
-            "Unable to load Writer implementation:" + writerClassStr, e);
-      }
-    } else {
-      return new SolrWriter(processor, req) {
-        @Override
-        public boolean upload(SolrInputDocument document) {
-          try {
-            return super.upload(document);
-          } catch (RuntimeException e) {
-            log.error("Exception while adding: {}", document, e);
-            return false;
-          }
-        }
-      };
-    }
-  }
-
-  @Override
-  public void initializeMetrics(SolrMetricsContext parentContext, String scope) {
-    super.initializeMetrics(parentContext, scope);
-    metrics = new MetricsMap((detailed, map) -> {
-      if (importer != null) {
-        DocBuilder.Statistics cumulative = importer.cumulativeStatistics;
-
-        map.put("Status", importer.getStatus().toString());
-
-        if (importer.docBuilder != null) {
-          DocBuilder.Statistics running = importer.docBuilder.importStatistics;
-          map.put("Documents Processed", running.docCount);
-          map.put("Requests made to DataSource", running.queryCount);
-          map.put("Rows Fetched", running.rowsCount);
-          map.put("Documents Deleted", running.deletedDocCount);
-          map.put("Documents Skipped", running.skipDocCount);
-        }
-
-        map.put(DataImporter.MSG.TOTAL_DOC_PROCESSED, cumulative.docCount);
-        map.put(DataImporter.MSG.TOTAL_QUERIES_EXECUTED, cumulative.queryCount);
-        map.put(DataImporter.MSG.TOTAL_ROWS_EXECUTED, cumulative.rowsCount);
-        map.put(DataImporter.MSG.TOTAL_DOCS_DELETED, cumulative.deletedDocCount);
-        map.put(DataImporter.MSG.TOTAL_DOCS_SKIPPED, cumulative.skipDocCount);
-      }
-    });
-    solrMetricsContext.gauge(metrics, true, "importer", getCategory().toString(), scope);
-  }
-
-  // //////////////////////SolrInfoMBeans methods //////////////////////
-
-  @Override
-  public String getDescription() {
-    return DataImporter.MSG.JMX_DESC;
-  }
-
-  public static final String ENABLE_DEBUG = "enableDebug";
-}
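
For reference, a SolrJ sketch of driving the handler above (the core name and handler path are illustrative; command, clean, and commit are standard DIH request parameters). Because a normal full import runs asynchronously, the immediate response typically reports the importer as busy:

```java
import java.io.IOException;

import org.apache.solr.client.solrj.SolrClient;
import org.apache.solr.client.solrj.SolrServerException;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.request.QueryRequest;
import org.apache.solr.common.params.ModifiableSolrParams;
import org.apache.solr.common.util.NamedList;

class FullImportSketch {
  static void startFullImport() throws SolrServerException, IOException {
    try (SolrClient client =
        new HttpSolrClient.Builder("http://localhost:8983/solr/mycore").build()) {
      ModifiableSolrParams p = new ModifiableSolrParams();
      p.set("command", "full-import"); // DataImporter.FULL_IMPORT_CMD
      p.set("clean", "true");          // clear the index before importing
      p.set("commit", "true");         // commit once the import finishes
      QueryRequest req = new QueryRequest(p);
      req.setPath("/dataimport");      // handler path from solrconfig.xml
      NamedList<Object> rsp = client.request(req);
      System.out.println(rsp.get("status")); // "busy" or "idle", per above
    }
  }
}
```
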
diff --git a/solr/contrib/dataimporthandler/src/java/org/apache/solr/handler/dataimport/DataImportHandlerException.java b/solr/contrib/dataimporthandler/src/java/org/apache/solr/handler/dataimport/DataImportHandlerException.java
deleted file mode 100644
index e69b3fd..0000000
--- a/solr/contrib/dataimporthandler/src/java/org/apache/solr/handler/dataimport/DataImportHandlerException.java
+++ /dev/null
@@ -1,75 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.solr.handler.dataimport;
-
-/**
- * <p> Exception class for all DataImportHandler exceptions </p>
- * <p>
- * <b>This API is experimental and subject to change</b>
- *
- * @since solr 1.3
- */
-public class DataImportHandlerException extends RuntimeException {
-  private int errCode;
-
-  public boolean debugged = false;
-
-  public static final int SEVERE = 500, WARN = 400, SKIP = 300, SKIP_ROW = 301;
-
-  public DataImportHandlerException(int err) {
-    super();
-    errCode = err;
-  }
-
-  public DataImportHandlerException(int err, String message) {
-    super(message + (SolrWriter.getDocCount() == null ? "" : MSG + SolrWriter.getDocCount()));
-    errCode = err;
-  }
-
-  public DataImportHandlerException(int err, String message, Throwable cause) {
-    super(message + (SolrWriter.getDocCount() == null ? "" : MSG + SolrWriter.getDocCount()), cause);
-    errCode = err;
-  }
-
-  public DataImportHandlerException(int err, Throwable cause) {
-    super(cause);
-    errCode = err;
-  }
-
-  public int getErrCode() {
-    return errCode;
-  }
-
-  public static DataImportHandlerException wrapAndThrow(int err, Exception e) {
-    if (e instanceof DataImportHandlerException) {
-      throw (DataImportHandlerException) e;
-    } else {
-      throw new DataImportHandlerException(err, e);
-    }
-  }
-
-  public static DataImportHandlerException wrapAndThrow(int err, Exception e, String msg) {
-    if (e instanceof DataImportHandlerException) {
-      throw (DataImportHandlerException) e;
-    } else {
-      throw new DataImportHandlerException(err, msg, e);
-    }
-  }
-
-
-  public static final String MSG = " Processing Document # ";
-}
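
Typical use of wrapAndThrow, mirroring the call in DIHCacheSupport.getNextFromCache earlier in this diff: a component rethrows a checked failure at a chosen severity, and SKIP-level errors let the import skip the offending document instead of aborting. Everything named here apart from the exception class itself is hypothetical:

```java
import java.util.Map;

import org.apache.solr.handler.dataimport.DataImportHandlerException;

class SkipOnErrorSketch {
  // Invented row source, just to keep the sketch self-contained.
  interface RowSource {
    Map<String, Object> nextRow() throws Exception;
  }

  static Map<String, Object> nextRowOrSkip(RowSource source) {
    try {
      return source.nextRow();
    } catch (Exception e) {
      // wrapAndThrow always throws; the 'throw' keeps the compiler happy
      // and makes the control flow explicit at the call site.
      throw DataImportHandlerException.wrapAndThrow(DataImportHandlerException.SKIP, e);
    }
  }
}
```
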
diff --git a/solr/contrib/dataimporthandler/src/java/org/apache/solr/handler/dataimport/DataImporter.java b/solr/contrib/dataimporthandler/src/java/org/apache/solr/handler/dataimport/DataImporter.java
deleted file mode 100644
index c5b2f70..0000000
--- a/solr/contrib/dataimporthandler/src/java/org/apache/solr/handler/dataimport/DataImporter.java
+++ /dev/null
@@ -1,628 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.solr.handler.dataimport;
-
-import org.apache.solr.common.EmptyEntityResolver;
-import org.apache.solr.common.SolrException;
-import org.apache.solr.core.SolrCore;
-import org.apache.solr.schema.IndexSchema;
-import org.apache.solr.util.SystemIdResolver;
-import org.apache.solr.common.util.NamedList;
-import org.apache.solr.common.util.XMLErrorLogger;
-import org.apache.solr.handler.dataimport.config.ConfigNameConstants;
-import org.apache.solr.handler.dataimport.config.ConfigParseUtil;
-import org.apache.solr.handler.dataimport.config.DIHConfiguration;
-import org.apache.solr.handler.dataimport.config.Entity;
-import org.apache.solr.handler.dataimport.config.PropertyWriter;
-import org.apache.solr.handler.dataimport.config.Script;
-
-import static org.apache.solr.handler.dataimport.DataImportHandlerException.wrapAndThrow;
-import static org.apache.solr.handler.dataimport.DataImportHandlerException.SEVERE;
-import static org.apache.solr.handler.dataimport.DocBuilder.loadClass;
-import static org.apache.solr.handler.dataimport.config.ConfigNameConstants.CLASS;
-
-import org.slf4j.Logger;
-import org.slf4j.LoggerFactory;
-import org.w3c.dom.Document;
-import org.w3c.dom.Element;
-import org.w3c.dom.NodeList;
-import org.xml.sax.InputSource;
-import org.apache.commons.io.IOUtils;
-
-import javax.xml.parsers.DocumentBuilder;
-import javax.xml.parsers.DocumentBuilderFactory;
-
-import java.io.IOException;
-import java.io.StringReader;
-import java.lang.invoke.MethodHandles;
-import java.util.ArrayList;
-import java.util.Collections;
-import java.util.Date;
-import java.util.HashMap;
-import java.util.LinkedHashMap;
-import java.util.List;
-import java.util.Map;
-import java.util.Properties;
-import java.util.concurrent.ConcurrentHashMap;
-import java.util.concurrent.atomic.AtomicLong;
-import java.util.concurrent.locks.ReentrantLock;
-
-/**
- * <p> Stores all configuration information for pulling and indexing data. </p>
- * <p>
- * <b>This API is experimental and subject to change</b>
- *
- * @since solr 1.3
- */
-public class DataImporter {
-
-  public enum Status {
-    IDLE, RUNNING_FULL_DUMP, RUNNING_DELTA_DUMP, JOB_FAILED
-  }
-
-  private static final Logger log = LoggerFactory.getLogger(MethodHandles.lookup().lookupClass());
-  private static final XMLErrorLogger XMLLOG = new XMLErrorLogger(log);
-
-  private Status status = Status.IDLE;
-  private DIHConfiguration config;
-  private Date indexStartTime;
-  private Properties store = new Properties();
-  private Map<String, Map<String,String>> requestLevelDataSourceProps = new HashMap<>();
-  private IndexSchema schema;
-  public DocBuilder docBuilder;
-  public DocBuilder.Statistics cumulativeStatistics = new DocBuilder.Statistics();
-  private SolrCore core;  
-  private Map<String, Object> coreScopeSession = new ConcurrentHashMap<>();
-  private ReentrantLock importLock = new ReentrantLock();
-  private boolean isDeltaImportSupported = false;  
-  private final String handlerName;  
-
-  /**
-   * Only for testing purposes
-   */
-  DataImporter() {
-    this.handlerName = "dataimport" ;
-  }
-  
-  DataImporter(SolrCore core, String handlerName) {
-    this.handlerName = handlerName;
-    this.core = core;
-    this.schema = core.getLatestSchema();
-  }
-
-  boolean maybeReloadConfiguration(RequestInfo params,
-      NamedList<?> defaultParams) throws IOException {
-    if (importLock.tryLock()) {
-      boolean success = false;
-      try {        
-        if (null != params.getRequest()) {
-          if (schema != params.getRequest().getSchema()) {
-            schema = params.getRequest().getSchema();
-          }
-        }
-        String dataConfigText = params.getDataConfig();
-        String dataconfigFile = params.getConfigFile();        
-        InputSource is = null;
-        if(dataConfigText!=null && dataConfigText.length()>0) {
-          is = new InputSource(new StringReader(dataConfigText));
-        } else if(dataconfigFile!=null) {
-          is = new InputSource(core.getResourceLoader().openResource(dataconfigFile));
-          is.setSystemId(SystemIdResolver.createSystemIdFromResourceName(dataconfigFile));
-          log.info("Loading DIH Configuration: {}", dataconfigFile);
-        }
-        if(is!=null) {          
-          config = loadDataConfig(is);
-          success = true;
-        }      
-        
-        Map<String,Map<String,String>> dsProps = new HashMap<>();
-        if(defaultParams!=null) {
-          int position = 0;
-          while (position < defaultParams.size()) {
-            if (defaultParams.getName(position) == null) {
-              break;
-            }
-            String name = defaultParams.getName(position);            
-            if (name.equals("datasource")) {
-              success = true;
-              @SuppressWarnings({"rawtypes"})
-              NamedList dsConfig = (NamedList) defaultParams.getVal(position);
-              log.info("Getting configuration for Global Datasource...");
-              Map<String,String> props = new HashMap<>();
-              for (int i = 0; i < dsConfig.size(); i++) {
-                props.put(dsConfig.getName(i), dsConfig.getVal(i).toString());
-              }
-              log.info("Adding properties to datasource: {}", props);
-              dsProps.put((String) dsConfig.get("name"), props);
-            }
-            position++;
-          }
-        }
-        requestLevelDataSourceProps = Collections.unmodifiableMap(dsProps);
-      } catch(IOException ioe) {
-        throw ioe;
-      } finally {
-        importLock.unlock();
-      }
-      return success;
-    } else {
-      return false;
-    }
-  }
-  
-  
-  
-  public String getHandlerName() {
-    return handlerName;
-  }
-
-  public IndexSchema getSchema() {
-    return schema;
-  }
-
-  /**
-   * Used by tests
-   */
-  void loadAndInit(String configStr) {
-    config = loadDataConfig(new InputSource(new StringReader(configStr)));
-  }
-
-  void loadAndInit(InputSource configFile) {
-    config = loadDataConfig(configFile);
-  }
-
-  public DIHConfiguration loadDataConfig(InputSource configFile) {
-
-    DIHConfiguration dihcfg = null;
-    try {
-      DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance();
-      dbf.setValidating(false);
-      
-      // Only enable XInclude if the XML comes from a safe source (local file)
-      // and both a SolrCore and a SystemId are present (it makes no sense otherwise):
-      if (core != null && configFile.getSystemId() != null) {
-        try {
-          dbf.setXIncludeAware(true);
-          dbf.setNamespaceAware(true);
-        } catch( UnsupportedOperationException e ) {
-          log.warn( "XML parser doesn't support XInclude option" );
-        }
-      }
-      
-      DocumentBuilder builder = dbf.newDocumentBuilder();
-      // Only enable XInclude / external entities if the XML comes from a
-      // safe source (local file) and both a SolrCore and a SystemId are present:
-      if (core != null && configFile.getSystemId() != null) {
-        builder.setEntityResolver(new SystemIdResolver(core.getResourceLoader()));
-      } else {
-        // Don't allow external entities without having a system ID:
-        builder.setEntityResolver(EmptyEntityResolver.SAX_INSTANCE);
-      }
-      builder.setErrorHandler(XMLLOG);
-      Document document;
-      try {
-        document = builder.parse(configFile);
-      } finally {
-        // some XML parsers are broken and don't close the byte stream (but they should according to spec)
-        IOUtils.closeQuietly(configFile.getByteStream());
-      }
-
-      dihcfg = readFromXml(document);
-      log.info("Data Configuration loaded successfully");
-    } catch (Exception e) {
-      throw new DataImportHandlerException(SEVERE,
-              "Data Config problem: " + e.getMessage(), e);
-    }
-    for (Entity e : dihcfg.getEntities()) {
-      if (e.getAllAttributes().containsKey(SqlEntityProcessor.DELTA_QUERY)) {
-        isDeltaImportSupported = true;
-        break;
-      }
-    }
-    return dihcfg;
-  }
-  
-  public DIHConfiguration readFromXml(Document xmlDocument) {
-    DIHConfiguration config;
-    List<Map<String, String >> functions = new ArrayList<>();
-    Script script = null;
-    Map<String, Map<String,String>> dataSources = new HashMap<>();
-    
-    NodeList dataConfigTags = xmlDocument.getElementsByTagName("dataConfig");
-    if(dataConfigTags == null || dataConfigTags.getLength() == 0) {
-      throw new DataImportHandlerException(SEVERE, "the root node '<dataConfig>' is missing");
-    }
-    Element e = (Element) dataConfigTags.item(0);
-    List<Element> documentTags = ConfigParseUtil.getChildNodes(e, "document");
-    if (documentTags.isEmpty()) {
-      throw new DataImportHandlerException(SEVERE, "DataImportHandler " +
-              "configuration file must have one <document> node.");
-    }
-
-    List<Element> scriptTags = ConfigParseUtil.getChildNodes(e, ConfigNameConstants.SCRIPT);
-    if (!scriptTags.isEmpty()) {
-      script = new Script(scriptTags.get(0));
-    }
-
-    // Add the provided evaluators
-    List<Element> functionTags = ConfigParseUtil.getChildNodes(e, ConfigNameConstants.FUNCTION);
-    if (!functionTags.isEmpty()) {
-      for (Element element : functionTags) {
-        String func = ConfigParseUtil.getStringAttribute(element, NAME, null);
-        String clz = ConfigParseUtil.getStringAttribute(element, ConfigNameConstants.CLASS, null);
-        if (func == null || clz == null){
-          throw new DataImportHandlerException(
-                  SEVERE,
-                  "<function> must have a 'name' and 'class' attributes");
-        } else {
-          functions.add(ConfigParseUtil.getAllAttributes(element));
-        }
-      }
-    }
-    List<Element> dataSourceTags = ConfigParseUtil.getChildNodes(e, ConfigNameConstants.DATA_SRC);
-    if (!dataSourceTags.isEmpty()) {
-      for (Element element : dataSourceTags) {
-        Map<String,String> p = new HashMap<>();
-        HashMap<String, String> attrs = ConfigParseUtil.getAllAttributes(element);
-        for (Map.Entry<String, String> entry : attrs.entrySet()) {
-          p.put(entry.getKey(), entry.getValue());
-        }
-        dataSources.put(p.get("name"), p);
-      }
-    }
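-    // If no default (unnamed) data source was configured, promote one of the
-    // named ones to serve as the default, stored under the null key.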
-    if(dataSources.get(null) == null){
-      for (Map<String,String> properties : dataSources.values()) {
-        dataSources.put(null,properties);
-        break;        
-      } 
-    }
-    PropertyWriter pw = null;
-    List<Element> propertyWriterTags = ConfigParseUtil.getChildNodes(e, ConfigNameConstants.PROPERTY_WRITER);
-    if (propertyWriterTags.isEmpty()) {
-      boolean zookeeper = false;
-      if (this.core != null
-          && this.core.getCoreContainer().isZooKeeperAware()) {
-        zookeeper = true;
-      }
-      pw = new PropertyWriter(zookeeper ? "ZKPropertiesWriter"
-          : "SimplePropertiesWriter", Collections.<String,String> emptyMap());
-    } else if (propertyWriterTags.size() > 1) {
-      throw new DataImportHandlerException(SEVERE, "Only one "
-          + ConfigNameConstants.PROPERTY_WRITER + " can be configured.");
-    } else {
-      Element pwElement = propertyWriterTags.get(0);
-      String type = null;
-      Map<String,String> params = new HashMap<>();
-      for (Map.Entry<String,String> entry : ConfigParseUtil.getAllAttributes(
-          pwElement).entrySet()) {
-        if (TYPE.equals(entry.getKey())) {
-          type = entry.getValue();
-        } else {
-          params.put(entry.getKey(), entry.getValue());
-        }
-      }
-      if (type == null) {
-        throw new DataImportHandlerException(SEVERE, "The "
-            + ConfigNameConstants.PROPERTY_WRITER + " element must specify "
-            + TYPE);
-      }
-      pw = new PropertyWriter(type, params);
-    }
-    return new DIHConfiguration(documentTags.get(0), this, functions, script, dataSources, pw);
-  }
-    
-  @SuppressWarnings("unchecked")
-  private DIHProperties createPropertyWriter() {
-    DIHProperties propWriter = null;
-    PropertyWriter configPw = config.getPropertyWriter();
-    try {
-      Class<DIHProperties> writerClass = DocBuilder.loadClass(configPw.getType(), this.core);
-      propWriter = writerClass.getConstructor().newInstance();
-      propWriter.init(this, configPw.getParameters());
-    } catch (Exception e) {
-      throw new DataImportHandlerException(DataImportHandlerException.SEVERE, "Unable to PropertyWriter implementation:" + configPw.getType(), e);
-    }
-    return propWriter;
-  }
-
-  public DIHConfiguration getConfig() {
-    return config;
-  }
-
-  Date getIndexStartTime() {
-    return indexStartTime;
-  }
-
-  void setIndexStartTime(Date indexStartTime) {
-    this.indexStartTime = indexStartTime;
-  }
-
-  void store(Object key, Object value) {
-    store.put(key, value);
-  }
-
-  Object retrieve(Object key) {
-    return store.get(key);
-  }
-
-  @SuppressWarnings({"unchecked", "rawtypes"})
-  public DataSource getDataSourceInstance(Entity key, String name, Context ctx) {
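-    // Resolve data-source properties in priority order: request-scoped named
-    // source, config-file named source, request-scoped default, then
-    // config-file default.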
-    Map<String,String> p = requestLevelDataSourceProps.get(name);
-    if (p == null)
-      p = config.getDataSources().get(name);
-    if (p == null)
-      p = requestLevelDataSourceProps.get(null);// for default data source
-    if (p == null)
-      p = config.getDataSources().get(null);
-    if (p == null)  
-      throw new DataImportHandlerException(SEVERE,
-              "No dataSource :" + name + " available for entity :" + key.getName());
-    String type = p.get(TYPE);
-    @SuppressWarnings({"rawtypes"})
-    DataSource dataSrc = null;
-    if (type == null) {
-      dataSrc = new JdbcDataSource();
-    } else {
-      try {
-        dataSrc = (DataSource) DocBuilder.loadClass(type, getCore()).getConstructor().newInstance();
-      } catch (Exception e) {
-        wrapAndThrow(SEVERE, e, "Invalid type for data source: " + type);
-      }
-    }
-    try {
-      Properties copyProps = new Properties();
-      copyProps.putAll(p);
-      Map<String, Object> map = ctx.getRequestParameters();
-      if (map.containsKey("rows")) {
-        int rows = Integer.parseInt((String) map.get("rows"));
-        if (map.containsKey("start")) {
-          rows += Integer.parseInt((String) map.get("start"));
-        }
-        copyProps.setProperty("maxRows", String.valueOf(rows));
-      }
-      dataSrc.init(ctx, copyProps);
-    } catch (Exception e) {
-      wrapAndThrow(SEVERE, e, "Failed to initialize DataSource: " + key.getDataSourceName());
-    }
-    return dataSrc;
-  }
-
-  public Status getStatus() {
-    return status;
-  }
-
-  public void setStatus(Status status) {
-    this.status = status;
-  }
-
-  public boolean isBusy() {
-    return importLock.isLocked();
-  }
-
-  public void doFullImport(DIHWriter writer, RequestInfo requestParams) {
-    log.info("Starting Full Import");
-    setStatus(Status.RUNNING_FULL_DUMP);
-    try {
-      DIHProperties dihPropWriter = createPropertyWriter();
-      setIndexStartTime(dihPropWriter.getCurrentTimestamp());
-      docBuilder = new DocBuilder(this, writer, dihPropWriter, requestParams);
-      checkWritablePersistFile(writer, dihPropWriter);
-      docBuilder.execute();
-      if (!requestParams.isDebug())
-        cumulativeStatistics.add(docBuilder.importStatistics);
-    } catch (Exception e) {
-      SolrException.log(log, "Full Import failed", e);
-      docBuilder.handleError("Full Import failed", e);
-    } finally {
-      setStatus(Status.IDLE);
-      DocBuilder.INSTANCE.set(null);
-    }
-
-  }
-
-  private void checkWritablePersistFile(DIHWriter writer, DIHProperties dihPropWriter) {
-    if (isDeltaImportSupported && !dihPropWriter.isWritable()) {
-      throw new DataImportHandlerException(SEVERE,
-          "The properties file is not writable. Delta imports are supported by the data config but will not work.");
-    }
-  }
-
-  public void doDeltaImport(DIHWriter writer, RequestInfo requestParams) {
-    log.info("Starting Delta Import");
-    setStatus(Status.RUNNING_DELTA_DUMP);
-    try {
-      DIHProperties dihPropWriter = createPropertyWriter();
-      setIndexStartTime(dihPropWriter.getCurrentTimestamp());
-      docBuilder = new DocBuilder(this, writer, dihPropWriter, requestParams);
-      checkWritablePersistFile(writer, dihPropWriter);
-      docBuilder.execute();
-      if (!requestParams.isDebug())
-        cumulativeStatistics.add(docBuilder.importStatistics);
-    } catch (Exception e) {
-      log.error("Delta Import Failed", e);
-      docBuilder.handleError("Delta Import Failed", e);
-    } finally {
-      setStatus(Status.IDLE);
-      DocBuilder.INSTANCE.set(null);
-    }
-
-  }
-
-  public void runAsync(final RequestInfo reqParams, final DIHWriter sw) {
-    new Thread(() -> runCmd(reqParams, sw)).start();
-  }
-
-  void runCmd(RequestInfo reqParams, DIHWriter sw) {
-    String command = reqParams.getCommand();
-    if (command.equals(ABORT_CMD)) {
-      if (docBuilder != null) {
-        docBuilder.abort();
-      }
-      return;
-    }
-    if (!importLock.tryLock()){
-      log.warn("Import command failed . another import is running");
-      return;
-    }
-    try {
-      if (FULL_IMPORT_CMD.equals(command) || IMPORT_CMD.equals(command)) {
-        doFullImport(sw, reqParams);
-      } else if (command.equals(DELTA_IMPORT_CMD)) {
-        doDeltaImport(sw, reqParams);
-      }
-    } finally {
-      importLock.unlock();
-    }
-  }
-
-  @SuppressWarnings("unchecked")
-  Map<String, String> getStatusMessages() {
-    // This map object is a Collections.synchronizedMap(new LinkedHashMap()); as long
-    // as we synchronize on the object it is safe to iterate through the map.
-    @SuppressWarnings({"rawtypes"})
-    Map statusMessages = (Map) retrieve(STATUS_MSGS);
-    Map<String, String> result = new LinkedHashMap<>();
-    if (statusMessages != null) {
-      synchronized (statusMessages) {
-        for (Object o : statusMessages.entrySet()) {
-          @SuppressWarnings({"rawtypes"})
-          Map.Entry e = (Map.Entry) o;
-          // toString() is called explicitly because some of the values create their data lazily when toString() is invoked
-          result.put((String) e.getKey(), e.getValue().toString());
-        }
-      }
-    }
-    return result;
-
-  }
-
-  public DocBuilder getDocBuilder() {
-    return docBuilder;
-  }
-
-  public DocBuilder getDocBuilder(DIHWriter writer, RequestInfo requestParams) {
-    DIHProperties dihPropWriter = createPropertyWriter();
-    return new DocBuilder(this, writer, dihPropWriter, requestParams);
-  }
-
-  Map<String, Evaluator> getEvaluators() {
-    return getEvaluators(config.getFunctions());
-  }
-  
-  /**
-   * Used by tests.
-   */
-  @SuppressWarnings({"unchecked"})
-  Map<String, Evaluator> getEvaluators(List<Map<String,String>> fn) {
-    Map<String, Evaluator> evaluators = new HashMap<>();
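-    // Register the built-in evaluators first, then instantiate any
-    // user-defined <function> entries from the config; a user entry with the
-    // same name replaces the built-in one.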
-    evaluators.put(Evaluator.DATE_FORMAT_EVALUATOR, new DateFormatEvaluator());
-    evaluators.put(Evaluator.SQL_ESCAPE_EVALUATOR, new SqlEscapingEvaluator());
-    evaluators.put(Evaluator.URL_ENCODE_EVALUATOR, new UrlEvaluator());
-    evaluators.put(Evaluator.ESCAPE_SOLR_QUERY_CHARS, new SolrQueryEscapingEvaluator());
-    SolrCore core = docBuilder == null ? null : docBuilder.dataImporter.getCore();
-    for (Map<String, String> map : fn) {
-      try {
-        evaluators.put(map.get(NAME), (Evaluator) loadClass(map.get(CLASS), core).getConstructor().newInstance());
-      } catch (Exception e) {
-        wrapAndThrow(SEVERE, e, "Unable to instantiate evaluator: " + map.get(CLASS));
-      }
-    }
-    return evaluators;    
-  }
-
-  static final ThreadLocal<AtomicLong> QUERY_COUNT = new ThreadLocal<AtomicLong>() {
-    @Override
-    protected AtomicLong initialValue() {
-      return new AtomicLong();
-    }
-  };
-
-  static final class MSG {
-    public static final String NO_CONFIG_FOUND = "Configuration not found";
-
-    public static final String NO_INIT = "DataImportHandler started. Not Initialized. No commands can be run";
-
-    public static final String INVALID_CONFIG = "FATAL: Could not create importer. DataImporter config invalid";
-
-    public static final String LOAD_EXP = "Exception while loading DataImporter";
-
-    public static final String JMX_DESC = "Manage data import from databases to Solr";
-
-    public static final String CMD_RUNNING = "A command is still running...";
-
-    public static final String DEBUG_NOT_ENABLED = "Debug not enabled. Add a tag <str name=\"enableDebug\">true</str> in solrconfig.xml";
-
-    public static final String CONFIG_RELOADED = "Configuration re-loaded successfully";
-
-    public static final String CONFIG_NOT_RELOADED = "Configuration NOT re-loaded... Data Importer is busy.";
-
-    public static final String TOTAL_DOC_PROCESSED = "Total Documents Processed";
-
-    public static final String TOTAL_FAILED_DOCS = "Total Documents Failed";
-
-    public static final String TOTAL_QUERIES_EXECUTED = "Total Requests made to DataSource";
-
-    public static final String TOTAL_ROWS_EXECUTED = "Total Rows Fetched";
-
-    public static final String TOTAL_DOCS_DELETED = "Total Documents Deleted";
-
-    public static final String TOTAL_DOCS_SKIPPED = "Total Documents Skipped";
-  }
-
-  public SolrCore getCore() {
-    return core;
-  }
-  
-  void putToCoreScopeSession(String key, Object val) {
-    coreScopeSession.put(key, val);
-  }
-  Object getFromCoreScopeSession(String key) {
-    return coreScopeSession.get(key);
-  }
-
-  public static final String COLUMN = "column";
-
-  public static final String TYPE = "type";
-
-  public static final String DATA_SRC = "dataSource";
-
-  public static final String MULTI_VALUED = "multiValued";
-
-  public static final String NAME = "name";
-
-  public static final String STATUS_MSGS = "status-messages";
-
-  public static final String FULL_IMPORT_CMD = "full-import";
-
-  public static final String IMPORT_CMD = "import";
-
-  public static final String DELTA_IMPORT_CMD = "delta-import";
-
-  public static final String ABORT_CMD = "abort";
-
-  public static final String DEBUG_MODE = "debug";
-
-  public static final String RELOAD_CONF_CMD = "reload-config";
-
-  public static final String SHOW_CONF_CMD = "show-config";
-  
-}
diff --git a/solr/contrib/dataimporthandler/src/java/org/apache/solr/handler/dataimport/DataSource.java b/solr/contrib/dataimporthandler/src/java/org/apache/solr/handler/dataimport/DataSource.java
deleted file mode 100644
index e217ddd..0000000
--- a/solr/contrib/dataimporthandler/src/java/org/apache/solr/handler/dataimport/DataSource.java
+++ /dev/null
@@ -1,65 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.solr.handler.dataimport;
-
-import java.util.Properties;
-
-/**
- * <p>
- * Provides data from a source with a given query.
- * </p>
- * <p>
- * Implementations of this abstract class must provide a default no-arg constructor.
- * </p>
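- * <p>
- * A minimal sketch of a custom implementation (hypothetical class name, not a
- * shipped DataSource):
- * </p>
- * <pre>
- * public class UpperCaseDataSource extends DataSource&lt;String&gt; {
- *   public void init(Context context, Properties initProps) { }
- *   public String getData(String query) { return query.toUpperCase(Locale.ROOT); }
- *   public void close() { }
- * }
- * </pre>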
- * <p>
- * Refer to <a
- * href="http://wiki.apache.org/solr/DataImportHandler">http://wiki.apache.org/solr/DataImportHandler</a>
- * for more details.
- * </p>
- * <p>
- * <b>This API is experimental and may change in the future.</b>
- *
- * @since solr 1.3
- */
-public abstract class DataSource<T> {
-
-  /**
-   * Initializes the DataSource with the <code>Context</code> and
-   * initialization properties.
-   * <p>
-   * This is invoked by the <code>DataImporter</code> after creating an
-   * instance of this class.
-   */
-  public abstract void init(Context context, Properties initProps);
-
-  /**
-   * Get records for the given query. The return type depends on the
-   * implementation.
-   *
-   * @param query The query string. It can be a SQL query for JdbcDataSource, a URL
-   *              for HttpDataSource, a file location for FileDataSource, or a custom
-   *              format for your own custom DataSource.
-   * @return Depends on the implementation. For instance, JdbcDataSource returns
-   *         an Iterator&lt;Map&lt;String,Object&gt;&gt;
-   */
-  public abstract T getData(String query);
-
-  /**
-   * Cleans up resources of this DataSource after use.
-   */
-  public abstract void close();
-}
diff --git a/solr/contrib/dataimporthandler/src/java/org/apache/solr/handler/dataimport/DateFormatEvaluator.java b/solr/contrib/dataimporthandler/src/java/org/apache/solr/handler/dataimport/DateFormatEvaluator.java
deleted file mode 100644
index f4df820..0000000
--- a/solr/contrib/dataimporthandler/src/java/org/apache/solr/handler/dataimport/DateFormatEvaluator.java
+++ /dev/null
@@ -1,180 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.solr.handler.dataimport;
-
-import java.text.ParseException;
-import java.text.SimpleDateFormat;
-import java.util.Date;
-import java.util.HashMap;
-import java.util.HashSet;
-import java.util.IllformedLocaleException;
-import java.util.List;
-import java.util.Locale;
-import java.util.Map;
-import java.util.Set;
-import java.util.TimeZone;
-
-import org.apache.solr.common.util.SuppressForbidden;
-import org.apache.solr.handler.dataimport.config.EntityField;
-import org.apache.solr.util.DateMathParser;
-
-import static org.apache.solr.handler.dataimport.DataImportHandlerException.SEVERE;
-import static org.apache.solr.handler.dataimport.DataImportHandlerException.wrapAndThrow;
-
-/**
- * <p>Formats values using a given date format. </p>
- * <p>Pass two to four parameters:
- * <ul>
- *  <li>An {@link EntityField} or a date expression to be parsed with
- *      the {@link DateMathParser} class. If the value is a String,
- *      it is assumed to be a datemath expression; otherwise it is
- *      resolved using a {@link VariableResolver} instance</li>
- *  <li>A date format; see {@link SimpleDateFormat} for the syntax.</li>
- *  <li>The {@link Locale} to parse with (optional; defaults to
- *      {@code Locale.ENGLISH} for Java 9 compatibility)</li>
- *  <li>The {@link TimeZone} to format in (optional; defaults to the
- *      JVM default time zone)</li>
- * </ul>
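- * <p>
- * For example, a (hypothetical) DIH config attribute using this evaluator:
- * </p>
- * <pre>
- * url="http://host/feed?d=${dataimporter.functions.formatDate('NOW/DAY', 'yyyy-MM-dd')}"
- * </pre>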
- */
-public class DateFormatEvaluator extends Evaluator {
-  
-  public static final String DEFAULT_DATE_FORMAT = "yyyy-MM-dd HH:mm:ss";
-  protected Map<String, Locale> availableLocales = new HashMap<>();
-  protected Set<String> availableTimezones = new HashSet<>();
-
-  @SuppressForbidden(reason = "Usage of outdated locale parsing with Locale#toString() because of backwards compatibility")
-  public DateFormatEvaluator() {  
-    for (Locale locale : Locale.getAvailableLocales()) {
-      availableLocales.put(locale.toString(), locale);
-    }
-    for (String tz : TimeZone.getAvailableIDs()) {
-      availableTimezones.add(tz);
-    }
-  }
-  
-  private SimpleDateFormat getDateFormat(String pattern, TimeZone timezone, Locale locale) {
-    final SimpleDateFormat sdf = new SimpleDateFormat(pattern, locale);
-    sdf.setTimeZone(timezone);
-    return sdf;
-  }
-  
-  @Override
-  public String evaluate(String expression, Context context) {
-    List<Object> l = parseParams(expression, context.getVariableResolver());
-    if (l.size() < 2 || l.size() > 4) {
-      throw new DataImportHandlerException(SEVERE, "'formatDate()' must have two, three or four parameters ");
-    }
-    Object o = l.get(0);
-    Object format = l.get(1);
-    if (format instanceof VariableWrapper) {
-      VariableWrapper wrapper = (VariableWrapper) format;
-      o = wrapper.resolve();
-      format = o.toString();
-    }
-    Locale locale = Locale.ENGLISH; // we default to ENGLISH for dates for full Java 9 compatibility
-    if(l.size()>2) {
-      Object localeObj = l.get(2);
-      String localeStr = null;
-      if (localeObj  instanceof VariableWrapper) {
-        localeStr = ((VariableWrapper) localeObj).resolve().toString();
-      } else {
-        localeStr = localeObj.toString();
-      }
-      locale = availableLocales.get(localeStr);
-      if (locale == null) try {
-        locale = new Locale.Builder().setLanguageTag(localeStr).build();
-      } catch (IllformedLocaleException ex) {
-        throw new DataImportHandlerException(SEVERE, "Malformed / non-existent locale: " + localeStr, ex);
-      }
-    }
-    TimeZone tz = TimeZone.getDefault(); // DWS TODO: is this the right default for us?  Deserves explanation if so.
-    if(l.size()==4) {
-      Object tzObj = l.get(3);
-      String tzStr = null;
-      if (tzObj  instanceof VariableWrapper) {
-        tzStr = ((VariableWrapper) tzObj).resolve().toString();
-      } else {
-        tzStr = tzObj.toString();
-      }
-      if(availableTimezones.contains(tzStr)) {
-        tz = TimeZone.getTimeZone(tzStr);
-      } else {
-        throw new DataImportHandlerException(SEVERE, "Unsupported Timezone: " + tzStr);
-      }
-    }
-    String dateFmt = format.toString();
-    SimpleDateFormat fmt = getDateFormat(dateFmt, tz, locale);
-    Date date = null;
-    if (o instanceof VariableWrapper) {
-      date = evaluateWrapper((VariableWrapper) o, locale, tz);
-    } else {
-      date = evaluateString(o.toString(), locale, tz);
-    }
-    return fmt.format(date);
-  }
-
-  /**
-   * NOTE: declared as a method to allow for extensibility
-   *
-   * @lucene.experimental this API is experimental and subject to change
-   * @return the result of evaluating a string
-   */
-  protected Date evaluateWrapper(VariableWrapper variableWrapper, Locale locale, TimeZone tz) {
-    Date date = null;
-    Object variableval = resolveWrapper(variableWrapper,locale,tz);
-    if (variableval instanceof Date) {
-      date = (Date) variableval;
-    } else {
-      String s = variableval.toString();
-      try {
-        date = getDateFormat(DEFAULT_DATE_FORMAT, tz, locale).parse(s);
-      } catch (ParseException exp) {
-        wrapAndThrow(SEVERE, exp, "Invalid expression for date");
-      }
-    }
-    return date;
-  }
-
-  /**
-   * NOTE: declared as a method to allow for extensibility
-   * @lucene.experimental
-   * @return the result of evaluating a string
-   */
-  protected Date evaluateString(String datemathfmt, Locale locale, TimeZone tz) {
-    // note: DMP does not use the locale but perhaps a subclass might use it, for e.g. parsing a date in a custom
-    // string that doesn't necessarily have date math?
-    //TODO refactor DateMathParser.parseMath a bit to have a static method for this logic.
-    if (datemathfmt.startsWith("NOW")) {
-      datemathfmt = datemathfmt.substring("NOW".length());
-    }
-    try {
-      DateMathParser parser = new DateMathParser(tz);
-      parser.setNow(new Date());// thus do *not* use SolrRequestInfo
-      return parser.parseMath(datemathfmt);
-    } catch (ParseException e) {
-      throw wrapAndThrow(SEVERE, e, "Invalid expression for date");
-    }
-  }
-
-  /**
-   * NOTE: declared as a method to allow for extensibility
-   * @lucene.experimental
-   * @return the result of resolving the variable wrapper
-   */
-  protected Object resolveWrapper(VariableWrapper variableWrapper, Locale locale, TimeZone tz) {
-    return variableWrapper.resolve();
-  }
-
-}
diff --git a/solr/contrib/dataimporthandler/src/java/org/apache/solr/handler/dataimport/DateFormatTransformer.java b/solr/contrib/dataimporthandler/src/java/org/apache/solr/handler/dataimport/DateFormatTransformer.java
deleted file mode 100644
index 61edbe6..0000000
--- a/solr/contrib/dataimporthandler/src/java/org/apache/solr/handler/dataimport/DateFormatTransformer.java
+++ /dev/null
@@ -1,106 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.solr.handler.dataimport;
-
-import java.lang.invoke.MethodHandles;
-import java.text.ParseException;
-import java.text.SimpleDateFormat;
-import java.util.*;
-
-import org.slf4j.Logger;
-import org.slf4j.LoggerFactory;
-
-/**
- * <p>
- * {@link Transformer} instance which creates {@link Date} instances out of {@link String}s.
- * </p>
- * <p>
- * Refer to <a
- * href="http://wiki.apache.org/solr/DataImportHandler">http://wiki.apache.org/solr/DataImportHandler</a>
- * for more details.
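- * <p>
- * A typical (hypothetical) field mapping in a DIH config, using the
- * dateTimeFormat attribute handled by this transformer:
- * </p>
- * <pre>
- * &lt;field column="last_modified" dateTimeFormat="yyyy-MM-dd HH:mm:ss"/&gt;
- * </pre>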
- * <p>
- * <b>This API is experimental and subject to change</b>
- *
- * @since solr 1.3
- */
-public class DateFormatTransformer extends Transformer {
-  private Map<String, SimpleDateFormat> fmtCache = new HashMap<>();
-  private static final Logger log = LoggerFactory.getLogger(MethodHandles.lookup().lookupClass());
-
-  @Override
-  @SuppressWarnings("unchecked")
-  public Object transformRow(Map<String, Object> aRow, Context context) {
-
-    for (Map<String, String> map : context.getAllEntityFields()) {
-      Locale locale = Locale.ENGLISH; // we default to ENGLISH for dates for full Java 9 compatibility
-      String customLocale = map.get(LOCALE);
-      if (customLocale != null) {
-        try {
-          locale = new Locale.Builder().setLanguageTag(customLocale).build();
-        } catch (IllformedLocaleException e) {
-          throw new DataImportHandlerException(DataImportHandlerException.SEVERE, "Invalid Locale specified: " + customLocale, e);
-        }
-      }
-
-      String fmt = map.get(DATE_TIME_FMT);
-      if (fmt == null)
-        continue;
-      VariableResolver resolver = context.getVariableResolver();
-      fmt = resolver.replaceTokens(fmt);
-      String column = map.get(DataImporter.COLUMN);
-      String srcCol = map.get(RegexTransformer.SRC_COL_NAME);
-      if (srcCol == null)
-        srcCol = column;
-      try {
-        Object o = aRow.get(srcCol);
-        if (o instanceof List) {
-          @SuppressWarnings({"rawtypes"})
-          List inputs = (List) o;
-          List<Date> results = new ArrayList<>();
-          for (Object input : inputs) {
-            results.add(process(input, fmt, locale));
-          }
-          aRow.put(column, results);
-        } else {
-          if (o != null) {
-            aRow.put(column, process(o, fmt, locale));
-          }
-        }
-      } catch (ParseException e) {
-        log.warn("Could not parse a Date field ", e);
-      }
-    }
-    return aRow;
-  }
-
-  private Date process(Object value, String format, Locale locale) throws ParseException {
-    if (value == null) return null;
-    String strVal = value.toString().trim();
-    if (strVal.length() == 0)
-      return null;
-    SimpleDateFormat fmt = fmtCache.get(format);
-    if (fmt == null) {
-      fmt = new SimpleDateFormat(format, locale);
-      fmtCache.put(format, fmt);
-    }
-    return fmt.parse(strVal);
-  }
-
-  public static final String DATE_TIME_FMT = "dateTimeFormat";
-  
-  public static final String LOCALE = "locale";
-}
diff --git a/solr/contrib/dataimporthandler/src/java/org/apache/solr/handler/dataimport/DebugInfo.java b/solr/contrib/dataimporthandler/src/java/org/apache/solr/handler/dataimport/DebugInfo.java
deleted file mode 100644
index 623832f..0000000
--- a/solr/contrib/dataimporthandler/src/java/org/apache/solr/handler/dataimport/DebugInfo.java
+++ /dev/null
@@ -1,66 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.solr.handler.dataimport;
-
-import java.util.AbstractList;
-import java.util.ArrayList;
-import java.util.List;
-import java.util.Map;
-
-import org.apache.solr.common.SolrInputDocument;
-import org.apache.solr.common.util.NamedList;
-import org.apache.solr.common.util.StrUtils;
-
-public class DebugInfo {
-
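-  // A list view that deep-copies each added document and moves its nested
-  // child documents into a flat "_childDocuments_" field for the debug output.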
-  private static final class ChildRollupDocs extends AbstractList<SolrInputDocument> {
-
-    private List<SolrInputDocument> delegate = new ArrayList<>();
-
-    @Override
-    public SolrInputDocument get(int index) {
-      return delegate.get(index);
-    }
-
-    @Override
-    public int size() {
-      return delegate.size();
-    }
-
-    public boolean add(SolrInputDocument e) {
-      SolrInputDocument transformed = e.deepCopy();
-      if (transformed.hasChildDocuments()) {
-        ChildRollupDocs childList = new ChildRollupDocs();
-        childList.addAll(transformed.getChildDocuments());
-        transformed.addField("_childDocuments_", childList);
-        transformed.getChildDocuments().clear();
-      }
-      return delegate.add(transformed);
-    }
-  }
-
-  public List<SolrInputDocument> debugDocuments = new ChildRollupDocs();
-
-  public NamedList<String> debugVerboseOutput = null;
-  public boolean verbose;
-  
-  public DebugInfo(Map<String,Object> requestParams) {
-    verbose = StrUtils.parseBool((String) requestParams.get("verbose"), false);
-    debugVerboseOutput = new NamedList<>();
-  }
-}
-
diff --git a/solr/contrib/dataimporthandler/src/java/org/apache/solr/handler/dataimport/DebugLogger.java b/solr/contrib/dataimporthandler/src/java/org/apache/solr/handler/dataimport/DebugLogger.java
deleted file mode 100644
index 9de42fc..0000000
--- a/solr/contrib/dataimporthandler/src/java/org/apache/solr/handler/dataimport/DebugLogger.java
+++ /dev/null
@@ -1,295 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.solr.handler.dataimport;
-import org.apache.solr.common.util.NamedList;
-
-import java.io.PrintWriter;
-import java.io.StringWriter;
-import java.text.MessageFormat;
-import java.util.List;
-import java.util.Locale;
-import java.util.Map;
-import java.util.Properties;
-import java.util.Stack;
-
-/**
- * <p>
- * Implements most of the interactive development functionality
- * </p>
- * <p/>
- * <p>
- * Refer to <a
- * href="http://wiki.apache.org/solr/DataImportHandler">http://wiki.apache.org/solr/DataImportHandler</a>
- * for more details.
- * </p>
- * <p/>
- * <b>This API is experimental and subject to change</b>
- *
- * @since solr 1.3
- */
-class DebugLogger {
-  private Stack<DebugInfo> debugStack;
-
-  @SuppressWarnings({"rawtypes"})
-  NamedList output;
-//  private final SolrWriter writer1;
-
-  private static final String LINE = "---------------------------------------------";
-
-  private MessageFormat fmt = new MessageFormat(
-          "----------- row #{0}-------------", Locale.ROOT);
-
-  boolean enabled = true;
-
-  @SuppressWarnings({"rawtypes"})
-  public DebugLogger() {
-//    writer = solrWriter;
-    output = new NamedList();
-    debugStack = new Stack<DebugInfo>() {
-
-      @Override
-      public DebugInfo pop() {
-        if (size() == 1)
-          throw new DataImportHandlerException(
-                  DataImportHandlerException.SEVERE, "Stack is becoming empty");
-        return super.pop();
-      }
-    };
-    debugStack.push(new DebugInfo(null, DIHLogLevels.NONE, null));
-    output = debugStack.peek().lst;
-  }
-
-  private DebugInfo peekStack() {
-    return debugStack.isEmpty() ? null : debugStack.peek();
-  }
-
-  @SuppressWarnings({"unchecked"})
-  public void log(DIHLogLevels event, String name, Object row) {
-    if (event == DIHLogLevels.DISABLE_LOGGING) {
-      enabled = false;
-      return;
-    } else if (event == DIHLogLevels.ENABLE_LOGGING) {
-      enabled = true;
-      return;
-    }
-
-    if (!enabled && event != DIHLogLevels.START_ENTITY
-            && event != DIHLogLevels.END_ENTITY) {
-      return;
-    }
-
-    if (event == DIHLogLevels.START_DOC) {
-      debugStack.push(new DebugInfo(null, DIHLogLevels.START_DOC, peekStack()));
-    } else if (DIHLogLevels.START_ENTITY == event) {
-      debugStack
-              .push(new DebugInfo(name, DIHLogLevels.START_ENTITY, peekStack()));
-    } else if (DIHLogLevels.ENTITY_OUT == event
-            || DIHLogLevels.PRE_TRANSFORMER_ROW == event) {
-      if (debugStack.peek().type == DIHLogLevels.START_ENTITY
-              || debugStack.peek().type == DIHLogLevels.START_DOC) {
-        debugStack.peek().lst.add(null, fmt.format(new Object[]{++debugStack
-                .peek().rowCount}));
-        addToNamedList(debugStack.peek().lst, row);
-        debugStack.peek().lst.add(null, LINE);
-      }
-    } else if (event == DIHLogLevels.ROW_END) {
-      popAllTransformers();
-    } else if (DIHLogLevels.END_ENTITY == event) {
-      while (debugStack.pop().type != DIHLogLevels.START_ENTITY)
-        ;
-    } else if (DIHLogLevels.END_DOC == event) {
-      while (debugStack.pop().type != DIHLogLevels.START_DOC)
-        ;
-    } else if (event == DIHLogLevels.TRANSFORMER_EXCEPTION) {
-      debugStack.push(new DebugInfo(name, event, peekStack()));
-      debugStack.peek().lst.add("EXCEPTION",
-              getStacktraceString((Exception) row));
-    } else if (DIHLogLevels.TRANSFORMED_ROW == event) {
-      debugStack.push(new DebugInfo(name, event, peekStack()));
-      debugStack.peek().lst.add(null, LINE);
-      addToNamedList(debugStack.peek().lst, row);
-      debugStack.peek().lst.add(null, LINE);
-      if (row instanceof DataImportHandlerException) {
-        DataImportHandlerException dataImportHandlerException = (DataImportHandlerException) row;
-        dataImportHandlerException.debugged = true;
-      }
-    } else if (DIHLogLevels.ENTITY_META == event) {
-      popAllTransformers();
-      debugStack.peek().lst.add(name, row);
-    } else if (DIHLogLevels.ENTITY_EXCEPTION == event) {
-      if (row instanceof DataImportHandlerException) {
-        DataImportHandlerException dihe = (DataImportHandlerException) row;
-        if (dihe.debugged)
-          return;
-        dihe.debugged = true;
-      }
-
-      popAllTransformers();
-      debugStack.peek().lst.add("EXCEPTION",
-              getStacktraceString((Exception) row));
-    }
-  }
-
-  private void popAllTransformers() {
-    while (true) {
-      DIHLogLevels type = debugStack.peek().type;
-      if (type == DIHLogLevels.START_DOC || type == DIHLogLevels.START_ENTITY)
-        break;
-      debugStack.pop();
-    }
-  }
-
-  @SuppressWarnings({"unchecked"})
-  private void addToNamedList(@SuppressWarnings({"rawtypes"})NamedList nl, Object row) {
-    if (row instanceof List) {
-      @SuppressWarnings({"rawtypes"})
-      List list = (List) row;
-      @SuppressWarnings({"rawtypes"})
-      NamedList l = new NamedList();
-      nl.add(null, l);
-      for (Object o : list) {
-        Map<String, Object> map = (Map<String, Object>) o;
-        for (Map.Entry<String, Object> entry : map.entrySet())
-          nl.add(entry.getKey(), entry.getValue());
-      }
-    } else if (row instanceof Map) {
-      Map<String, Object> map = (Map<String, Object>) row;
-      for (Map.Entry<String, Object> entry : map.entrySet())
-        nl.add(entry.getKey(), entry.getValue());
-    }
-  }
-
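-  // Wraps a DataSource so that each query, its elapsed time, and any
-  // exception are recorded in the debug output.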
-  @SuppressWarnings({"rawtypes"})
-  DataSource wrapDs(final DataSource ds) {
-    return new DataSource() {
-      @Override
-      public void init(Context context, Properties initProps) {
-        ds.init(context, initProps);
-      }
-
-      @Override
-      public void close() {
-        ds.close();
-      }
-
-      @Override
-      public Object getData(String query) {
-        log(DIHLogLevels.ENTITY_META, "query", query);
-        long start = System.nanoTime();
-        try {
-          return ds.getData(query);
-        } catch (DataImportHandlerException de) {
-          log(DIHLogLevels.ENTITY_EXCEPTION,
-                  null, de);
-          throw de;
-        } catch (Exception e) {
-          log(DIHLogLevels.ENTITY_EXCEPTION,
-                  null, e);
-          DataImportHandlerException de = new DataImportHandlerException(
-                  DataImportHandlerException.SEVERE, "", e);
-          de.debugged = true;
-          throw de;
-        } finally {
-          log(DIHLogLevels.ENTITY_META, "time-taken", DocBuilder
-                  .getTimeElapsedSince(start));
-        }
-      }
-    };
-  }
-
-  Transformer wrapTransformer(final Transformer t) {
-    return new Transformer() {
-      @Override
-      public Object transformRow(Map<String, Object> row, Context context) {
-        log(DIHLogLevels.PRE_TRANSFORMER_ROW, null, row);
-        String tName = getTransformerName(t);
-        Object result = null;
-        try {
-          result = t.transformRow(row, context);
-          log(DIHLogLevels.TRANSFORMED_ROW, tName, result);
-        } catch (DataImportHandlerException de) {
-          log(DIHLogLevels.TRANSFORMER_EXCEPTION, tName, de);
-          de.debugged = true;
-          throw de;
-        } catch (Exception e) {
-          log(DIHLogLevels.TRANSFORMER_EXCEPTION, tName, e);
-          DataImportHandlerException de = new DataImportHandlerException(DataImportHandlerException.SEVERE, "", e);
-          de.debugged = true;
-          throw de;
-        }
-        return result;
-      }
-    };
-  }
-
-  public static String getStacktraceString(Exception e) {
-    StringWriter sw = new StringWriter();
-    e.printStackTrace(new PrintWriter(sw));
-    return sw.toString();
-  }
-
-  static String getTransformerName(Transformer t) {
-    @SuppressWarnings({"rawtypes"})
-    Class transClass = t.getClass();
-    if (t instanceof EntityProcessorWrapper.ReflectionTransformer) {
-      return ((EntityProcessorWrapper.ReflectionTransformer) t).trans;
-    }
-    if (t instanceof ScriptTransformer) {
-      ScriptTransformer scriptTransformer = (ScriptTransformer) t;
-      return "script:" + scriptTransformer.getFunctionName();
-    }
-    if (transClass.getPackage().equals(DebugLogger.class.getPackage())) {
-      return transClass.getSimpleName();
-    } else {
-      return transClass.getName();
-    }
-  }
-
-  private static class DebugInfo {
-    String name;
-
-    int tCount, rowCount;
-
-    @SuppressWarnings({"rawtypes"})
-    NamedList lst;
-
-    DIHLogLevels type;
-
-    DebugInfo parent;
-
-    @SuppressWarnings({"unchecked", "rawtypes"})
-    public DebugInfo(String name, DIHLogLevels type, DebugInfo parent) {
-      this.name = name;
-      this.type = type;
-      this.parent = parent;
-      lst = new NamedList();
-      if (parent != null) {
-        String displayName = null;
-        if (type == DIHLogLevels.START_ENTITY) {
-          displayName = "entity:" + name;
-        } else if (type == DIHLogLevels.TRANSFORMED_ROW
-                || type == DIHLogLevels.TRANSFORMER_EXCEPTION) {
-          displayName = "transformer:" + name;
-        } else if (type == DIHLogLevels.START_DOC) {
-          this.name = displayName = "document#" + SolrWriter.getDocCount();
-        }
-        parent.lst.add(displayName, lst);
-      }
-    }
-  }
-
-}
diff --git a/solr/contrib/dataimporthandler/src/java/org/apache/solr/handler/dataimport/DocBuilder.java b/solr/contrib/dataimporthandler/src/java/org/apache/solr/handler/dataimport/DocBuilder.java
deleted file mode 100644
index 0f8dd6e..0000000
--- a/solr/contrib/dataimporthandler/src/java/org/apache/solr/handler/dataimport/DocBuilder.java
+++ /dev/null
@@ -1,1004 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.solr.handler.dataimport;
-
-import org.apache.solr.common.SolrException;
-import org.apache.solr.common.SolrInputDocument;
-import org.apache.solr.core.SolrCore;
-import org.apache.solr.handler.dataimport.config.ConfigNameConstants;
-import org.apache.solr.handler.dataimport.config.DIHConfiguration;
-import org.apache.solr.handler.dataimport.config.Entity;
-import org.apache.solr.handler.dataimport.config.EntityField;
-
-import static org.apache.solr.handler.dataimport.SolrWriter.LAST_INDEX_KEY;
-import static org.apache.solr.handler.dataimport.DataImportHandlerException.SEVERE;
-import static org.apache.solr.handler.dataimport.DataImportHandlerException.wrapAndThrow;
-
-import org.apache.solr.schema.IndexSchema;
-import org.apache.solr.schema.SchemaField;
-import org.slf4j.Logger;
-import org.slf4j.LoggerFactory;
-
-import java.lang.invoke.MethodHandles;
-import java.text.SimpleDateFormat;
-import java.util.*;
-import java.util.concurrent.TimeUnit;
-import java.util.concurrent.atomic.AtomicBoolean;
-import java.util.concurrent.atomic.AtomicLong;
-
-/**
- * <p> {@link DocBuilder} is responsible for creating Solr documents out of the given configuration. It also maintains
- * statistics information. It depends on the {@link EntityProcessor} implementations to fetch data. </p>
- * <p>
- * <b>This API is experimental and subject to change</b>
- *
- * @since solr 1.3
- */
-public class DocBuilder {
-
-  private static final Logger log = LoggerFactory.getLogger(MethodHandles.lookup().lookupClass());
-  private static final AtomicBoolean WARNED_ABOUT_INDEX_TIME_BOOSTS = new AtomicBoolean();
-
-  private static final Date EPOCH = new Date(0);
-  public static final String DELETE_DOC_BY_ID = "$deleteDocById";
-  public static final String DELETE_DOC_BY_QUERY = "$deleteDocByQuery";
-  public static final String DOC_BOOST = "$docBoost";
-  public static final String SKIP_DOC = "$skipDoc";
-  public static final String SKIP_ROW = "$skipRow";
-
-  DataImporter dataImporter;
-
-  private DIHConfiguration config;
-
-  private EntityProcessorWrapper currentEntityProcessorWrapper;
-
-  @SuppressWarnings({"unchecked", "rawtypes"})
-  private Map statusMessages = Collections.synchronizedMap(new LinkedHashMap());
-
-  public Statistics importStatistics = new Statistics();
-
-  DIHWriter writer;
-
-  boolean verboseDebug = false;
-
-  Map<String, Object> session = new HashMap<>();
-
-  static final ThreadLocal<DocBuilder> INSTANCE = new ThreadLocal<>();
-  private Map<String, Object> persistedProperties;
-  
-  private DIHProperties propWriter;
-  private DebugLogger debugLogger;
-  private final RequestInfo reqParams;
-  
-  public DocBuilder(DataImporter dataImporter, DIHWriter solrWriter, DIHProperties propWriter, RequestInfo reqParams) {
-    INSTANCE.set(this);
-    this.dataImporter = dataImporter;
-    this.reqParams = reqParams;
-    this.propWriter = propWriter;
-    DataImporter.QUERY_COUNT.set(importStatistics.queryCount);
-    verboseDebug = reqParams.isDebug() && reqParams.getDebugInfo().verbose;
-    persistedProperties = propWriter.readIndexerProperties();
-     
-    writer = solrWriter;
-    ContextImpl ctx = new ContextImpl(null, null, null, null, reqParams.getRawParams(), null, this);
-    if (writer != null) {
-      writer.init(ctx);
-    }
-  }
-
-
-  DebugLogger getDebugLogger(){
-    if (debugLogger == null) {
-      debugLogger = new DebugLogger();
-    }
-    return debugLogger;
-  }
-
-  private VariableResolver getVariableResolver() {
-    try {
-      VariableResolver resolver = null;
-      String epoch = propWriter.convertDateToString(EPOCH);
-      if(dataImporter != null && dataImporter.getCore() != null
-          && dataImporter.getCore().getCoreDescriptor().getSubstitutableProperties() != null){
-        resolver =  new VariableResolver(dataImporter.getCore().getCoreDescriptor().getSubstitutableProperties());
-      } else {
-        resolver = new VariableResolver();
-      }
-      resolver.setEvaluators(dataImporter.getEvaluators());
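-      // Build the importer namespace (exposed under both the long and short
-      // importer prefixes): last/current index times, raw request params,
-      // handler name, and per-entity last-index timestamps.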
-      Map<String, Object> indexerNamespace = new HashMap<>();
-      if (persistedProperties.get(LAST_INDEX_TIME) != null) {
-        indexerNamespace.put(LAST_INDEX_TIME, persistedProperties.get(LAST_INDEX_TIME));
-      } else  {
-        // set epoch
-        indexerNamespace.put(LAST_INDEX_TIME, epoch);
-      }
-      indexerNamespace.put(INDEX_START_TIME, dataImporter.getIndexStartTime());
-      indexerNamespace.put("request", new HashMap<>(reqParams.getRawParams()));
-      indexerNamespace.put("handlerName", dataImporter.getHandlerName());
-      for (Entity entity : dataImporter.getConfig().getEntities()) {
-        Map<String, Object> entityNamespace = new HashMap<>();
-        String key = SolrWriter.LAST_INDEX_KEY;
-        Object lastIndex = persistedProperties.get(entity.getName() + "." + key);
-        if (lastIndex != null) {
-          entityNamespace.put(SolrWriter.LAST_INDEX_KEY, lastIndex);
-        } else  {
-          entityNamespace.put(SolrWriter.LAST_INDEX_KEY, epoch);
-        }
-        indexerNamespace.put(entity.getName(), entityNamespace);
-      }
-      resolver.addNamespace(ConfigNameConstants.IMPORTER_NS_SHORT, indexerNamespace);
-      resolver.addNamespace(ConfigNameConstants.IMPORTER_NS, indexerNamespace);
-      return resolver;
-    } catch (Exception e) {
-      wrapAndThrow(SEVERE, e);
-      // unreachable statement
-      return null;
-    }
-  }
-
-  private void invokeEventListener(String className) {
-    invokeEventListener(className, null);
-  }
-
-
-  private void invokeEventListener(String className, Exception lastException) {
-    try {
-      @SuppressWarnings({"unchecked"})
-      EventListener listener = (EventListener) loadClass(className, dataImporter.getCore()).getConstructor().newInstance();
-      notifyListener(listener, lastException);
-    } catch (Exception e) {
-      wrapAndThrow(SEVERE, e, "Unable to load class : " + className);
-    }
-  }
-
-  private void notifyListener(EventListener listener, Exception lastException) {
-    String currentProcess;
-    if (dataImporter.getStatus() == DataImporter.Status.RUNNING_DELTA_DUMP) {
-      currentProcess = Context.DELTA_DUMP;
-    } else {
-      currentProcess = Context.FULL_DUMP;
-    }
-    ContextImpl ctx = new ContextImpl(null, getVariableResolver(), null, currentProcess, session, null, this);
-    ctx.setLastException(lastException);
-    listener.onEvent(ctx);
-  }
-
-  @SuppressWarnings("unchecked")
-  public void execute() {
-    List<EntityProcessorWrapper> epwList = null;
-    try {
-      dataImporter.store(DataImporter.STATUS_MSGS, statusMessages);
-      config = dataImporter.getConfig();
-      final AtomicLong startTime = new AtomicLong(System.nanoTime());
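-      // Anonymous value so the status page always shows a live elapsed time:
-      // toString() recomputes it on every read.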
-      statusMessages.put(TIME_ELAPSED, new Object() {
-        @Override
-        public String toString() {
-          return getTimeElapsedSince(startTime.get());
-        }
-      });
-
-      statusMessages.put(DataImporter.MSG.TOTAL_QUERIES_EXECUTED,
-              importStatistics.queryCount);
-      statusMessages.put(DataImporter.MSG.TOTAL_ROWS_EXECUTED,
-              importStatistics.rowsCount);
-      statusMessages.put(DataImporter.MSG.TOTAL_DOC_PROCESSED,
-              importStatistics.docCount);
-      statusMessages.put(DataImporter.MSG.TOTAL_DOCS_SKIPPED,
-              importStatistics.skipDocCount);
-
-      List<String> entities = reqParams.getEntitiesToRun();
-
-      // Trigger onImportStart
-      if (config.getOnImportStart() != null) {
-        invokeEventListener(config.getOnImportStart());
-      }
-      AtomicBoolean fullCleanDone = new AtomicBoolean(false);
-      // We must not do a delete of *:* multiple times if there are multiple root entities to be run
-      Map<String,Object> lastIndexTimeProps = new HashMap<>();
-      lastIndexTimeProps.put(LAST_INDEX_KEY, dataImporter.getIndexStartTime());
-
-      epwList = new ArrayList<>(config.getEntities().size());
-      for (Entity e : config.getEntities()) {
-        epwList.add(getEntityProcessorWrapper(e));
-      }
-      for (EntityProcessorWrapper epw : epwList) {
-        if (entities != null && !entities.contains(epw.getEntity().getName()))
-          continue;
-        lastIndexTimeProps.put(epw.getEntity().getName() + "." + LAST_INDEX_KEY, propWriter.getCurrentTimestamp());
-        currentEntityProcessorWrapper = epw;
-        String delQuery = epw.getEntity().getAllAttributes().get("preImportDeleteQuery");
-        if (dataImporter.getStatus() == DataImporter.Status.RUNNING_DELTA_DUMP) {
-          cleanByQuery(delQuery, fullCleanDone);
-          doDelta();
-          delQuery = epw.getEntity().getAllAttributes().get("postImportDeleteQuery");
-          if (delQuery != null) {
-            fullCleanDone.set(false);
-            cleanByQuery(delQuery, fullCleanDone);
-          }
-        } else {
-          cleanByQuery(delQuery, fullCleanDone);
-          doFullDump();
-          delQuery = epw.getEntity().getAllAttributes().get("postImportDeleteQuery");
-          if (delQuery != null) {
-            fullCleanDone.set(false);
-            cleanByQuery(delQuery, fullCleanDone);
-          }
-        }
-      }
-
-      if (stop.get()) {
-        // Dont commit if aborted using command=abort
-        statusMessages.put("Aborted", new SimpleDateFormat("yyyy-MM-dd HH:mm:ss", Locale.ROOT).format(new Date()));
-        handleError("Aborted", null);
-      } else {
-        // Do not commit unnecessarily if this is a delta-import and no documents were created or deleted
-        if (!reqParams.isClean()) {
-          if (importStatistics.docCount.get() > 0 || importStatistics.deletedDocCount.get() > 0) {
-            finish(lastIndexTimeProps);
-          }
-        } else {
-          // Finished operation normally, commit now
-          finish(lastIndexTimeProps);
-        }
-
-        if (config.getOnImportEnd() != null) {
-          invokeEventListener(config.getOnImportEnd());
-        }
-      }
-
-      statusMessages.remove(TIME_ELAPSED);
-      statusMessages.put(DataImporter.MSG.TOTAL_DOC_PROCESSED, ""+ importStatistics.docCount.get());
-      if(importStatistics.failedDocCount.get() > 0)
-        statusMessages.put(DataImporter.MSG.TOTAL_FAILED_DOCS, ""+ importStatistics.failedDocCount.get());
-
-      statusMessages.put("Time taken", getTimeElapsedSince(startTime.get()));
-      if (log.isInfoEnabled()) {
-        log.info("Time taken = {}", getTimeElapsedSince(startTime.get()));
-      }
-    } catch (Exception e) {
-      throw new RuntimeException(e);
-    } finally {
-      if (writer != null) {
-        writer.close();
-      }
-      if (epwList != null) {
-        closeEntityProcessorWrappers(epwList);
-      }
-      if(reqParams.isDebug()) {
-        reqParams.getDebugInfo().debugVerboseOutput = getDebugLogger().output;
-      }
-    }
-  }
-  private void closeEntityProcessorWrappers(List<EntityProcessorWrapper> epwList) {
-    for(EntityProcessorWrapper epw : epwList) {
-      epw.close();
-      if(epw.getDatasource()!=null) {
-        epw.getDatasource().close();
-      }
-      closeEntityProcessorWrappers(epw.getChildren());
-    }
-  }
-
-  @SuppressWarnings("unchecked")
-  private void finish(Map<String,Object> lastIndexTimeProps) {
-    log.info("Import completed successfully");
-    statusMessages.put("", "Indexing completed. Added/Updated: "
-            + importStatistics.docCount + " documents. Deleted "
-            + importStatistics.deletedDocCount + " documents.");
-    if(reqParams.isCommit()) {
-      writer.commit(reqParams.isOptimize());
-      addStatusMessage("Committed");
-      if (reqParams.isOptimize())
-        addStatusMessage("Optimized");
-    }
-    try {
-      propWriter.persist(lastIndexTimeProps);
-    } catch (Exception e) {
-      log.error("Could not write property file", e);
-      statusMessages.put("error", "Could not write property file. Delta imports will not work. " +
-          "Make sure your conf directory is writable");
-    }
-  }
-
-  @SuppressWarnings({"unchecked"})
-  void handleError(String message, Exception e) {
-    if (!dataImporter.getCore().getCoreContainer().isZooKeeperAware()) {
-      writer.rollback();
-    }
-
-    statusMessages.put(message, "Indexing error");
-    addStatusMessage(message);
-    if ((config != null) && (config.getOnError() != null)) {
-      invokeEventListener(config.getOnError(), e);
-    }
-  }
-
-  private void doFullDump() {
-    addStatusMessage("Full Dump Started");    
-    buildDocument(getVariableResolver(), null, null, currentEntityProcessorWrapper, true, null);
-  }
-
-  @SuppressWarnings("unchecked")
-  private void doDelta() {
-    addStatusMessage("Delta Dump started");
-    VariableResolver resolver = getVariableResolver();
-
-    if (config.getDeleteQuery() != null) {
-      writer.deleteByQuery(config.getDeleteQuery());
-    }
-
-    addStatusMessage("Identifying Delta");
-    log.info("Starting delta collection.");
-    Set<Map<String, Object>> deletedKeys = new HashSet<>();
-    Set<Map<String, Object>> allPks = collectDelta(currentEntityProcessorWrapper, resolver, deletedKeys);
-    if (stop.get())
-      return;
-    addStatusMessage("Deltas Obtained");
-    addStatusMessage("Building documents");
-    if (!deletedKeys.isEmpty()) {
-      allPks.removeAll(deletedKeys);
-      deleteAll(deletedKeys);
-      // Make sure that documents are not re-created
-    }
-    deletedKeys = null;
-    writer.setDeltaKeys(allPks);
-
-    statusMessages.put("Total Changed Documents", allPks.size());
-    VariableResolver vri = getVariableResolver();
-    Iterator<Map<String, Object>> pkIter = allPks.iterator();
-    while (pkIter.hasNext()) {
-      Map<String, Object> map = pkIter.next();
-      vri.addNamespace(ConfigNameConstants.IMPORTER_NS_SHORT + ".delta", map);
-      buildDocument(vri, null, map, currentEntityProcessorWrapper, true, null);
-      pkIter.remove();
-      // check for abort
-      if (stop.get())
-        break;
-    }
-
-    if (!stop.get()) {
-      log.info("Delta Import completed successfully");
-    }
-  }
-
-  private void deleteAll(Set<Map<String, Object>> deletedKeys) {
-    log.info("Deleting stale documents ");
-    Iterator<Map<String, Object>> iter = deletedKeys.iterator();
-    while (iter.hasNext()) {
-      Map<String, Object> map = iter.next();
-      Entity entity = currentEntityProcessorWrapper.getEntity();
-      String keyName = entity.isDocRoot() ? entity.getPk() : entity.getSchemaPk();
-      Object key = map.get(keyName);
-      if(key == null) {
-        keyName = findMatchingPkColumn(keyName, map);
-        key = map.get(keyName);
-      }
-      if(key == null) {
-        log.warn("no key was available for deleted pk query. keyName = {}", keyName);
-        continue;
-      }
-      writer.deleteDoc(key);
-      importStatistics.deletedDocCount.incrementAndGet();
-      iter.remove();
-    }
-  }
-  
-  @SuppressWarnings("unchecked")
-  public void addStatusMessage(String msg) {
-    statusMessages.put(msg, new SimpleDateFormat("yyyy-MM-dd HH:mm:ss", Locale.ROOT).format(new Date()));
-  }
-
-  private void resetEntity(EntityProcessorWrapper epw) {
-    epw.setInitialized(false);
-    for (EntityProcessorWrapper child : epw.getChildren()) {
-      resetEntity(child);
-    }
-    
-  }
-  
-  private void buildDocument(VariableResolver vr, DocWrapper doc,
-      Map<String,Object> pk, EntityProcessorWrapper epw, boolean isRoot,
-      ContextImpl parentCtx) {
-    List<EntityProcessorWrapper> entitiesToDestroy = new ArrayList<>();
-    try {
-      buildDocument(vr, doc, pk, epw, isRoot, parentCtx, entitiesToDestroy);
-    } catch (Exception e) {
-      throw new RuntimeException(e);
-    } finally {
-      for (EntityProcessorWrapper entityWrapper : entitiesToDestroy) {
-        entityWrapper.destroy();
-      }
-      resetEntity(epw);
-    }
-  }
-
-  @SuppressWarnings("unchecked")
-  private void buildDocument(VariableResolver vr, DocWrapper doc,
-                             Map<String, Object> pk, EntityProcessorWrapper epw, boolean isRoot,
-                             ContextImpl parentCtx, List<EntityProcessorWrapper> entitiesToDestroy) {
-
-    ContextImpl ctx = new ContextImpl(epw, vr, null,
-            pk == null ? Context.FULL_DUMP : Context.DELTA_DUMP,
-            session, parentCtx, this);
-    epw.init(ctx);
-    if (!epw.isInitialized()) {
-      entitiesToDestroy.add(epw);
-      epw.setInitialized(true);
-    }
-    
-    if (reqParams.getStart() > 0) {
-      getDebugLogger().log(DIHLogLevels.DISABLE_LOGGING, null, null);
-    }
-
-    if (verboseDebug) {
-      getDebugLogger().log(DIHLogLevels.START_ENTITY, epw.getEntity().getName(), null);
-    }
-
-    int seenDocCount = 0;
-
-    try {
-      while (true) {
-        if (stop.get())
-          return;
-        if(importStatistics.docCount.get() > (reqParams.getStart() + reqParams.getRows())) break;
-        try {
-          seenDocCount++;
-
-          if (seenDocCount > reqParams.getStart()) {
-            getDebugLogger().log(DIHLogLevels.ENABLE_LOGGING, null, null);
-          }
-
-          if (verboseDebug && epw.getEntity().isDocRoot()) {
-            getDebugLogger().log(DIHLogLevels.START_DOC, epw.getEntity().getName(), null);
-          }
-          if (doc == null && epw.getEntity().isDocRoot()) {
-            doc = new DocWrapper();
-            ctx.setDoc(doc);
-            Entity e = epw.getEntity();
-            while (e.getParentEntity() != null) {
-              addFields(e.getParentEntity(), doc, (Map<String, Object>) vr
-                      .resolve(e.getParentEntity().getName()), vr);
-              e = e.getParentEntity();
-            }
-          }
-
-          Map<String, Object> arow = epw.nextRow();
-          if (arow == null) {
-            break;
-          }
-
-          // Support for start parameter in debug mode
-          if (epw.getEntity().isDocRoot()) {
-            if (seenDocCount <= reqParams.getStart())
-              continue;
-            if (seenDocCount > reqParams.getStart() + reqParams.getRows()) {
-              log.info("Indexing stopped at docCount = {}", importStatistics.docCount);
-              break;
-            }
-          }
-
-          if (verboseDebug) {
-            getDebugLogger().log(DIHLogLevels.ENTITY_OUT, epw.getEntity().getName(), arow);
-          }
-          importStatistics.rowsCount.incrementAndGet();
-          
-          DocWrapper childDoc = null;
-          if (doc != null) {
-            if (epw.getEntity().isChild()) {
-              childDoc = new DocWrapper();
-              handleSpecialCommands(arow, childDoc);
-              addFields(epw.getEntity(), childDoc, arow, vr);
-              doc.addChildDocument(childDoc);
-            } else {
-              handleSpecialCommands(arow, doc);
-              vr.addNamespace(epw.getEntity().getName(), arow);
-              addFields(epw.getEntity(), doc, arow, vr);
-              vr.removeNamespace(epw.getEntity().getName());
-            }
-          }
-          if (epw.getEntity().getChildren() != null) {
-            vr.addNamespace(epw.getEntity().getName(), arow);
-            for (EntityProcessorWrapper child : epw.getChildren()) {
-              if (childDoc != null) {
-                buildDocument(vr, childDoc,
-                    child.getEntity().isDocRoot() ? pk : null, child, false, ctx, entitiesToDestroy);
-              } else {
-                buildDocument(vr, doc,
-                    child.getEntity().isDocRoot() ? pk : null, child, false, ctx, entitiesToDestroy);
-              }
-            }
-            vr.removeNamespace(epw.getEntity().getName());
-          }
-          if (epw.getEntity().isDocRoot()) {
-            if (stop.get())
-              return;
-            if (!doc.isEmpty()) {
-              boolean result = writer.upload(doc);
-              if(reqParams.isDebug()) {
-                reqParams.getDebugInfo().debugDocuments.add(doc);
-              }
-              doc = null;
-              if (result){
-                importStatistics.docCount.incrementAndGet();
-              } else {
-                importStatistics.failedDocCount.incrementAndGet();
-              }
-            }
-          }
-        } catch (DataImportHandlerException e) {
-          if (verboseDebug) {
-            getDebugLogger().log(DIHLogLevels.ENTITY_EXCEPTION, epw.getEntity().getName(), e);
-          }
-          if(e.getErrCode() == DataImportHandlerException.SKIP_ROW){
-            continue;
-          }
-          if (isRoot) {
-            if (e.getErrCode() == DataImportHandlerException.SKIP) {
-              importStatistics.skipDocCount.getAndIncrement();
-              doc = null;
-            } else {
-              SolrException.log(log, "Exception while processing: "
-                      + epw.getEntity().getName() + " document : " + doc, e);
-            }
-            if (e.getErrCode() == DataImportHandlerException.SEVERE)
-              throw e;
-          } else
-            throw e;
-        } catch (Exception t) {
-          if (verboseDebug) {
-            getDebugLogger().log(DIHLogLevels.ENTITY_EXCEPTION, epw.getEntity().getName(), t);
-          }
-          throw new DataImportHandlerException(DataImportHandlerException.SEVERE, t);
-        } finally {
-          if (verboseDebug) {
-            getDebugLogger().log(DIHLogLevels.ROW_END, epw.getEntity().getName(), null);
-            if (epw.getEntity().isDocRoot())
-              getDebugLogger().log(DIHLogLevels.END_DOC, null, null);
-          }
-        }
-      }
-    } finally {
-      if (verboseDebug) {
-        getDebugLogger().log(DIHLogLevels.END_ENTITY, null, null);
-      }
-    }
-  }
-
-  static class DocWrapper extends SolrInputDocument {
-    //final SolrInputDocument solrDocument = new SolrInputDocument();
-    Map<String ,Object> session;
-
-    public void setSessionAttribute(String key, Object val){
-      if(session == null) session = new HashMap<>();
-      session.put(key, val);
-    }
-
-    public Object getSessionAttribute(String key) {
-      return session == null ? null : session.get(key);
-    }
-  }
-
-  private void handleSpecialCommands(Map<String, Object> arow, DocWrapper doc) {
-    Object value = arow.get(DELETE_DOC_BY_ID);
-    if (value != null) {
-      if (value instanceof Collection) {
-        @SuppressWarnings({"rawtypes"})
-        Collection collection = (Collection) value;
-        for (Object o : collection) {
-          writer.deleteDoc(o.toString());
-          importStatistics.deletedDocCount.incrementAndGet();
-        }
-      } else {
-        writer.deleteDoc(value);
-        importStatistics.deletedDocCount.incrementAndGet();
-      }
-    }    
-    value = arow.get(DELETE_DOC_BY_QUERY);
-    if (value != null) {
-      if (value instanceof Collection) {
-        @SuppressWarnings({"rawtypes"})
-        Collection collection = (Collection) value;
-        for (Object o : collection) {
-          writer.deleteByQuery(o.toString());
-          importStatistics.deletedDocCount.incrementAndGet();
-        }
-      } else {
-        writer.deleteByQuery(value.toString());
-        importStatistics.deletedDocCount.incrementAndGet();
-      }
-    }
-    value = arow.get(DOC_BOOST);
-    if (value != null) {
-      String message = "Ignoring document boost: " + value + " as index-time boosts are not supported anymore";
-      if (WARNED_ABOUT_INDEX_TIME_BOOSTS.compareAndSet(false, true)) {
-        log.warn(message);
-      } else {
-        log.debug(message);
-      }
-    }
-
-    value = arow.get(SKIP_DOC);
-    if (value != null) {
-      if (Boolean.parseBoolean(value.toString())) {
-        throw new DataImportHandlerException(DataImportHandlerException.SKIP,
-                "Document skipped :" + arow);
-      }
-    }
-
-    value = arow.get(SKIP_ROW);
-    if (value != null) {
-      if (Boolean.parseBoolean(value.toString())) {
-        throw new DataImportHandlerException(DataImportHandlerException.SKIP_ROW);
-      }
-    }
-  }
-
-  @SuppressWarnings("unchecked")
-  private void addFields(Entity entity, DocWrapper doc,
-                         Map<String, Object> arow, VariableResolver vr) {
-    for (Map.Entry<String, Object> entry : arow.entrySet()) {
-      String key = entry.getKey();
-      Object value = entry.getValue();
-      if (value == null)  continue;
-      if (key.startsWith("$")) continue;
-      Set<EntityField> field = entity.getColNameVsField().get(key);
-      IndexSchema schema = null == reqParams.getRequest() ? null : reqParams.getRequest().getSchema();
-      if (field == null && schema != null) {
-        // This can be a dynamic field or a field which does not have an entry in data-config ( an implicit field)
-        SchemaField sf = schema.getFieldOrNull(key);
-        if (sf == null) {
-          sf = config.getSchemaField(key);
-        }
-        if (sf != null) {
-          addFieldToDoc(entry.getValue(), sf.getName(), sf.multiValued(), doc);
-        }
-        // else do nothing; if we add it, indexing may fail
-      } else {
-        if (field != null) {
-          for (EntityField f : field) {
-            String name = f.getName();
-            boolean multiValued = f.isMultiValued();
-            boolean toWrite = f.isToWrite();
-            if(f.isDynamicName()){
-              name =  vr.replaceTokens(name);
-              SchemaField schemaField = config.getSchemaField(name);
-              if(schemaField == null) {
-                toWrite = false;
-              } else {
-                multiValued = schemaField.multiValued();
-                toWrite = true;
-              }
-            }
-            if (toWrite) {
-              addFieldToDoc(entry.getValue(), name, multiValued, doc);
-            }
-          }
-        }
-      }
-    }
-  }
-
-  private void addFieldToDoc(Object value, String name, boolean multiValued, DocWrapper doc) {
-    if (value instanceof Collection) {
-      @SuppressWarnings({"rawtypes"})
-      Collection collection = (Collection) value;
-      if (multiValued) {
-        for (Object o : collection) {
-          if (o != null)
-            doc.addField(name, o);
-        }
-      } else {
-        if (doc.getField(name) == null)
-          for (Object o : collection) {
-            if (o != null)  {
-              doc.addField(name, o);
-              break;
-            }
-          }
-      }
-    } else if (multiValued) {
-      if (value != null)  {
-        doc.addField(name, value);
-      }
-    } else {
-      if (doc.getField(name) == null && value != null)
-        doc.addField(name, value);
-    }
-  }
-
-  @SuppressWarnings({"unchecked"})
-  public EntityProcessorWrapper getEntityProcessorWrapper(Entity entity) {
-    EntityProcessor entityProcessor = null;
-    if (entity.getProcessorName() == null) {
-      entityProcessor = new SqlEntityProcessor();
-    } else {
-      try {
-        entityProcessor = (EntityProcessor) loadClass(entity.getProcessorName(), dataImporter.getCore())
-            .getConstructor().newInstance();
-      } catch (Exception e) {
-        wrapAndThrow (SEVERE,e,
-                "Unable to load EntityProcessor implementation for entity:" + entity.getName());
-      }
-    }
-    EntityProcessorWrapper epw = new EntityProcessorWrapper(entityProcessor, entity, this);
-    for(Entity e1 : entity.getChildren()) {
-      epw.getChildren().add(getEntityProcessorWrapper(e1));
-    }
-      
-    return epw;
-  }
-
-  private String findMatchingPkColumn(String pk, Map<String, Object> row) {
-    if (row.containsKey(pk)) {
-      throw new IllegalArgumentException(String.format(Locale.ROOT,
-          "deltaQuery returned a row with null for primary key %s", pk));
-    }
-    String resolvedPk = null;
-    for (String columnName : row.keySet()) {
-      if (columnName.endsWith("." + pk) || pk.endsWith("." + columnName)) {
-        if (resolvedPk != null)
-          throw new IllegalArgumentException(
-            String.format(Locale.ROOT, 
-              "deltaQuery has more than one column (%s and %s) that might resolve to declared primary key pk='%s'",
-              resolvedPk, columnName, pk));
-        resolvedPk = columnName;
-      }
-    }
-    if (resolvedPk == null) {
-      throw new IllegalArgumentException(
-          String
-              .format(
-                  Locale.ROOT,
-                  "deltaQuery has no column to resolve to declared primary key pk='%s'",
-                  pk));
-    }
-    if (log.isInfoEnabled()) {
-      log.info(String.format(Locale.ROOT,
-          "Resolving deltaQuery column '%s' to match entity's declared pk '%s'",
-          resolvedPk, pk));
-    }
-    return resolvedPk;
-  }
-
-  /**
-   * <p> Collects unique keys of all Solr documents for which one or more source tables have changed since the last
-   * index time. </p> <p> Note: in our definition, the unique key of a Solr document is the primary key of the top-level
-   * entity (unless skipped using docRoot=false) as declared in data-config.xml </p>
-   *
-   * @return the set of keys for which Solr documents should be updated.
-   */
-  @SuppressWarnings({"unchecked", "rawtypes"})
-  public Set<Map<String, Object>> collectDelta(EntityProcessorWrapper epw, VariableResolver resolver,
-                                               Set<Map<String, Object>> deletedRows) {
-    //someone called abort
-    if (stop.get())
-      return new HashSet();
-
-    ContextImpl context1 = new ContextImpl(epw, resolver, null, Context.FIND_DELTA, session, null, this);
-    epw.init(context1);
-
-    Set<Map<String, Object>> myModifiedPks = new HashSet<>();
-
-    for (EntityProcessorWrapper childEpw : epw.getChildren()) {
-      //this ensures that we start from the leaf nodes
-      myModifiedPks.addAll(collectDelta(childEpw, resolver, deletedRows));
-      //someone called abort
-      if (stop.get())
-        return new HashSet();
-    }
-    
-    // identifying the modified rows for this entity
-    Map<String, Map<String, Object>> deltaSet = new HashMap<>();
-    if (log.isInfoEnabled()) {
-      log.info("Running ModifiedRowKey() for Entity: {}", epw.getEntity().getName());
-    }
-    //get the modified rows in this entity
-    String pk = epw.getEntity().getPk();
-    while (true) {
-      Map<String, Object> row = epw.nextModifiedRowKey();
-
-      if (row == null)
-        break;
-
-      Object pkValue = row.get(pk);
-      if (pkValue == null) {
-        pk = findMatchingPkColumn(pk, row);
-        pkValue = row.get(pk);
-      }
-
-      deltaSet.put(pkValue.toString(), row);
-      importStatistics.rowsCount.incrementAndGet();
-      // check for abort
-      if (stop.get())
-        return new HashSet();
-    }
-    //get the deleted rows for this entity
-    Set<Map<String, Object>> deletedSet = new HashSet<>();
-    while (true) {
-      Map<String, Object> row = epw.nextDeletedRowKey();
-      if (row == null)
-        break;
-
-      deletedSet.add(row);
-      
-      Object pkValue = row.get(pk);
-      if (pkValue == null) {
-        pk = findMatchingPkColumn(pk, row);
-        pkValue = row.get(pk);
-      }
-
-      // Remove deleted rows from the delta rows
-      String deletedRowPk = pkValue.toString();
-      if (deltaSet.containsKey(deletedRowPk)) {
-        deltaSet.remove(deletedRowPk);
-      }
-
-      importStatistics.rowsCount.incrementAndGet();
-      // check for abort
-      if (stop.get())
-        return new HashSet();
-    }
-
-    if (log.isInfoEnabled()) {
-      log.info("Completed ModifiedRowKey for Entity: {} rows obtained: {}", epw.getEntity().getName(), deltaSet.size());
-      log.info("Completed DeletedRowKey for Entity: {} rows obtained : {}", epw.getEntity().getName(), deletedSet.size()); // logOk
-    }
-
-    myModifiedPks.addAll(deltaSet.values());
-    Set<Map<String, Object>> parentKeyList = new HashSet<>();
-    // all that we have captured is useless (in a sub-entity) if no rows in the parent are modified because of these
-    // propagate the changes up the chain
-    if (epw.getEntity().getParentEntity() != null) {
-      // identifying deleted rows with deltas
-
-      for (Map<String, Object> row : myModifiedPks) {
-        resolver.addNamespace(epw.getEntity().getName(), row);
-        getModifiedParentRows(resolver, epw.getEntity().getName(), epw, parentKeyList);
-        // check for abort
-        if (stop.get())
-          return new HashSet();
-      }
-      // running the same for deleted rows
-      for (Map<String, Object> row : deletedSet) {
-        resolver.addNamespace(epw.getEntity().getName(), row);
-        getModifiedParentRows(resolver, epw.getEntity().getName(), epw, parentKeyList);
-        // check for abort
-        if (stop.get())
-          return new HashSet();
-      }
-    }
-    if (log.isInfoEnabled()) {
-      log.info("Completed parentDeltaQuery for Entity: {}", epw.getEntity().getName());
-    }
-    if (epw.getEntity().isDocRoot())
-      deletedRows.addAll(deletedSet);
-
-    // Do not use entity.isDocRoot here because one of the descendant entities may set rootEntity="true"
-    return epw.getEntity().getParentEntity() == null ?
-        myModifiedPks : new HashSet<>(parentKeyList);
-  }
-
-  private void getModifiedParentRows(VariableResolver resolver,
-                                     String entity, EntityProcessor entityProcessor,
-                                     Set<Map<String, Object>> parentKeyList) {
-    try {
-      while (true) {
-        Map<String, Object> parentRow = entityProcessor
-                .nextModifiedParentRowKey();
-        if (parentRow == null)
-          break;
-
-        parentKeyList.add(parentRow);
-        importStatistics.rowsCount.incrementAndGet();
-        // check for abort
-        if (stop.get())
-          return;
-      }
-
-    } finally {
-      resolver.removeNamespace(entity);
-    }
-  }
-
-  public void abort() {
-    stop.set(true);
-  }
-
-  private AtomicBoolean stop = new AtomicBoolean(false);
-
-  public static final String TIME_ELAPSED = "Time Elapsed";
-
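-  /** Formats the elapsed time since the given System.nanoTime() stamp as hours:minutes:seconds.millis. */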
-  static String getTimeElapsedSince(long l) {
-    l = TimeUnit.MILLISECONDS.convert(System.nanoTime() - l, TimeUnit.NANOSECONDS);
-    return (l / (60000 * 60)) + ":" + (l / 60000) % 60 + ":" + (l / 1000)
-            % 60 + "." + l % 1000;
-  }
-
-  public RequestInfo getReqParams() {
-    return reqParams;
-  }
-
-  @SuppressWarnings({"unchecked", "rawtypes"})
-  static Class loadClass(String name, SolrCore core) throws ClassNotFoundException {
-    try {
-      return core != null ?
-              core.getResourceLoader().findClass(name, Object.class) :
-              Class.forName(name);
-    } catch (Exception e) {
-      try {
-        String n = DocBuilder.class.getPackage().getName() + "." + name;
-        return core != null ?
-                core.getResourceLoader().findClass(n, Object.class) :
-                Class.forName(n);
-      } catch (Exception e1) {
-        throw new ClassNotFoundException("Unable to load " + name + " or " + DocBuilder.class.getPackage().getName() + "." + name, e);
-      }
-    }
-  }
-
-  public static class Statistics {
-    public AtomicLong docCount = new AtomicLong();
-
-    public AtomicLong deletedDocCount = new AtomicLong();
-
-    public AtomicLong failedDocCount = new AtomicLong();
-
-    public AtomicLong rowsCount = new AtomicLong();
-
-    public AtomicLong queryCount = new AtomicLong();
-
-    public AtomicLong skipDocCount = new AtomicLong();
-
-    public Statistics add(Statistics stats) {
-      this.docCount.addAndGet(stats.docCount.get());
-      this.deletedDocCount.addAndGet(stats.deletedDocCount.get());
-      this.rowsCount.addAndGet(stats.rowsCount.get());
-      this.queryCount.addAndGet(stats.queryCount.get());
-
-      return this;
-    }
-
-    public Map<String, Object> getStatsSnapshot() {
-      Map<String, Object> result = new HashMap<>();
-      result.put("docCount", docCount.get());
-      result.put("deletedDocCount", deletedDocCount.get());
-      result.put("rowCount", rowsCount.get());
-      result.put("queryCount", rowsCount.get());
-      result.put("skipDocCount", skipDocCount.get());
-      return result;
-    }
-
-  }
-
-  private void cleanByQuery(String delQuery, AtomicBoolean completeCleanDone) {
-    delQuery = getVariableResolver().replaceTokens(delQuery);
-    if (reqParams.isClean()) {
-      if (delQuery == null && !completeCleanDone.get()) {
-        writer.doDeleteAll();
-        completeCleanDone.set(true);
-      } else if (delQuery != null) {
-        writer.deleteByQuery(delQuery);
-      }
-    }
-  }
-
-  public static final String LAST_INDEX_TIME = "last_index_time";
-  public static final String INDEX_START_TIME = "index_start_time";
-}
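The handleSpecialCommands() method above reacted to $-prefixed keys placed into a row earlier in the pipeline. A minimal sketch of a transformer that sets one of those flags, assuming the documented "$skipRow" key and an illustrative "active" source column:

package org.apache.solr.handler.dataimport;

import java.util.Map;

// Hedged sketch: marks inactive rows so that DocBuilder.handleSpecialCommands()
// raises SKIP_ROW for them instead of indexing the row.
public class SkipInactiveRowsTransformer extends Transformer {
  @Override
  public Object transformRow(Map<String, Object> row, Context context) {
    Object active = row.get("active"); // "active" is an assumed source column
    if (active != null && !Boolean.parseBoolean(active.toString())) {
      row.put("$skipRow", "true");
    }
    return row;
  }
}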
diff --git a/solr/contrib/dataimporthandler/src/java/org/apache/solr/handler/dataimport/EntityProcessor.java b/solr/contrib/dataimporthandler/src/java/org/apache/solr/handler/dataimport/EntityProcessor.java
deleted file mode 100644
index 8cfbed9..0000000
--- a/solr/contrib/dataimporthandler/src/java/org/apache/solr/handler/dataimport/EntityProcessor.java
+++ /dev/null
@@ -1,113 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.solr.handler.dataimport;
-
-import java.util.Map;
-
-/**
- * <p>
- * An instance of entity processor serves an entity. It is reused throughout the
- * import process.
- * </p>
- * <p>
- * Implementations of this abstract class must provide a public no-args constructor.
- * </p>
- * <p>
- * Refer to <a
- * href="http://wiki.apache.org/solr/DataImportHandler">http://wiki.apache.org/solr/DataImportHandler</a>
- * for more details.
- * </p>
- * <p>
- * <b>This API is experimental and may change in the future.</b>
- *
- * @since solr 1.3
- */
-public abstract class EntityProcessor {
-
-  /**
-   * This method is called when processing of an entity starts. When the import comes
-   * back to the entity it is called again, so any state can be reset at that point.
-   * For a root entity this is called only once per ingestion. For sub-entities, it
-   * is called once for each row from the parent entity.
-   *
-   * @param context The current context
-   */
-  public abstract void init(Context context);
-
-  /**
-   * This method streams the data row by row. The implementation
-   * fetches as many rows as needed and returns one 'row' at a time. Only this
-   * method is used during a full import.
-   *
-   * @return A 'row'.  The 'key' for the map is the column name and the 'value'
-   *         is the value of that column. If there are no more rows to be
-   *         returned, return 'null'
-   */
-  public abstract Map<String, Object> nextRow();
-
-  /**
-   * This is used for delta-import. It gives the pks of the changed rows in this
-   * entity
-   *
-   * @return the pk vs value of all changed rows
-   */
-  public abstract Map<String, Object> nextModifiedRowKey();
-
-  /**
-   * This is used during delta-import. It gives the primary keys of the rows
-   * that are deleted from this entity. If this entity is the root entity, the Solr
-   * document is deleted. If this is a sub-entity, the Solr document is
-   * considered 'changed' and will be recreated.
-   *
-   * @return the pk vs value of all changed rows
-   */
-  public abstract Map<String, Object> nextDeletedRowKey();
-
-  /**
-   * This is used during delta-import. This gives the primary keys and their
-   * values of all the rows changed in a parent entity due to changes in this
-   * entity.
-   *
-   * @return the pk vs value of all changed rows in the parent entity
-   */
-  public abstract Map<String, Object> nextModifiedParentRowKey();
-
-  /**
-   * Invoked for each entity at the very end of the import to do any needed cleanup tasks.
-   */
-  public abstract void destroy();
-
-  /**
-   * Invoked after the transformers are invoked. EntityProcessors can add, remove or modify values
-   * added by Transformers in this method.
-   *
-   * @param r The transformed row
-   * @since solr 1.4
-   */
-  public void postTransform(Map<String, Object> r) {
-  }
-
-  /**
-   * Invoked when the Entity processor is destroyed towards the end of import.
-   *
-   * @since solr 1.4
-   */
-  public void close() {
-    //no-op
-  }
-}
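A minimal implementation of the contract above, as a sketch: an in-memory processor that streams a fixed list of rows and returns null from the delta-import hooks. The class name and row contents are illustrative.

package org.apache.solr.handler.dataimport;

import java.util.Iterator;
import java.util.List;
import java.util.Map;

// Hedged sketch: a public no-args constructor is implied, since DIH
// instantiates processors reflectively.
public class StaticRowsEntityProcessor extends EntityProcessor {
  private Iterator<Map<String, Object>> rows;

  @Override
  public void init(Context context) {
    // init() is called again each time the import revisits this entity, so restart iteration here.
    rows = List.of(
        Map.<String, Object>of("id", "1", "name", "first"),
        Map.<String, Object>of("id", "2", "name", "second")).iterator();
  }

  @Override public Map<String, Object> nextRow() { return rows.hasNext() ? rows.next() : null; }
  @Override public Map<String, Object> nextModifiedRowKey() { return null; }
  @Override public Map<String, Object> nextDeletedRowKey() { return null; }
  @Override public Map<String, Object> nextModifiedParentRowKey() { return null; }
  @Override public void destroy() { rows = null; }
}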
diff --git a/solr/contrib/dataimporthandler/src/java/org/apache/solr/handler/dataimport/EntityProcessorBase.java b/solr/contrib/dataimporthandler/src/java/org/apache/solr/handler/dataimport/EntityProcessorBase.java
deleted file mode 100644
index 8311f36..0000000
--- a/solr/contrib/dataimporthandler/src/java/org/apache/solr/handler/dataimport/EntityProcessorBase.java
+++ /dev/null
@@ -1,174 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.solr.handler.dataimport;
-
-import org.apache.solr.common.SolrException;
-
-import static org.apache.solr.handler.dataimport.DataImportHandlerException.*;
-import org.slf4j.Logger;
-import org.slf4j.LoggerFactory;
-
-import java.lang.invoke.MethodHandles;
-import java.util.*;
-
-/**
- * <p> Base class for all implementations of {@link EntityProcessor} </p> <p> Most implementations of {@link EntityProcessor}
- * extend this base class which provides common functionality. </p>
- * <p>
- * <b>This API is experimental and subject to change</b>
- *
- * @since solr 1.3
- */
-public class EntityProcessorBase extends EntityProcessor {
-  private static final Logger log = LoggerFactory.getLogger(MethodHandles.lookup().lookupClass());
-
-  protected boolean isFirstInit = true;
-
-  protected String entityName;
-
-  protected Context context;
-
-  protected Iterator<Map<String, Object>> rowIterator;
-
-  protected String query;  
-  
-  protected String onError = ABORT;  
-  
-  protected DIHCacheSupport cacheSupport = null;
-  
-  private Zipper zipper;
-
-
-  @Override
-  public void init(Context context) {
-    this.context = context;
-    if (isFirstInit) {
-      firstInit(context);
-    }
-    if(zipper!=null){
-      zipper.onNewParent(context);
-    }else{
-      if(cacheSupport!=null) {
-        cacheSupport.initNewParent(context);
-      }   
-    }
-  }
-
-  /**
-   * First-time init call; do one-time operations here.
-   * It is necessary to call this from any overriding method,
-   * otherwise nextRow() throws an NPE when it accesses zipper.
-   */
-  protected void firstInit(Context context) {
-    entityName = context.getEntityAttribute("name");
-    String s = context.getEntityAttribute(ON_ERROR);
-    if (s != null) onError = s;
-    
-    zipper = Zipper.createOrNull(context);
-    
-    if(zipper==null){
-      initCache(context);
-    }
-    isFirstInit = false;
-  }
-
-  protected void initCache(Context context) {
-    String cacheImplName = context.getResolvedEntityAttribute(DIHCacheSupport.CACHE_IMPL);
-
-    if (cacheImplName != null) {
-      cacheSupport = new DIHCacheSupport(context, cacheImplName);
-    }
-  }
-
-  @Override
-  public Map<String, Object> nextModifiedRowKey() {
-    return null;
-  }
-
-  @Override
-  public Map<String, Object> nextDeletedRowKey() {
-    return null;
-  }
-
-  @Override
-  public Map<String, Object> nextModifiedParentRowKey() {
-    return null;
-  }
-
-  /**
-   * For a simple implementation, this is the only method that the sub-class should implement. This is intended to
-   * stream rows one-by-one. Return null to signal end of rows
-   *
-   * @return a row where the key is the name of the field and value can be any Object or a Collection of objects. Return
-   *         null to signal end of rows
-   */
-  @Override
-  public Map<String, Object> nextRow() {
-    return null; // base implementation supplies no rows
-  }
-  
-  protected Map<String, Object> getNext() {
-    if(zipper!=null){
-      return zipper.supplyNextChild(rowIterator);
-    }else{
-      if(cacheSupport==null) {
-        try {
-          if (rowIterator == null)
-            return null;
-          if (rowIterator.hasNext())
-            return rowIterator.next();
-          query = null;
-          rowIterator = null;
-          return null;
-        } catch (Exception e) {
-          SolrException.log(log, "getNext() failed for query '" + query + "'", e);
-          query = null;
-          rowIterator = null;
-          wrapAndThrow(DataImportHandlerException.WARN, e);
-          return null;
-        }
-      } else  {
-        return cacheSupport.getCacheData(context, query, rowIterator);
-      }  
-    }
-  }
-
-
-  @Override
-  public void destroy() {
-    query = null;
-    if(cacheSupport!=null){
-      cacheSupport.destroyAll();
-    }
-    cacheSupport = null;
-  }
-
-  public static final String TRANSFORMER = "transformer";
-
-  public static final String TRANSFORM_ROW = "transformRow";
-
-  public static final String ON_ERROR = "onError";
-
-  public static final String ABORT = "abort";
-
-  public static final String CONTINUE = "continue";
-
-  public static final String SKIP = "skip";
-}
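Concrete processors typically populated the protected query and rowIterator fields and delegated iteration to getNext(), which handles caching, zipping and error wrapping. A sketch under that assumption; the entity attribute and field names are illustrative:

package org.apache.solr.handler.dataimport;

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hedged sketch: emits a single row taken from an (assumed) "value" entity attribute,
// then lets getNext() reset query/rowIterator once the iterator is exhausted.
public class SingleRowEntityProcessor extends EntityProcessorBase {
  @Override
  public Map<String, Object> nextRow() {
    if (rowIterator == null) {
      query = context.getEntityAttribute("value");
      Map<String, Object> row = new HashMap<>();
      row.put("value", query);
      List<Map<String, Object>> rows = new ArrayList<>();
      rows.add(row);
      rowIterator = rows.iterator();
    }
    return getNext();
  }
}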
diff --git a/solr/contrib/dataimporthandler/src/java/org/apache/solr/handler/dataimport/EntityProcessorWrapper.java b/solr/contrib/dataimporthandler/src/java/org/apache/solr/handler/dataimport/EntityProcessorWrapper.java
deleted file mode 100644
index 6c106bd..0000000
--- a/solr/contrib/dataimporthandler/src/java/org/apache/solr/handler/dataimport/EntityProcessorWrapper.java
+++ /dev/null
@@ -1,357 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.solr.handler.dataimport;
-
-import org.apache.solr.common.SolrException;
-import org.apache.solr.common.SolrException.ErrorCode;
-import org.apache.solr.core.SolrCore;
-import org.apache.solr.handler.dataimport.config.ConfigNameConstants;
-import org.apache.solr.handler.dataimport.config.Entity;
-
-import static org.apache.solr.handler.dataimport.DataImportHandlerException.*;
-import static org.apache.solr.handler.dataimport.EntityProcessorBase.*;
-import static org.apache.solr.handler.dataimport.EntityProcessorBase.SKIP;
-import org.slf4j.Logger;
-import org.slf4j.LoggerFactory;
-
-import java.lang.invoke.MethodHandles;
-import java.lang.reflect.Method;
-import java.util.ArrayList;
-import java.util.Collections;
-import java.util.List;
-import java.util.Map;
-
-/**
- * A Wrapper over {@link EntityProcessor} instance which performs transforms and handles multi-row outputs correctly.
- *
- * @since solr 1.4
- */
-public class EntityProcessorWrapper extends EntityProcessor {
-  private static final Logger log = LoggerFactory.getLogger(MethodHandles.lookup().lookupClass());
-
-  private EntityProcessor delegate;
-  private Entity entity;
-  @SuppressWarnings({"rawtypes"})
-  private DataSource datasource;
-  private List<EntityProcessorWrapper> children = new ArrayList<>();
-  private DocBuilder docBuilder;
-  private boolean initialized;
-  private String onError;
-  private Context context;
-  private VariableResolver resolver;
-  private String entityName;
-
-  protected List<Transformer> transformers;
-
-  protected List<Map<String, Object>> rowcache;
-  
-  public EntityProcessorWrapper(EntityProcessor delegate, Entity entity, DocBuilder docBuilder) {
-    this.delegate = delegate;
-    this.entity = entity;
-    this.docBuilder = docBuilder;
-  }
-
-  @Override
-  public void init(Context context) {
-    rowcache = null;
-    this.context = context;
-    resolver = context.getVariableResolver();
-    if (entityName == null) {
-      onError = resolver.replaceTokens(context.getEntityAttribute(ON_ERROR));
-      if (onError == null) onError = ABORT;
-      entityName = context.getEntityAttribute(ConfigNameConstants.NAME);
-    }
-    delegate.init(context);
-
-  }
-
-  @SuppressWarnings({"unchecked"})
-  void loadTransformers() {
-    String transClasses = context.getEntityAttribute(TRANSFORMER);
-
-    if (transClasses == null) {
-      transformers = Collections.emptyList();
-      return;
-    }
-
-    String[] transArr = transClasses.split(",");
-    transformers = new ArrayList<Transformer>() {
-      @Override
-      public boolean add(Transformer transformer) {
-        if (docBuilder != null && docBuilder.verboseDebug) {
-          transformer = docBuilder.getDebugLogger().wrapTransformer(transformer);
-        }
-        return super.add(transformer);
-      }
-    };
-    for (String aTransArr : transArr) {
-      String trans = aTransArr.trim();
-      if (trans.startsWith("script:")) {
-        // The script transformer is a potential vulnerability, esp. when the script is
-        // provided from an untrusted source. Check and don't proceed if source is untrusted.
-        checkIfTrusted(trans);
-        String functionName = trans.substring("script:".length());
-        ScriptTransformer scriptTransformer = new ScriptTransformer();
-        scriptTransformer.setFunctionName(functionName);
-        transformers.add(scriptTransformer);
-        continue;
-      }
-      try {
-        @SuppressWarnings({"rawtypes"})
-        Class clazz = DocBuilder.loadClass(trans, context.getSolrCore());
-        if (Transformer.class.isAssignableFrom(clazz)) {
-          transformers.add((Transformer) clazz.getConstructor().newInstance());
-        } else {
-          Method meth = clazz.getMethod(TRANSFORM_ROW, Map.class);
-          transformers.add(new ReflectionTransformer(meth, clazz, trans));
-        }
-      } catch (NoSuchMethodException nsme) {
-        String msg = "Transformer: " + trans
-            + " does not implement the Transformer interface or does not have a transformRow(Map<String, Object> m) method";
-        log.error(msg);
-        wrapAndThrow(SEVERE, nsme, msg);
-      } catch (Exception e) {
-        log.error("Unable to load Transformer: {}", aTransArr, e);
-        wrapAndThrow(SEVERE, e,"Unable to load Transformer: " + trans);
-      }
-    }
-
-  }
-
-  private void checkIfTrusted(String trans) {
-    if (docBuilder != null) {
-      SolrCore core = docBuilder.dataImporter.getCore();
-      boolean trusted = (core != null)? core.getCoreDescriptor().isConfigSetTrusted(): true;
-      if (!trusted) {
-        Exception ex = new SolrException(ErrorCode.UNAUTHORIZED, "The configset for this collection was uploaded "
-            + "without any authentication in place,"
-            + " and this transformer is not available for collections with untrusted configsets. To use this transformer,"
-            + " re-upload the configset after enabling authentication and authorization.");
-        String msg = "Transformer: "
-            + trans
-            + ". " + ex.getMessage();
-        log.error(msg);
-        wrapAndThrow(SEVERE, ex, msg);
-      }
-    }
-  }
-
-  @SuppressWarnings("unchecked")
-  static class ReflectionTransformer extends Transformer {
-    final Method meth;
-
-    @SuppressWarnings({"rawtypes"})
-    final Class clazz;
-
-    final String trans;
-
-    final Object o;
-
-    public ReflectionTransformer(Method meth, @SuppressWarnings({"rawtypes"})Class clazz, String trans)
-            throws Exception {
-      this.meth = meth;
-      this.clazz = clazz;
-      this.trans = trans;
-      o = clazz.getConstructor().newInstance();
-    }
-
-    @Override
-    public Object transformRow(Map<String, Object> aRow, Context context) {
-      try {
-        return meth.invoke(o, aRow);
-      } catch (Exception e) {
-        log.warn("method invocation failed on transformer : {}", trans, e);
-        throw new DataImportHandlerException(WARN, e);
-      }
-    }
-  }
-
-  protected Map<String, Object> getFromRowCache() {
-    Map<String, Object> r = rowcache.remove(0);
-    if (rowcache.isEmpty())
-      rowcache = null;
-    return r;
-  }
-
-  @SuppressWarnings("unchecked")
-  protected Map<String, Object> applyTransformer(Map<String, Object> row) {
-    if(row == null) return null;
-    if (transformers == null)
-      loadTransformers();
-    if (transformers == Collections.EMPTY_LIST)
-      return row;
-    Map<String, Object> transformedRow = row;
-    List<Map<String, Object>> rows = null;
-    boolean stopTransform = checkStopTransform(row);
-    VariableResolver resolver = context.getVariableResolver();
-    for (Transformer t : transformers) {
-      if (stopTransform) break;
-      try {
-        if (rows != null) {
-          List<Map<String, Object>> tmpRows = new ArrayList<>();
-          for (Map<String, Object> map : rows) {
-            resolver.addNamespace(entityName, map);
-            Object o = t.transformRow(map, context);
-            if (o == null)
-              continue;
-            if (o instanceof Map) {
-              @SuppressWarnings({"rawtypes"})
-              Map oMap = (Map) o;
-              stopTransform = checkStopTransform(oMap);
-              tmpRows.add((Map) o);
-            } else if (o instanceof List) {
-              tmpRows.addAll((List) o);
-            } else {
-              log.error("Transformer must return Map<String, Object> or a List<Map<String, Object>>");
-            }
-          }
-          rows = tmpRows;
-        } else {
-          resolver.addNamespace(entityName, transformedRow);
-          Object o = t.transformRow(transformedRow, context);
-          if (o == null)
-            return null;
-          if (o instanceof Map) {
-            @SuppressWarnings({"rawtypes"})
-            Map oMap = (Map) o;
-            stopTransform = checkStopTransform(oMap);
-            transformedRow = (Map) o;
-          } else if (o instanceof List) {
-            rows = (List) o;
-          } else {
-            log.error("Transformer must return Map<String, Object> or a List<Map<String, Object>>");
-          }
-        }
-      } catch (Exception e) {
-        log.warn("transformer threw error", e);
-        if (ABORT.equals(onError)) {
-          wrapAndThrow(SEVERE, e);
-        } else if (SKIP.equals(onError)) {
-          wrapAndThrow(DataImportHandlerException.SKIP, e);
-        }
-        // onError = continue
-      }
-    }
-    if (rows == null) {
-      return transformedRow;
-    } else {
-      rowcache = rows;
-      return getFromRowCache();
-    }
-
-  }
-
-  private boolean checkStopTransform(@SuppressWarnings({"rawtypes"})Map oMap) {
-    return oMap.get("$stopTransform") != null
-            && Boolean.parseBoolean(oMap.get("$stopTransform").toString());
-  }
-
-  @Override
-  public Map<String, Object> nextRow() {
-    if (rowcache != null) {
-      return getFromRowCache();
-    }
-    while (true) {
-      Map<String, Object> arow = null;
-      try {
-        arow = delegate.nextRow();
-      } catch (Exception e) {
-        if(ABORT.equals(onError)){
-          wrapAndThrow(SEVERE, e);
-        } else {
-          // SKIP is not really possible: if this called nextRow() again, the EntityProcessor would be in an inconsistent state
-          SolrException.log(log, "Exception in entity : "+ entityName, e);
-          return null;
-        }
-      }
-      if (arow == null) {
-        return null;
-      } else {
-        arow = applyTransformer(arow);
-        if (arow != null) {
-          delegate.postTransform(arow);
-          return arow;
-        }
-      }
-    }
-  }
-
-  @Override
-  public Map<String, Object> nextModifiedRowKey() {
-    Map<String, Object> row = delegate.nextModifiedRowKey();
-    row = applyTransformer(row);
-    rowcache = null;
-    return row;
-  }
-
-  @Override
-  public Map<String, Object> nextDeletedRowKey() {
-    Map<String, Object> row = delegate.nextDeletedRowKey();
-    row = applyTransformer(row);
-    rowcache = null;
-    return row;
-  }
-
-  @Override
-  public Map<String, Object> nextModifiedParentRowKey() {
-    return delegate.nextModifiedParentRowKey();
-  }
-
-  @Override
-  public void destroy() {
-    delegate.destroy();
-  }
-
-  public VariableResolver getVariableResolver() {
-    return context.getVariableResolver();
-  }
-
-  public Context getContext() {
-    return context;
-  }
-
-  @Override
-  public void close() {
-    delegate.close();
-  }
-
-  public Entity getEntity() {
-    return entity;
-  }
-
-  public List<EntityProcessorWrapper> getChildren() {
-    return children;
-  }
-
-  @SuppressWarnings({"rawtypes"})
-  public DataSource getDatasource() {
-    return datasource;
-  }
-
-  public void setDatasource(@SuppressWarnings({"rawtypes"})DataSource datasource) {
-    this.datasource = datasource;
-  }
-
-  public boolean isInitialized() {
-    return initialized;
-  }
-
-  public void setInitialized(boolean initialized) {
-    this.initialized = initialized;
-  }
-}
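The reflection path in loadTransformers() above accepts any class exposing a public transformRow(Map) method, so a transformer need not extend Transformer at all. A sketch of that style; the class name and trimming behavior are illustrative:

package org.apache.solr.handler.dataimport;

import java.util.Map;

// Hedged sketch: listed via transformer="TrimTransformer" on an entity and picked up
// through EntityProcessorWrapper's ReflectionTransformer because it exposes transformRow(Map).
public class TrimTransformer {
  public Object transformRow(Map<String, Object> row) {
    // Trim every String value in place; leave other value types untouched.
    row.replaceAll((k, v) -> v instanceof String ? ((String) v).trim() : v);
    return row;
  }
}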
diff --git a/solr/contrib/dataimporthandler/src/java/org/apache/solr/handler/dataimport/Evaluator.java b/solr/contrib/dataimporthandler/src/java/org/apache/solr/handler/dataimport/Evaluator.java
deleted file mode 100644
index 22282b9..0000000
--- a/solr/contrib/dataimporthandler/src/java/org/apache/solr/handler/dataimport/Evaluator.java
+++ /dev/null
@@ -1,140 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.solr.handler.dataimport;
-
-import static org.apache.solr.handler.dataimport.DataImportHandlerException.SEVERE;
-import static org.apache.solr.handler.dataimport.DataImportHandlerException.wrapAndThrow;
-
-import java.util.ArrayList;
-import java.util.List;
-import java.util.regex.Pattern;
-
-/**
- * <p>
- * Pluggable functions for resolving variables
- * </p>
- * <p>
- * Implementations of this abstract class must provide a public no-arg constructor.
- * </p>
- * <p>
- * Refer to <a
- * href="http://wiki.apache.org/solr/DataImportHandler">http://wiki.apache.org/solr/DataImportHandler</a>
- * for more details.
- * </p>
- * <b>This API is experimental and may change in the future.</b>
- *
- * @since solr 1.3
- */
-public abstract class Evaluator {
-
-  /**
-   * Return a String after processing an expression and a {@link VariableResolver}
-   *
-   * @see VariableResolver
-   * @param expression string to be evaluated
-   * @param context instance
-   * @return the value of the given expression evaluated using the resolver
-   */
-  public abstract String evaluate(String expression, Context context);
-
-  /**
-   * Parses an expression string into separate params. The values are separated by commas. Each value will be
-   * translated into one of the following:
-   * &lt;ol&gt;
-   * &lt;li&gt;If it is in single quotes the value will be translated to a String&lt;/li&gt;
-   * &lt;li&gt;If it is not in quotes and is a number it will be translated into a Double&lt;/li&gt;
-   * &lt;li&gt;Otherwise it is a resolvable variable and will be put in as an instance of VariableWrapper&lt;/li&gt;
-   * &lt;/ol&gt;
-   *
-   * @param expression the expression to be parsed
-   * @param vr the VariableResolver instance for resolving variables
-   *
-   * @return a List of objects which can either be a string, number or a variable wrapper
-   */
-  protected List<Object> parseParams(String expression, VariableResolver vr) {
-    List<Object> result = new ArrayList<>();
-    expression = expression.trim();
-    String[] ss = expression.split(",");
-    for (int i = 0; i < ss.length; i++) {
-      ss[i] = ss[i].trim();
-      if (ss[i].startsWith("'")) {//a string param has started
-        StringBuilder sb = new StringBuilder();
-        while (true) {
-          sb.append(ss[i]);
-          if (ss[i].endsWith("'")) break;
-          i++;
-          if (i >= ss.length)
-            throw new DataImportHandlerException(SEVERE, "invalid string at " + ss[i - 1] + " in function params: " + expression);
-          sb.append(",");
-        }
-        String s = sb.substring(1, sb.length() - 1);
-        s = s.replaceAll("\\\\'", "'");
-        result.add(s);
-      } else {
-        if (Character.isDigit(ss[i].charAt(0))) {
-          try {
-            Double doub = Double.parseDouble(ss[i]);
-            result.add(doub);
-          } catch (NumberFormatException e) {
-            if (vr.resolve(ss[i]) == null) {
-              wrapAndThrow(SEVERE, e,
-                  "Invalid number: " + ss[i] + " in parameters: " + expression);
-            }
-          }
-        } else {
-          result.add(getVariableWrapper(ss[i], vr));
-        }
-      }
-    }
-    return result;
-  }
-
-  protected VariableWrapper getVariableWrapper(String s, VariableResolver vr) {
-    return new VariableWrapper(s,vr);
-  }
-
-  static protected class VariableWrapper {
-    public final String varName;
-    public final VariableResolver vr;
-
-    public VariableWrapper(String s, VariableResolver vr) {
-      this.varName = s;
-      this.vr = vr;
-    }
-
-    public Object resolve() {
-      return vr.resolve(varName);
-    }
-
-    @Override
-    public String toString() {
-      Object o = vr.resolve(varName);
-      return o == null ? null : o.toString();
-    }
-  }
-
-  static Pattern IN_SINGLE_QUOTES = Pattern.compile("^'(.*?)'$");
-  
-  public static final String DATE_FORMAT_EVALUATOR = "formatDate";
-
-  public static final String URL_ENCODE_EVALUATOR = "encodeUrl";
-
-  public static final String ESCAPE_SOLR_QUERY_CHARS = "escapeQueryChars";
-
-  public static final String SQL_ESCAPE_EVALUATOR = "escapeSql";
-}
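A custom Evaluator can be built on the parseParams() helper above. In this sketch the class name and the lower-casing semantics are assumptions, not part of the removed API:

package org.apache.solr.handler.dataimport;

import java.util.List;
import java.util.Locale;

// Hedged sketch: lower-cases its first argument. parseParams() yields Strings, Doubles,
// or VariableWrapper instances whose toString() resolves the underlying variable.
public class LowerCaseEvaluator extends Evaluator {
  @Override
  public String evaluate(String expression, Context context) {
    List<Object> params = parseParams(expression, context.getVariableResolver());
    Object first = params.isEmpty() ? null : params.get(0);
    String s = first == null ? null : first.toString();
    return s == null ? null : s.toLowerCase(Locale.ROOT);
  }
}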
diff --git a/solr/contrib/dataimporthandler/src/java/org/apache/solr/handler/dataimport/EventListener.java b/solr/contrib/dataimporthandler/src/java/org/apache/solr/handler/dataimport/EventListener.java
deleted file mode 100644
index 0c43a0b..0000000
--- a/solr/contrib/dataimporthandler/src/java/org/apache/solr/handler/dataimport/EventListener.java
+++ /dev/null
@@ -1,35 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.solr.handler.dataimport;
-
-/**
- * Event listener for DataImportHandler
- *
- * <b>This API is experimental and subject to change</b>
- *
- * @since solr 1.4
- */
-public interface EventListener {
-
-  /**
-   * Event callback
-   *
-   * @param ctx the Context in which this event was called
-   */
-  void onEvent(Context ctx);
-
-}
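A sketch of a listener as wired through the onImportStart/onImportEnd attributes that DocBuilder.invokeEventListener() dispatches; the class name and log message are illustrative:

package org.apache.solr.handler.dataimport;

import java.lang.invoke.MethodHandles;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

// Hedged sketch: logs when an import lifecycle event fires.
public class ImportEndListener implements EventListener {
  private static final Logger log = LoggerFactory.getLogger(MethodHandles.lookup().lookupClass());

  @Override
  public void onEvent(Context ctx) {
    log.info("DIH import event; process={}", ctx.currentProcess());
  }
}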
diff --git a/solr/contrib/dataimporthandler/src/java/org/apache/solr/handler/dataimport/FieldReaderDataSource.java b/solr/contrib/dataimporthandler/src/java/org/apache/solr/handler/dataimport/FieldReaderDataSource.java
deleted file mode 100644
index 571c280..0000000
--- a/solr/contrib/dataimporthandler/src/java/org/apache/solr/handler/dataimport/FieldReaderDataSource.java
+++ /dev/null
@@ -1,122 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.solr.handler.dataimport;
-
-import static org.apache.solr.handler.dataimport.DataImportHandlerException.SEVERE;
-import static org.apache.solr.handler.dataimport.DataImportHandlerException.wrapAndThrow;
-
-import org.slf4j.Logger;
-import org.slf4j.LoggerFactory;
-
-import java.io.*;
-import java.lang.invoke.MethodHandles;
-import java.nio.charset.StandardCharsets;
-import java.sql.Blob;
-import java.sql.Clob;
-import java.sql.SQLException;
-import java.util.Properties;
-
-/**
- * This can be useful for users who have a DB field containing XML and wish to use a nested {@link XPathEntityProcessor}
- * <p>
- * The datasource may be configured as follows
- * <p>
- * &lt;datasource name="f1" type="FieldReaderDataSource" /&gt;
- * <p>
- * The entity which uses this datasource must set the url value to the name of the field, e.g. url="field-name"
- * <p>
- * The field name must be resolvable from {@link VariableResolver}
- * <p>
- * This may be used with any {@link EntityProcessor} which uses a {@link DataSource}&lt;{@link Reader}&gt;, e.g. {@link XPathEntityProcessor}
- * <p>
- * Supports String, BLOB and CLOB data types; there is an extra entity attribute 'encoding' for BLOB types
- *
- * @since 1.4
- */
-public class FieldReaderDataSource extends DataSource<Reader> {
-  private static final Logger log = LoggerFactory.getLogger(MethodHandles.lookup().lookupClass());
-  protected VariableResolver vr;
-  protected String dataField;
-  private String encoding;
-  private EntityProcessorWrapper entityProcessor;
-
-  @Override
-  public void init(Context context, Properties initProps) {
-    dataField = context.getEntityAttribute("dataField");
-    encoding = context.getEntityAttribute("encoding");
-    entityProcessor = (EntityProcessorWrapper) context.getEntityProcessor();
-  }
-
-  @Override
-  public Reader getData(String query) {
-    Object o = entityProcessor.getVariableResolver().resolve(dataField);
-    if (o == null) {
-      throw new DataImportHandlerException(SEVERE, "No field available for name: " + dataField);
-    }
-    if (o instanceof String) {
-      return new StringReader((String) o);
-    } else if (o instanceof Clob) {
-      Clob clob = (Clob) o;
-      try {
-        // Most JDBC drivers expose getCharacterStream, so try that first
-        return readCharStream(clob);
-      } catch (Exception e) {
-        log.info("Unable to get data from CLOB", e);
-        return null;
-
-      }
-
-    } else if (o instanceof Blob) {
-      Blob blob = (Blob) o;
-      try {
-        return getReader(blob);
-      } catch (Exception e) {
-        log.info("Unable to get data from BLOB");
-        return null;
-
-      }
-    } else {
-      return new StringReader(o.toString());
-    }
-
-  }
-
-  static Reader readCharStream(Clob clob) {
-    try {
-      return clob.getCharacterStream();
-    } catch (Exception e) {
-      wrapAndThrow(SEVERE, e,"Unable to get reader from clob");
-      return null;//unreachable
-    }
-  }
-
-  private Reader getReader(Blob blob)
-          throws SQLException, UnsupportedEncodingException {
-    if (encoding == null) {
-      return (new InputStreamReader(blob.getBinaryStream(), StandardCharsets.UTF_8));
-    } else {
-      return (new InputStreamReader(blob.getBinaryStream(), encoding));
-    }
-  }
-
-  @Override
-  public void close() {
-
-  }
-}
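A hedged sketch of how FieldReaderDataSource was typically wired up, pairing a JDBC entity with a nested XPathEntityProcessor that parses XML stored in a database column (datasource names, table and column names are hypothetical):

    <dataConfig>
      <dataSource name="db" type="JdbcDataSource" driver="org.hsqldb.jdbcDriver" url="jdbc:hsqldb:mem:ex"/>
      <dataSource name="f1" type="FieldReaderDataSource"/>
      <document>
        <entity name="rows" dataSource="db" query="select id, xml_col from docs">
          <entity name="xml" dataSource="f1" processor="XPathEntityProcessor"
                  dataField="rows.xml_col" forEach="/doc">
            <field column="title" xpath="/doc/title"/>
          </entity>
        </entity>
      </document>
    </dataConfig>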
diff --git a/solr/contrib/dataimporthandler/src/java/org/apache/solr/handler/dataimport/FieldStreamDataSource.java b/solr/contrib/dataimporthandler/src/java/org/apache/solr/handler/dataimport/FieldStreamDataSource.java
deleted file mode 100644
index ba7ca5d..0000000
--- a/solr/contrib/dataimporthandler/src/java/org/apache/solr/handler/dataimport/FieldStreamDataSource.java
+++ /dev/null
@@ -1,85 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.solr.handler.dataimport;
-
-import static org.apache.solr.handler.dataimport.DataImportHandlerException.SEVERE;
-
-import java.io.ByteArrayInputStream;
-import java.io.InputStream;
-import java.lang.invoke.MethodHandles;
-import java.sql.Blob;
-import java.sql.SQLException;
-import java.util.Properties;
-
-import org.slf4j.Logger;
-import org.slf4j.LoggerFactory;
-
-
-/**
- * This can be useful for users who have a DB field containing BLOBs which may be rich documents
- * <p>
- * The datasource may be configured as follows
- * <p>
- * &lt;dataSource name="f1" type="FieldStreamDataSource" /&gt;
- * <p>
- * The entity which uses this datasource must name the field to read in a dataField attribute
- * <p>
- * The field name must be resolvable from {@link VariableResolver}
- * <p>
- * This may be used with any {@link EntityProcessor} which uses a {@link DataSource}&lt;{@link InputStream}&gt;, e.g. TikaEntityProcessor
- *
- * @since 3.1
- */
-public class FieldStreamDataSource extends DataSource<InputStream> {
-  private static final Logger log = LoggerFactory.getLogger(MethodHandles.lookup().lookupClass());
-  protected VariableResolver vr;
-  protected String dataField;
-  private EntityProcessorWrapper wrapper;
-
-  @Override
-  public void init(Context context, Properties initProps) {
-    dataField = context.getEntityAttribute("dataField");
-    wrapper = (EntityProcessorWrapper) context.getEntityProcessor();
-  }
-
-  @Override
-  public InputStream getData(String query) {
-    Object o = wrapper.getVariableResolver().resolve(dataField);
-    if (o == null) {
-      throw new DataImportHandlerException(SEVERE, "No field available for name : " + dataField);
-    } else if (o instanceof Blob) {
-      Blob blob = (Blob) o;
-      try {
-        return blob.getBinaryStream();
-      } catch (SQLException sqle) {
-        log.info("Unable to get data from BLOB");
-        return null;
-      }
-    } else if (o instanceof byte[]) {
-      byte[] bytes = (byte[]) o;
-      return new ByteArrayInputStream(bytes);
-    } else {
-      throw new RuntimeException("unsupported type : " + o.getClass());
-    } 
-
-  }
-
-  @Override
-  public void close() {
-  }
-}
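The streaming variant follows the same pattern; a sketch feeding a BLOB column to a TikaEntityProcessor for rich-document extraction (names and the format attribute value are illustrative assumptions):

    <dataSource name="f1" type="FieldStreamDataSource"/>
    <document>
      <entity name="rows" dataSource="db" query="select id, bin_col from docs">
        <entity name="tika" dataSource="f1" processor="TikaEntityProcessor"
                dataField="rows.bin_col" format="text">
          <field column="text" name="content"/>
        </entity>
      </entity>
    </document>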
diff --git a/solr/contrib/dataimporthandler/src/java/org/apache/solr/handler/dataimport/FileDataSource.java b/solr/contrib/dataimporthandler/src/java/org/apache/solr/handler/dataimport/FileDataSource.java
deleted file mode 100644
index 34df1226..0000000
--- a/solr/contrib/dataimporthandler/src/java/org/apache/solr/handler/dataimport/FileDataSource.java
+++ /dev/null
@@ -1,155 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.solr.handler.dataimport;
-
-import java.io.*;
-import java.lang.invoke.MethodHandles;
-import java.nio.charset.StandardCharsets;
-import java.util.Properties;
-
-import org.slf4j.Logger;
-import org.slf4j.LoggerFactory;
-
-import static org.apache.solr.handler.dataimport.DataImportHandlerException.wrapAndThrow;
-import static org.apache.solr.handler.dataimport.DataImportHandlerException.SEVERE;
-
-/**
- * <p>
- * A {@link DataSource} which reads from local files
- * </p>
- * <p>
- * The file is read with UTF-8 encoding by default. This can be overridden by
- * specifying the encoding in solrconfig.xml
- * </p>
- * <p>
- * Refer to <a
- * href="http://wiki.apache.org/solr/DataImportHandler">http://wiki.apache.org/solr/DataImportHandler</a>
- * for more details.
- * </p>
- * <p>
- * <b>This API is experimental and may change in the future.</b>
- *
- * @since solr 1.3
- */
-public class FileDataSource extends DataSource<Reader> {
-  public static final String BASE_PATH = "basePath";
-
-  /**
-   * The basePath for this data source
-   */
-  protected String basePath;
-
-  /**
-   * The encoding using which the given file should be read
-   */
-  protected String encoding = null;
-
-  private static final Logger log = LoggerFactory.getLogger(MethodHandles.lookup().lookupClass());
-
-  @Override
-  public void init(Context context, Properties initProps) {
-    basePath = initProps.getProperty(BASE_PATH);
-    if (initProps.get(URLDataSource.ENCODING) != null)
-      encoding = initProps.getProperty(URLDataSource.ENCODING);
-  }
-
-  /**
-   * <p>
-   * Returns a reader for the given file.
-   * </p>
-   * <p>
-   * If the given path is not absolute, it is resolved against the configured
-   * basePath (or the current working directory when basePath is not set).
-   * If the file is not found, a RuntimeException is thrown.
-   * </p>
-   * <p>
-   * <b>It is the responsibility of the calling method to properly close the
-   * returned Reader</b>
-   * </p>
-   */
-  @Override
-  public Reader getData(String query) {
-    File f = getFile(basePath,query);
-    try {
-      return openStream(f);
-    } catch (Exception e) {
-      wrapAndThrow(SEVERE,e,"Unable to open File : "+f.getAbsolutePath());
-      return null;
-    }
-  }
-
-  static File getFile(String basePath, String query) {
-    try {
-      File file = new File(query);
-
-      // If it's not an absolute path, try relative from basePath. 
-      if (!file.isAbsolute()) {
-        // Resolve and correct basePath.
-        File basePathFile;
-        if (basePath == null) {
-          basePathFile = new File(".").getAbsoluteFile(); 
-          log.warn("FileDataSource.basePath is empty. Resolving to: {}"
-              , basePathFile.getAbsolutePath());
-        } else {
-          basePathFile = new File(basePath);
-          if (!basePathFile.isAbsolute()) {
-            basePathFile = basePathFile.getAbsoluteFile();
-            log.warn("FileDataSource.basePath is not absolute. Resolving to: {}"
-                , basePathFile.getAbsolutePath());
-          }
-        }
-
-        file = new File(basePathFile, query).getAbsoluteFile();
-      }
-
-      if (file.isFile() && file.canRead()) {
-        if (log.isDebugEnabled()) {
-          log.debug("Accessing File: {}", file.getAbsolutePath());
-        }
-        return file;
-      } else {
-        throw new FileNotFoundException("Could not find file: " + query +
-            " (resolved to: " + file.getAbsolutePath() + ")");
-      }
-    } catch (FileNotFoundException e) {
-      throw new RuntimeException(e);
-    }
-  }
-
-  /**
-   * Open a {@link java.io.Reader} for the given file name
-   *
-   * @param file a {@link java.io.File} instance
-   * @return a Reader on the given file
-   * @throws FileNotFoundException if the File does not exist
-   * @throws UnsupportedEncodingException if the encoding is unsupported
-   * @since solr 1.4
-   */
-  protected Reader openStream(File file) throws FileNotFoundException,
-          UnsupportedEncodingException {
-    if (encoding == null) {
-      return new InputStreamReader(new FileInputStream(file), StandardCharsets.UTF_8);
-    } else {
-      return new InputStreamReader(new FileInputStream(file), encoding);
-    }
-  }
-
-  @Override
-  public void close() {
-
-  }
-}
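FileDataSource was commonly paired with FileListEntityProcessor: the outer entity emits file paths, and the inner entity opens each file through this data source via the fileAbsolutePath variable. A minimal sketch with hypothetical paths:

    <dataSource type="FileDataSource" encoding="UTF-8"/>
    <document>
      <entity name="files" processor="FileListEntityProcessor"
              baseDir="/data/docs" fileName=".*\.xml$" rootEntity="false">
        <entity name="doc" processor="XPathEntityProcessor"
                url="${files.fileAbsolutePath}" forEach="/doc">
          <field column="title" xpath="/doc/title"/>
        </entity>
      </entity>
    </document>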
diff --git a/solr/contrib/dataimporthandler/src/java/org/apache/solr/handler/dataimport/FileListEntityProcessor.java b/solr/contrib/dataimporthandler/src/java/org/apache/solr/handler/dataimport/FileListEntityProcessor.java
deleted file mode 100644
index a03354f..0000000
--- a/solr/contrib/dataimporthandler/src/java/org/apache/solr/handler/dataimport/FileListEntityProcessor.java
+++ /dev/null
@@ -1,305 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.solr.handler.dataimport;
-
-import java.io.File;
-import java.io.FilenameFilter;
-import java.text.ParseException;
-import java.text.SimpleDateFormat;
-import java.util.ArrayList;
-import java.util.Date;
-import java.util.HashMap;
-import java.util.List;
-import java.util.Locale;
-import java.util.Map;
-import java.util.TimeZone;
-import java.util.regex.Matcher;
-import java.util.regex.Pattern;
-
-import org.apache.solr.util.DateMathParser;
-
-/**
- * <p>
- * An {@link EntityProcessor} instance which streams file names found in a given base
- * directory, matching patterns and returning rows containing file information.
- * </p>
- * <p>
- * It supports querying a given base directory by:
- * <ul>
- * <li>matching file names against a regular expression</li>
- * <li>excluding certain files based on a regular expression</li>
- * <li>last modification date (newer or older than a given date or time)</li>
- * <li>size (bigger or smaller than size given in bytes)</li>
- * <li>recursively iterating through sub-directories</li>
- * </ul>
- * Its output can be used along with {@link FileDataSource} to read from files in file
- * systems.
- * <p>
- * Refer to <a
- * href="http://wiki.apache.org/solr/DataImportHandler">http://wiki.apache.org/solr/DataImportHandler</a>
- * for more details.
- * </p>
- * <p>
- * <b>This API is experimental and may change in the future.</b>
- *
- * @since solr 1.3
- * @see Pattern
- */
-public class FileListEntityProcessor extends EntityProcessorBase {
-  /**
-   * A regex pattern to identify files given in data-config.xml after resolving any variables 
-   */
-  protected String fileName;
-
-  /**
-   * The baseDir given in data-config.xml after resolving any variables
-   */
-  protected String baseDir;
-
-  /**
-   * A regex pattern of excluded file names as given in data-config.xml after resolving any variables
-   */
-  protected String excludes;
-
-  /**
-   * The newerThan given in data-config as a {@link java.util.Date}
-   * <p>
-   * <b>Note: </b> This variable is resolved just-in-time in the {@link #nextRow()} method.
-   * </p>
-   */
-  protected Date newerThan;
-
-  /**
-   * The olderThan given in data-config as a {@link java.util.Date}
-   */
-  protected Date olderThan;
-
-  /**
-   * The biggerThan given in data-config as a long value
-   * <p>
-   * <b>Note: </b> This variable is resolved just-in-time in the {@link #nextRow()} method.
-   * </p>
-   */
-  protected long biggerThan = -1;
-
-  /**
-   * The smallerThan given in data-config as a long value
-   * <p>
-   * <b>Note: </b> This variable is resolved just-in-time in the {@link #nextRow()} method.
-   * </p>
-   */
-  protected long smallerThan = -1;
-
-  /**
-   * The recursive given in data-config. Default value is false.
-   */
-  protected boolean recursive = false;
-
-  private Pattern fileNamePattern, excludesPattern;
-
-  @Override
-  public void init(Context context) {
-    super.init(context);
-    fileName = context.getEntityAttribute(FILE_NAME);
-    if (fileName != null) {
-      fileName = context.replaceTokens(fileName);
-      fileNamePattern = Pattern.compile(fileName);
-    }
-    baseDir = context.getEntityAttribute(BASE_DIR);
-    if (baseDir == null)
-      throw new DataImportHandlerException(DataImportHandlerException.SEVERE,
-              "'baseDir' is a required attribute");
-    baseDir = context.replaceTokens(baseDir);
-    File dir = new File(baseDir);
-    if (!dir.isDirectory())
-      throw new DataImportHandlerException(DataImportHandlerException.SEVERE,
-              "'baseDir' value: " + baseDir + " is not a directory");
-
-    String r = context.getEntityAttribute(RECURSIVE);
-    if (r != null)
-      recursive = Boolean.parseBoolean(r);
-    excludes = context.getEntityAttribute(EXCLUDES);
-    if (excludes != null) {
-      excludes = context.replaceTokens(excludes);
-      excludesPattern = Pattern.compile(excludes);
-    }
-  }
-
-  /**
-   * Get the Date object corresponding to the given string.
-   *
-   * @param dateStr the date string. It can be a DateMath string or it may have an evaluator function
-   * @return a Date instance corresponding to the input string
-   */
-  private Date getDate(String dateStr) {
-    if (dateStr == null)
-      return null;
-
-    Matcher m = PLACE_HOLDER_PATTERN.matcher(dateStr);
-    if (m.find()) {
-      Object o = context.resolve(m.group(1));
-      if (o instanceof Date)  return (Date)o;
-      dateStr = (String) o;
-    } else  {
-      dateStr = context.replaceTokens(dateStr);
-    }
-    m = Evaluator.IN_SINGLE_QUOTES.matcher(dateStr);
-    if (m.find()) {
-      String expr = m.group(1);
-      //TODO refactor DateMathParser.parseMath a bit to have a static method for this logic.
-      if (expr.startsWith("NOW")) {
-        expr = expr.substring("NOW".length());
-      }
-      try {
-        // DWS TODO: is this TimeZone the right default for us?  Deserves explanation if so.
-        return new DateMathParser(TimeZone.getDefault()).parseMath(expr);
-      } catch (ParseException exp) {
-        throw new DataImportHandlerException(DataImportHandlerException.SEVERE,
-                "Invalid expression for date", exp);
-      }
-    }
-    try {
-      return new SimpleDateFormat("yyyy-MM-dd HH:mm:ss", Locale.ROOT).parse(dateStr);
-    } catch (ParseException exp) {
-      throw new DataImportHandlerException(DataImportHandlerException.SEVERE,
-              "Invalid expression for date", exp);
-    }
-  }
-
-  /**
-   * Get the Long value for the given string after resolving any evaluator or variable.
-   *
-   * @param sizeStr the size as a string
-   * @return the Long value corresponding to the given string
-   */
-  private Long getSize(String sizeStr)  {
-    if (sizeStr == null)
-      return null;
-
-    Matcher m = PLACE_HOLDER_PATTERN.matcher(sizeStr);
-    if (m.find()) {
-      Object o = context.resolve(m.group(1));
-      if (o instanceof Number) {
-        Number number = (Number) o;
-        return number.longValue();
-      }
-      sizeStr = (String) o;
-    } else  {
-      sizeStr = context.replaceTokens(sizeStr);
-    }
-
-    return Long.parseLong(sizeStr);
-  }
-
-  @Override
-  public Map<String, Object> nextRow() {
-    if (rowIterator != null)
-      return getNext();
-    List<Map<String, Object>> fileDetails = new ArrayList<>();
-    File dir = new File(baseDir);
-
-    String dateStr = context.getEntityAttribute(NEWER_THAN);
-    newerThan = getDate(dateStr);
-    dateStr = context.getEntityAttribute(OLDER_THAN);
-    olderThan = getDate(dateStr);
-    String biggerThanStr = context.getEntityAttribute(BIGGER_THAN);
-    if (biggerThanStr != null)
-      biggerThan = getSize(biggerThanStr);
-    String smallerThanStr = context.getEntityAttribute(SMALLER_THAN);
-    if (smallerThanStr != null)
-      smallerThan = getSize(smallerThanStr);
-
-    getFolderFiles(dir, fileDetails);
-    rowIterator = fileDetails.iterator();
-    return getNext();
-  }
-
-  private void getFolderFiles(File dir, final List<Map<String, Object>> fileDetails) {
-    // List the directory with a filter, but the returned array is never
-    // populated: accept() always returns false. Instead we rely on the
-    // fileDetails list, which is populated as a side effect of accept().
-    dir.list(new FilenameFilter() {
-      @Override
-      public boolean accept(File dir, String name) {
-        File fileObj = new File(dir, name);
-        if (fileObj.isDirectory()) {
-          if (recursive) getFolderFiles(fileObj, fileDetails);
-        } else if (fileNamePattern == null) {
-          addDetails(fileDetails, dir, name);
-        } else if (fileNamePattern.matcher(name).find()) {
-          if (excludesPattern != null && excludesPattern.matcher(name).find())
-            return false;
-          addDetails(fileDetails, dir, name);
-        }
-        return false;
-      }
-    });
-  }
-
-  private void addDetails(List<Map<String, Object>> files, File dir, String name) {
-    Map<String, Object> details = new HashMap<>();
-    File aFile = new File(dir, name);
-    if (aFile.isDirectory()) return;
-    long sz = aFile.length();
-    Date lastModified = new Date(aFile.lastModified());
-    if (biggerThan != -1 && sz <= biggerThan)
-      return;
-    if (smallerThan != -1 && sz >= smallerThan)
-      return;
-    if (olderThan != null && lastModified.after(olderThan))
-      return;
-    if (newerThan != null && lastModified.before(newerThan))
-      return;
-    details.put(DIR, dir.getAbsolutePath());
-    details.put(FILE, name);
-    details.put(ABSOLUTE_FILE, aFile.getAbsolutePath());
-    details.put(SIZE, sz);
-    details.put(LAST_MODIFIED, lastModified);
-    files.add(details);
-  }
-
-  public static final Pattern PLACE_HOLDER_PATTERN = Pattern
-          .compile("\\$\\{(.*?)\\}");
-
-  public static final String DIR = "fileDir";
-
-  public static final String FILE = "file";
-
-  public static final String ABSOLUTE_FILE = "fileAbsolutePath";
-
-  public static final String SIZE = "fileSize";
-
-  public static final String LAST_MODIFIED = "fileLastModified";
-
-  public static final String FILE_NAME = "fileName";
-
-  public static final String BASE_DIR = "baseDir";
-
-  public static final String EXCLUDES = "excludes";
-
-  public static final String NEWER_THAN = "newerThan";
-
-  public static final String OLDER_THAN = "olderThan";
-
-  public static final String BIGGER_THAN = "biggerThan";
-
-  public static final String SMALLER_THAN = "smallerThan";
-
-  public static final String RECURSIVE = "recursive";
-
-}
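Putting the attribute constants defined above together, a sketch of the filters this processor understood (all values illustrative; note that date-math expressions such as NOW-3DAYS had to be wrapped in single quotes, per getDate above):

    <entity name="files" processor="FileListEntityProcessor"
            baseDir="/data/docs" fileName=".*\.pdf$" excludes=".*\.bak$"
            recursive="true" newerThan="'NOW-3DAYS'" biggerThan="1024"/>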
diff --git a/solr/contrib/dataimporthandler/src/java/org/apache/solr/handler/dataimport/HTMLStripTransformer.java b/solr/contrib/dataimporthandler/src/java/org/apache/solr/handler/dataimport/HTMLStripTransformer.java
deleted file mode 100644
index 7ef4d93..0000000
--- a/solr/contrib/dataimporthandler/src/java/org/apache/solr/handler/dataimport/HTMLStripTransformer.java
+++ /dev/null
@@ -1,96 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.solr.handler.dataimport;
-
-import org.apache.lucene.analysis.charfilter.HTMLStripCharFilter;
-
-import java.io.IOException;
-import java.io.StringReader;
-import java.io.BufferedReader;
-import java.util.ArrayList;
-import java.util.List;
-import java.util.Map;
-
-/**
- * A {@link Transformer} implementation which strips HTML tags using {@link HTMLStripCharFilter}. This is useful
- * when you do not want the HTML markup to be indexed.
- *
- * @see HTMLStripCharFilter
- * @since solr 1.4
- */
-public class HTMLStripTransformer extends Transformer {
-
-  @Override
-  @SuppressWarnings("unchecked")
-  public Object transformRow(Map<String, Object> row, Context context) {
-    List<Map<String, String>> fields = context.getAllEntityFields();
-    for (Map<String, String> field : fields) {
-      String col = field.get(DataImporter.COLUMN);
-      String stripHTML = context.replaceTokens(field.get(STRIP_HTML));
-      if (!TRUE.equals(stripHTML))
-        continue;
-      Object tmpVal = row.get(col);
-      if (tmpVal == null)
-        continue;
-
-      if (tmpVal instanceof List) {
-        List<String> inputs = (List<String>) tmpVal;
-        @SuppressWarnings({"rawtypes"})
-        List results = new ArrayList();
-        for (String input : inputs) {
-          if (input == null)
-            continue;
-          Object o = stripHTML(input, col);
-          if (o != null)
-            results.add(o);
-        }
-        row.put(col, results);
-      } else {
-        String value = tmpVal.toString();
-        Object o = stripHTML(value, col);
-        if (o != null)
-          row.put(col, o);
-      }
-    }
-    return row;
-  }
-
-  private Object stripHTML(String value, String column) {
-    StringBuilder out = new StringBuilder();
-    StringReader strReader = new StringReader(value);
-    try {
-      HTMLStripCharFilter html = new HTMLStripCharFilter(strReader.markSupported() ? strReader : new BufferedReader(strReader));
-      char[] cbuf = new char[1024 * 10];
-      while (true) {
-        int count = html.read(cbuf);
-        if (count == -1)
-          break; // end of stream mark is -1
-        if (count > 0)
-          out.append(cbuf, 0, count);
-      }
-      html.close();
-    } catch (IOException e) {
-      throw new DataImportHandlerException(DataImportHandlerException.SEVERE,
-              "Failed stripping HTML for column: " + column, e);
-    }
-    return out.toString();
-  }
-
-  public static final String STRIP_HTML = "stripHTML";
-
-  public static final String TRUE = "true";
-}
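Usage followed directly from the STRIP_HTML constant above: the transformer is named on the entity, and stripping is enabled per field. A minimal sketch with a hypothetical query:

    <entity name="e" transformer="HTMLStripTransformer"
            query="select id, description from docs">
      <field column="description" stripHTML="true"/>
    </entity>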
diff --git a/solr/contrib/dataimporthandler/src/java/org/apache/solr/handler/dataimport/JdbcDataSource.java b/solr/contrib/dataimporthandler/src/java/org/apache/solr/handler/dataimport/JdbcDataSource.java
deleted file mode 100644
index 87f38f4..0000000
--- a/solr/contrib/dataimporthandler/src/java/org/apache/solr/handler/dataimport/JdbcDataSource.java
+++ /dev/null
@@ -1,583 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.solr.handler.dataimport;
-
-import static java.nio.charset.StandardCharsets.UTF_8;
-import static org.apache.solr.handler.dataimport.DataImportHandlerException.wrapAndThrow;
-import static org.apache.solr.handler.dataimport.DataImportHandlerException.SEVERE;
-
-import org.apache.solr.common.SolrException;
-import org.apache.solr.util.CryptoKeys;
-import org.slf4j.Logger;
-import org.slf4j.LoggerFactory;
-
-import javax.naming.InitialContext;
-import javax.naming.NamingException;
-
-import java.io.FileInputStream;
-import java.io.IOException;
-import java.io.InputStreamReader;
-import java.io.Reader;
-import java.lang.invoke.MethodHandles;
-import java.math.BigDecimal;
-import java.math.BigInteger;
-import java.sql.*;
-import java.util.*;
-import java.util.concurrent.Callable;
-import java.util.concurrent.TimeUnit;
-
-/**
- * <p> A DataSource implementation which can fetch data using JDBC. </p> <p> Refer to <a
- * href="http://wiki.apache.org/solr/DataImportHandler">http://wiki.apache.org/solr/DataImportHandler</a> for more
- * details. </p>
- * <p>
- * <b>This API is experimental and may change in the future.</b>
- *
- * @since solr 1.3
- */
-public class JdbcDataSource extends
-        DataSource<Iterator<Map<String, Object>>> {
-  private static final Logger log = LoggerFactory.getLogger(MethodHandles.lookup().lookupClass());
-
-  protected Callable<Connection> factory;
-
-  private long connLastUsed = 0;
-
-  private Connection conn;
-  
-  private ResultSetIterator resultSetIterator;  
-
-  private Map<String, Integer> fieldNameVsType = new HashMap<>();
-
-  private boolean convertType = false;
-
-  private int batchSize = FETCH_SIZE;
-
-  private int maxRows = 0;
-
-  @Override
-  public void init(Context context, Properties initProps) {
-    resolveVariables(context, initProps);
-    initProps = decryptPwd(context, initProps);
-    Object o = initProps.get(CONVERT_TYPE);
-    if (o != null)
-      convertType = Boolean.parseBoolean(o.toString());
-
-    factory = createConnectionFactory(context, initProps);
-
-    String bsz = initProps.getProperty("batchSize");
-    if (bsz != null) {
-      bsz = context.replaceTokens(bsz);
-      try {
-        batchSize = Integer.parseInt(bsz);
-        if (batchSize == -1)
-          batchSize = Integer.MIN_VALUE;
-      } catch (NumberFormatException e) {
-        log.warn("Invalid batch size: {}", bsz);
-      }
-    }
-
-    for (Map<String, String> map : context.getAllEntityFields()) {
-      String n = map.get(DataImporter.COLUMN);
-      String t = map.get(DataImporter.TYPE);
-      if ("sint".equals(t) || "integer".equals(t))
-        fieldNameVsType.put(n, Types.INTEGER);
-      else if ("slong".equals(t) || "long".equals(t))
-        fieldNameVsType.put(n, Types.BIGINT);
-      else if ("float".equals(t) || "sfloat".equals(t))
-        fieldNameVsType.put(n, Types.FLOAT);
-      else if ("double".equals(t) || "sdouble".equals(t))
-        fieldNameVsType.put(n, Types.DOUBLE);
-      else if ("date".equals(t))
-        fieldNameVsType.put(n, Types.DATE);
-      else if ("boolean".equals(t))
-        fieldNameVsType.put(n, Types.BOOLEAN);
-      else if ("binary".equals(t))
-        fieldNameVsType.put(n, Types.BLOB);
-      else
-        fieldNameVsType.put(n, Types.VARCHAR);
-    }
-  }
-
-  private Properties decryptPwd(Context context, Properties initProps) {
-    String encryptionKey = initProps.getProperty("encryptKeyFile");
-    if (initProps.getProperty("password") != null && encryptionKey != null) {
-      // the password is encrypted; use the key in this file to decrypt it
-      try {
-        try (Reader fr = new InputStreamReader(new FileInputStream(encryptionKey), UTF_8)) {
-          char[] chars = new char[100]; // max 100 char key
-          int len = fr.read(chars);
-          if (len < 6)
-            throw new DataImportHandlerException(SEVERE, "There should be a password of length 6 atleast " + encryptionKey);
-          Properties props = new Properties();
-          props.putAll(initProps);
-          String password = null;
-          try {
-            password = CryptoKeys.decodeAES(initProps.getProperty("password"), new String(chars, 0, len)).trim();
-          } catch (SolrException se) {
-            throw new DataImportHandlerException(SEVERE, "Error decoding password", se.getCause());
-          }
-          props.put("password", password);
-          initProps = props;
-        }
-      } catch (IOException e) {
-        throw new DataImportHandlerException(SEVERE, "Could not load encryptKeyFile  " + encryptionKey);
-      }
-    }
-    return initProps;
-  }
-
-  protected Callable<Connection> createConnectionFactory(final Context context,
-                                       final Properties initProps) {
-//    final VariableResolver resolver = context.getVariableResolver();
-    final String jndiName = initProps.getProperty(JNDI_NAME);
-    final String url = initProps.getProperty(URL);
-    final String driver = initProps.getProperty(DRIVER);
-
-    if (url == null && jndiName == null)
-      throw new DataImportHandlerException(SEVERE,
-              "JDBC URL or JNDI name has to be specified");
-
-    if (driver != null) {
-      try {
-        DocBuilder.loadClass(driver, context.getSolrCore());
-      } catch (ClassNotFoundException e) {
-        wrapAndThrow(SEVERE, e, "Could not load driver: " + driver);
-      }
-    } else {
-      if(jndiName == null){
-        throw new DataImportHandlerException(SEVERE, "One of driver or jndiName must be specified in the data source");
-      }
-    }
-
-    String s = initProps.getProperty("maxRows");
-    if (s != null) {
-      maxRows = Integer.parseInt(s);
-    }
-
-    return factory = new Callable<Connection>() {
-      @Override
-      public Connection call() throws Exception {
-        if (log.isInfoEnabled()) {
-          log.info("Creating a connection for entity {} with URL: {}"
-              , context.getEntityAttribute(DataImporter.NAME), url);
-        }
-        long start = System.nanoTime();
-        Connection c = null;
-
-        if (jndiName != null) {
-          c = getFromJndi(initProps, jndiName);
-        } else if (url != null) {
-          try {
-            c = DriverManager.getConnection(url, initProps);
-          } catch (SQLException e) {
-            // DriverManager does not allow you to use a driver which is not loaded through
-            // the class loader of the class which is trying to make the connection.
-            // This is a workaround for cases where the user puts the driver jar in the
-            // solr.home/lib or solr.home/core/lib directories.
-            @SuppressWarnings({"unchecked"})
-            Driver d = (Driver) DocBuilder.loadClass(driver, context.getSolrCore()).getConstructor().newInstance();
-            c = d.connect(url, initProps);
-          }
-        }
-        if (c != null) {
-          try {
-            initializeConnection(c, initProps);
-          } catch (SQLException e) {
-            try {
-              c.close();
-            } catch (SQLException e2) {
-              log.warn("Exception closing connection during cleanup", e2);
-            }
-
-            throw new DataImportHandlerException(SEVERE, "Exception initializing SQL connection", e);
-          }
-        }
-        log.info("Time taken for getConnection(): {}"
-            , TimeUnit.MILLISECONDS.convert(System.nanoTime() - start, TimeUnit.NANOSECONDS));
-        return c;
-      }
-
-      private void initializeConnection(Connection c, final Properties initProps)
-          throws SQLException {
-        if (Boolean.parseBoolean(initProps.getProperty("readOnly"))) {
-          c.setReadOnly(true);
-          // Add other sane defaults
-          c.setAutoCommit(true);
-          c.setTransactionIsolation(Connection.TRANSACTION_READ_UNCOMMITTED);
-          c.setHoldability(ResultSet.CLOSE_CURSORS_AT_COMMIT);
-        }
-        if (!Boolean.parseBoolean(initProps.getProperty("autoCommit"))) {
-          c.setAutoCommit(false);
-        }
-        String transactionIsolation = initProps.getProperty("transactionIsolation");
-        if ("TRANSACTION_READ_UNCOMMITTED".equals(transactionIsolation)) {
-          c.setTransactionIsolation(Connection.TRANSACTION_READ_UNCOMMITTED);
-        } else if ("TRANSACTION_READ_COMMITTED".equals(transactionIsolation)) {
-          c.setTransactionIsolation(Connection.TRANSACTION_READ_COMMITTED);
-        } else if ("TRANSACTION_REPEATABLE_READ".equals(transactionIsolation)) {
-          c.setTransactionIsolation(Connection.TRANSACTION_REPEATABLE_READ);
-        } else if ("TRANSACTION_SERIALIZABLE".equals(transactionIsolation)) {
-          c.setTransactionIsolation(Connection.TRANSACTION_SERIALIZABLE);
-        } else if ("TRANSACTION_NONE".equals(transactionIsolation)) {
-          c.setTransactionIsolation(Connection.TRANSACTION_NONE);
-        }
-        String holdability = initProps.getProperty("holdability");
-        if ("CLOSE_CURSORS_AT_COMMIT".equals(holdability)) {
-          c.setHoldability(ResultSet.CLOSE_CURSORS_AT_COMMIT);
-        } else if ("HOLD_CURSORS_OVER_COMMIT".equals(holdability)) {
-          c.setHoldability(ResultSet.HOLD_CURSORS_OVER_COMMIT);
-        }
-      }
-
-      private Connection getFromJndi(final Properties initProps, final String jndiName) throws NamingException,
-          SQLException {
-
-        Connection c = null;
-        InitialContext ctx =  new InitialContext();
-        Object jndival =  ctx.lookup(jndiName);
-        if (jndival instanceof javax.sql.DataSource) {
-          javax.sql.DataSource dataSource = (javax.sql.DataSource) jndival;
-          String user = (String) initProps.get("user");
-          String pass = (String) initProps.get("password");
-          if(user == null || user.trim().equals("")){
-            c = dataSource.getConnection();
-          } else {
-            c = dataSource.getConnection(user, pass);
-          }
-        } else {
-          throw new DataImportHandlerException(SEVERE,
-                  "The JNDI name '" + jndiName + "' is not a valid javax.sql.DataSource");
-        }
-        return c;
-      }
-    };
-  }
-
-  private void resolveVariables(Context ctx, Properties initProps) {
-    for (Map.Entry<Object, Object> entry : initProps.entrySet()) {
-      if (entry.getValue() != null) {
-        entry.setValue(ctx.replaceTokens((String) entry.getValue()));
-      }
-    }
-  }
-
-  @Override
-  public Iterator<Map<String, Object>> getData(String query) {
-    if (resultSetIterator != null) {
-      resultSetIterator.close();
-      resultSetIterator = null;
-    }
-    resultSetIterator = createResultSetIterator(query);
-    return resultSetIterator.getIterator();
-  }
-
-  protected ResultSetIterator createResultSetIterator(String query) {
-    return new ResultSetIterator(query);
-  }
-
-  private void logError(String msg, Exception e) {
-    log.warn(msg, e);
-  }
-
-  protected List<String> readFieldNames(ResultSetMetaData metaData)
-          throws SQLException {
-    List<String> colNames = new ArrayList<>();
-    int count = metaData.getColumnCount();
-    for (int i = 0; i < count; i++) {
-      colNames.add(metaData.getColumnLabel(i + 1));
-    }
-    return colNames;
-  }
-
-  protected class ResultSetIterator {
-    private ResultSet resultSet;
-
-    private Statement stmt = null;
-
-    private List<String> colNames; 
-   
-    private Iterator<Map<String, Object>> rSetIterator;
-
-    public ResultSetIterator(String query) {
-
-      try {
-        Connection c = getConnection();
-        stmt = createStatement(c, batchSize, maxRows);
-        log.debug("Executing SQL: {}", query);
-        long start = System.nanoTime();
-        resultSet = executeStatement(stmt, query);
-        log.trace("Time taken for sql : {}"
-                , TimeUnit.MILLISECONDS.convert(System.nanoTime() - start, TimeUnit.NANOSECONDS));
-        setColNames(resultSet);
-      } catch (Exception e) {
-        close();
-        wrapAndThrow(SEVERE, e, "Unable to execute query: " + query);
-        return;
-      }
-      if (resultSet == null) {
-        close();
-        rSetIterator = new ArrayList<Map<String, Object>>().iterator();
-        return;
-      }
-
-      rSetIterator = createIterator(convertType, fieldNameVsType);
-    }
-
-    
-    protected Statement createStatement(final Connection c, final int batchSize, final int maxRows)
-        throws SQLException {
-      Statement statement = c.createStatement(ResultSet.TYPE_FORWARD_ONLY, ResultSet.CONCUR_READ_ONLY);
-      statement.setFetchSize(batchSize);
-      statement.setMaxRows(maxRows);
-      return statement;
-    }
-
-    protected ResultSet executeStatement(Statement statement, String query) throws SQLException {
-      boolean resultSetReturned = statement.execute(query);
-      return getNextResultSet(resultSetReturned, statement);
-    }
-
-    protected ResultSet getNextResultSet(final boolean initialResultSetAvailable, final Statement statement) throws SQLException {
-      boolean resultSetAvailable = initialResultSetAvailable;
-      while (!resultSetAvailable && statement.getUpdateCount() != -1) {
-        resultSetAvailable = statement.getMoreResults();
-      }
-      if (resultSetAvailable) {
-        return statement.getResultSet();
-      }
-      return null;
-    }
-    
-    protected void setColNames(final ResultSet resultSet) throws SQLException {
-      if (resultSet != null) {
-        colNames = readFieldNames(resultSet.getMetaData());
-      } else {
-        colNames = Collections.emptyList();
-      }
-    }
-
-    protected Iterator<Map<String,Object>> createIterator(final boolean convertType,
-        final Map<String,Integer> fieldNameVsType) {
-      return new Iterator<Map<String,Object>>() {
-        @Override
-        public boolean hasNext() {
-          return hasnext();
-        }
-
-        @Override
-        public Map<String,Object> next() {
-          return getARow(convertType, fieldNameVsType);
-        }
-
-        @Override
-        public void remove() {/* do nothing */
-        }
-      };
-    }
-    
- 
-
-    protected Map<String,Object> getARow(boolean convertType, Map<String,Integer> fieldNameVsType) {
-      if (getResultSet() == null)
-        return null;
-      Map<String, Object> result = new HashMap<>();
-      for (String colName : getColNames()) {
-        try {
-          if (!convertType) {
-            // Use underlying database's type information except for BigDecimal and BigInteger
-            // which cannot be serialized by JavaBin/XML. See SOLR-6165
-            Object value = getResultSet().getObject(colName);
-            if (value instanceof BigDecimal || value instanceof BigInteger) {
-              result.put(colName, value.toString());
-            } else {
-              result.put(colName, value);
-            }
-            continue;
-          }
-
-          Integer type = fieldNameVsType.get(colName);
-          if (type == null)
-            type = Types.VARCHAR;
-          switch (type) {
-            case Types.INTEGER:
-              result.put(colName, getResultSet().getInt(colName));
-              break;
-            case Types.FLOAT:
-              result.put(colName, getResultSet().getFloat(colName));
-              break;
-            case Types.BIGINT:
-              result.put(colName, getResultSet().getLong(colName));
-              break;
-            case Types.DOUBLE:
-              result.put(colName, getResultSet().getDouble(colName));
-              break;
-            case Types.DATE:
-              result.put(colName, getResultSet().getTimestamp(colName));
-              break;
-            case Types.BOOLEAN:
-              result.put(colName, getResultSet().getBoolean(colName));
-              break;
-            case Types.BLOB:
-              result.put(colName, getResultSet().getBytes(colName));
-              break;
-            default:
-              result.put(colName, getResultSet().getString(colName));
-              break;
-          }
-        } catch (SQLException e) {
-          logError("Error reading data ", e);
-          wrapAndThrow(SEVERE, e, "Error reading data from database");
-        }
-      }
-      return result;
-    }
-
-    protected boolean hasnext() {
-      if (getResultSet() == null) {
-        close();
-        return false;
-      }
-      try {
-        if (getResultSet().next()) {
-          return true;
-        } else {
-          closeResultSet();
-          setResultSet(getNextResultSet(getStatement().getMoreResults(), getStatement()));
-          setColNames(getResultSet());
-          return hasnext();
-        }
-      } catch (SQLException e) {
-        close();
-        wrapAndThrow(SEVERE,e);
-        return false;
-      }
-    }
-
-    protected void close() {
-      closeResultSet();
-      try {
-        if (getStatement() != null)
-          getStatement().close();
-      } catch (Exception e) {
-        logError("Exception while closing statement", e);
-      } finally {
-        setStatement(null);
-      }
-    }
-
-    protected void closeResultSet() {
-      try {
-        if (getResultSet() != null) {
-          getResultSet().close();
-        }
-      } catch (Exception e) {
-        logError("Exception while closing result set", e);
-      } finally {
-        setResultSet(null);
-      }
-    }
-
-    protected final Iterator<Map<String,Object>> getIterator() {
-      return rSetIterator;
-    }
-    
-    
-    protected final Statement getStatement() {
-      return stmt;
-    }
-    
-    protected final void setStatement(Statement stmt) {
-      this.stmt = stmt;
-    }
-    
-    protected final ResultSet getResultSet() {
-      return resultSet;
-    }
-    
-    protected final void setResultSet(ResultSet resultSet) {
-      this.resultSet = resultSet;
-    }
-    
-    protected final List<String> getColNames() {
-      return colNames;
-    }
-
-    protected final void setColNames(List<String> colNames) {
-      this.colNames = colNames;
-    }
-    
-  }
-
-  protected Connection getConnection() throws Exception {
-    long currTime = System.nanoTime();
-    if (currTime - connLastUsed > CONN_TIME_OUT) {
-      synchronized (this) {
-        Connection tmpConn = factory.call();
-        closeConnection();
-        connLastUsed = System.nanoTime();
-        return conn = tmpConn;
-      }
-
-    } else {
-      connLastUsed = currTime;
-      return conn;
-    }
-  }
-
-  private boolean isClosed = false;
-
-  @Override
-  public void close() {
-    if (resultSetIterator != null) {
-      resultSetIterator.close();
-    }
-    try {
-      closeConnection();
-    } finally {
-      isClosed = true;
-    }
-  }
-
-  private void closeConnection()  {
-    try {
-      if (conn != null) {
-        try {
-          //SOLR-2045
-          conn.commit();
-        } catch(Exception ex) {
-          //ignore.
-        }
-        conn.close();
-      }
-    } catch (Exception e) {
-      log.error("Ignoring Error when closing connection", e);
-    }
-  }
-
-  private static final long CONN_TIME_OUT = TimeUnit.NANOSECONDS.convert(10, TimeUnit.SECONDS);
-
-  private static final int FETCH_SIZE = 500;
-
-  public static final String URL = "url";
-
-  public static final String JNDI_NAME = "jndiName";
-
-  public static final String DRIVER = "driver";
-
-  public static final String CONVERT_TYPE = "convertType";
-}
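The init properties read above (batchSize, convertType, maxRows, readOnly, autoCommit, transactionIsolation, holdability, encryptKeyFile) all arrived as attributes of the dataSource element. A sketch with illustrative values:

    <dataSource type="JdbcDataSource" driver="com.mysql.jdbc.Driver"
                url="jdbc:mysql://localhost/dbname" user="db_user" password="secret"
                batchSize="500" readOnly="true"
                transactionIsolation="TRANSACTION_READ_COMMITTED"/>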
diff --git a/solr/contrib/dataimporthandler/src/java/org/apache/solr/handler/dataimport/LineEntityProcessor.java b/solr/contrib/dataimporthandler/src/java/org/apache/solr/handler/dataimport/LineEntityProcessor.java
deleted file mode 100644
index 0940cbd..0000000
--- a/solr/contrib/dataimporthandler/src/java/org/apache/solr/handler/dataimport/LineEntityProcessor.java
+++ /dev/null
@@ -1,164 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.solr.handler.dataimport;
-
-import java.io.*;
-import java.util.*;
-import java.util.regex.Pattern;
-
-import org.apache.commons.io.IOUtils;
-
-
-/**
- * <p>
- * An {@link EntityProcessor} instance which can stream lines of text read from a 
- * datasource. Options allow lines to be explicitly skipped or included in the index.
- * </p>
- * <p>
- * Attribute summary 
- * <ul>
- * <li>url is the required location of the input file. If this value is
- *     relative, it is assumed to be relative to baseLoc.</li>
- * <li>acceptLineRegex is an optional attribute that if present discards any 
- *     line which does not match the regExp.</li>
- * <li>skipLineRegex is an optional attribute that is applied after any 
- *     acceptLineRegex and discards any line which matches this regExp.</li>
- * </ul>
- * <p>
- * Although envisioned for reading lines from a file or url, LineEntityProcessor may also be useful
- * for dealing with change lists, where each line contains filenames which can be used by subsequent entities
- * to parse content from those files.
- * <p>
- * Refer to <a
- * href="http://wiki.apache.org/solr/DataImportHandler">http://wiki.apache.org/solr/DataImportHandler</a>
- * for more details.
- * </p>
- * <p>
- * <b>This API is experimental and may change in the future.</b>
- *
- * @since solr 1.4
- * @see Pattern
- */
-public class LineEntityProcessor extends EntityProcessorBase {
-  private Pattern acceptLineRegex, skipLineRegex;
-  private String url;
-  private BufferedReader reader;
-
-  /**
-   * Parses each of the entity attributes.
-   */
-  @Override
-  public void init(Context context) {
-    super.init(context);
-    String s;
-
-    // init a regex to select the lines we want to index
-    s = context.getResolvedEntityAttribute(ACCEPT_LINE_REGEX);
-    if (s != null) {
-      acceptLineRegex = Pattern.compile(s);
-    }
-
-    // init a regex to select the lines to be skipped
-    s = context.getResolvedEntityAttribute(SKIP_LINE_REGEX);
-    if (s != null) {
-      skipLineRegex = Pattern.compile(s);
-    }
-
-    // the url attribute is required.
-    url = context.getResolvedEntityAttribute(URL);
-    if (url == null) throw
-      new DataImportHandlerException(DataImportHandlerException.SEVERE,
-           "'"+ URL +"' is a required attribute");
-  }
-
-
-  /**
-   * Reads lines from the url until it finds a line that matches the
-   * optional acceptLineRegex and does not match the optional skipLineRegex.
-   *
-   * @return A row containing a minimum of one field "rawLine", or null to signal
-   * end of file. The rawLine is the line exactly as returned by readLine()
-   * from the url. However, transformers can be used to create as
-   * many other fields as required.
-   */
-  @Override
-  public Map<String, Object> nextRow() {
-    if (reader == null) {
-      reader = new BufferedReader((Reader) context.getDataSource().getData(url));
-    }
-
-    String line;
-    
-    while ( true ) { 
-      // read a line from the input file
-      try {
-        line = reader.readLine();
-      }
-      catch (IOException exp) {
-        throw new DataImportHandlerException(DataImportHandlerException.SEVERE,
-             "Problem reading from input", exp);
-      }
-  
-      // end of input
-      if (line == null) {
-        closeResources();
-        return null;
-      }
-
-      // First scan whole line to see if we want it
-      if (acceptLineRegex != null && ! acceptLineRegex.matcher(line).find()) continue;
-      if (skipLineRegex != null &&   skipLineRegex.matcher(line).find()) continue;
-      // Construct the 'row' of fields
-      Map<String, Object> row = new HashMap<>();
-      row.put("rawLine", line);
-      return row;
-    }
-  }
-  
-  public void closeResources() {
-    if (reader != null) {
-      IOUtils.closeQuietly(reader);
-    }
-    reader = null;
-  }
-
-  @Override
-  public void destroy() {
-    closeResources();
-    super.destroy();
-  }
-
-  /**
-   * Holds the name of entity attribute that will be parsed to obtain
-   * the filename containing the changelist.
-   */
-  public static final String URL = "url";
-
-  /**
-   * Holds the name of entity attribute that will be parsed to obtain
-   * the pattern to be used when checking to see if a line should
-   * be returned.
-   */
-  public static final String ACCEPT_LINE_REGEX = "acceptLineRegex";
-
-  /**
-   * Holds the name of entity attribute that will be parsed to obtain
-   * the pattern to be used when checking to see if a line should
-   * be ignored.
-   */
-  public static final String SKIP_LINE_REGEX = "skipLineRegex";
-}
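A sketch of the change-list use case described in the Javadoc above: each accepted line lands in the rawLine field, which can then be mapped or transformed further (file path, regexes and target field name are hypothetical):

    <entity name="lines" processor="LineEntityProcessor"
            url="/data/changelist.txt"
            acceptLineRegex=".*\.pdf$" skipLineRegex="^#">
      <field column="rawLine" name="path_s"/>
    </entity>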
diff --git a/solr/contrib/dataimporthandler/src/java/org/apache/solr/handler/dataimport/LogTransformer.java b/solr/contrib/dataimporthandler/src/java/org/apache/solr/handler/dataimport/LogTransformer.java
deleted file mode 100644
index 66c525e..0000000
--- a/solr/contrib/dataimporthandler/src/java/org/apache/solr/handler/dataimport/LogTransformer.java
+++ /dev/null
@@ -1,67 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.solr.handler.dataimport;
-
-import org.slf4j.Logger;
-import org.slf4j.LoggerFactory;
-
-import java.lang.invoke.MethodHandles;
-import java.util.Map;
-
-/**
- * A {@link Transformer} implementation which logs messages in a given template format.
- * <p>
- * Refer to <a href="http://wiki.apache.org/solr/DataImportHandler">http://wiki.apache.org/solr/DataImportHandler</a>
- * for more details.
- * <p>
- * <b>This API is experimental and may change in the future.</b>
- *
- * @since solr 1.4
- */
-public class LogTransformer extends Transformer {
-  private static final Logger log = LoggerFactory.getLogger(MethodHandles.lookup().lookupClass());
-
-  @Override
-  public Object transformRow(Map<String, Object> row, Context ctx) {
-    String expr = ctx.getEntityAttribute(LOG_TEMPLATE);
-    String level = ctx.replaceTokens(ctx.getEntityAttribute(LOG_LEVEL));
-
-    if (expr == null || level == null) return row;
-
-    if ("info".equals(level)) {
-      if (log.isInfoEnabled())
-        log.info(ctx.replaceTokens(expr));
-    } else if ("trace".equals(level)) {
-      if (log.isTraceEnabled())
-        log.trace(ctx.replaceTokens(expr));
-    } else if ("warn".equals(level)) {
-      if (log.isWarnEnabled())
-        log.warn(ctx.replaceTokens(expr));
-    } else if ("error".equals(level)) {
-      if (log.isErrorEnabled())
-        log.error(ctx.replaceTokens(expr));
-    } else if ("debug".equals(level)) {
-      if (log.isDebugEnabled())
-        log.debug(ctx.replaceTokens(expr));
-    }
-
-    return row;
-  }
-
-  public static final String LOG_TEMPLATE = "logTemplate";
-  public static final String LOG_LEVEL = "logLevel";
-}
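Grounded in the logTemplate/logLevel constants above, a one-line debugging aid; the template is resolved with replaceTokens, so ${...} variables work (entity name and template text illustrative):

    <entity name="e" transformer="LogTransformer" query="select * from e"
            logTemplate="Processing id ${e.id}" logLevel="debug"/>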
diff --git a/solr/contrib/dataimporthandler/src/java/org/apache/solr/handler/dataimport/MockDataSource.java b/solr/contrib/dataimporthandler/src/java/org/apache/solr/handler/dataimport/MockDataSource.java
deleted file mode 100644
index 8989ea2..0000000
--- a/solr/contrib/dataimporthandler/src/java/org/apache/solr/handler/dataimport/MockDataSource.java
+++ /dev/null
@@ -1,61 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.solr.handler.dataimport;
-
-import java.util.HashMap;
-import java.util.Iterator;
-import java.util.Map;
-import java.util.Properties;
-
-/**
- * <p>
- * A mock DataSource implementation which can be used for testing.
- * </p>
- * <p>
- * <b>This API is experimental and may change in the future.</b>
- *
- * @since solr 1.3
- */
-public class MockDataSource extends
-        DataSource<Iterator<Map<String, Object>>> {
-
-  private static Map<String, Iterator<Map<String, Object>>> cache = new HashMap<>();
-
-  public static void setIterator(String query,
-                                 Iterator<Map<String, Object>> iter) {
-    cache.put(query, iter);
-  }
-
-  public static void clearCache() {
-    cache.clear();
-  }
-
-  @Override
-  public void init(Context context, Properties initProps) {
-  }
-
-  @Override
-  public Iterator<Map<String, Object>> getData(String query) {
-    return cache.get(query);
-  }
-
-  @Override
-  public void close() {
-    cache.clear();
-
-  }
-}
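A hedged sketch of how this mock was typically used from a test (hypothetical
query string and demo class; assumes the dataimporthandler classes on the
classpath):

    import java.util.Collections;
    import java.util.HashMap;
    import java.util.Iterator;
    import java.util.Map;

    public class MockDataSourceDemo {
      public static void main(String[] args) {
        // Seed the static cache with one row keyed by the exact query string.
        Map<String, Object> row = new HashMap<>();
        row.put("id", "1");
        MockDataSource.setIterator("select * from x",
            Collections.singletonList(row).iterator());

        // An entity under test receives the same iterator back for that query.
        MockDataSource ds = new MockDataSource();
        Iterator<Map<String, Object>> it = ds.getData("select * from x");
        System.out.println(it.next()); // {id=1}

        // Clear the static cache so state cannot leak across tests.
        MockDataSource.clearCache();
      }
    }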
diff --git a/solr/contrib/dataimporthandler/src/java/org/apache/solr/handler/dataimport/NumberFormatTransformer.java b/solr/contrib/dataimporthandler/src/java/org/apache/solr/handler/dataimport/NumberFormatTransformer.java
deleted file mode 100644
index f693aec..0000000
--- a/solr/contrib/dataimporthandler/src/java/org/apache/solr/handler/dataimport/NumberFormatTransformer.java
+++ /dev/null
@@ -1,134 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.solr.handler.dataimport;
-import java.text.NumberFormat;
-import java.text.ParseException;
-import java.text.ParsePosition;
-import java.util.ArrayList;
-import java.util.IllformedLocaleException;
-import java.util.List;
-import java.util.Locale;
-import java.util.Map;
-
-/**
- * <p>
- * A {@link Transformer} implementation which can extract numbers out of strings. It uses
- * the {@link NumberFormat} class to parse strings and supports the
- * Number, Integer, Currency and Percent styles offered by
- * {@link NumberFormat}, with configurable locales.
- * </p>
- * <p>
- * Refer to <a
- * href="http://wiki.apache.org/solr/DataImportHandler">http://wiki.apache.org/solr/DataImportHandler</a>
- * for more details.
- * </p>
- * <p>
- * <b>This API is experimental and may change in the future.</b>
- *
- * @since solr 1.3
- */
-public class NumberFormatTransformer extends Transformer {
-
-  @Override
-  @SuppressWarnings("unchecked")
-  public Object transformRow(Map<String, Object> row, Context context) {
-    for (Map<String, String> fld : context.getAllEntityFields()) {
-      String style = context.replaceTokens(fld.get(FORMAT_STYLE));
-      if (style != null) {
-        String column = fld.get(DataImporter.COLUMN);
-        String srcCol = fld.get(RegexTransformer.SRC_COL_NAME);
-        String localeStr = context.replaceTokens(fld.get(LOCALE));
-        if (srcCol == null)
-          srcCol = column;
-        Locale locale = Locale.ROOT;
-        if (localeStr != null) {
-          try {
-            locale = new Locale.Builder().setLanguageTag(localeStr).build();
-          } catch (IllformedLocaleException e) {
-            throw new DataImportHandlerException(DataImportHandlerException.SEVERE,
-                "Invalid Locale '" + localeStr + "' specified for field: " + fld, e);
-          }
-        }
-
-        Object val = row.get(srcCol);
-        String styleSmall = style.toLowerCase(Locale.ROOT);
-
-        if (val instanceof List) {
-          List<String> inputs = (List) val;
-          @SuppressWarnings({"rawtypes"})
-          List results = new ArrayList();
-          for (String input : inputs) {
-            try {
-              results.add(process(input, styleSmall, locale));
-            } catch (ParseException e) {
-              throw new DataImportHandlerException(
-                      DataImportHandlerException.SEVERE,
-                      "Failed to apply NumberFormat on column: " + column, e);
-            }
-          }
-          row.put(column, results);
-        } else {
-          if (val == null || val.toString().trim().equals(""))
-            continue;
-          try {
-            row.put(column, process(val.toString(), styleSmall, locale));
-          } catch (ParseException e) {
-            throw new DataImportHandlerException(
-                    DataImportHandlerException.SEVERE,
-                    "Failed to apply NumberFormat on column: " + column, e);
-          }
-        }
-      }
-    }
-    return row;
-  }
-
-  private Number process(String val, String style, Locale locale) throws ParseException {
-    if (INTEGER.equals(style)) {
-      return parseNumber(val, NumberFormat.getIntegerInstance(locale));
-    } else if (NUMBER.equals(style)) {
-      return parseNumber(val, NumberFormat.getNumberInstance(locale));
-    } else if (CURRENCY.equals(style)) {
-      return parseNumber(val, NumberFormat.getCurrencyInstance(locale));
-    } else if (PERCENT.equals(style)) {
-      return parseNumber(val, NumberFormat.getPercentInstance(locale));
-    }
-
-    return null;
-  }
-
-  private Number parseNumber(String val, NumberFormat numFormat) throws ParseException {
-    ParsePosition parsePos = new ParsePosition(0);
-    Number num = numFormat.parse(val, parsePos);
-    if (parsePos.getIndex() != val.length()) {
-      throw new ParseException("illegal number format", parsePos.getIndex());
-    }
-    return num;
-  }
-
-  public static final String FORMAT_STYLE = "formatStyle";
-
-  public static final String LOCALE = "locale";
-
-  public static final String NUMBER = "number";
-
-  public static final String PERCENT = "percent";
-
-  public static final String INTEGER = "integer";
-
-  public static final String CURRENCY = "currency";
-}
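The parseNumber helper above is deliberately strict: NumberFormat.parse(String)
would silently stop at the first unparseable character, so the transformer
parses with a ParsePosition and rejects input that was only partially consumed.
A standalone demo (hypothetical class name):

    import java.text.NumberFormat;
    import java.text.ParseException;
    import java.text.ParsePosition;
    import java.util.Locale;

    public class StrictParseDemo {
      // Reject the value unless the whole string was consumed by the parse.
      static Number strictParse(String val, NumberFormat fmt) throws ParseException {
        ParsePosition pos = new ParsePosition(0);
        Number n = fmt.parse(val, pos);
        if (pos.getIndex() != val.length()) {
          throw new ParseException("illegal number format", pos.getIndex());
        }
        return n;
      }

      public static void main(String[] args) throws ParseException {
        NumberFormat fmt = NumberFormat.getNumberInstance(Locale.US);
        System.out.println(strictParse("1,000.5", fmt)); // 1000.5
        try {
          strictParse("1,000.5x", fmt); // plain parse() would return 1000.5
        } catch (ParseException expected) {
          System.out.println("rejected at index " + expected.getErrorOffset());
        }
      }
    }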
diff --git a/solr/contrib/dataimporthandler/src/java/org/apache/solr/handler/dataimport/PlainTextEntityProcessor.java b/solr/contrib/dataimporthandler/src/java/org/apache/solr/handler/dataimport/PlainTextEntityProcessor.java
deleted file mode 100644
index 4b8771a..0000000
--- a/solr/contrib/dataimporthandler/src/java/org/apache/solr/handler/dataimport/PlainTextEntityProcessor.java
+++ /dev/null
@@ -1,78 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.solr.handler.dataimport;
-
-import static org.apache.solr.handler.dataimport.DataImportHandlerException.SEVERE;
-import static org.apache.solr.handler.dataimport.DataImportHandlerException.wrapAndThrow;
-import static org.apache.solr.handler.dataimport.XPathEntityProcessor.URL;
-import org.apache.commons.io.IOUtils;
-
-import java.io.IOException;
-import java.io.Reader;
-import java.io.StringWriter;
-import java.util.HashMap;
-import java.util.Map;
-
-/**
- * <p>An implementation of {@link EntityProcessor} which reads data from a URL or file and gives out a single row
- * containing one String value. The name of the field is 'plainText'.
- *
- * @since solr 1.4
- */
-public class PlainTextEntityProcessor extends EntityProcessorBase {
-  private boolean ended = false;
-
-  @Override
-  public void init(Context context) {
-    super.init(context);
-    ended = false;
-  }
-
-  @Override
-  public Map<String, Object> nextRow() {
-    if (ended) return null;
-    @SuppressWarnings({"unchecked"})
-    DataSource<Reader> ds = context.getDataSource();
-    String url = context.replaceTokens(context.getEntityAttribute(URL));
-    Reader r = null;
-    try {
-      r = ds.getData(url);
-    } catch (Exception e) {
-      wrapAndThrow(SEVERE, e, "Exception reading url : " + url);
-    }
-    StringWriter sw = new StringWriter();
-    char[] buf = new char[1024];
-    while (true) {
-      int len = 0;
-      try {
-        len = r.read(buf);
-      } catch (IOException e) {
-        IOUtils.closeQuietly(r);
-        wrapAndThrow(SEVERE, e, "Exception reading url : " + url);
-      }
-      if (len <= 0) break;
-      sw.append(new String(buf, 0, len));
-    }
-    Map<String, Object> row = new HashMap<>();
-    row.put(PLAIN_TEXT, sw.toString());
-    ended = true;
-    IOUtils.closeQuietly(r);
-    return row;
-  }
-
-  public static final String PLAIN_TEXT = "plainText";
-}
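The nextRow loop above simply drains the configured Reader into one String,
which becomes the value of the single 'plainText' field. The same loop in
isolation (hypothetical demo class):

    import java.io.IOException;
    import java.io.Reader;
    import java.io.StringReader;
    import java.io.StringWriter;

    public class DrainReaderDemo {
      // Read the Reader to exhaustion into a single String, as nextRow does.
      static String drain(Reader r) throws IOException {
        StringWriter sw = new StringWriter();
        char[] buf = new char[1024];
        int len;
        while ((len = r.read(buf)) > 0) {
          sw.write(buf, 0, len);
        }
        return sw.toString();
      }

      public static void main(String[] args) throws IOException {
        System.out.println(drain(new StringReader("hello plain text")));
      }
    }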
diff --git a/solr/contrib/dataimporthandler/src/java/org/apache/solr/handler/dataimport/RegexTransformer.java b/solr/contrib/dataimporthandler/src/java/org/apache/solr/handler/dataimport/RegexTransformer.java
deleted file mode 100644
index f593416..0000000
--- a/solr/contrib/dataimporthandler/src/java/org/apache/solr/handler/dataimport/RegexTransformer.java
+++ /dev/null
@@ -1,200 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.solr.handler.dataimport;
-
-import java.lang.invoke.MethodHandles;
-import java.util.*;
-import org.slf4j.Logger;
-import org.slf4j.LoggerFactory;
-import java.util.regex.Matcher;
-import java.util.regex.Pattern;
-
-/**
- * <p>
- * A {@link Transformer} implementation which uses Regular Expressions to extract, split
- * and replace data in fields.
- * </p>
- * <p>
- * Refer to <a
- * href="http://wiki.apache.org/solr/DataImportHandler">http://wiki.apache.org/solr/DataImportHandler</a>
- * for more details.
- * </p>
- * <p>
- * <b>This API is experimental and may change in the future.</b>
- *
- * @since solr 1.3
- * @see Pattern
- */
-public class RegexTransformer extends Transformer {
-  private static final Logger log = LoggerFactory.getLogger(MethodHandles.lookup().lookupClass());
-
-  @Override
-  @SuppressWarnings({"unchecked", "rawtypes"})
-  public Map<String, Object> transformRow(Map<String, Object> row,
-                                          Context ctx) {
-    List<Map<String, String>> fields = ctx.getAllEntityFields();
-    for (Map<String, String> field : fields) {
-      String col = field.get(DataImporter.COLUMN);
-      String reStr = ctx.replaceTokens(field.get(REGEX));
-      String splitBy = ctx.replaceTokens(field.get(SPLIT_BY));
-      String replaceWith = ctx.replaceTokens(field.get(REPLACE_WITH));
-      String groupNames = ctx.replaceTokens(field.get(GROUP_NAMES));
-      if (reStr != null || splitBy != null) {
-        String srcColName = field.get(SRC_COL_NAME);
-        if (srcColName == null) {
-          srcColName = col;
-        }
-        Object tmpVal = row.get(srcColName);
-        if (tmpVal == null)
-          continue;
-
-        if (tmpVal instanceof List) {
-          List<String> inputs = (List<String>) tmpVal;
-          List results = new ArrayList();
-          Map<String,List> otherVars= null;
-          for (String input : inputs) {
-            Object o = process(col, reStr, splitBy, replaceWith, input, groupNames);
-            if (o != null){
-              if (o instanceof Map) {
-                Map map = (Map) o;
-                for (Object e : map.entrySet()) {
-                  Map.Entry<String ,Object> entry = (Map.Entry<String, Object>) e;
-                  List l = results;
-                  if(!col.equals(entry.getKey())){
-                    if(otherVars == null) otherVars = new HashMap<>();
-                    l = otherVars.get(entry.getKey());
-                    if(l == null){
-                      l = new ArrayList();
-                      otherVars.put(entry.getKey(), l);
-                    }
-                  }
-                  if (entry.getValue() instanceof Collection) {
-                    l.addAll((Collection) entry.getValue());
-                  } else {
-                    l.add(entry.getValue());
-                  }
-                }
-              } else {
-                if (o instanceof Collection) {
-                  results.addAll((Collection) o);
-                } else {
-                  results.add(o);
-                }
-              }
-            }
-          }
-          row.put(col, results);
-          if(otherVars != null) row.putAll(otherVars);
-        } else {
-          String value = tmpVal.toString();
-          Object o = process(col, reStr, splitBy, replaceWith, value, groupNames);
-          if (o != null){
-            if (o instanceof Map) {
-              row.putAll((Map) o);
-            } else{
-              row.put(col, o);
-            }
-          }
-        }
-      }
-    }
-    return row;
-  }
-
-  private Object process(String col, String reStr, String splitBy,
-                         String replaceWith, String value, String groupNames) {
-    if (splitBy != null) {
-      return readBySplit(splitBy, value);
-    } else if (replaceWith != null) {
-      Pattern p = getPattern(reStr);
-      Matcher m = p.matcher(value);
-      return m.find() ? m.replaceAll(replaceWith) : value;
-    } else {
-      return readfromRegExp(reStr, value, col, groupNames);
-    }
-  }
-
-  @SuppressWarnings("unchecked")
-  private List<String> readBySplit(String splitBy, String value) {
-    String[] vals = value.split(splitBy);
-    List<String> l = new ArrayList<>(Arrays.asList(vals));
-    return l;
-  }
-
-  @SuppressWarnings({"unchecked", "rawtypes"})
-  private Object readfromRegExp(String reStr, String value, String columnName, String gNames) {
-    String[] groupNames = null;
-    if(gNames != null && gNames.trim().length() >0){
-      groupNames =  gNames.split(",");
-    }
-    Pattern regexp = getPattern(reStr);
-    Matcher m = regexp.matcher(value);
-    if (m.find() && m.groupCount() > 0) {
-      if (m.groupCount() > 1) {
-        List l = null;
-        Map<String ,String > map = null;
-        if(groupNames == null){
-          l = new ArrayList();
-        } else {
-          map =  new HashMap<>();
-        }
-        for (int i = 1; i <= m.groupCount(); i++) {
-          try {
-            if(l != null){
-              l.add(m.group(i));
-            } else if (map != null ){
-              if(i <= groupNames.length){
-                String nameOfGroup = groupNames[i-1];
-                if(nameOfGroup != null && nameOfGroup.trim().length() >0){
-                  map.put(nameOfGroup, m.group(i));
-                }
-              }
-            }
-          } catch (Exception e) {
-            log.warn("Parsing failed for field : {}", columnName, e);
-          }
-        }
-        return l == null ? map: l;
-      } else {
-        return m.group(1);
-      }
-    }
-
-    return null;
-  }
-
-  private Pattern getPattern(String reStr) {
-    Pattern result = PATTERN_CACHE.get(reStr);
-    if (result == null) {
-      PATTERN_CACHE.put(reStr, result = Pattern.compile(reStr));
-    }
-    return result;
-  }
-
-  private HashMap<String, Pattern> PATTERN_CACHE = new HashMap<>();
-
-  public static final String REGEX = "regex";
-
-  public static final String REPLACE_WITH = "replaceWith";
-
-  public static final String SPLIT_BY = "splitBy";
-
-  public static final String SRC_COL_NAME = "sourceColName";
-
-  public static final String GROUP_NAMES = "groupNames";
-
-}
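The groupNames path in readfromRegExp maps capturing groups, in order, onto a
comma-separated list of target column names. A standalone demo of that mapping
(hypothetical pattern, input and names):

    import java.util.HashMap;
    import java.util.Map;
    import java.util.regex.Matcher;
    import java.util.regex.Pattern;

    public class GroupNamesDemo {
      public static void main(String[] args) {
        Pattern p = Pattern.compile("(\\w+)@(\\w+)\\.com");
        String[] groupNames = "user,host".split(",");
        Matcher m = p.matcher("jdoe@example.com");
        Map<String, String> row = new HashMap<>();
        if (m.find() && m.groupCount() > 1) {
          // Group i feeds column groupNames[i-1], exactly as in the transformer.
          for (int i = 1; i <= m.groupCount(); i++) {
            if (i <= groupNames.length) {
              row.put(groupNames[i - 1], m.group(i));
            }
          }
        }
        System.out.println(row); // e.g. {user=jdoe, host=example}
      }
    }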
diff --git a/solr/contrib/dataimporthandler/src/java/org/apache/solr/handler/dataimport/RequestInfo.java b/solr/contrib/dataimporthandler/src/java/org/apache/solr/handler/dataimport/RequestInfo.java
deleted file mode 100644
index d3f1a56..0000000
--- a/solr/contrib/dataimporthandler/src/java/org/apache/solr/handler/dataimport/RequestInfo.java
+++ /dev/null
@@ -1,177 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.solr.handler.dataimport;
-
-import java.util.ArrayList;
-import java.util.Collections;
-import java.util.HashMap;
-import java.util.List;
-import java.util.Map;
-
-import org.apache.solr.common.util.ContentStream;
-import org.apache.solr.common.util.StrUtils;
-import org.apache.solr.request.SolrQueryRequest;
-
-public class RequestInfo {
-  private final String command;
-  private final boolean debug;  
-  private final boolean syncMode;
-  private final boolean commit; 
-  private final boolean optimize;
-  private final int start;
-  private final long rows; 
-  private final boolean clean; 
-  private final List<String> entitiesToRun;
-  private final Map<String,Object> rawParams;
-  private final String configFile;
-  private final String dataConfig;
-  private final SolrQueryRequest request;
-  
-  //TODO:  find a different home for these two...
-  private final ContentStream contentStream;  
-  private final DebugInfo debugInfo;
-  
-  public RequestInfo(SolrQueryRequest request, Map<String,Object> requestParams, ContentStream stream) {
-    this.request = request;
-    this.contentStream = stream;    
-    if (requestParams.containsKey("command")) { 
-      command = (String) requestParams.get("command");
-    } else {
-      command = null;
-    }    
-    boolean debugMode = StrUtils.parseBool((String) requestParams.get("debug"), false);    
-    if (debugMode) {
-      debug = true;
-      debugInfo = new DebugInfo(requestParams);
-    } else {
-      debug = false;
-      debugInfo = null;
-    }       
-    if (requestParams.containsKey("clean")) {
-      clean = StrUtils.parseBool( (String) requestParams.get("clean"), true);
-    } else if (DataImporter.DELTA_IMPORT_CMD.equals(command) || DataImporter.IMPORT_CMD.equals(command)) {
-      clean = false;
-    } else  {
-      clean = !debug;
-    }    
-    optimize = StrUtils.parseBool((String) requestParams.get("optimize"), false);
-    if(optimize) {
-      commit = true;
-    } else {
-      commit = StrUtils.parseBool( (String) requestParams.get("commit"), (debug ? false : true));
-    }      
-    if (requestParams.containsKey("rows")) {
-      rows = Integer.parseInt((String) requestParams.get("rows"));
-    } else {
-      rows = debug ? 10 : Long.MAX_VALUE;
-    }      
-    
-    if (requestParams.containsKey("start")) {
-      start = Integer.parseInt((String) requestParams.get("start"));
-    } else {
-      start = 0;
-    }
-    syncMode = StrUtils.parseBool((String) requestParams.get("synchronous"), false);    
-    
-    Object o = requestParams.get("entity");     
-    List<String> modifiableEntities = null;
-    if(o != null) {
-      if (o instanceof String) {
-        modifiableEntities = new ArrayList<>();
-        modifiableEntities.add((String) o);
-      } else if (o instanceof List<?>) {
-        @SuppressWarnings("unchecked")
-        List<String> modifiableEntities1 = new ArrayList<>((List<String>) o);
-        modifiableEntities = modifiableEntities1;
-      } 
-      entitiesToRun = Collections.unmodifiableList(modifiableEntities);
-    } else {
-      entitiesToRun = null;
-    }
-    String configFileParam = (String) requestParams.get("config");
-    configFile = configFileParam;
-    String dataConfigParam = (String) requestParams.get("dataConfig");
-    if (dataConfigParam != null && dataConfigParam.trim().length() == 0) {
-      // Empty data-config param is not valid, change it to null
-      dataConfigParam = null;
-    }
-    dataConfig = dataConfigParam;
-    this.rawParams = Collections.unmodifiableMap(new HashMap<>(requestParams));
-  }
-
-  public String getCommand() {
-    return command;
-  }
-
-  public boolean isDebug() {
-    return debug;
-  }
-
-  public boolean isSyncMode() {
-    return syncMode;
-  }
-
-  public boolean isCommit() {
-    return commit;
-  }
-
-  public boolean isOptimize() {
-    return optimize;
-  }
-
-  public int getStart() {
-    return start;
-  }
-
-  public long getRows() {
-    return rows;
-  }
-
-  public boolean isClean() {
-    return clean;
-  }
-  /**
-   * Returns null if all entities should be run; otherwise, only the entities named in the list are run.
-   */
-  public List<String> getEntitiesToRun() {
-    return entitiesToRun;
-  }
-
-  public String getDataConfig() {
-    return dataConfig;
-  }
-
-  public Map<String,Object> getRawParams() {
-    return rawParams;
-  }
-
-  public ContentStream getContentStream() {
-    return contentStream;
-  }
-
-  public DebugInfo getDebugInfo() {
-    return debugInfo;
-  }
-
-  public String getConfigFile() {
-    return configFile;
-  }
-
-  public SolrQueryRequest getRequest() {
-    return request;
-  }
-}
\ No newline at end of file
diff --git a/solr/contrib/dataimporthandler/src/java/org/apache/solr/handler/dataimport/ScriptTransformer.java b/solr/contrib/dataimporthandler/src/java/org/apache/solr/handler/dataimport/ScriptTransformer.java
deleted file mode 100644
index fe848b1..0000000
--- a/solr/contrib/dataimporthandler/src/java/org/apache/solr/handler/dataimport/ScriptTransformer.java
+++ /dev/null
@@ -1,131 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.solr.handler.dataimport;
-
-import static org.apache.solr.handler.dataimport.DataImportHandlerException.wrapAndThrow;
-import static org.apache.solr.handler.dataimport.DataImportHandlerException.SEVERE;
-
-import java.security.AccessControlContext;
-import java.security.AccessController;
-import java.security.PrivilegedAction;
-import java.security.PrivilegedActionException;
-import java.security.PrivilegedExceptionAction;
-import java.security.ProtectionDomain;
-import java.util.Map;
-
-import javax.script.Invocable;
-import javax.script.ScriptEngine;
-import javax.script.ScriptEngineManager;
-import javax.script.ScriptException;
-
-/**
- * <p>
- * A {@link Transformer} instance capable of executing functions written in scripting
- * languages as a {@link Transformer} instance.
- * </p>
- * <p>
- * Refer to <a
- * href="http://wiki.apache.org/solr/DataImportHandler">http://wiki.apache.org/solr/DataImportHandler</a>
- * for more details.
- * </p>
- * <p>
- * <b>This API is experimental and may change in the future.</b>
- *
- * @since solr 1.3
- */
-public class ScriptTransformer extends Transformer {
-  private Invocable engine;
-  private String functionName;
-
-  @Override
-  public Object transformRow(Map<String,Object> row, Context context) {
-    return AccessController.doPrivileged(new PrivilegedAction<Object>() {
-      @Override
-      public Object run() {
-        return transformRowUnsafe(row, context);
-      }
-    }, SCRIPT_SANDBOX);
-  }
-
-  public Object transformRowUnsafe(Map<String, Object> row, Context context) {
-    try {
-      if (engine == null)
-        initEngine(context);
-      if (engine == null)
-        return row;
-      return engine.invokeFunction(functionName, new Object[]{row, context});      
-    } catch (DataImportHandlerException e) {
-      throw e;
-    } catch (Exception e) {
-      wrapAndThrow(SEVERE,e, "Error invoking script for entity " + context.getEntityAttribute("name"));
-    }
-    //will not reach here
-    return null;
-  }
-
-  private void initEngine(Context context) {
-    String scriptText = context.getScript();
-    String scriptLang = context.getScriptLanguage();
-    if (scriptText == null) {
-      throw new DataImportHandlerException(SEVERE,
-          "<script> tag is not present under <dataConfig>");
-    }
-    ScriptEngineManager scriptEngineMgr = new ScriptEngineManager();
-    ScriptEngine scriptEngine = scriptEngineMgr.getEngineByName(scriptLang);
-    if (scriptEngine == null) {
-      throw new DataImportHandlerException(SEVERE,
-          "Cannot load Script Engine for language: " + scriptLang);
-    }
-    if (scriptEngine instanceof Invocable) {
-      engine = (Invocable) scriptEngine;
-    } else {
-      throw new DataImportHandlerException(SEVERE,
-          "The installed ScriptEngine for: " + scriptLang
-              + " does not implement Invocable.  Class is "
-              + scriptEngine.getClass().getName());
-    }
-    try {
-      try {
-        AccessController.doPrivileged(new PrivilegedExceptionAction<Void>() {
-          @Override
-          public Void run() throws ScriptException  {
-            scriptEngine.eval(scriptText);
-            return null;
-          }
-        }, SCRIPT_SANDBOX);
-      } catch (PrivilegedActionException e) {
-        throw (ScriptException) e.getException();
-      }
-    } catch (ScriptException e) {
-      wrapAndThrow(SEVERE, e, "'eval' failed with language: " + scriptLang
-          + " and script: \n" + scriptText);
-    }
-  }
-
-  public void setFunctionName(String methodName) {
-    this.functionName = methodName;
-  }
-
-  public String getFunctionName() {
-    return functionName;
-  }
-
-  // sandbox for script code: zero permissions
-  private static final AccessControlContext SCRIPT_SANDBOX =
-      new AccessControlContext(new ProtectionDomain[] { new ProtectionDomain(null, null) });
-
-}
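initEngine and transformRowUnsafe follow the standard javax.script flow:
evaluate the script body once, then invoke the configured function by name
through Invocable. A minimal demo of that flow (hypothetical class name;
assumes a JavaScript engine is registered, e.g. Nashorn on JDK 11, which is no
longer bundled with newer JDKs):

    import javax.script.Invocable;
    import javax.script.ScriptEngine;
    import javax.script.ScriptEngineManager;
    import javax.script.ScriptException;

    public class InvocableDemo {
      public static void main(String[] args) throws ScriptException, NoSuchMethodException {
        ScriptEngine engine = new ScriptEngineManager().getEngineByName("JavaScript");
        if (engine == null) throw new IllegalStateException("no JavaScript engine registered");
        // Eval once, then call by name per row, as the transformer does.
        engine.eval("function addOne(x) { return x + 1; }");
        Invocable inv = (Invocable) engine;
        System.out.println(inv.invokeFunction("addOne", 41)); // prints 42 (result boxing is engine-specific)
      }
    }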
diff --git a/solr/contrib/dataimporthandler/src/java/org/apache/solr/handler/dataimport/SimplePropertiesWriter.java b/solr/contrib/dataimporthandler/src/java/org/apache/solr/handler/dataimport/SimplePropertiesWriter.java
deleted file mode 100644
index 0b77c6e..0000000
--- a/solr/contrib/dataimporthandler/src/java/org/apache/solr/handler/dataimport/SimplePropertiesWriter.java
+++ /dev/null
@@ -1,247 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.solr.handler.dataimport;
-
-import java.io.File;
-import java.io.FileInputStream;
-import java.io.FileOutputStream;
-import java.io.InputStream;
-import java.io.InputStreamReader;
-import java.io.OutputStreamWriter;
-import java.io.Writer;
-import java.lang.invoke.MethodHandles;
-import java.nio.charset.StandardCharsets;
-import java.security.AccessControlException;
-import java.text.ParseException;
-import java.text.SimpleDateFormat;
-import java.util.Date;
-import java.util.HashMap;
-import java.util.IllformedLocaleException;
-import java.util.Locale;
-import java.util.Map;
-import java.util.Properties;
-
-import org.apache.lucene.util.IOUtils;
-import org.apache.solr.common.util.SuppressForbidden;
-import org.apache.solr.core.SolrCore;
-import org.apache.solr.core.SolrPaths;
-import org.slf4j.Logger;
-import org.slf4j.LoggerFactory;
-
-import static org.apache.solr.handler.dataimport.DataImportHandlerException.SEVERE;
-/**
- * <p>
- *  Writes properties using {@link Properties#store}.
- *  The special property "last_index_time" is converted to a formatted date.
- *  Users can configure the location, filename, locale and date format to use.
- * </p> 
- */
-public class SimplePropertiesWriter extends DIHProperties {
-  private static final Logger log = LoggerFactory.getLogger(MethodHandles.lookup().lookupClass());
-  
-  static final String LAST_INDEX_KEY = "last_index_time";
-  
-  protected String filename = null;
-  
-  protected String configDir = null;
-  
-  protected Locale locale = null;
-  
-  protected SimpleDateFormat dateFormat = null;
-  
-  /**
-   * The locale to use when writing the properties file.  Default is {@link Locale#ROOT}
-   */
-  public static final String LOCALE = "locale";
-  /**
-   * The date format to use when writing values for "last_index_time" to the properties file.
-   * See {@link SimpleDateFormat} for patterns.  Default is "yyyy-MM-dd HH:mm:ss".
-   */
-  public static final String DATE_FORMAT = "dateFormat";
-  /**
-   * The directory to save the properties file in. Default is the current core's "config" directory.
-   */
-  public static final String DIRECTORY = "directory";
-  /**
-   * The filename to save the properties file to.  Default is this Handler's name from solrconfig.xml.
-   */
-  public static final String FILENAME = "filename";
-  
-  @Override
-  public void init(DataImporter dataImporter, Map<String, String> params) {
-    if(params.get(FILENAME) != null) {
-      filename = params.get(FILENAME);
-    } else if(dataImporter.getHandlerName()!=null) {
-      filename = dataImporter.getHandlerName() +  ".properties";
-    } else {
-      filename = "dataimport.properties";
-    }
-    findDirectory(dataImporter, params);
-    if(params.get(LOCALE) != null) {
-      locale = getLocale(params.get(LOCALE));
-    } else {
-      locale = Locale.ROOT;
-    }    
-    if(params.get(DATE_FORMAT) != null) {
-      dateFormat = new SimpleDateFormat(params.get(DATE_FORMAT), locale);
-    } else {
-      dateFormat = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss", locale);
-    }    
-  }
-  
-  @SuppressForbidden(reason = "Usage of outdated locale parsing with Locale#toString() because of backwards compatibility")
-  private Locale getLocale(String name) {
-    if (name == null) {
-      return Locale.ROOT;
-    }
-    for (final Locale l : Locale.getAvailableLocales()) {
-      if(name.equals(l.toString()) || name.equals(l.getDisplayName(Locale.ROOT))) {
-        return l;
-      }
-    }
-    try {
-      return new Locale.Builder().setLanguageTag(name).build();
-    } catch (IllformedLocaleException ex) {
-      throw new DataImportHandlerException(SEVERE, "Unsupported locale for PropertyWriter: " + name);
-    }
-  }
-  
-  protected void findDirectory(DataImporter dataImporter, Map<String, String> params) {
-    if(params.get(DIRECTORY) != null) {
-      configDir = params.get(DIRECTORY);
-    } else {
-      SolrCore core = dataImporter.getCore();
-      if (core == null) {
-        configDir = SolrPaths.locateSolrHome().toString();
-      } else {
-        configDir = core.getResourceLoader().getConfigDir();
-      }
-    }
-  }
-  
-  private File getPersistFile() {
-    final File filePath;
-    if (new File(filename).isAbsolute() || configDir == null) {
-      filePath = new File(filename);
-    } else {
-      filePath = new File(new File(configDir), filename);
-    }
-    return filePath;
-  }
-
-  @Override
-  public boolean isWritable() {
-    File persistFile = getPersistFile();
-    try {
-      return persistFile.exists() 
-          ? persistFile.canWrite() 
-          : persistFile.getParentFile().canWrite();
-    } catch (AccessControlException e) {
-      return false;
-    }
-  }
-  
-  @Override
-  public String convertDateToString(Date d) {
-    return dateFormat.format(d);
-  }
-  protected Date convertStringToDate(String s) {
-    try {
-      return dateFormat.parse(s);
-    } catch (ParseException e) {
-      throw new DataImportHandlerException(SEVERE, "Value for "
-          + LAST_INDEX_KEY + " is invalid for date format "
-          + dateFormat.toLocalizedPattern() + " : " + s);
-    }
-  }
-  /**
-   * {@link DocBuilder} sends the date as an Object because 
-   * this class knows how to convert it to a String
-   */
-  protected Properties mapToProperties(Map<String,Object> propObjs) {
-    Properties p = new Properties();
-    for(Map.Entry<String,Object> entry : propObjs.entrySet()) {
-      String key = entry.getKey();
-      String val = null;
-      String lastKeyPart = key;
-      int lastDotPos = key.lastIndexOf('.');
-      if(lastDotPos!=-1 && key.length() > lastDotPos+1) {
-        lastKeyPart = key.substring(lastDotPos + 1);
-      }
-      if(LAST_INDEX_KEY.equals(lastKeyPart) && entry.getValue() instanceof Date) {
-        val = convertDateToString((Date) entry.getValue());
-      } else {
-        val = entry.getValue().toString();
-      }
-      p.put(key, val);
-    }
-    return p;
-  }
-  /**
-   * We'll send everything back as Strings as this class has
-   * already converted them.
-   */
-  protected Map<String,Object> propertiesToMap(Properties p) {
-    Map<String,Object> theMap = new HashMap<>();
-    for(Map.Entry<Object,Object> entry : p.entrySet()) {
-      String key = entry.getKey().toString();
-      Object val = entry.getValue().toString();
-      theMap.put(key, val);
-    }
-    return theMap;
-  }
-  
-  @Override
-  public void persist(Map<String, Object> propObjs) {
-    Writer propOutput = null;    
-    Properties existingProps = mapToProperties(readIndexerProperties());    
-    Properties newProps = mapToProperties(propObjs);
-    try {
-      existingProps.putAll(newProps);
-      propOutput = new OutputStreamWriter(new FileOutputStream(getPersistFile()), StandardCharsets.UTF_8);
-      existingProps.store(propOutput, null);
-      log.info("Wrote last indexed time to {}", filename);
-    } catch (Exception e) {
-      throw new DataImportHandlerException(DataImportHandlerException.SEVERE,
-          "Unable to persist Index Start Time", e);
-    } finally {
-      IOUtils.closeWhileHandlingException(propOutput);
-    }
-  }
-  
-  @Override
-  public Map<String, Object> readIndexerProperties() {
-    Properties props = new Properties();
-    InputStream propInput = null;    
-    try {
-      String filePath = configDir;
-      if (configDir != null && !configDir.endsWith(File.separator)) {
-        filePath += File.separator;
-      }
-      filePath += filename;
-      propInput = new FileInputStream(filePath);
-      props.load(new InputStreamReader(propInput, StandardCharsets.UTF_8));
-      log.info("Read {}", filename);
-    } catch (Exception e) {
-      log.warn("Unable to read: {}", filename);
-    } finally {
-      IOUtils.closeWhileHandlingException(propInput);
-    }    
-    return propertiesToMap(props);
-  }
-  
-}
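The persist/read pair above is a thin wrapper around java.util.Properties,
with "last_index_time" converted through the configured SimpleDateFormat. The
same round trip, in memory instead of the config directory (hypothetical demo
class):

    import java.io.Reader;
    import java.io.StringReader;
    import java.io.StringWriter;
    import java.text.SimpleDateFormat;
    import java.util.Date;
    import java.util.Locale;
    import java.util.Properties;

    public class PropsRoundTripDemo {
      public static void main(String[] args) throws Exception {
        SimpleDateFormat fmt = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss", Locale.ROOT);

        // Write: the Date value is stored as a formatted string.
        Properties out = new Properties();
        out.setProperty("last_index_time", fmt.format(new Date()));
        StringWriter sw = new StringWriter();
        out.store(sw, null);

        // Read: values come back as plain strings, as in readIndexerProperties.
        Properties in = new Properties();
        try (Reader r = new StringReader(sw.toString())) {
          in.load(r);
        }
        System.out.println(in.getProperty("last_index_time"));
      }
    }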
diff --git a/solr/contrib/dataimporthandler/src/java/org/apache/solr/handler/dataimport/SolrEntityProcessor.java b/solr/contrib/dataimporthandler/src/java/org/apache/solr/handler/dataimport/SolrEntityProcessor.java
deleted file mode 100644
index 7732673..0000000
--- a/solr/contrib/dataimporthandler/src/java/org/apache/solr/handler/dataimport/SolrEntityProcessor.java
+++ /dev/null
@@ -1,321 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.solr.handler.dataimport;
-
-import static org.apache.solr.handler.dataimport.DataImportHandlerException.SEVERE;
-import static org.apache.solr.handler.dataimport.DataImportHandlerException.wrapAndThrow;
-
-import java.io.IOException;
-import java.lang.invoke.MethodHandles;
-import java.net.MalformedURLException;
-import java.net.URL;
-import java.util.Collection;
-import java.util.HashMap;
-import java.util.Iterator;
-import java.util.Map;
-
-import org.apache.http.client.HttpClient;
-import org.apache.solr.client.solrj.SolrClient;
-import org.apache.solr.client.solrj.SolrQuery;
-import org.apache.solr.client.solrj.SolrServerException;
-import org.apache.solr.client.solrj.impl.HttpClientUtil;
-import org.apache.solr.client.solrj.impl.HttpSolrClient;
-import org.apache.solr.client.solrj.impl.HttpSolrClient.Builder;
-import org.apache.solr.client.solrj.impl.XMLResponseParser;
-import org.apache.solr.client.solrj.response.QueryResponse;
-import org.apache.solr.common.SolrDocument;
-import org.apache.solr.common.SolrDocumentList;
-import org.apache.solr.common.SolrException;
-import org.apache.solr.common.params.CommonParams;
-import org.apache.solr.common.params.CursorMarkParams;
-import org.slf4j.Logger;
-import org.slf4j.LoggerFactory;
-
-/**
- * <p>
- * An implementation of {@link EntityProcessor} which fetches values from a
- * separate Solr instance using the SolrJ client library. Yields one row per
- * Solr document.
- * </p>
- * <p>
- * Limitations: all configuration is evaluated once at the beginning, and
- * only a single query is walked.
- * </p>
- */
-public class SolrEntityProcessor extends EntityProcessorBase {
-  
-  private static final Logger log = LoggerFactory.getLogger(MethodHandles.lookup().lookupClass());
-  
-  public static final String SOLR_SERVER = "url";
-  public static final String QUERY = "query";
-  public static final String TIMEOUT = "timeout";
-
-  public static final int TIMEOUT_SECS = 5 * 60; // 5 minutes
-  public static final int ROWS_DEFAULT = 50;
-  
-  private SolrClient solrClient = null;
-  private String queryString;
-  private int rows = ROWS_DEFAULT;
-  private String[] filterQueries;
-  private String[] fields;
-  private String requestHandler;// 'qt' param
-  private int timeout = TIMEOUT_SECS;
-  
-  @Override
-  public void destroy() {
-    try {
-      solrClient.close();
-    } catch (IOException e) {
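-      // ignore close failures; the underlying HttpClient is shut down in the finally block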
-
-    } finally {
-      HttpClientUtil.close(((HttpSolrClient) solrClient).getHttpClient());
-    }
-  }
-
-  /**
-   * Factory method that returns a {@link HttpClient} instance used for interfacing with a source Solr service.
-   * One can override this method to return a differently configured {@link HttpClient} instance.
-   * For example, to configure HTTPS or HTTP authentication.
-   *
-   * @return a {@link HttpClient} instance used for interfacing with a source Solr service
-   */
-  protected HttpClient getHttpClient() {
-    return HttpClientUtil.createClient(null);
-  }
-
-  @Override
-  protected void firstInit(Context context) {
-    super.firstInit(context);
-    
-    try {
-      String serverPath = context.getResolvedEntityAttribute(SOLR_SERVER);
-      if (serverPath == null) {
-        throw new DataImportHandlerException(DataImportHandlerException.SEVERE,
-            "SolrEntityProcessor: parameter 'url' is required");
-      }
-
-      HttpClient client = getHttpClient();
-      URL url = new URL(serverPath);
-      // (wt="javabin|xml") default is javabin
-      if ("xml".equals(context.getResolvedEntityAttribute(CommonParams.WT))) {
-        // TODO: it doesn't matter for this impl when passing a client currently, but we should close this!
-        solrClient = new Builder(url.toExternalForm())
-            .withHttpClient(client)
-            .withResponseParser(new XMLResponseParser())
-            .build();
-        log.info("using XMLResponseParser");
-      } else {
-        // TODO: it doesn't matter for this impl when passing a client currently, but we should close this!
-        solrClient = new Builder(url.toExternalForm())
-            .withHttpClient(client)
-            .build();
-        log.info("using BinaryResponseParser");
-      }
-    } catch (MalformedURLException e) {
-      throw new DataImportHandlerException(DataImportHandlerException.SEVERE, e);
-    }
-  }
-  
-  @Override
-  public Map<String,Object> nextRow() {
-    buildIterator();
-    return getNext();
-  }
-  
-  /**
-   * The following method changes the rowIterator mutable field. It requires
-   * external synchronization. 
-   */
-  protected void buildIterator() {
-    if (rowIterator != null)  {
-      SolrDocumentListIterator documentListIterator = (SolrDocumentListIterator) rowIterator;
-      if (!documentListIterator.hasNext() && documentListIterator.hasMoreRows()) {
-        nextPage();
-      }
-    } else {
-      boolean cursor = Boolean.parseBoolean(context
-          .getResolvedEntityAttribute(CursorMarkParams.CURSOR_MARK_PARAM));
-      rowIterator = !cursor ? new SolrDocumentListIterator(new SolrDocumentList())
-          : new SolrDocumentListCursor(new SolrDocumentList(), CursorMarkParams.CURSOR_MARK_START);
-      nextPage();
-    }
-  }
-  
-  protected void nextPage() {
-    ((SolrDocumentListIterator)rowIterator).doQuery();
-  }
-
-  class SolrDocumentListCursor extends SolrDocumentListIterator {
-    
-    private final String cursorMark;
-
-    public SolrDocumentListCursor(SolrDocumentList solrDocumentList, String cursorMark) {
-      super(solrDocumentList);
-      this.cursorMark = cursorMark;
-    }
-
-    @Override
-    protected void passNextPage(SolrQuery solrQuery) {
-      String timeoutAsString = context.getResolvedEntityAttribute(TIMEOUT);
-      if (timeoutAsString != null) {
-        throw new DataImportHandlerException(SEVERE,"cursorMark can't be used with timeout");
-      }
-      
-      solrQuery.set(CursorMarkParams.CURSOR_MARK_PARAM, cursorMark);
-    }
-    
-    @Override
-    protected Iterator<Map<String,Object>> createNextPageIterator(QueryResponse response) {
-      return
-          new SolrDocumentListCursor(response.getResults(),
-              response.getNextCursorMark()) ;
-    }
-  }
-  
-  class SolrDocumentListIterator implements Iterator<Map<String,Object>> {
-    
-    private final int start;
-    private final int size;
-    private final long numFound;
-    private final Iterator<SolrDocument> solrDocumentIterator;
-    
-    public SolrDocumentListIterator(SolrDocumentList solrDocumentList) {
-      this.solrDocumentIterator = solrDocumentList.iterator();
-      this.numFound = solrDocumentList.getNumFound();
-      // SolrQuery.setStart() takes an int while SolrDocumentList.getStart()
-      // returns a long. Since we always query with an int start, the response
-      // start cannot exceed the int range, so the following cast is safe.
-      this.start = (int) solrDocumentList.getStart();
-      this.size = solrDocumentList.size();
-    }
-
-    protected QueryResponse doQuery() {
-      SolrEntityProcessor.this.queryString = context.getResolvedEntityAttribute(QUERY);
-      if (SolrEntityProcessor.this.queryString == null) {
-        throw new DataImportHandlerException(
-            DataImportHandlerException.SEVERE,
-            "SolrEntityProcessor: parameter 'query' is required"
-        );
-      }
-
-      String rowsP = context.getResolvedEntityAttribute(CommonParams.ROWS);
-      if (rowsP != null) {
-        rows = Integer.parseInt(rowsP);
-      }
-
-      String sortParam = context.getResolvedEntityAttribute(CommonParams.SORT);
-      
-      String fqAsString = context.getResolvedEntityAttribute(CommonParams.FQ);
-      if (fqAsString != null) {
-        SolrEntityProcessor.this.filterQueries = fqAsString.split(",");
-      }
-
-      String fieldsAsString = context.getResolvedEntityAttribute(CommonParams.FL);
-      if (fieldsAsString != null) {
-        SolrEntityProcessor.this.fields = fieldsAsString.split(",");
-      }
-      SolrEntityProcessor.this.requestHandler = context.getResolvedEntityAttribute(CommonParams.QT);
-     
-
-      SolrQuery solrQuery = new SolrQuery(queryString);
-      solrQuery.setRows(rows);
-      
-      if (sortParam!=null) {
-        solrQuery.setParam(CommonParams.SORT, sortParam);
-      }
-      
-      passNextPage(solrQuery);
-      
-      if (fields != null) {
-        for (String field : fields) {
-          solrQuery.addField(field);
-        }
-      }
-      solrQuery.setRequestHandler(requestHandler);
-      solrQuery.setFilterQueries(filterQueries);
-      
-      
-      QueryResponse response = null;
-      try {
-        response = solrClient.query(solrQuery);
-      } catch (SolrServerException | IOException | SolrException e) {
-        if (ABORT.equals(onError)) {
-          wrapAndThrow(SEVERE, e);
-        } else if (SKIP.equals(onError)) {
-          wrapAndThrow(DataImportHandlerException.SKIP_ROW, e);
-        }
-      }
-      
-      if (response != null) {
-        SolrEntityProcessor.this.rowIterator = createNextPageIterator(response);
-      }
-      return response;
-    }
-
-    protected Iterator<Map<String,Object>> createNextPageIterator(QueryResponse response) {
-      return new SolrDocumentListIterator(response.getResults());
-    }
-
-    protected void passNextPage(SolrQuery solrQuery) {
-      String timeoutAsString = context.getResolvedEntityAttribute(TIMEOUT);
-      if (timeoutAsString != null) {
-        SolrEntityProcessor.this.timeout = Integer.parseInt(timeoutAsString);
-      }
-      
-      solrQuery.setTimeAllowed(timeout * 1000);
-      
-      solrQuery.setStart(getStart() + getSize());
-    }
-    
-    @Override
-    public boolean hasNext() {
-      return solrDocumentIterator.hasNext();
-    }
-
-    @Override
-    public Map<String,Object> next() {
-      SolrDocument solrDocument = solrDocumentIterator.next();
-      
-      HashMap<String,Object> map = new HashMap<>();
-      Collection<String> fields = solrDocument.getFieldNames();
-      for (String field : fields) {
-        Object fieldValue = solrDocument.getFieldValue(field);
-        map.put(field, fieldValue);
-      }
-      return map;
-    }
-    
-    public int getStart() {
-      return start;
-    }
-    
-    public int getSize() {
-      return size;
-    }
-    
-    public boolean hasMoreRows() {
-      return numFound > start + size;
-    }
-
-    @Override
-    public void remove() {
-      throw new UnsupportedOperationException();
-    }
-  }
-  
-}
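Without cursorMark, the processor pages by advancing start by the page size
until numFound is exhausted (hasMoreRows above). A hedged SolrJ sketch of that
loop against a hypothetical source URL and collection:

    import org.apache.solr.client.solrj.SolrClient;
    import org.apache.solr.client.solrj.SolrQuery;
    import org.apache.solr.client.solrj.impl.HttpSolrClient;
    import org.apache.solr.client.solrj.response.QueryResponse;
    import org.apache.solr.common.SolrDocumentList;

    public class SourcePagingDemo {
      public static void main(String[] args) throws Exception {
        try (SolrClient source = new HttpSolrClient.Builder("http://localhost:8983/solr/source").build()) {
          int start = 0, rows = 50;
          while (true) {
            QueryResponse rsp = source.query(new SolrQuery("*:*").setStart(start).setRows(rows));
            SolrDocumentList page = rsp.getResults();
            page.forEach(doc -> System.out.println(doc.getFieldValue("id")));
            // Stop once the window reaches numFound, mirroring hasMoreRows().
            if (page.isEmpty() || page.getNumFound() <= start + page.size()) break;
            start += page.size();
          }
        }
      }
    }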
diff --git a/solr/contrib/dataimporthandler/src/java/org/apache/solr/handler/dataimport/SolrQueryEscapingEvaluator.java b/solr/contrib/dataimporthandler/src/java/org/apache/solr/handler/dataimport/SolrQueryEscapingEvaluator.java
deleted file mode 100644
index aece031..0000000
--- a/solr/contrib/dataimporthandler/src/java/org/apache/solr/handler/dataimport/SolrQueryEscapingEvaluator.java
+++ /dev/null
@@ -1,35 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.solr.handler.dataimport;
-
-import static org.apache.solr.handler.dataimport.DataImportHandlerException.SEVERE;
-
-import java.util.List;
-
-import org.apache.solr.client.solrj.util.ClientUtils;
-
-public class SolrQueryEscapingEvaluator extends Evaluator {
-  @Override
-  public String evaluate(String expression, Context context) {
-    List<Object> l = parseParams(expression, context.getVariableResolver());
-    if (l.size() != 1) {
-      throw new DataImportHandlerException(SEVERE, "'escapeQueryChars' must have at least one parameter ");
-    }
-    String s = l.get(0).toString();
-    return ClientUtils.escapeQueryChars(s);
-  }
-}
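The evaluator delegates to SolrJ's ClientUtils.escapeQueryChars, which
backslash-escapes Lucene query syntax characters (including whitespace) in the
resolved parameter. For example (hypothetical demo class):

    import org.apache.solr.client.solrj.util.ClientUtils;

    public class EscapeDemo {
      public static void main(String[] args) {
        // Prints: foo\:bar\ \(1\+1\)
        System.out.println(ClientUtils.escapeQueryChars("foo:bar (1+1)"));
      }
    }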
diff --git a/solr/contrib/dataimporthandler/src/java/org/apache/solr/handler/dataimport/SolrWriter.java b/solr/contrib/dataimporthandler/src/java/org/apache/solr/handler/dataimport/SolrWriter.java
deleted file mode 100644
index 8e7624b..0000000
--- a/solr/contrib/dataimporthandler/src/java/org/apache/solr/handler/dataimport/SolrWriter.java
+++ /dev/null
@@ -1,175 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.solr.handler.dataimport;
-
-import org.apache.solr.common.SolrException;
-import org.apache.solr.common.SolrInputDocument;
-import org.apache.solr.common.params.UpdateParams;
-import org.apache.solr.request.SolrQueryRequest;
-import org.apache.solr.update.AddUpdateCommand;
-import org.apache.solr.update.CommitUpdateCommand;
-import org.apache.solr.update.DeleteUpdateCommand;
-import org.apache.solr.update.RollbackUpdateCommand;
-import org.apache.solr.update.processor.UpdateRequestProcessor;
-import org.slf4j.Logger;
-import org.slf4j.LoggerFactory;
-
-import java.io.*;
-import java.lang.invoke.MethodHandles;
-import java.nio.charset.StandardCharsets;
-
-/**
- * <p> Writes documents to Solr. </p>
- * <p>
- * <b>This API is experimental and may change in the future.</b>
- *
- * @since solr 1.3
- */
-public class SolrWriter extends DIHWriterBase implements DIHWriter {
-  private static final Logger log = LoggerFactory.getLogger(MethodHandles.lookup().lookupClass());
-
-  public static final String LAST_INDEX_KEY = "last_index_time";
-
-  private final UpdateRequestProcessor processor;
-  private final int commitWithin;
-  
-  SolrQueryRequest req;
-
-  public SolrWriter(UpdateRequestProcessor processor, SolrQueryRequest req) {
-    this.processor = processor;
-    this.req = req;
-    commitWithin = (req != null) ? req.getParams().getInt(UpdateParams.COMMIT_WITHIN, -1): -1;
-  }
-  
-  @Override
-  public void close() {
-    try {
-      processor.finish();
-    } catch (IOException e) {
-      throw new DataImportHandlerException(DataImportHandlerException.SEVERE,
-          "Unable to call finish() on UpdateRequestProcessor", e);
-    } finally {
-      deltaKeys = null;
-      try {
-        processor.close();
-      } catch (IOException e) {
-        SolrException.log(log, e);
-      }
-    }
-  }
-  @Override
-  public boolean upload(SolrInputDocument d) {
-    try {
-      AddUpdateCommand command = new AddUpdateCommand(req);
-      command.solrDoc = d;
-      command.commitWithin = commitWithin;
-      processor.processAdd(command);
-    } catch (Exception e) {
-      log.warn("Error creating document : {}", d, e);
-      return false;
-    }
-
-    return true;
-  }
-  
-  @Override
-  public void deleteDoc(Object id) {
-    try {
-      log.info("Deleting document: {}", id);
-      DeleteUpdateCommand delCmd = new DeleteUpdateCommand(req);
-      delCmd.setId(id.toString());
-      processor.processDelete(delCmd);
-    } catch (IOException e) {
-      log.error("Exception while deleteing: {}", id, e);
-    }
-  }
-
-  @Override
-  public void deleteByQuery(String query) {
-    try {
-      log.info("Deleting documents from Solr with query: {}", query);
-      DeleteUpdateCommand delCmd = new DeleteUpdateCommand(req);
-      delCmd.query = query;
-      processor.processDelete(delCmd);
-    } catch (IOException e) {
-      log.error("Exception while deleting by query: {}", query, e);
-    }
-  }
-
-  @Override
-  public void commit(boolean optimize) {
-    try {
-      CommitUpdateCommand commit = new CommitUpdateCommand(req,optimize);
-      processor.processCommit(commit);
-    } catch (Exception e) {
-      log.error("Exception while solr commit.", e);
-    }
-  }
-
-  @Override
-  public void rollback() {
-    try {
-      RollbackUpdateCommand rollback = new RollbackUpdateCommand(req);
-      processor.processRollback(rollback);
-    } catch (Exception e) {
-      log.error("Exception during rollback command.", e);
-    }
-  }
-
-  @Override
-  public void doDeleteAll() {
-    try {
-      DeleteUpdateCommand deleteCommand = new DeleteUpdateCommand(req);
-      deleteCommand.query = "*:*";
-      processor.processDelete(deleteCommand);
-    } catch (IOException e) {
-      throw new DataImportHandlerException(DataImportHandlerException.SEVERE,
-              "Exception in full dump while deleting all documents.", e);
-    }
-  }
-
-  static String getResourceAsString(InputStream in) throws IOException {
-    ByteArrayOutputStream baos = new ByteArrayOutputStream(1024);
-    byte[] buf = new byte[1024];
-    int sz = 0;
-    try {
-      while ((sz = in.read(buf)) != -1) {
-        baos.write(buf, 0, sz);
-      }
-    } finally {
-      try {
-        in.close();
-      } catch (Exception e) {
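-        // best-effort close; ignore secondary failures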
-
-      }
-    }
-    return new String(baos.toByteArray(), StandardCharsets.UTF_8);
-  }
-
-  static String getDocCount() {
-    if (DocBuilder.INSTANCE.get() != null) {
-      return ""
-              + (DocBuilder.INSTANCE.get().importStatistics.docCount.get() + 1);
-    } else {
-      return null;
-    }
-  }
-  @Override
-  public void init(Context context) {
-    /* NO-OP */
-  }
-}
diff --git a/solr/contrib/dataimporthandler/src/java/org/apache/solr/handler/dataimport/SortedMapBackedCache.java b/solr/contrib/dataimporthandler/src/java/org/apache/solr/handler/dataimport/SortedMapBackedCache.java
deleted file mode 100644
index bb84ba9..0000000
--- a/solr/contrib/dataimporthandler/src/java/org/apache/solr/handler/dataimport/SortedMapBackedCache.java
+++ /dev/null
@@ -1,238 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.solr.handler.dataimport;
-
-import java.util.ArrayList;
-import java.util.Collection;
-import java.util.Iterator;
-import java.util.List;
-import java.util.Map;
-import java.util.SortedMap;
-import java.util.TreeMap;
-
-public class SortedMapBackedCache implements DIHCache {
-  private SortedMap<Object,List<Map<String,Object>>> theMap = null;
-  private boolean isOpen = false;
-  private boolean isReadOnly = false;
-  String primaryKeyName = null;
-  
-  @SuppressWarnings("unchecked")
-  @Override
-  public void add(Map<String,Object> rec) {
-    checkOpen(true);
-    checkReadOnly();
-    
-    if (rec == null || rec.size() == 0) {
-      return;
-    }
-    
-    if (primaryKeyName == null) {
-      primaryKeyName = rec.keySet().iterator().next();
-    }
-    
-    Object pk = rec.get(primaryKeyName);
-    if (pk instanceof Collection<?>) {
-      Collection<Object> c = (Collection<Object>) pk;
-      if (c.size() != 1) {
-        throw new RuntimeException(
-            "The primary key must have exactly 1 element.");
-      }
-      pk = c.iterator().next();
-    }
-    //Rows with null keys are not added.
-    if(pk==null) {
-      return;
-    }
-    List<Map<String,Object>> thisKeysRecs = theMap.get(pk);
-    if (thisKeysRecs == null) {
-      thisKeysRecs = new ArrayList<>();
-      theMap.put(pk, thisKeysRecs);
-    }
-    thisKeysRecs.add(rec);
-  }
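-
-  // Illustrative usage (not part of the original source): rows sharing a
-  // primary key accumulate under a single key, e.g.
-  //   cache.open(context);
-  //   cache.add(Map.<String,Object>of("id", 1, "title", "a"));
-  //   cache.add(Map.<String,Object>of("id", 1, "title", "b"));
-  //   cache.iterator(1);  // yields both rows in insertion order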
-  
-  private void checkOpen(boolean shouldItBe) {
-    if (!isOpen && shouldItBe) {
-      throw new IllegalStateException(
-          "Must call open() before using this cache.");
-    }
-    if (isOpen && !shouldItBe) {
-      throw new IllegalStateException("The cache is already open.");
-    }
-  }
-  
-  private void checkReadOnly() {
-    if (isReadOnly) {
-      throw new IllegalStateException("Cache is read-only.");
-    }
-  }
-  
-  @Override
-  public void close() {
-    isOpen = false;
-  }
-  
-  @Override
-  public void delete(Object key) {
-    checkOpen(true);
-    checkReadOnly();
-    if(key==null) {
-      return;
-    }
-    theMap.remove(key);
-  }
-  
-  @Override
-  public void deleteAll() {
-    deleteAll(false);
-  }
-  
-  private void deleteAll(boolean readOnlyOk) {
-    if (!readOnlyOk) {
-      checkReadOnly();
-    }
-    if (theMap != null) {
-      theMap.clear();
-    }
-  }
-  
-  @Override
-  public void destroy() {
-    deleteAll(true);
-    theMap = null;
-    isOpen = false;
-  }
-  
-  @Override
-  public void flush() {
-    checkOpen(true);
-    checkReadOnly();
-  }
-  
-  @Override
-  public Iterator<Map<String,Object>> iterator(Object key) {
-    checkOpen(true);
-    if(key==null) {
-      return null;
-    }
-    if(key instanceof Iterable<?>) {
-      List<Map<String,Object>> vals = new ArrayList<>();
-      Iterator<?> iter = ((Iterable<?>) key).iterator();
-      while(iter.hasNext()) {
-        List<Map<String,Object>> val = theMap.get(iter.next());
-        if(val!=null) {
-          vals.addAll(val);
-        }
-      } 
-      if(vals.size()==0) {
-        return null;
-      }
-      return vals.iterator();
-    }    
-    List<Map<String,Object>> val = theMap.get(key);
-    if (val == null) {
-      return null;
-    }
-    return val.iterator();
-  }
-  
-  @Override
-  public Iterator<Map<String,Object>> iterator() {
-    return new Iterator<Map<String, Object>>() {
-        private Iterator<Map.Entry<Object,List<Map<String,Object>>>> theMapIter =
-            theMap.entrySet().iterator();
-        private List<Map<String,Object>> currentKeyResult = null;
-        private Iterator<Map<String,Object>> currentKeyResultIter = null;
-
-        @Override
-        public boolean hasNext() {
-          if (currentKeyResultIter != null) {
-            if (currentKeyResultIter.hasNext()) {
-              return true;
-            } else {
-              currentKeyResult = null;
-              currentKeyResultIter = null;
-            }
-          }
-
-          Map.Entry<Object,List<Map<String,Object>>> next = null;
-          if (theMapIter.hasNext()) {
-            next = theMapIter.next();
-            currentKeyResult = next.getValue();
-            currentKeyResultIter = currentKeyResult.iterator();
-            if (currentKeyResultIter.hasNext()) {
-              return true;
-            }
-          }
-          return false;
-        }
-
-        @Override
-        public Map<String,Object> next() {
-          if (currentKeyResultIter != null) {
-            if (currentKeyResultIter.hasNext()) {
-              return currentKeyResultIter.next();
-            } else {
-              currentKeyResult = null;
-              currentKeyResultIter = null;
-            }
-          }
-
-          Map.Entry<Object,List<Map<String,Object>>> next = null;
-          if (theMapIter.hasNext()) {
-            next = theMapIter.next();
-            currentKeyResult = next.getValue();
-            currentKeyResultIter = currentKeyResult.iterator();
-            if (currentKeyResultIter.hasNext()) {
-              return currentKeyResultIter.next();
-            }
-          }
-          return null;
-        }
-
-        @Override
-        public void remove() {
-          throw new UnsupportedOperationException();
-        }
-    };
-  }
-
-  @Override
-  public void open(Context context) {
-    checkOpen(false);
-    isOpen = true;
-    if (theMap == null) {
-      theMap = new TreeMap<>();
-    }
-    
-    String pkName = CachePropertyUtil.getAttributeValueAsString(context,
-        DIHCacheSupport.CACHE_PRIMARY_KEY);
-    if (pkName != null) {
-      primaryKeyName = pkName;
-    }
-    isReadOnly = false;
-    String readOnlyStr = CachePropertyUtil.getAttributeValueAsString(context,
-        DIHCacheSupport.CACHE_READ_ONLY);
-    if ("true".equalsIgnoreCase(readOnlyStr)) {
-      isReadOnly = true;
-    }
-  }
-  
-}
diff --git a/solr/contrib/dataimporthandler/src/java/org/apache/solr/handler/dataimport/SqlEntityProcessor.java b/solr/contrib/dataimporthandler/src/java/org/apache/solr/handler/dataimport/SqlEntityProcessor.java
deleted file mode 100644
index 8e0522a..0000000
--- a/solr/contrib/dataimporthandler/src/java/org/apache/solr/handler/dataimport/SqlEntityProcessor.java
+++ /dev/null
@@ -1,173 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.solr.handler.dataimport;
-
-import java.lang.invoke.MethodHandles;
-import java.util.Iterator;
-import java.util.Map;
-import org.slf4j.Logger;
-import org.slf4j.LoggerFactory;
-import java.util.regex.Matcher;
-import java.util.regex.Pattern;
-
-/**
- * <p>
- * An {@link EntityProcessor} instance which provides support for reading from
- * databases. It is used in conjunction with {@link JdbcDataSource}. This is the default
- * {@link EntityProcessor} if none is specified explicitly in data-config.xml
- * </p>
- * <p>
- * Refer to <a
- * href="http://wiki.apache.org/solr/DataImportHandler">http://wiki.apache.org/solr/DataImportHandler</a>
- * for more details.
- * </p>
- * <p>
- * <b>This API is experimental and may change in the future.</b>
- *
- *
- * @since solr 1.3
- */
-public class SqlEntityProcessor extends EntityProcessorBase {
-  private static final Logger log = LoggerFactory.getLogger(MethodHandles.lookup().lookupClass());
-
-  protected DataSource<Iterator<Map<String, Object>>> dataSource;
-
-  @Override
-  @SuppressWarnings("unchecked")
-  public void init(Context context) {
-    super.init(context);
-    dataSource = context.getDataSource();
-  }
-
-  protected void initQuery(String q) {
-    try {
-      DataImporter.QUERY_COUNT.get().incrementAndGet();
-      rowIterator = dataSource.getData(q);
-      this.query = q;
-    } catch (DataImportHandlerException e) {
-      throw e;
-    } catch (Exception e) {
-      log.error( "The query failed '{}'", q, e);
-      throw new DataImportHandlerException(DataImportHandlerException.SEVERE, e);
-    }
-  }
-
-  @Override
-  public Map<String, Object> nextRow() {    
-    if (rowIterator == null) {
-      String q = getQuery();
-      initQuery(context.replaceTokens(q));
-    }
-    return getNext();
-  }
-
-  @Override
-  public Map<String, Object> nextModifiedRowKey() {
-    if (rowIterator == null) {
-      String deltaQuery = context.getEntityAttribute(DELTA_QUERY);
-      if (deltaQuery == null)
-        return null;
-      initQuery(context.replaceTokens(deltaQuery));
-    }
-    return getNext();
-  }
-
-  @Override
-  public Map<String, Object> nextDeletedRowKey() {
-    if (rowIterator == null) {
-      String deletedPkQuery = context.getEntityAttribute(DEL_PK_QUERY);
-      if (deletedPkQuery == null)
-        return null;
-      initQuery(context.replaceTokens(deletedPkQuery));
-    }
-    return getNext();
-  }
-
-  @Override
-  public Map<String, Object> nextModifiedParentRowKey() {
-    if (rowIterator == null) {
-      String parentDeltaQuery = context.getEntityAttribute(PARENT_DELTA_QUERY);
-      if (parentDeltaQuery == null)
-        return null;
-      if (log.isInfoEnabled()) {
-        log.info("Running parentDeltaQuery for Entity: {}"
-            , context.getEntityAttribute("name"));
-      }
-      initQuery(context.replaceTokens(parentDeltaQuery));
-    }
-    return getNext();
-  }
-
-  public String getQuery() {
-    String queryString = context.getEntityAttribute(QUERY);
-    if (Context.FULL_DUMP.equals(context.currentProcess())) {
-      return queryString;
-    }
-    if (Context.DELTA_DUMP.equals(context.currentProcess())) {
-      String deltaImportQuery = context.getEntityAttribute(DELTA_IMPORT_QUERY);
-      if(deltaImportQuery != null) return deltaImportQuery;
-    }
-    log.warn("'deltaImportQuery' attribute is not specified for entity : {}", entityName);
-    return getDeltaImportQuery(queryString);
-  }
-
-  public String getDeltaImportQuery(String queryString) {    
-    StringBuilder sb = new StringBuilder(queryString);
-    if (SELECT_WHERE_PATTERN.matcher(queryString).find()) {
-      sb.append(" and ");
-    } else {
-      sb.append(" where ");
-    }
-    boolean first = true;
-    String[] primaryKeys = context.getEntityAttribute("pk").split(",");
-    for (String primaryKey : primaryKeys) {
-      if (!first) {
-        sb.append(" and ");
-      }
-      first = false;
-      Object val = context.resolve("dataimporter.delta." + primaryKey);
-      if (val == null) {
-        Matcher m = DOT_PATTERN.matcher(primaryKey);
-        if (m.find()) {
-          val = context.resolve("dataimporter.delta." + m.group(1));
-        }
-      }
-      sb.append(primaryKey).append(" = ");
-      if (val instanceof Number) {
-        sb.append(val.toString());
-      } else {
-        sb.append("'").append(val.toString()).append("'");
-      }
-    }
-    return sb.toString();
-  }
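-
-  // Illustrative example (assumed, not from the original source): for
-  // query="select * from item", pk="id" and a resolved value of "42" for
-  // dataimporter.delta.id, this yields:
-  //   select * from item where id = '42'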
-
-  private static final Pattern SELECT_WHERE_PATTERN = Pattern.compile(
-          "^\\s*(select\\b.*?\\b)(where).*", Pattern.CASE_INSENSITIVE);
-
-  public static final String QUERY = "query";
-
-  public static final String DELTA_QUERY = "deltaQuery";
-
-  public static final String DELTA_IMPORT_QUERY = "deltaImportQuery";
-
-  public static final String PARENT_DELTA_QUERY = "parentDeltaQuery";
-
-  public static final String DEL_PK_QUERY = "deletedPkQuery";
-
-  public static final Pattern DOT_PATTERN = Pattern.compile(".*?\\.(.*)$");
-}
diff --git a/solr/contrib/dataimporthandler/src/java/org/apache/solr/handler/dataimport/SqlEscapingEvaluator.java b/solr/contrib/dataimporthandler/src/java/org/apache/solr/handler/dataimport/SqlEscapingEvaluator.java
deleted file mode 100644
index 7f9c26e..0000000
--- a/solr/contrib/dataimporthandler/src/java/org/apache/solr/handler/dataimport/SqlEscapingEvaluator.java
+++ /dev/null
@@ -1,41 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.solr.handler.dataimport;
-
-import static org.apache.solr.handler.dataimport.DataImportHandlerException.SEVERE;
-
-import java.util.List;
-
-/**
- * <p> Escapes values in SQL queries. It escapes the value of the given expression
- * by doubling all single-quotes and double-quotes, and by doubling backslashes. </p>
- */
-public class SqlEscapingEvaluator extends Evaluator {
-  @Override
-  public String evaluate(String expression, Context context) {
-    List<Object> l = parseParams(expression, context.getVariableResolver());
-    if (l.size() != 1) {
-      throw new DataImportHandlerException(SEVERE, "'escapeSql' must have at least one parameter ");
-    }
-    String s = l.get(0).toString();
-    // escape single quote with two single quotes, double quote
-    // with two double quotes, and backslash with double backslash.
-    // See: http://dev.mysql.com/doc/refman/4.1/en/mysql-real-escape-string.html
-    return s.replaceAll("'", "''").replaceAll("\"", "\"\"").replaceAll("\\\\", "\\\\\\\\");
-  }
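-
-  // Illustrative usage (assumed): in a data-config.xml query template such as
-  //   query="select * from item where name = '${dataimporter.functions.escapeSql(item.name)}'"
-  // a value of O'Brien is emitted as O''Brien.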
-}
diff --git a/solr/contrib/dataimporthandler/src/java/org/apache/solr/handler/dataimport/TemplateTransformer.java b/solr/contrib/dataimporthandler/src/java/org/apache/solr/handler/dataimport/TemplateTransformer.java
deleted file mode 100644
index 75a6ff2..0000000
--- a/solr/contrib/dataimporthandler/src/java/org/apache/solr/handler/dataimport/TemplateTransformer.java
+++ /dev/null
@@ -1,115 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.solr.handler.dataimport;
-
-import java.lang.invoke.MethodHandles;
-import java.util.HashMap;
-import java.util.List;
-import java.util.ArrayList;
-import java.util.Map;
-import org.slf4j.Logger;
-import org.slf4j.LoggerFactory;
-
-/**
- * <p>
- * A {@link Transformer} which can put values into a column by resolving an expression
- * containing other columns
- * </p>
- * <p>
- * For example:<br>
- * &lt;field column="name" template="${e.lastName}, ${e.firstName}
- * ${e.middleName}" /&gt; will produce the name by combining values from
- * lastName, firstName and middleName fields as given in the template attribute.
- * </p>
- * <p>
- * Refer to <a
- * href="http://wiki.apache.org/solr/DataImportHandler">http://wiki.apache.org/solr/DataImportHandler</a>
- * for more details.
- * </p>
- * <p>
- * <b>This API is experimental and may change in the future.</b>
- *
- *
- * @since solr 1.3
- */
-public class TemplateTransformer extends Transformer {
-
-  private static final Logger log = LoggerFactory.getLogger(MethodHandles.lookup().lookupClass());
-  private Map<String ,List<String>> templateVsVars = new HashMap<>();
-
-  @Override
-  @SuppressWarnings("unchecked")
-  public Object transformRow(Map<String, Object> row, Context context) {
-
-
-    VariableResolver resolver = context.getVariableResolver();
-    // Add current row to the copy of resolver map
-
-    for (Map<String, String> map : context.getAllEntityFields()) {
-      map.entrySet();
-      String expr = map.get(TEMPLATE);
-      if (expr == null)
-        continue;
-
-      String column = map.get(DataImporter.COLUMN);
-
-      // Verify if all variables can be resolved or not
-      boolean resolvable = true;
-      List<String> variables = this.templateVsVars.get(expr);
-      if(variables == null){
-        variables = resolver.getVariables(expr);
-        this.templateVsVars.put(expr, variables);
-      }
-      for (String v : variables) {
-        if (resolver.resolve(v) == null) {
-          log.warn("Unable to resolve variable: {} while parsing expression: {}"
-              ,v , expr);
-          resolvable = false;
-        }
-      }
-
-      if (!resolvable)
-        continue;
-      if(variables.size() == 1 && expr.startsWith("${") && expr.endsWith("}")){
-        addToRow(column, row, resolver.resolve(variables.get(0)));
-      } else {
-        addToRow(column, row, resolver.replaceTokens(expr));
-      }
-    }
-
-    return row;
-  }
-
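-  // Appends a value to the row, promoting an existing scalar to a List on
-  // the second write so repeated template columns become multi-valued.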
-  @SuppressWarnings({"unchecked"})
-  private void addToRow(String key, Map<String, Object> row, Object value) {
-    Object prevVal = row.get(key);
-    if (prevVal != null) {
-      if (prevVal instanceof List) {
-        ((List) prevVal).add(value);
-      } else {
-        ArrayList<Object> valList = new ArrayList<>();
-        valList.add(prevVal);
-        valList.add(value);
-        row.put(key, valList);
-      }
-    } else {
-      row.put(key, value);
-    }
-  }
-    
-  public static final String TEMPLATE = "template";
-}
diff --git a/solr/contrib/dataimporthandler/src/java/org/apache/solr/handler/dataimport/Transformer.java b/solr/contrib/dataimporthandler/src/java/org/apache/solr/handler/dataimport/Transformer.java
deleted file mode 100644
index c7923e1..0000000
--- a/solr/contrib/dataimporthandler/src/java/org/apache/solr/handler/dataimport/Transformer.java
+++ /dev/null
@@ -1,50 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.solr.handler.dataimport;
-
-import java.util.Map;
-
-/**
- * <p>
- * Use this API to implement a custom transformer for any given entity
- * </p>
- * <p>
- * Implementations of this abstract class must provide a public no-args constructor.
- * </p>
- * <p>
- * Refer to <a
- * href="http://wiki.apache.org/solr/DataImportHandler">http://wiki.apache.org/solr/DataImportHandler</a>
- * for more details.
- * </p>
- * <p>
- * <b>This API is experimental and may change in the future.</b>
- *
- *
- * @since solr 1.3
- */
-public abstract class Transformer {
-  /**
-   * The input is a row of data and the output has to be a new row.
-   *
-   * @param context The current context
-   * @param row     A row of data
-   * @return The changed data. It must be a {@link Map}&lt;{@link String}, {@link Object}&gt; if it returns
-   *         only one row or if there are multiple rows to be returned it must
-   *         be a {@link java.util.List}&lt;{@link Map}&lt;{@link String}, {@link Object}&gt;&gt;
-   */
-  public abstract Object transformRow(Map<String, Object> row, Context context);
-}
diff --git a/solr/contrib/dataimporthandler/src/java/org/apache/solr/handler/dataimport/URLDataSource.java b/solr/contrib/dataimporthandler/src/java/org/apache/solr/handler/dataimport/URLDataSource.java
deleted file mode 100644
index 0beed25..0000000
--- a/solr/contrib/dataimporthandler/src/java/org/apache/solr/handler/dataimport/URLDataSource.java
+++ /dev/null
@@ -1,154 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.solr.handler.dataimport;
-
-import org.slf4j.Logger;
-import org.slf4j.LoggerFactory;
-
-import java.io.InputStream;
-import java.io.InputStreamReader;
-import java.io.Reader;
-import java.lang.invoke.MethodHandles;
-import java.net.URL;
-import java.net.URLConnection;
-import java.nio.charset.StandardCharsets;
-import java.util.Properties;
-import java.util.regex.Matcher;
-import java.util.regex.Pattern;
-
-/**
- * <p> A data source implementation which can be used to read character files using HTTP. </p> <p> Refer to <a
- * href="http://wiki.apache.org/solr/DataImportHandler">http://wiki.apache.org/solr/DataImportHandler</a> for more
- * details. </p>
- * <p>
- * <b>This API is experimental and may change in the future.</b>
- *
- *
- * @since solr 1.4
- */
-public class URLDataSource extends DataSource<Reader> {
-  private static final Logger log = LoggerFactory.getLogger(MethodHandles.lookup().lookupClass());
-
-  private String baseUrl;
-
-  private String encoding;
-
-  private int connectionTimeout = CONNECTION_TIMEOUT;
-
-  private int readTimeout = READ_TIMEOUT;
-
-  private Context context;
-
-  private Properties initProps;
-
-  public URLDataSource() {
-  }
-
-  @Override
-  public void init(Context context, Properties initProps) {
-    this.context = context;
-    this.initProps = initProps;
-    
-    baseUrl = getInitPropWithReplacements(BASE_URL);
-    String encodingProp = getInitPropWithReplacements(ENCODING);
-    if (encodingProp != null)
-      encoding = encodingProp;
-    String cTimeout = getInitPropWithReplacements(CONNECTION_TIMEOUT_FIELD_NAME);
-    String rTimeout = getInitPropWithReplacements(READ_TIMEOUT_FIELD_NAME);
-    if (cTimeout != null) {
-      try {
-        connectionTimeout = Integer.parseInt(cTimeout);
-      } catch (NumberFormatException e) {
-        log.warn("Invalid connection timeout: {}", cTimeout);
-      }
-    }
-    if (rTimeout != null) {
-      try {
-        readTimeout = Integer.parseInt(rTimeout);
-      } catch (NumberFormatException e) {
-        log.warn("Invalid read timeout: {}", rTimeout);
-      }
-    }
-  }
-
-  @Override
-  public Reader getData(String query) {
-    URL url = null;
-    try {
-      if (URIMETHOD.matcher(query).find()) url = new URL(query);
-      else url = new URL(baseUrl + query);
-
-      log.debug("Accessing URL: {}", url);
-
-      URLConnection conn = url.openConnection();
-      conn.setConnectTimeout(connectionTimeout);
-      conn.setReadTimeout(readTimeout);
-      InputStream in = conn.getInputStream();
-      String enc = encoding;
-      if (enc == null) {
-        String cType = conn.getContentType();
-        if (cType != null) {
-          Matcher m = CHARSET_PATTERN.matcher(cType);
-          if (m.find()) {
-            enc = m.group(1);
-          }
-        }
-      }
-      if (enc == null)
-        enc = UTF_8;
-      DataImporter.QUERY_COUNT.get().incrementAndGet();
-      return new InputStreamReader(in, enc);
-    } catch (Exception e) {
-      log.error("Exception thrown while getting data", e);
-      throw new DataImportHandlerException(DataImportHandlerException.SEVERE,
-              "Exception in invoking url " + url, e);
-    }
-  }
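-
-  // Charset resolution in getData() above: an explicit "encoding" init
-  // property wins; otherwise the charset= parameter of the response
-  // Content-Type (e.g. "text/xml; charset=ISO-8859-1") is used, and UTF-8
-  // is the final fallback.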
-
-  @Override
-  public void close() {
-  }
-
-  public String getBaseUrl() {
-    return baseUrl;
-  }
-
-  private String getInitPropWithReplacements(String propertyName) {
-    final String expr = initProps.getProperty(propertyName);
-    if (expr == null) {
-      return null;
-    }
-    return context.replaceTokens(expr);
-  }
-
-  static final Pattern URIMETHOD = Pattern.compile("\\w{3,}:/");
-
-  private static final Pattern CHARSET_PATTERN = Pattern.compile(".*?charset=(.*)$", Pattern.CASE_INSENSITIVE);
-
-  public static final String ENCODING = "encoding";
-
-  public static final String BASE_URL = "baseUrl";
-
-  public static final String UTF_8 = StandardCharsets.UTF_8.name();
-
-  public static final String CONNECTION_TIMEOUT_FIELD_NAME = "connectionTimeout";
-
-  public static final String READ_TIMEOUT_FIELD_NAME = "readTimeout";
-
-  public static final int CONNECTION_TIMEOUT = 5000;
-
-  public static final int READ_TIMEOUT = 10000;
-}
diff --git a/solr/contrib/dataimporthandler/src/java/org/apache/solr/handler/dataimport/UrlEvaluator.java b/solr/contrib/dataimporthandler/src/java/org/apache/solr/handler/dataimport/UrlEvaluator.java
deleted file mode 100644
index 8a6654c..0000000
--- a/solr/contrib/dataimporthandler/src/java/org/apache/solr/handler/dataimport/UrlEvaluator.java
+++ /dev/null
@@ -1,46 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.solr.handler.dataimport;
-
-import static org.apache.solr.handler.dataimport.DataImportHandlerException.SEVERE;
-import static org.apache.solr.handler.dataimport.DataImportHandlerException.wrapAndThrow;
-
-import java.net.URLEncoder;
-import java.util.List;
-
-/**
- * <p>URL-encodes the value of the given expression using UTF-8, suitable
- * for embedding in URLs.</p>
- *
- * @see java.net.URLEncoder#encode(String, String)
- */
-public class UrlEvaluator extends Evaluator {
-  @Override
-  public String evaluate(String expression, Context context) {
-    List<Object> l = parseParams(expression, context.getVariableResolver());
-    if (l.size() != 1) {
-      throw new DataImportHandlerException(SEVERE, "'encodeUrl' must have at least one parameter ");
-    }
-    String s = l.get(0).toString();
-
-    try {
-      return URLEncoder.encode(s, "UTF-8");
-    } catch (Exception e) {
-      wrapAndThrow(SEVERE, e, "Unable to encode expression: " + expression + " with value: " + s);
-      return null;
-    }
-  }
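-
-  // Illustrative usage (assumed): ${dih.functions.encodeUrl(item.name)} turns
-  // "rock & roll" into "rock+%26+roll" (application/x-www-form-urlencoded).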
-}
diff --git a/solr/contrib/dataimporthandler/src/java/org/apache/solr/handler/dataimport/VariableResolver.java b/solr/contrib/dataimporthandler/src/java/org/apache/solr/handler/dataimport/VariableResolver.java
deleted file mode 100644
index 090e21b..0000000
--- a/solr/contrib/dataimporthandler/src/java/org/apache/solr/handler/dataimport/VariableResolver.java
+++ /dev/null
@@ -1,211 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.solr.handler.dataimport;
-
-import java.util.HashMap;
-import java.util.List;
-import java.util.Map;
-import java.util.Properties;
-import java.util.WeakHashMap;
-import java.util.function.Function;
-import java.util.regex.Matcher;
-import java.util.regex.Pattern;
-
-import org.apache.solr.common.util.Cache;
-import org.apache.solr.common.util.MapBackedCache;
-import org.apache.solr.update.processor.TemplateUpdateProcessorFactory;
-
-import static org.apache.solr.update.processor.TemplateUpdateProcessorFactory.Resolved;
-
-/**
- * <p>
- * A set of nested maps that can resolve variables by namespaces. Variables are
- * enclosed with a dollar sign then an opening curly brace, ending with a
- * closing curly brace. Namespaces are delimited with '.' (period).
- * </p>
- * <p>
- * This class also has special logic to resolve evaluator calls by recognizing
- * the reserved function namespace: dataimporter.functions.xxx
- * </p>
- * <p>
- * This class caches strings that have already been resolved from the current
- * DIH import.
- * </p>
- * <b>This API is experimental and may change in the future.</b>
- * 
- * 
- * @since solr 1.3
- */
-public class VariableResolver {
-  
-  private static final Pattern DOT_PATTERN = Pattern.compile("[.]");
-  private static final Pattern EVALUATOR_FORMAT_PATTERN = Pattern
-      .compile("^(\\w*?)\\((.*?)\\)$");
-  private Map<String,Object> rootNamespace;
-  private Map<String,Evaluator> evaluators;
-  private Cache<String,Resolved> cache = new MapBackedCache<>(new WeakHashMap<>());
-  private Function<String,Object> fun = this::resolve;
-
-  public static final String FUNCTIONS_NAMESPACE = "dataimporter.functions.";
-  public static final String FUNCTIONS_NAMESPACE_SHORT = "dih.functions.";
-  
-  public VariableResolver() {
-    rootNamespace = new HashMap<>();
-  }
-  
-  public VariableResolver(Properties defaults) {
-    rootNamespace = new HashMap<>();
-    for (Map.Entry<Object,Object> entry : defaults.entrySet()) {
-      rootNamespace.put(entry.getKey().toString(), entry.getValue());
-    }
-  }
-  
-  public VariableResolver(Map<String,Object> defaults) {
-    rootNamespace = new HashMap<>(defaults);
-  }
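-  
-  // Illustrative usage (assumed, not from the original source):
-  //   VariableResolver vr = new VariableResolver();
-  //   vr.addNamespace("e", Map.<String,Object>of("firstName", "Ada"));
-  //   vr.replaceTokens("Hello ${e.firstName}");   // -> "Hello Ada"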
-  
-  /**
-   * Resolves the value bound to the given name.
-   * 
-   * @param name
-   *          the String to be resolved
-   * @return the result of evaluating the given name; the empty string if it
-   *         cannot be resolved
-   */
-  public Object resolve(String name) {
-    Object r = null;
-    if (name != null) {
-      String[] nameParts = DOT_PATTERN.split(name);
-      CurrentLevel cr = currentLevelMap(nameParts,
-          rootNamespace, false);
-      Map<String,Object> currentLevel = cr.map;
-      r = currentLevel.get(nameParts[nameParts.length - 1]);
-      if (r == null && name.startsWith(FUNCTIONS_NAMESPACE)
-          && name.length() > FUNCTIONS_NAMESPACE.length()) {
-        return resolveEvaluator(FUNCTIONS_NAMESPACE, name);
-      }
-      if (r == null && name.startsWith(FUNCTIONS_NAMESPACE_SHORT)
-          && name.length() > FUNCTIONS_NAMESPACE_SHORT.length()) {
-        return resolveEvaluator(FUNCTIONS_NAMESPACE_SHORT, name);
-      }
-      if (r == null) {
-        StringBuilder sb = new StringBuilder();
-        for(int i=cr.level ; i<nameParts.length ; i++) {
-          if(sb.length()>0) {
-            sb.append(".");
-          }
-          sb.append(nameParts[i]);
-        }
-        r = cr.map.get(sb.toString());
-      }      
-      if (r == null) {
-        r = System.getProperty(name);
-      }
-    }
-    return r == null ? "" : r;
-  }
-  
-  private Object resolveEvaluator(String namespace, String name) {
-    if (evaluators == null) {
-      return "";
-    }
-    Matcher m = EVALUATOR_FORMAT_PATTERN.matcher(name
-        .substring(namespace.length()));
-    if (m.find()) {
-      String fname = m.group(1);
-      Evaluator evaluator = evaluators.get(fname);
-      if (evaluator == null) return "";
-      ContextImpl ctx = new ContextImpl(null, this, null, null, null, null,
-          null);
-      String g2 = m.group(2);
-      return evaluator.evaluate(g2, ctx);
-    } else {
-      return "";
-    }
-  }
-  
-  /**
-   * Given a String with placeholders, replaces them with their resolved values.
-   * 
-   * @return the string with the placeholders replaced with their values
-   */
-  public String replaceTokens(String template) {
-    return TemplateUpdateProcessorFactory.replaceTokens(template, cache, fun, TemplateUpdateProcessorFactory.DOLLAR_BRACES_PLACEHOLDER_PATTERN);
-  }
-  public void addNamespace(String name, Map<String,Object> newMap) {
-    if (newMap != null) {
-      if (name != null) {
-        String[] nameParts = DOT_PATTERN.split(name);
-        Map<String,Object> nameResolveLevel = currentLevelMap(nameParts,
-            rootNamespace, false).map;
-        nameResolveLevel.put(nameParts[nameParts.length - 1], newMap);
-      } else {
-        for (Map.Entry<String,Object> entry : newMap.entrySet()) {
-          String[] keyParts = DOT_PATTERN.split(entry.getKey());
-          Map<String,Object> currentLevel = rootNamespace;
-          currentLevel = currentLevelMap(keyParts, currentLevel, false).map;
-          currentLevel.put(keyParts[keyParts.length - 1], entry.getValue());
-        }
-      }
-    }
-  }
-
-  public List<String> getVariables(String expr) {
-    return TemplateUpdateProcessorFactory.getVariables(expr, cache, TemplateUpdateProcessorFactory.DOLLAR_BRACES_PLACEHOLDER_PATTERN);
-  }
-
-  static class CurrentLevel {
-    final Map<String,Object> map;
-    final int level;
-    CurrentLevel(int level, Map<String,Object> map) {
-      this.level = level;
-      this.map = map;
-    }   
-  }
-  
-  private CurrentLevel currentLevelMap(String[] keyParts,
-      Map<String,Object> currentLevel, boolean includeLastLevel) {
-    int j = includeLastLevel ? keyParts.length : keyParts.length - 1;
-    for (int i = 0; i < j; i++) {
-      Object o = currentLevel.get(keyParts[i]);
-      if (o == null) {
-        if(i == j-1) {
-          Map<String,Object> nextLevel = new HashMap<>();
-          currentLevel.put(keyParts[i], nextLevel);
-          currentLevel = nextLevel;
-        } else {
-          return new CurrentLevel(i, currentLevel);
-        }
-      } else if (o instanceof Map<?,?>) {
-        @SuppressWarnings("unchecked")
-        Map<String,Object> nextLevel = (Map<String,Object>) o;
-        currentLevel = nextLevel;
-      } else {
-        throw new AssertionError(
-            "Non-leaf nodes should be of type java.util.Map");
-      }
-    }
-    return new CurrentLevel(j-1, currentLevel);
-  }
-  
-  public void removeNamespace(String name) {
-    rootNamespace.remove(name);
-  }
-  
-  public void setEvaluators(Map<String,Evaluator> evaluators) {
-    this.evaluators = evaluators;
-  }
-}
diff --git a/solr/contrib/dataimporthandler/src/java/org/apache/solr/handler/dataimport/XPathEntityProcessor.java b/solr/contrib/dataimporthandler/src/java/org/apache/solr/handler/dataimport/XPathEntityProcessor.java
deleted file mode 100644
index 67dd80e..0000000
--- a/solr/contrib/dataimporthandler/src/java/org/apache/solr/handler/dataimport/XPathEntityProcessor.java
+++ /dev/null
@@ -1,555 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.solr.handler.dataimport;
-
-import static org.apache.solr.handler.dataimport.DataImportHandlerException.SEVERE;
-import static org.apache.solr.handler.dataimport.DataImportHandlerException.wrapAndThrow;
-import org.apache.solr.core.SolrCore;
-import org.apache.lucene.analysis.util.ResourceLoader;
-import org.apache.solr.util.SystemIdResolver;
-import org.apache.solr.common.util.XMLErrorLogger;
-import org.slf4j.Logger;
-import org.slf4j.LoggerFactory;
-import org.apache.commons.io.IOUtils;
-
-import javax.xml.transform.TransformerException;
-import javax.xml.transform.TransformerFactory;
-import javax.xml.transform.stream.StreamResult;
-import javax.xml.transform.stream.StreamSource;
-import java.io.CharArrayReader;
-import java.io.CharArrayWriter;
-import java.io.Reader;
-import java.lang.invoke.MethodHandles;
-import java.util.*;
-import java.util.concurrent.ArrayBlockingQueue;
-import java.util.concurrent.BlockingQueue;
-import java.util.concurrent.TimeUnit;
-import java.util.concurrent.atomic.AtomicBoolean;
-import java.util.concurrent.atomic.AtomicReference;
-
-/**
- * <p> An implementation of {@link EntityProcessor} which uses a streaming xpath parser to extract values out of XML documents.
- * It is typically used in conjunction with {@link URLDataSource} or {@link FileDataSource}. </p> <p> Refer to <a
- * href="http://wiki.apache.org/solr/DataImportHandler">http://wiki.apache.org/solr/DataImportHandler</a> for more
- * details. </p>
- * <p>
- * <b>This API is experimental and may change in the future.</b>
- *
- *
- * @see XPathRecordReader
- * @since solr 1.3
- */
-public class XPathEntityProcessor extends EntityProcessorBase {
-  private static final Logger log = LoggerFactory.getLogger(MethodHandles.lookup().lookupClass());
-  private static final XMLErrorLogger xmllog = new XMLErrorLogger(log);
-
-  private static final Map<String, Object> END_MARKER = new HashMap<>();
-  
-  protected List<String> placeHolderVariables;
-
-  protected List<String> commonFields;
-
-  private String pk;
-
-  private XPathRecordReader xpathReader;
-
-  protected DataSource<Reader> dataSource;
-
-  protected javax.xml.transform.Transformer xslTransformer;
-
-  protected boolean useSolrAddXml = false;
-
-  protected boolean streamRows = false;
-
-  // Amount of time to block reading/writing to queue when streaming
-  protected int blockingQueueTimeOut = 10;
-  
-  // Units for blockingQueueTimeOut
-  protected TimeUnit blockingQueueTimeOutUnits = TimeUnit.SECONDS;
-  
-  // Number of rows to queue for asynchronous processing
-  protected int blockingQueueSize = 1000;
-
-  protected Thread publisherThread;
-
-  protected boolean reinitXPathReader = true;
-  
-  @Override
-  @SuppressWarnings("unchecked")
-  public void init(Context context) {
-    super.init(context);
-    if (reinitXPathReader)
-      initXpathReader(context.getVariableResolver());
-    pk = context.getEntityAttribute("pk");
-    dataSource = context.getDataSource();
-    rowIterator = null;
-
-  }
-
-  private void initXpathReader(VariableResolver resolver) {
-    reinitXPathReader = false;
-    useSolrAddXml = Boolean.parseBoolean(context
-            .getEntityAttribute(USE_SOLR_ADD_SCHEMA));
-    streamRows = Boolean.parseBoolean(context
-            .getEntityAttribute(STREAM));
-    if (context.getResolvedEntityAttribute("batchSize") != null) {
-      blockingQueueSize = Integer.parseInt(context.getEntityAttribute("batchSize"));
-    }
-    if (context.getResolvedEntityAttribute("readTimeOut") != null) {
-      blockingQueueTimeOut = Integer.parseInt(context.getEntityAttribute("readTimeOut"));
-    }
-    String xslt = context.getEntityAttribute(XSL);
-    if (xslt != null) {
-      xslt = context.replaceTokens(xslt);
-      try {
-        // create an instance of TransformerFactory
-        TransformerFactory transFact = TransformerFactory.newInstance();
-        final SolrCore core = context.getSolrCore();
-        final StreamSource xsltSource;
-        if (core != null) {
-          final ResourceLoader loader = core.getResourceLoader();
-          transFact.setURIResolver(new SystemIdResolver(loader).asURIResolver());
-          xsltSource = new StreamSource(loader.openResource(xslt),
-            SystemIdResolver.createSystemIdFromResourceName(xslt));
-        } else {
-          // fallback for tests
-          xsltSource = new StreamSource(xslt);
-        }
-        transFact.setErrorListener(xmllog);
-        try {
-          xslTransformer = transFact.newTransformer(xsltSource);
-        } finally {
-          // some XML parsers are broken and don't close the byte stream (but they should according to spec)
-          IOUtils.closeQuietly(xsltSource.getInputStream());
-        }
-        if (log.isInfoEnabled()) {
-          log.info("Using xslTransformer: {}", xslTransformer.getClass().getName());
-        }
-      } catch (Exception e) {
-        throw new DataImportHandlerException(SEVERE,
-                "Error initializing XSL ", e);
-      }
-    }
-
-    if (useSolrAddXml) {
-      // Support solr add documents
-      xpathReader = new XPathRecordReader("/add/doc");
-      xpathReader.addField("name", "/add/doc/field/@name", true);
-      xpathReader.addField("value", "/add/doc/field", true);
-    } else {
-      String forEachXpath = context.getResolvedEntityAttribute(FOR_EACH);
-      if (forEachXpath == null)
-        throw new DataImportHandlerException(SEVERE,
-                "Entity : " + context.getEntityAttribute("name")
-                        + " must have a 'forEach' attribute");
-      // a resolved value that differs from the raw attribute means 'forEach' contains a template
-      if (!forEachXpath.equals(context.getEntityAttribute(FOR_EACH))) reinitXPathReader = true;
-
-      try {
-        xpathReader = new XPathRecordReader(forEachXpath);
-        for (Map<String, String> field : context.getAllEntityFields()) {
-          if (field.get(XPATH) == null)
-            continue;
-          int flags = 0;
-          if ("true".equals(field.get("flatten"))) {
-            flags = XPathRecordReader.FLATTEN;
-          }
-          String xpath = field.get(XPATH);
-          xpath = context.replaceTokens(xpath);
-          // !xpath.equals(field.get(XPATH)) means the field xpath has a template;
-          // in that case ensure that the XPathRecordReader is reinitialized
-          // for each XML document
-          if (!xpath.equals(field.get(XPATH)) && !context.isRootEntity()) reinitXPathReader = true;
-          xpathReader.addField(field.get(DataImporter.COLUMN),
-                  xpath,
-                  Boolean.parseBoolean(field.get(DataImporter.MULTI_VALUED)),
-                  flags);
-        }
-      } catch (RuntimeException e) {
-        throw new DataImportHandlerException(SEVERE,
-                "Exception while reading xpaths for fields", e);
-      }
-    }
-    String url = context.getEntityAttribute(URL);
-    List<String> l = url == null ? Collections.emptyList() : resolver.getVariables(url);
-    for (String s : l) {
-      if (s.startsWith(entityName + ".")) {
-        if (placeHolderVariables == null)
-          placeHolderVariables = new ArrayList<>();
-        placeHolderVariables.add(s.substring(entityName.length() + 1));
-      }
-    }
-    for (Map<String, String> fld : context.getAllEntityFields()) {
-      if (fld.get(COMMON_FIELD) != null && "true".equals(fld.get(COMMON_FIELD))) {
-        if (commonFields == null)
-          commonFields = new ArrayList<>();
-        commonFields.add(fld.get(DataImporter.COLUMN));
-      }
-    }
-
-  }
-
-  @Override
-  public Map<String, Object> nextRow() {
-    Map<String, Object> result;
-
-    if (!context.isRootEntity())
-      return fetchNextRow();
-
-    while (true) {
-      result = fetchNextRow();
-
-      if (result == null)
-        return null;
-
-      if (pk == null || result.get(pk) != null)
-        return result;
-    }
-  }
-
-  @Override
-  public void postTransform(Map<String, Object> r) {
-    readUsefulVars(r);
-  }
-
-  @SuppressWarnings("unchecked")
-  private Map<String, Object> fetchNextRow() {
-    Map<String, Object> r = null;
-    while (true) {
-      if (rowIterator == null)
-        initQuery(context.replaceTokens(context.getEntityAttribute(URL)));
-      r = getNext();
-      if (r == null) {
-        Object hasMore = context.getSessionAttribute(HAS_MORE, Context.SCOPE_ENTITY);
-        try {
-          if ("true".equals(hasMore) || Boolean.TRUE.equals(hasMore)) {
-            String url = (String) context.getSessionAttribute(NEXT_URL, Context.SCOPE_ENTITY);
-            if (url == null)
-              url = context.getEntityAttribute(URL);
-            addNamespace();
-            initQuery(context.replaceTokens(url));
-            r = getNext();
-            if (r == null)
-              return null;
-          } else {
-            return null;
-          }
-        } finally {
-          context.setSessionAttribute(HAS_MORE,null,Context.SCOPE_ENTITY);
-          context.setSessionAttribute(NEXT_URL,null,Context.SCOPE_ENTITY);
-        }
-      }
-      addCommonFields(r);
-      return r;
-    }
-  }
-
-  private void addNamespace() {
-    Map<String, Object> namespace = new HashMap<>();
-    Set<String> allNames = new HashSet<>();
-    if (commonFields != null) allNames.addAll(commonFields);
-    if (placeHolderVariables != null) allNames.addAll(placeHolderVariables);
-    if(allNames.isEmpty()) return;
-
-    for (String name : allNames) {
-      Object val = context.getSessionAttribute(name, Context.SCOPE_ENTITY);
-      if (val != null) namespace.put(name, val);
-    }
-    context.getVariableResolver().addNamespace(entityName, namespace);
-  }
-
-  private void addCommonFields(Map<String, Object> r) {
-    if(commonFields != null){
-      for (String commonField : commonFields) {
-        if(r.get(commonField) == null) {
-          Object val = context.getSessionAttribute(commonField, Context.SCOPE_ENTITY);
-          if(val != null) r.put(commonField, val);
-        }
-
-      }
-    }
-
-  }
-
-  @SuppressWarnings({"unchecked"})
-  private void initQuery(String s) {
-    Reader data = null;
-    try {
-      final List<Map<String, Object>> rows = new ArrayList<>();
-      try {
-        data = dataSource.getData(s);
-      } catch (Exception e) {
-        if (ABORT.equals(onError)) {
-          wrapAndThrow(SEVERE, e);
-        } else if (SKIP.equals(onError)) {
-          if (log.isDebugEnabled()) {
-            log.debug("Skipping url : {}", s, e);
-          }
-          wrapAndThrow(DataImportHandlerException.SKIP, e);
-        } else {
-          log.warn("Failed for url : {}", s, e);
-          rowIterator = Collections.EMPTY_LIST.iterator();
-          return;
-        }
-      }
-      if (xslTransformer != null) {
-        try {
-          SimpleCharArrayReader caw = new SimpleCharArrayReader();
-          xslTransformer.transform(new StreamSource(data),
-                  new StreamResult(caw));
-          data = caw.getReader();
-        } catch (TransformerException e) {
-          if (ABORT.equals(onError)) {
-            wrapAndThrow(SEVERE, e, "Exception in applying XSL Transformation");
-          } else if (SKIP.equals(onError)) {
-            wrapAndThrow(DataImportHandlerException.SKIP, e);
-          } else {
-            log.warn("Failed for url : {}", s, e);
-            rowIterator = Collections.EMPTY_LIST.iterator();
-            return;
-          }
-        }
-      }
-      if (streamRows) {
-        rowIterator = getRowIterator(data, s);
-      } else {
-        try {
-          xpathReader.streamRecords(data, (record, xpath) -> rows.add(readRow(record, xpath)));
-        } catch (Exception e) {
-          String msg = "Parsing failed for xml, url:" + s + " rows processed:" + rows.size();
-          if (rows.size() > 0) msg += " last row: " + rows.get(rows.size() - 1);
-          if (ABORT.equals(onError)) {
-            wrapAndThrow(SEVERE, e, msg);
-          } else if (SKIP.equals(onError)) {
-            log.warn(msg, e);
-            Map<String, Object> map = new HashMap<>();
-            map.put(DocBuilder.SKIP_DOC, Boolean.TRUE);
-            rows.add(map);
-          } else if (CONTINUE.equals(onError)) {
-            log.warn(msg, e);
-          }
-        }
-        rowIterator = rows.iterator();
-      }
-    } finally {
-      if (!streamRows) {
-        closeIt(data);
-      }
-
-    }
-  }
-
-  private void closeIt(Reader data) {
-    try {
-      data.close();
-    } catch (Exception e) { /* Ignore */
-    }
-  }
-
-  @SuppressWarnings({"unchecked"})
-  protected Map<String, Object> readRow(Map<String, Object> record, String xpath) {
-    if (useSolrAddXml) {
-      List<String> names = (List<String>) record.get("name");
-      List<String> values = (List<String>) record.get("value");
-      Map<String, Object> row = new HashMap<>();
-      for (int i = 0; i < names.size() && i < values.size(); i++) {
-        if (row.containsKey(names.get(i))) {
-          Object existing = row.get(names.get(i));
-          if (existing instanceof List) {
-            @SuppressWarnings({"rawtypes"})
-            List list = (List) existing;
-            list.add(values.get(i));
-          } else {
-            @SuppressWarnings({"rawtypes"})
-            List list = new ArrayList();
-            list.add(existing);
-            list.add(values.get(i));
-            row.put(names.get(i), list);
-          }
-        } else {
-          row.put(names.get(i), values.get(i));
-        }
-      }
-      return row;
-    } else {
-      record.put(XPATH_FIELD_NAME, xpath);
-      return record;
-    }
-  }
-
-
-  private static class SimpleCharArrayReader extends CharArrayWriter {
-    public Reader getReader() {
-      return new CharArrayReader(super.buf, 0, super.count);
-    }
-
-  }
-
-  @SuppressWarnings("unchecked")
-  private Map<String, Object> readUsefulVars(Map<String, Object> r) {
-    Object val = r.get(HAS_MORE);
-    if (val != null)
-      context.setSessionAttribute(HAS_MORE, val,Context.SCOPE_ENTITY);
-    val = r.get(NEXT_URL);
-    if (val != null)
-      context.setSessionAttribute(NEXT_URL, val,Context.SCOPE_ENTITY);
-    if (placeHolderVariables != null) {
-      for (String s : placeHolderVariables) {
-        val = r.get(s);
-        context.setSessionAttribute(s, val,Context.SCOPE_ENTITY);
-      }
-    }
-    if (commonFields != null) {
-      for (String s : commonFields) {
-        Object commonVal = r.get(s);
-        if (commonVal != null) {
-          context.setSessionAttribute(s, commonVal,Context.SCOPE_ENTITY);
-        }
-      }
-    }
-    return r;
-
-  }
-
-  private Iterator<Map<String, Object>> getRowIterator(final Reader data, final String s) {
-    // nothing atomic about it; we just need a strong reference
-    final AtomicReference<Exception> exp = new AtomicReference<>();
-    final BlockingQueue<Map<String, Object>> blockingQueue = new ArrayBlockingQueue<>(blockingQueueSize);
-    final AtomicBoolean isEnd = new AtomicBoolean(false);
-    final AtomicBoolean throwExp = new AtomicBoolean(true);
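-    // Producer/consumer handshake: the publisher thread below parses the XML
-    // and offers rows into the bounded queue; the returned iterator is the
-    // consumer. isEnd asks the producer to abandon parsing, and END_MARKER
-    // tells the consumer the stream is exhausted.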
-    publisherThread = new Thread() {
-      @Override
-      public void run() {
-        try {
-          xpathReader.streamRecords(data, (record, xpath) -> {
-            if (isEnd.get()) {
-              throwExp.set(false);
-              // End the streaming; otherwise parsing would go on forever
-              // even though the consumer has gone away
-              throw new RuntimeException("BREAK");
-            }
-            Map<String, Object> row;
-            try {
-              row = readRow(record, xpath);
-            } catch (Exception e) {
-              isEnd.set(true);
-              return;
-            }
-            offer(row);
-          });
-        } catch (Exception e) {
-          if(throwExp.get()) exp.set(e);
-        } finally {
-          closeIt(data);
-          if (!isEnd.get()) {
-            offer(END_MARKER);
-          }
-        }
-      }
-      
-      private void offer(Map<String, Object> row) {
-        try {
-          while (!blockingQueue.offer(row, blockingQueueTimeOut, blockingQueueTimeOutUnits)) {
-            if (isEnd.get()) return;
-            log.debug("Timeout elapsed writing records.  Perhaps buffer size should be increased.");
-          }
-        } catch (InterruptedException e) {
-          return;
-        } finally {
-          synchronized (this) {
-            notifyAll();
-          }
-        }
-      }
-    };
-    
-    publisherThread.start();
-
-    return new Iterator<Map<String, Object>>() {
-      private Map<String, Object> lastRow;
-      int count = 0;
-
-      @Override
-      public boolean hasNext() {
-        return !isEnd.get();
-      }
-
-      @Override
-      public Map<String, Object> next() {
-        Map<String, Object> row;
-        
-        do {
-          try {
-            row = blockingQueue.poll(blockingQueueTimeOut, blockingQueueTimeOutUnits);
-            if (row == null) {
-              log.debug("Timeout elapsed reading records.");
-            }
-          } catch (InterruptedException e) {
-            log.debug("Caught InterruptedException while waiting for row.  Aborting.");
-            isEnd.set(true);
-            return null;
-          }
-        } while (row == null);
-        
-        if (row == END_MARKER) {
-          isEnd.set(true);
-          if (exp.get() != null) {
-            String msg = "Parsing failed for xml, url:" + s + " rows processed in this xml:" + count;
-            if (lastRow != null) msg += " last row in this xml:" + lastRow;
-            if (ABORT.equals(onError)) {
-              wrapAndThrow(SEVERE, exp.get(), msg);
-            } else if (SKIP.equals(onError)) {
-              wrapAndThrow(DataImportHandlerException.SKIP, exp.get());
-            } else {
-              log.warn(msg, exp.get());
-            }
-          }
-          return null;
-        } 
-        count++;
-        return lastRow = row;
-      }
-
-      @Override
-      public void remove() {
-        /*no op*/
-      }
-    };
-
-  }
-
-
-  public static final String URL = "url";
-
-  public static final String HAS_MORE = "$hasMore";
-
-  public static final String NEXT_URL = "$nextUrl";
-
-  public static final String XPATH_FIELD_NAME = "$forEach";
-
-  public static final String FOR_EACH = "forEach";
-
-  public static final String XPATH = "xpath";
-
-  public static final String COMMON_FIELD = "commonField";
-
-  public static final String USE_SOLR_ADD_SCHEMA = "useSolrAddSchema";
-
-  public static final String XSL = "xsl";
-
-  public static final String STREAM = "stream";
-
-}
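
The streaming iterator removed above decouples the StAX parsing thread from the consumer with a bounded blocking queue and an `END_MARKER` sentinel that is compared by reference. A minimal standalone sketch of that producer/consumer idiom follows; `StreamingRows` and its names are illustrative, not the DIH identifiers:

```java
import java.util.Iterator;
import java.util.List;
import java.util.NoSuchElementException;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public final class StreamingRows {

  // Identity-compared sentinel marking the end of the stream ("poison pill").
  private static final String END_MARKER = new String("END");

  public static Iterator<String> produce(Iterable<String> source) {
    BlockingQueue<String> queue = new ArrayBlockingQueue<>(16); // bounded: applies backpressure
    Thread publisher = new Thread(() -> {
      try {
        for (String row : source) {
          queue.put(row); // blocks when the consumer falls behind
        }
      } catch (InterruptedException e) {
        Thread.currentThread().interrupt();
      } finally {
        try {
          queue.put(END_MARKER); // always signal completion, even on failure
        } catch (InterruptedException e) {
          Thread.currentThread().interrupt();
        }
      }
    });
    publisher.start();

    return new Iterator<String>() {
      private String next;
      private boolean done;

      @Override public boolean hasNext() {
        if (done) return false;
        if (next != null) return true;
        try {
          next = queue.take(); // wait for the producer
        } catch (InterruptedException e) {
          Thread.currentThread().interrupt();
          done = true;
          return false;
        }
        if (next == END_MARKER) { // reference equality, as with the DIH sentinel
          done = true;
          next = null;
          return false;
        }
        return true;
      }

      @Override public String next() {
        if (!hasNext()) throw new NoSuchElementException();
        String row = next;
        next = null;
        return row;
      }
    };
  }

  public static void main(String[] args) {
    Iterator<String> it = produce(List.of("row1", "row2", "row3"));
    while (it.hasNext()) System.out.println(it.next());
  }
}
```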
diff --git a/solr/contrib/dataimporthandler/src/java/org/apache/solr/handler/dataimport/XPathRecordReader.java b/solr/contrib/dataimporthandler/src/java/org/apache/solr/handler/dataimport/XPathRecordReader.java
deleted file mode 100644
index 0a4638f..0000000
--- a/solr/contrib/dataimporthandler/src/java/org/apache/solr/handler/dataimport/XPathRecordReader.java
+++ /dev/null
@@ -1,670 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.solr.handler.dataimport;
-
-import org.apache.solr.common.util.XMLErrorLogger;
-import org.apache.solr.common.EmptyEntityResolver;
-import javax.xml.stream.XMLInputFactory;
-import static javax.xml.stream.XMLStreamConstants.*;
-import javax.xml.stream.XMLStreamException;
-import javax.xml.stream.XMLStreamReader;
-import java.io.IOException;
-import java.io.Reader;
-import java.lang.invoke.MethodHandles;
-import java.util.*;
-import java.util.regex.Matcher;
-import java.util.regex.Pattern;
-import org.slf4j.Logger;
-import org.slf4j.LoggerFactory;
-
-/**
- * <p>
- * A streaming xpath parser which uses StAX for XML parsing. It supports only
- * a subset of xpath syntax.
- * </p><pre>
- * /a/b/subject[@qualifier='fullTitle']
- * /a/b/subject[@qualifier=]/subtag
- * /a/b/subject/@qualifier
- * //a
- * //a/b...
- * /a//b
- * /a//b...
- * /a/b/c
- * </pre>
- * A record is a Map&lt;String,Object&gt;. The key is the provided name
- * and the value is a String or a List&lt;String&gt;.
- *
- * This class is thread-safe for parsing XML, but adding fields is not
- * thread-safe. The recommended usage is to call addField() from one thread
- * and then share the instance across threads.
- * <p>
- * <b>This API is experimental and may change in the future.</b>
- *
- * @since solr 1.3
- */
-public class XPathRecordReader {
-  private static final Logger log = LoggerFactory.getLogger(MethodHandles.lookup().lookupClass());
-  private static final XMLErrorLogger XMLLOG = new XMLErrorLogger(log);
-
-  private Node rootNode = new Node("/", null);
-
-  /** 
-   * The FLATTEN flag indicates that all text and cdata under a specific
-   * tag should be recursively fetched and appended to the current Node's
-   * value.
-   */
-  public static final int FLATTEN = 1;
-
-  /**
-   * A constructor called with a '|' separated list of Xpath expressions
-   * which define sub-sections of the XML stream that are to be emitted as
-   * separate records.
-   * 
-   * @param forEachXpath  The XPATH for which a record is emitted. Once the
-   * xpath tag is encountered, the Node.parse method starts collecting wanted
-   * fields and, at the close of the tag, a record is emitted containing all
-   * fields collected since the tag start. Once emitted, the collected fields
-   * are cleared. Any fields collected in the parent tag or above will also be
-   * included in the record, but these are not cleared after emitting the
-   * record.
-   *
-   * It uses the ' | ' syntax of XPATH to pass in multiple xpaths.
-   */
-  public XPathRecordReader(String forEachXpath) {
-    String[] splits = forEachXpath.split("\\|");
-    for (String split : splits) {
-      split = split.trim();
-      if (split.startsWith("//"))
-         throw new RuntimeException("forEach cannot start with '//': " + split);
-      if (split.length() == 0)
-        continue;
-      // The created Node has a name set to the full forEach attribute xpath
-      addField0(split, split, false, true, 0);
-    }
-  }
-
-  /**
-   * A wrapper around <code>addField0</code> to create a series of  
-   * Nodes based on the supplied Xpath and a given fieldName. The created  
-   * nodes are inserted into a Node tree.
-   *
-   * @param name The name for this field in the emitted record
-   * @param xpath The xpath expression for this field
-   * @param multiValued If 'true' then the emitted record will have values in 
-   *                    a List&lt;String&gt;
-   */
-  public synchronized XPathRecordReader addField(String name, String xpath, boolean multiValued) {
-    addField0(xpath, name, multiValued, false, 0);
-    return this;
-  }
-
-  /**
-   * A wrapper around <code>addField0</code> to create a series of  
-   * Nodes based on the supplied Xpath and a given fieldName. The created  
-   * nodes are inserted into a Node tree.
-   *
-   * @param name The name for this field in the emitted record
-   * @param xpath The xpath expression for this field
-   * @param multiValued If 'true' then the emitted record will have values in 
-   *                    a List&lt;String&gt;
-   * @param flags FLATTEN: Recursively combine text from all child XML elements
-   */
-  public synchronized XPathRecordReader addField(String name, String xpath, boolean multiValued, int flags) {
-    addField0(xpath, name, multiValued, false, flags);
-    return this;
-  }
-
-  /**
-   * Splits the XPATH into a List of xpath segments and calls build() to
-   * construct a tree of Nodes representing xpath segments. The resulting
-   * tree structure ends up describing all the Xpaths we are interested in.
-   *
-   * @param xpath The xpath expression for this field
-   * @param name The name for this field in the emitted record
-   * @param multiValued If 'true' then the emitted record will have values in 
-   *                    a List&lt;String&gt;
-   * @param isRecord Flags that this XPATH is from a forEach statement
-   * @param flags The only supported flag is 'FLATTEN'
-   */
-  private void addField0(String xpath, String name, boolean multiValued,
-                         boolean isRecord, int flags) {
-    if (!xpath.startsWith("/"))
-      throw new RuntimeException("xpath must start with '/' : " + xpath);
-    List<String> paths = splitEscapeQuote(xpath);
-    // deal with how split behaves when separator starts a string!
-    if ("".equals(paths.get(0).trim()))
-      paths.remove(0);
-    rootNode.build(paths, name, multiValued, isRecord, flags);
-    rootNode.buildOptimise(null);
-  }
-
-  /** 
-   * Uses {@link #streamRecords streamRecords} to parse the XML source but with
-   * a handler that collects all the emitted records into a single List which 
-   * is returned upon completion.
-   *
-   * @param r the stream reader
-   * @return results a List of emitted records
-   */
-  public List<Map<String, Object>> getAllRecords(Reader r) {
-    final List<Map<String, Object>> results = new ArrayList<>();
-    streamRecords(r, (record, s) -> results.add(record));
-    return results;
-  }
-
-  /** 
-   * Creates an XML stream reader on top of whatever reader has been
-   * configured. Then calls parse() with a handler which is
-   * invoked forEach record emitted.
-   *
-   * @param r the stream reader
-   * @param handler The callback instance
-   */
-  public void streamRecords(Reader r, Handler handler) {
-    try {
-      XMLStreamReader parser = factory.createXMLStreamReader(r);
-      rootNode.parse(parser, handler, new HashMap<>(),
-          new Stack<>(), false);
-    } catch (Exception e) {
-      throw new RuntimeException(e);
-    }
-  }
-
-
-  /**
-   * For each node/leaf in the Node tree there is one object of this class.
-   * This tree of objects represents all the XPaths we are interested in.
-   * For each Xpath segment of interest we create a node. In most cases the
-   * node (branch) is rather basic, but for the final portion (leaf) of any
-   * Xpath we add more information to the Node. When parsing the XML document
-   * we step through this tree as we stream records from the reader. If the XML
-   * document departs from this tree we skip start tags till we are back on 
-   * the tree.
-   */
-  private static class Node {
-    String name;      // generally: segment of the Xpath represented by this Node
-    String fieldName; // the fieldname in the emitted record (key of the map)
-    String xpathName; // the segment of the Xpath represented by this Node
-    String forEachPath; // the full Xpath from the forEach entity attribute
-    List<Node> attributes; // List of attribute Nodes associated with this Node
-    List<Node> childNodes; // List of immediate child Nodes of this node
-    List<Node> wildCardNodes; // List of '//' style descendants of this Node
-    List<Map.Entry<String, String>> attribAndValues;
-    Node wildAncestor; // ancestor Node containing '//' style descendants
-    Node parent; // parent Node in the tree
-    boolean hasText=false; // flag: store/emit streamed text for this node
-    boolean multiValued=false; //flag: this field's values are returned as a List
-    boolean isRecord=false; //flag: this Node starts a new record
-    private boolean flatten; //flag: child text is also to be emitted
-
-
-    public Node(String name, Node p) {
-      // Create a basic Node, suitable for the mid portions of any Xpath.
-      // Node.xpathName and Node.name are set to same value.
-      xpathName = this.name = name;
-      parent = p;
-    }
-
-    public Node(String name, String fieldName, boolean multiValued) {
-      // This is only called from build() when describing an attribute.
-      this.name = name;               // a segment from the Xpath
-      this.fieldName = fieldName;     // name to store collected values against
-      this.multiValued = multiValued; // return collected values in a List
-    }
-
-    /**
-     * This is the method where all the XML parsing happens. For each 
-     * tag/subtag read from the source, this method is called recursively.
-     *
-     */
-    private void parse(XMLStreamReader parser, 
-                       Handler handler,
-                       Map<String, Object> values, 
-                       Stack<Set<String>> stack, // lists of values to purge
-                       boolean recordStarted
-                       ) throws IOException, XMLStreamException {
-      Set<String> valuesAddedinThisFrame = null;
-      if (isRecord) {
-        // This Node is a match for an XPATH from a forEach attribute, 
-        // prepare for the clean up that will occur when the record
-        // is emitted after its END_ELEMENT is matched 
-        recordStarted = true;
-        valuesAddedinThisFrame = new HashSet<>();
-        stack.push(valuesAddedinThisFrame);
-      } else if (recordStarted) {
-        // This node is a child of some parent which matched against forEach 
-        // attribute. Continue to add values to an existing record.
-        valuesAddedinThisFrame = stack.peek();
-      }
-
-      try {
-        /* The input stream has deposited us at this Node in our tree of 
-         * interesting nodes. Depending on how this node is of interest,
-         * process further tokens from the input stream and decide what
-         * we do next
-         */
-        if (attributes != null) {
-          // we are interested in storing attributes from the input stream
-          for (Node node : attributes) {
-            String value = parser.getAttributeValue(null, node.name);
-            if (value != null || (recordStarted && !isRecord)) {
-              putText(values, value, node.fieldName, node.multiValued);
-              valuesAddedinThisFrame.add(node.fieldName);
-            }
-          }
-        }
-
-        Set<Node> childrenFound = new HashSet<>();
-        int event = -1;
-        int flattenedStarts=0; // our tag depth when flattening elements
-        StringBuilder text = new StringBuilder();
-
-        while (true) {  
-          event = parser.next();
-   
-          if (event == END_ELEMENT) {
-            if (flattenedStarts > 0) flattenedStarts--;
-            else {
-              if (hasText && valuesAddedinThisFrame != null) {
-                valuesAddedinThisFrame.add(fieldName);
-                putText(values, text.toString(), fieldName, multiValued);
-              }
-              if (isRecord) handler.handle(getDeepCopy(values), forEachPath);
-              if (childNodes != null && recordStarted && !isRecord && !childrenFound.containsAll(childNodes)) {
-                // non-record nodes where we have not collected text for ALL
-                // the child nodes.
-                for (Node n : childNodes) {
-                  // For the multivalued child nodes where we could have, but
-                  // didn't, collect text. Push a null string into values.
-                  if (!childrenFound.contains(n)) n.putNulls(values, valuesAddedinThisFrame);
-                }
-              }
-              return;
-            }
-          }
-          else if (hasText && (event==CDATA || event==CHARACTERS || event==SPACE)) {
-            text.append(parser.getText());
-          } 
-          else if (event == START_ELEMENT) {
-            if ( flatten ) 
-               flattenedStarts++;
-            else 
-               handleStartElement(parser, childrenFound, handler, values, stack, recordStarted);
-          }
-          // END_DOCUMENT is least likely to appear and should be 
-          // last in if-then-else skip chain
-          else if (event == END_DOCUMENT) return;
-        }
-      } finally {
-        if ((isRecord || !recordStarted) && !stack.empty()) {
-          Set<String> cleanThis = stack.pop();
-          if (cleanThis != null) {
-            for (String fld : cleanThis) values.remove(fld);
-          }
-        }
-      }
-    }
-
-    /**
-     * If a new tag is encountered, check if it is of interest or not by seeing
-     * if it matches against our node tree. If we have departed from the node
-     * tree then walk back through the tree's ancestor nodes checking to see if
-     * any // expressions exist for the node and compare them against the new
-     * tag. If matched then "jump" to that node, otherwise ignore the tag.
-     *
-     * Note, the list of // expressions found while walking back up the tree
-     * is cached in the HashMap decends. Then if the new tag is to be skipped,
-     * any inner child tags are compared against the cache and jumped to if
-     * matched.
-     */
-    private void handleStartElement(XMLStreamReader parser, Set<Node> childrenFound,
-                                    Handler handler, Map<String, Object> values,
-                                    Stack<Set<String>> stack, boolean recordStarted)
-            throws IOException, XMLStreamException {
-      Node n = getMatchingNode(parser,childNodes);
-      Map<String, Node> decends = new HashMap<>();
-      if (n != null) {
-        childrenFound.add(n);
-        n.parse(parser, handler, values, stack, recordStarted);
-        return;
-      }
-      // The stream has diverged from the tree of interesting elements, but
-      // are there any wildCardNodes ... anywhere in our path from the root?
-      Node dn = this; // checking our Node first!
-            
-      do {
-        if (dn.wildCardNodes != null) {
-          // Check to see if the stream's tag matches one of the "//" all-
-          // descendants type expressions for this node.
-          n = getMatchingNode(parser, dn.wildCardNodes);
-          if (n != null) {
-            childrenFound.add(n);
-            n.parse(parser, handler, values, stack, recordStarted);
-            break;
-          }
-          // add the list of this node's wild descendants to the cache
-          for (Node nn : dn.wildCardNodes) decends.put(nn.name, nn);
-        }
-        dn = dn.wildAncestor; // leap back along the tree toward root
-      } while (dn != null) ;
- 
-      if (n == null) {
-        // we have a START_ELEMENT which is not within the tree of
-        // interesting nodes. Skip over the contents of this element
-        // but recursively repeat the above for any START_ELEMENTs
-        // found within this element.
-        int count = 1; // we have had our first START_ELEMENT
-        while (count != 0) {
-          int token = parser.next();
-          if (token == START_ELEMENT) {
-            Node nn = decends.get(parser.getLocalName());
-            if (nn != null) {
-              // We have a //Node which matches the stream's parser.localName
-              childrenFound.add(nn);
-              // Parse the contents of this stream element
-              nn.parse(parser, handler, values, stack, recordStarted);
-            } 
-            else count++;
-          } 
-          else if (token == END_ELEMENT) count--;
-        }
-      }
-    }
-
-
-    /**
-     * Check if the current tag is to be parsed or not. We step through the
-     * supplied List "searchList" looking for a match. If matched, return the
-     * Node object.
-     */
-    private Node getMatchingNode(XMLStreamReader parser,List<Node> searchL){
-      if (searchL == null)
-        return null;
-      String localName = parser.getLocalName();
-      for (Node n : searchL) {
-        if (n.name.equals(localName)) {
-          if (n.attribAndValues == null)
-            return n;
-          if (checkForAttributes(parser, n.attribAndValues))
-            return n;
-        }
-      }
-      return null;
-    }
-
-    private boolean checkForAttributes(XMLStreamReader parser,
-                                       List<Map.Entry<String, String>> attrs) {
-      for (Map.Entry<String, String> e : attrs) {
-        String val = parser.getAttributeValue(null, e.getKey());
-        if (val == null)
-          return false;
-        if (e.getValue() != null && !e.getValue().equals(val))
-          return false;
-      }
-      return true;
-    }
-
-    /**
-     * A recursive routine that walks the Node tree from a supplied start
-     * pushing a null string onto every multiValued fieldName's List of values
-     * where a value has not been provided from the stream.
-     */
-    private void putNulls(Map<String, Object> values, Set<String> valuesAddedinThisFrame) {
-      if (attributes != null) {
-        for (Node n : attributes) {
-          if (n.multiValued) {
-            putANull(n.fieldName, values, valuesAddedinThisFrame);
-          }
-        }
-      }
-      if (hasText && multiValued) {
-        putANull(fieldName, values, valuesAddedinThisFrame);
-      }
-      if (childNodes != null) {
-        for (Node childNode : childNodes) {
-          childNode.putNulls(values, valuesAddedinThisFrame);
-        }
-      }
-    }
-    
-    private void putANull(String thisFieldName, Map<String, Object> values, Set<String> valuesAddedinThisFrame) {
-      putText(values, null, thisFieldName, true);
-      if( valuesAddedinThisFrame != null) {
-        valuesAddedinThisFrame.add(thisFieldName);
-      }
-    }
-
-    /**
-     * Add the field name and text into the values Map. If it is a non
-     * multivalued field, then the text is simply placed in the object
-     * portion of the Map. If it is a multivalued field then the text is
-     * pushed onto a List which is the object portion of the Map.
-     */
-    @SuppressWarnings("unchecked")
-    private void putText(Map<String, Object> values, String value,
-                         String fieldName, boolean multiValued) {
-      if (multiValued) {
-        List<String> v = (List<String>) values.get(fieldName);
-        if (v == null) {
-          v = new ArrayList<>();
-          values.put(fieldName, v);
-        }
-        v.add(value);
-      } else {
-        values.put(fieldName, value);
-      }
-    }
-
-
-    /**
-     * Walk the Node tree propagating any wild-descendant information to
-     * child nodes. This allows us to optimise the performance of the
-     * main parse method.
-     */
-    private void buildOptimise(Node wa) {
-      wildAncestor = wa;
-      if (wildCardNodes != null) wa = this;
-      if (childNodes != null)
-        for (Node n : childNodes) n.buildOptimise(wa);
-    }
-
-    /**
-     * Build a Node tree structure representing all Xpaths of interest to us.
-     * This must be done before parsing of the XML stream starts. Each node 
-     * holds one portion of an Xpath. Taking each Xpath segment in turn this
-     * method walks the Node tree  and finds where the new segment should be
-     * inserted. It creates a Node representing a field's name, XPATH and 
-     * some flags and inserts the Node into the Node tree.
-     */
-    private void build(
-        List<String> paths,   // a List of segments from the split xpaths
-        String fieldName,     // the fieldName assoc with this Xpath
-        boolean multiValued,  // flag if this fieldName is multiValued or not
-        boolean record,       // is this xpath a record or a field
-        int flags             // are we to flatten matching xpaths
-        ) {
-      // recursively walk the paths List, adding new Nodes as required
-      String xpseg = paths.remove(0); // shift out next Xpath segment
-
-      if (paths.isEmpty() && xpseg.startsWith("@")) {
-        // we have reached the end of the element portion of the Xpath; we can
-        // only have an element attribute. Add it to this node's list of attributes
-        if (attributes == null) {
-          attributes = new ArrayList<>();
-        }
-        xpseg = xpseg.substring(1); // strip the '@'
-        attributes.add(new Node(xpseg, fieldName, multiValued));
-      }
-      else if ( xpseg.length() == 0) {
-        // we have a '//' selector for all descendants of the current node
-        xpseg = paths.remove(0); // shift out next Xpath segment
-        if (wildCardNodes == null) wildCardNodes = new ArrayList<>();
-        Node n = getOrAddNode(xpseg, wildCardNodes);
-        if (paths.isEmpty()) {
-          // We are currently a leaf node.
-          // xpath with content we want to store and return
-          n.hasText = true;        // we have to store text found here
-          n.fieldName = fieldName; // name to store collected text against
-          n.multiValued = multiValued; // true: text will be stored in a List
-          n.flatten = flags == FLATTEN; // true: store text from child tags
-        }
-        else {
-          // recurse to handle next paths segment
-          n.build(paths, fieldName, multiValued, record, flags);
-        }
-      }
-      else {
-        if (childNodes == null)
-          childNodes = new ArrayList<>();
-        // does this "name" already exist as a child node.
-        Node n = getOrAddNode(xpseg,childNodes);
-        if (paths.isEmpty()) {
-          // We have emptied paths; for the moment we are a leaf of the tree.
-          // When parsing the actual input we have traversed to a position
-          // where we actually have to do something. getOrAddNode() will
-          // have created and returned a new minimal Node with name and
-          // xpathName already populated. We need to add more information.
-          if (record) {
-            // forEach attribute
-            n.isRecord = true; // flag: forEach attribute, prepare to emit rec
-            n.forEachPath = fieldName; // the full forEach attribute xpath
-          } else {
-            // xpath with content we want to store and return
-            n.hasText = true;        // we have to store text found here
-            n.fieldName = fieldName; // name to store collected text against
-            n.multiValued = multiValued; // true: text will be stored in a List
-            n.flatten = flags == FLATTEN; // true: store text from child tags
-          }
-        } else {
-          // recurse to handle next paths segment
-          n.build(paths, fieldName, multiValued, record, flags);
-        }
-      }
-    }
-
-    private Node getOrAddNode(String xpathName, List<Node> searchList ) {
-      for (Node n : searchList)
-        if (n.xpathName.equals(xpathName)) return n;
-      // new territory! add a new node for this Xpath segment
-      Node n = new Node(xpathName, this); // a minimal Node initialization
-      Matcher m = ATTRIB_PRESENT_WITHVAL.matcher(xpathName);
-      if (m.find()) {
-        n.name = m.group(1);
-        int start = m.start(2);
-        while (true) {
-          HashMap<String, String> attribs = new HashMap<>();
-          if (!m.find(start))
-            break;
-          attribs.put(m.group(3), m.group(5));
-          start = m.end(6);
-          if (n.attribAndValues == null)
-            n.attribAndValues = new ArrayList<>();
-          n.attribAndValues.addAll(attribs.entrySet());
-        }
-      }
-      searchList.add(n);
-      return n;
-    }
-
-    /**
-     * Copies a supplied Map to a new Map which is returned. Used to copy a
-     * record's values. If a field's value is a List then it has to be
-     * deep-copied for thread safety.
-     */
-    @SuppressWarnings({"unchecked", "rawtypes"})
-    private static Map<String, Object> getDeepCopy(Map<String, Object> values) {
-      Map<String, Object> result = new HashMap<>();
-      for (Map.Entry<String, Object> entry : values.entrySet()) {
-        if (entry.getValue() instanceof List) {
-          result.put(entry.getKey(), new ArrayList((List) entry.getValue()));
-        } else {
-          result.put(entry.getKey(), entry.getValue());
-        }
-      }
-      return result;
-    }
-  } // end of class Node
-
-
-  /**
-   * The Xpath is split into segments using the '/' as a separator. However
-   * this method deals with special cases where there is a slash '/' character
-   * inside the attribute value e.g. x/@html='text/html'. We split by '/' but 
-   * then reassemble things where the '/' appears within a quoted sub-string.
-   *
-   * We have already enforced that the string must begin with a separator. This
-   * method depends heavily on how split behaves if the string starts with the
-   * separator or if a sequence of multiple separators appears.
-   */
-  private static List<String> splitEscapeQuote(String str) {
-    List<String> result = new LinkedList<>();
-    String[] ss = str.split("/");
-    for (int i=0; i<ss.length; i++) { // when str starts with '/', ss[0] is empty; the caller strips it
-      StringBuilder sb = new StringBuilder();
-      int quoteCount = 0;
-      while (true) {
-        sb.append(ss[i]);
-        for (int j=0; j<ss[i].length(); j++)
-            if (ss[i].charAt(j) == '\'') quoteCount++;
-        // have we got a split inside quoted sub-string?
-        if ((quoteCount % 2) == 0) break;
-        // yes!; replace the '/' and loop to concat next token
-        i++;
-        sb.append("/");
-      }
-      result.add(sb.toString());
-    }
-    return result;
-  }
-
-  static XMLInputFactory factory = XMLInputFactory.newInstance();
-  static {
-    EmptyEntityResolver.configureXMLInputFactory(factory);
-    factory.setXMLReporter(XMLLOG);
-    try {
-      // The java 1.6 bundled stax parser (sjsxp) does not currently have a thread-safe
-      // XMLInputFactory, as that implementation tries to cache and reuse the
-      // XMLStreamReader.  Setting the parser-specific "reuse-instance" property to false
-      // prevents this.
-      // All other known open-source stax parsers (and the bea ref impl)
-      // have thread-safe factories.
-      factory.setProperty("reuse-instance", Boolean.FALSE);
-    } catch (IllegalArgumentException ex) {
-      // Other implementations will likely throw this exception since "reuse-instance"
-      // is implementation specific.
-      log.debug("Unable to set the 'reuse-instance' property for the input chain: {}", factory);
-    }
-  }
-
-  /**Implement this interface to stream records as and when one is found.
-   *
-   */
-  public interface Handler {
-    /**
-     * @param record The record map. The key is the field name as provided in 
-     * the addField() methods. The value can be a single String (for single 
-     * valued fields) or a List&lt;String&gt; (for multiValued).
-     * @param xpath The forEach XPATH for which this record is being emitted.
-     * If the handler throws an exception, all parsing is aborted and the
-     * exception is propagated up.
-     */
-    void handle(Map<String, Object> record, String xpath);
-  }
-
-  private static final Pattern ATTRIB_PRESENT_WITHVAL = Pattern
-          .compile("(\\S*?)?(\\[@)(\\S*?)(='(.*?)')?(\\])");
-}
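
For reference, here is how the `XPathRecordReader` API deleted above was driven, per its javadoc: one record per `forEach` match, with fields collected from element text or attributes. The XML payload and demo class are illustrative:

```java
import java.io.StringReader;

public class XPathRecordReaderDemo {
  public static void main(String[] args) {
    // Emit one record per /catalog/book; fields may be element text or attributes.
    XPathRecordReader rr = new XPathRecordReader("/catalog/book");
    rr.addField("id", "/catalog/book/@id", false);
    rr.addField("title", "/catalog/book/title", false);

    String xml = "<catalog>"
        + "<book id=\"1\"><title>Lucene in Action</title></book>"
        + "<book id=\"2\"><title>Solr in Action</title></book>"
        + "</catalog>";

    // The Handler is invoked once per record as the stream is parsed.
    rr.streamRecords(new StringReader(xml),
        (record, xpath) -> System.out.println(xpath + " -> " + record));
  }
}
```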
diff --git a/solr/contrib/dataimporthandler/src/java/org/apache/solr/handler/dataimport/ZKPropertiesWriter.java b/solr/contrib/dataimporthandler/src/java/org/apache/solr/handler/dataimport/ZKPropertiesWriter.java
deleted file mode 100644
index 2d83202..0000000
--- a/solr/contrib/dataimporthandler/src/java/org/apache/solr/handler/dataimport/ZKPropertiesWriter.java
+++ /dev/null
@@ -1,95 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.solr.handler.dataimport;
-
-import java.io.StringReader;
-import java.io.StringWriter;
-import java.lang.invoke.MethodHandles;
-import java.nio.charset.StandardCharsets;
-import java.util.Map;
-import java.util.Properties;
-
-import org.apache.solr.common.cloud.SolrZkClient;
-import org.apache.zookeeper.KeeperException.NodeExistsException;
-import org.slf4j.Logger;
-import org.slf4j.LoggerFactory;
-
-/**
- * <p>
- *  A SolrCloud-friendly extension of {@link SimplePropertiesWriter}.  
- *  This implementation ignores the "directory" parameter, saving
- *  the properties file under /configs/[solrcloud collection name]/
- */
-public class ZKPropertiesWriter extends SimplePropertiesWriter {
-  
-  private static final Logger log = LoggerFactory.getLogger(MethodHandles.lookup().lookupClass());
-  
-  private String path;
-  private SolrZkClient zkClient;
-  
-  @Override
-  public void init(DataImporter dataImporter, Map<String, String> params) {
-    super.init(dataImporter, params);    
-    zkClient = dataImporter.getCore().getCoreContainer().getZkController().getZkClient();
-  }
-  
-  @Override
-  protected void findDirectory(DataImporter dataImporter, Map<String, String> params) {
-    String collection = dataImporter.getCore().getCoreDescriptor().getCloudDescriptor().getCollectionName();
-    path = "/configs/" + collection + "/" + filename;
-  }
-  
-  @Override
-  public boolean isWritable() {
-    return true;
-  }
-  
-  @Override
-  public void persist(Map<String, Object> propObjs) {
-    Properties existing = mapToProperties(readIndexerProperties());
-    existing.putAll(mapToProperties(propObjs));
-    StringWriter output = new StringWriter();
-    try {
-      existing.store(output, null);
-      byte[] bytes = output.toString().getBytes(StandardCharsets.UTF_8);
-      if (!zkClient.exists(path, false)) {
-        try {
-          zkClient.makePath(path, false);
-        } catch (NodeExistsException e) { /* created concurrently; safe to ignore */ }
-      }
-      zkClient.setData(path, bytes, false);
-    } catch (Exception e) {
-      SolrZkClient.checkInterrupted(e);
-      log.warn("Could not persist properties to {} : {}", path, e.getClass(), e);
-    }
-  }
-  
-  @Override
-  public Map<String, Object> readIndexerProperties() {
-    Properties props = new Properties();
-    try {
-      byte[] data = zkClient.getData(path, null, null, true);
-      if (data != null) {
-        props.load(new StringReader(new String(data, StandardCharsets.UTF_8)));
-      }
-    } catch (Exception e) {
-      SolrZkClient.checkInterrupted(e);
-      log.warn("Could not read DIH properties from {} : {}", path, e.getClass(), e);
-    }
-    return propertiesToMap(props);
-  }
-}
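
`persist()` above is a read-merge-write cycle: load the current properties from the znode, overlay the new values, and write the whole file back. A ZooKeeper-free sketch of that round-trip, with a `byte[]` field standing in for the `/configs/[collection]/[filename]` znode payload (all names illustrative):

```java
import java.io.StringReader;
import java.io.StringWriter;
import java.nio.charset.StandardCharsets;
import java.util.Properties;

public class PropsRoundTrip {
  private static byte[] znode; // stand-in for the properties znode payload

  static Properties read() throws Exception {
    Properties props = new Properties();
    if (znode != null) {
      props.load(new StringReader(new String(znode, StandardCharsets.UTF_8)));
    }
    return props;
  }

  static void persist(Properties updates) throws Exception {
    Properties existing = read();   // read the current state
    existing.putAll(updates);       // merge; new values win
    StringWriter out = new StringWriter();
    existing.store(out, null);      // serialize in java.util.Properties format
    znode = out.toString().getBytes(StandardCharsets.UTF_8); // write back
  }

  public static void main(String[] args) throws Exception {
    Properties update = new Properties();
    update.setProperty("last_index_time", "2020-06-01 12:00:00");
    persist(update);
    System.out.println(read()); // {last_index_time=2020-06-01 12:00:00}
  }
}
```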
diff --git a/solr/contrib/dataimporthandler/src/java/org/apache/solr/handler/dataimport/Zipper.java b/solr/contrib/dataimporthandler/src/java/org/apache/solr/handler/dataimport/Zipper.java
deleted file mode 100644
index 096b3a8..0000000
--- a/solr/contrib/dataimporthandler/src/java/org/apache/solr/handler/dataimport/Zipper.java
+++ /dev/null
@@ -1,115 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.solr.handler.dataimport;
-
-import java.lang.invoke.MethodHandles;
-import java.util.Iterator;
-import java.util.Map;
-
-import org.apache.solr.handler.dataimport.DIHCacheSupport.Relation;
-import org.slf4j.Logger;
-import org.slf4j.LoggerFactory;
-
-import com.google.common.collect.Iterators;
-import com.google.common.collect.PeekingIterator;
-
-class Zipper {
-  
-  private static final Logger log = LoggerFactory.getLogger(MethodHandles.lookup().lookupClass());
-  private final DIHCacheSupport.Relation relation;
-  
-  @SuppressWarnings("rawtypes")
-  private Comparable parentId;
-  @SuppressWarnings("rawtypes")
-  private Comparable lastChildId;
-  
-  private Iterator<Map<String,Object>> rowIterator;
-  private PeekingIterator<Map<String,Object>> peeker;
-  
-  /** @return initialized zipper or null */
-  public static Zipper createOrNull(Context context){
-    if("zipper".equals(context.getEntityAttribute("join"))){
-      DIHCacheSupport.Relation r = new DIHCacheSupport.Relation(context);
-      if(r.doKeyLookup){
-        return new Zipper(r); 
-      }
-    } 
-    return null;
-  }
-  
-  
-  private Zipper(Relation relation) {
-    this.relation = relation;
-  }
-  
-  @SuppressWarnings({"rawtypes", "unchecked"})
-  public Map<String,Object> supplyNextChild(
-      Iterator<Map<String,Object>> rowIterator) {
-    preparePeeker(rowIterator);
-      
-    while(peeker.hasNext()){
-      Map<String,Object> current = peeker.peek();
-      Comparable childId = (Comparable) current.get(relation.primaryKey);
-      
-      if(lastChildId!=null && lastChildId.compareTo(childId)>0){
-        throw new IllegalArgumentException("expect increasing foreign keys for "+relation+
-            " got: "+lastChildId+","+childId);
-      }
-      lastChildId = childId;
-      int cmp = childId.compareTo(parentId);
-      if(cmp==0){
-        Map<String,Object> child = peeker.next();
-        assert child==current: "peeker should be right but "+current+" != " + child;
-        log.trace("yeild child {} entry {}",relation, current);
-        return child;// TODO it's for one->many for many->one it should be just peek() 
-      }else{
-        if(cmp<0){ // e.g. child key 10 vs parent key 20: skip this child, move to the next
-          Map<String,Object> child = peeker.next();
-          assert child==current: "peeker should be right but "+current+" != " + child;
-          log.trace("skip child {}, {} > {}",relation, parentId, childId);
-        }else{ // e.g. child key 20 vs parent key 10: no more children, go to next parent
-          log.trace("children are over {}, {} < {}", relation, parentId, current);
-          return null;
-        }
-      }
-    }
-    
-    return null;
-  }
-
-  private void preparePeeker(Iterator<Map<String,Object>> rowIterator) {
-    if(this.rowIterator==null){
-      this.rowIterator = rowIterator;
-      peeker = Iterators.peekingIterator(rowIterator);
-    }else{
-      assert this.rowIterator==rowIterator: "rowIterator should never change but "+this.rowIterator+
-          " supplied before has been changed to "+rowIterator; 
-    }
-  }
-
-  @SuppressWarnings({"rawtypes", "unchecked"})
-  public void onNewParent(Context context) {
-    Comparable newParent = (Comparable) context.resolve(relation.foreignKey);
-    if(parentId!=null && parentId.compareTo(newParent)>=0){
-      throw new IllegalArgumentException("expect strictly increasing primary keys for "+relation+
-          " got: "+parentId+","+newParent);
-    }
-    log.trace("{}: {}->{}",relation, newParent, parentId);
-    parentId = newParent;
-  }
-  
-}
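
`Zipper` is a streaming merge join: both row streams must arrive sorted by key, and the child iterator is advanced in lock-step with the parent keys, so neither side is buffered in memory. A generic sketch of the same idea over two pre-sorted iterators (`MergeJoin` and its data are hypothetical; unlike `Zipper`, which hands back one child per call, this version emits every matching child in one pass):

```java
import java.util.Iterator;
import java.util.List;
import java.util.Map;

/** Minimal streaming merge join: both inputs must be sorted by key. */
public class MergeJoin {
  public static void join(Iterator<Map.Entry<Integer, String>> parents,
                          Iterator<Map.Entry<Integer, String>> children) {
    if (!children.hasNext()) return;
    Map.Entry<Integer, String> child = children.next();
    while (parents.hasNext()) {
      Map.Entry<Integer, String> parent = parents.next();
      // skip children whose key is below the current parent key
      while (child != null && child.getKey() < parent.getKey()) {
        child = children.hasNext() ? children.next() : null;
      }
      // emit all children matching the current parent key
      while (child != null && child.getKey().equals(parent.getKey())) {
        System.out.println(parent.getValue() + " <- " + child.getValue());
        child = children.hasNext() ? children.next() : null;
      }
    }
  }

  public static void main(String[] args) {
    join(List.of(Map.entry(10, "p10"), Map.entry(20, "p20")).iterator(),
         List.of(Map.entry(10, "c1"), Map.entry(10, "c2"), Map.entry(20, "c3")).iterator());
  }
}
```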
diff --git a/solr/contrib/dataimporthandler/src/java/org/apache/solr/handler/dataimport/config/ConfigNameConstants.java b/solr/contrib/dataimporthandler/src/java/org/apache/solr/handler/dataimport/config/ConfigNameConstants.java
deleted file mode 100644
index ee7f875..0000000
--- a/solr/contrib/dataimporthandler/src/java/org/apache/solr/handler/dataimport/config/ConfigNameConstants.java
+++ /dev/null
@@ -1,59 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.solr.handler.dataimport.config;
-
-import java.util.HashSet;
-import java.util.Set;
-
-import org.apache.solr.handler.dataimport.SolrWriter;
-
-public class ConfigNameConstants {
-  public static final String SCRIPT = "script";
-
-  public static final String NAME = "name";
-
-  public static final String PROCESSOR = "processor";
-  
-  public static final String PROPERTY_WRITER = "propertyWriter";
-
-  public static final String IMPORTER_NS = "dataimporter";
-
-  public static final String IMPORTER_NS_SHORT = "dih";
-
-  public static final String ROOT_ENTITY = "rootEntity";
-  
-  public static final String CHILD = "child";
-
-  public static final String FUNCTION = "function";
-
-  public static final String CLASS = "class";
-
-  public static final String DATA_SRC = "dataSource";
-
-  public static final Set<String> RESERVED_WORDS;
-  static{
-    Set<String> rw =  new HashSet<>();
-    rw.add(IMPORTER_NS);
-    rw.add(IMPORTER_NS_SHORT);
-    rw.add("request");
-    rw.add("delta");
-    rw.add("functions");
-    rw.add("session");
-    rw.add(SolrWriter.LAST_INDEX_KEY);
-    RESERVED_WORDS = Set.copyOf(rw);
-  }
-}
diff --git a/solr/contrib/dataimporthandler/src/java/org/apache/solr/handler/dataimport/config/ConfigParseUtil.java b/solr/contrib/dataimporthandler/src/java/org/apache/solr/handler/dataimport/config/ConfigParseUtil.java
deleted file mode 100644
index c369296..0000000
--- a/solr/contrib/dataimporthandler/src/java/org/apache/solr/handler/dataimport/config/ConfigParseUtil.java
+++ /dev/null
@@ -1,72 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.solr.handler.dataimport.config;
-
-import java.util.ArrayList;
-import java.util.HashMap;
-import java.util.List;
-import org.w3c.dom.Element;
-import org.w3c.dom.NamedNodeMap;
-import org.w3c.dom.Node;
-import org.w3c.dom.NodeList;
-
-public class ConfigParseUtil {
-  
-  public static String getStringAttribute(Element e, String name, String def) {
-    String r = e.getAttribute(name);
-    if (r == null || "".equals(r.trim())) r = def;
-    return r;
-  }
-  
-  public static HashMap<String,String> getAllAttributes(Element e) {
-    HashMap<String,String> m = new HashMap<>();
-    NamedNodeMap nnm = e.getAttributes();
-    for (int i = 0; i < nnm.getLength(); i++) {
-      m.put(nnm.item(i).getNodeName(), nnm.item(i).getNodeValue());
-    }
-    return m;
-  }
-  
-  public static String getText(Node elem, StringBuilder buffer) {
-    if (elem.getNodeType() != Node.CDATA_SECTION_NODE) {
-      NodeList childs = elem.getChildNodes();
-      for (int i = 0; i < childs.getLength(); i++) {
-        Node child = childs.item(i);
-        short childType = child.getNodeType();
-        if (childType != Node.COMMENT_NODE
-            && childType != Node.PROCESSING_INSTRUCTION_NODE) {
-          getText(child, buffer);
-        }
-      }
-    } else {
-      buffer.append(elem.getNodeValue());
-    }
-    
-    return buffer.toString();
-  }
-  
-  public static List<Element> getChildNodes(Element e, String byName) {
-    List<Element> result = new ArrayList<>();
-    NodeList l = e.getChildNodes();
-    for (int i = 0; i < l.getLength(); i++) {
-      if (e.equals(l.item(i).getParentNode())
-          && byName.equals(l.item(i).getNodeName())) result.add((Element) l
-          .item(i));
-    }
-    return result;
-  }
-}
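
A quick sketch of how these DOM helpers were typically combined to walk a `data-config.xml` tree; the XML snippet and demo class are illustrative:

```java
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Element;

public class ConfigParseDemo {
  public static void main(String[] args) throws Exception {
    String xml = "<dataConfig><document>"
        + "<entity name=\"item\" query=\"select * from item\">"
        + "<field column=\"ID\" name=\"id\"/>"
        + "</entity></document></dataConfig>";
    Element root = DocumentBuilderFactory.newInstance()
        .newDocumentBuilder()
        .parse(new ByteArrayInputStream(xml.getBytes(StandardCharsets.UTF_8)))
        .getDocumentElement();
    // getChildNodes() returns only direct children with the given tag name.
    Element document = ConfigParseUtil.getChildNodes(root, "document").get(0);
    Element entity = ConfigParseUtil.getChildNodes(document, "entity").get(0);
    System.out.println(ConfigParseUtil.getStringAttribute(entity, "name", "?")); // item
    System.out.println(ConfigParseUtil.getAllAttributes(entity)); // {name=item, query=...}
  }
}
```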
diff --git a/solr/contrib/dataimporthandler/src/java/org/apache/solr/handler/dataimport/config/DIHConfiguration.java b/solr/contrib/dataimporthandler/src/java/org/apache/solr/handler/dataimport/config/DIHConfiguration.java
deleted file mode 100644
index 3832355..0000000
--- a/solr/contrib/dataimporthandler/src/java/org/apache/solr/handler/dataimport/config/DIHConfiguration.java
+++ /dev/null
@@ -1,199 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.solr.handler.dataimport.config;
-
-import java.lang.invoke.MethodHandles;
-import java.util.ArrayList;
-import java.util.Collections;
-import java.util.HashMap;
-import java.util.List;
-import java.util.Locale;
-import java.util.Map;
-
-import org.apache.solr.handler.dataimport.DataImporter;
-import org.apache.solr.handler.dataimport.DocBuilder;
-import org.apache.solr.schema.IndexSchema;
-import org.apache.solr.schema.SchemaField;
-import org.slf4j.Logger;
-import org.slf4j.LoggerFactory;
-import org.w3c.dom.Element;
-
-/**
- * <p>
- * Mapping for data-config.xml
- * </p>
- * <p>
- * Refer to <a
- * href="http://wiki.apache.org/solr/DataImportHandler">http://wiki.apache.org/solr/DataImportHandler</a>
- * for more details.
- * </p>
- * <p>
- * <b>This API is experimental and subject to change</b>
- *
- * @since solr 1.3
- */
-public class DIHConfiguration {
-  private static final Logger log = LoggerFactory.getLogger(MethodHandles.lookup().lookupClass());
-
-  // TODO - remove from here and add it to entity
-  private final String deleteQuery;
-  
-  private final List<Entity> entities;
-  private final String onImportStart;
-  private final String onImportEnd;
-  private final String onError;
-  private final List<Map<String, String>> functions;
-  private final Script script;
-  private final Map<String, Map<String,String>> dataSources;
-  private final PropertyWriter propertyWriter;
-  private final IndexSchema schema;
-  private final Map<String,SchemaField> lowerNameVsSchemaField;
-  
-  public DIHConfiguration(Element element, DataImporter di,
-      List<Map<String,String>> functions, Script script,
-      Map<String,Map<String,String>> dataSources, PropertyWriter pw) {
-    schema = di.getSchema();
-    lowerNameVsSchemaField = null == schema ? Collections.<String,SchemaField>emptyMap() : loadSchemaFieldMap();
-    this.deleteQuery = ConfigParseUtil.getStringAttribute(element, "deleteQuery", null);
-    this.onImportStart = ConfigParseUtil.getStringAttribute(element, "onImportStart", null);
-    this.onImportEnd = ConfigParseUtil.getStringAttribute(element, "onImportEnd", null);
-    this.onError = ConfigParseUtil.getStringAttribute(element, "onError", null);
-    List<Entity> modEntities = new ArrayList<>();
-    List<Element> l = ConfigParseUtil.getChildNodes(element, "entity");
-    boolean docRootFound = false;
-    for (Element e : l) {
-      Entity entity = new Entity(docRootFound, e, di, this, null);
-      Map<String, EntityField> fields = gatherAllFields(di, entity);
-      verifyWithSchema(fields);    
-      modEntities.add(entity);
-    }
-    this.entities = Collections.unmodifiableList(modEntities);
-    if(functions==null) {
-      functions = Collections.emptyList();
-    }
-    List<Map<String, String>> modFunc = new ArrayList<>(functions.size());
-    for(Map<String, String> f : functions) {
-      modFunc.add(Collections.unmodifiableMap(f));
-    }
-    this.functions = Collections.unmodifiableList(modFunc);
-    this.script = script;
-    this.dataSources = Collections.unmodifiableMap(dataSources);
-    this.propertyWriter = pw;
-  }
-
-  private void verifyWithSchema(Map<String,EntityField> fields) {
-    Map<String,SchemaField> schemaFields = null;
-    if (schema == null) {
-      schemaFields = Collections.emptyMap();
-    } else {
-      schemaFields = schema.getFields();
-    }
-    for (Map.Entry<String,SchemaField> entry : schemaFields.entrySet()) {
-      SchemaField sf = entry.getValue();
-      if (!fields.containsKey(sf.getName())) {
-        if (sf.isRequired()) {
-          if (log.isInfoEnabled()) {
-            log.info("{} is a required field in SolrSchema . But not found in DataConfig", sf.getName());
-          }
-        }
-      }
-    }
-    for (Map.Entry<String,EntityField> entry : fields.entrySet()) {
-      EntityField fld = entry.getValue();
-      SchemaField field = getSchemaField(fld.getName());
-      if (field == null && !isSpecialCommand(fld.getName())) {
-        if (log.isInfoEnabled()) {
-          log.info("The field :{} present in DataConfig does not have a counterpart in Solr Schema", fld.getName());
-        }
-      }
-    }
-  }
-
-  private Map<String,EntityField> gatherAllFields(DataImporter di, Entity e) {
-    Map<String,EntityField> fields = new HashMap<>();
-    if (e.getFields() != null) {
-      for (EntityField f : e.getFields()) {
-        fields.put(f.getName(), f);
-      }
-    }
-    for (Entity e1 : e.getChildren()) {
-      fields.putAll(gatherAllFields(di, e1));
-    }
-    return fields;
-  }
-
-  private Map<String,SchemaField> loadSchemaFieldMap() {
-    Map<String, SchemaField> modLnvsf = new HashMap<>();
-    for (Map.Entry<String, SchemaField> entry : schema.getFields().entrySet()) {
-      modLnvsf.put(entry.getKey().toLowerCase(Locale.ROOT), entry.getValue());
-    }
-    return Collections.unmodifiableMap(modLnvsf);
-  }
-
-  public SchemaField getSchemaField(String caseInsensitiveName) {
-    SchemaField schemaField = null;
-    if(schema!=null) {
-      schemaField = schema.getFieldOrNull(caseInsensitiveName);
-    }
-    if (schemaField == null) {
-      schemaField = lowerNameVsSchemaField.get(caseInsensitiveName.toLowerCase(Locale.ROOT));
-    }
-    return schemaField;
-  }
-
-
-  public String getDeleteQuery() {
-    return deleteQuery;
-  }
-  public List<Entity> getEntities() {
-    return entities;
-  }
-  public String getOnImportStart() {
-    return onImportStart;
-  }
-  public String getOnImportEnd() {
-    return onImportEnd;
-  }
-  public String getOnError() {
-    return onError;
-  }
-  public List<Map<String,String>> getFunctions() {
-    return functions;
-  }
-  public Map<String,Map<String,String>> getDataSources() {
-    return dataSources;
-  }
-  public Script getScript() {
-    return script;
-  }
-  public PropertyWriter getPropertyWriter() {
-    return propertyWriter;
-  }
-
-  public IndexSchema getSchema() {
-    return schema;
-  }
-
-  public static boolean isSpecialCommand(String fld) {
-    return DocBuilder.DELETE_DOC_BY_ID.equals(fld) ||
-        DocBuilder.DELETE_DOC_BY_QUERY.equals(fld) ||
-        DocBuilder.DOC_BOOST.equals(fld) ||
-        DocBuilder.SKIP_DOC.equals(fld) ||
-        DocBuilder.SKIP_ROW.equals(fld);
-
-  }
-}
\ No newline at end of file
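
`getSchemaField()` above tries an exact lookup in the schema first and then falls back to a map keyed by the `Locale.ROOT` lower-cased field name. The same two-step pattern in isolation (class and data are illustrative):

```java
import java.util.HashMap;
import java.util.Locale;
import java.util.Map;

/** Exact-match lookup with a case-insensitive fallback. */
public class CaseInsensitiveLookup {
  private final Map<String, String> exact = new HashMap<>();
  private final Map<String, String> lower = new HashMap<>();

  public void put(String name, String value) {
    exact.put(name, value);
    lower.put(name.toLowerCase(Locale.ROOT), value); // Locale.ROOT avoids locale surprises
  }

  public String get(String name) {
    String v = exact.get(name); // prefer the exact match
    return v != null ? v : lower.get(name.toLowerCase(Locale.ROOT));
  }

  public static void main(String[] args) {
    CaseInsensitiveLookup m = new CaseInsensitiveLookup();
    m.put("lastModified", "date");
    System.out.println(m.get("LASTMODIFIED")); // date
  }
}
```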
diff --git a/solr/contrib/dataimporthandler/src/java/org/apache/solr/handler/dataimport/config/Entity.java b/solr/contrib/dataimporthandler/src/java/org/apache/solr/handler/dataimport/config/Entity.java
deleted file mode 100644
index 0d0ba4f..0000000
--- a/solr/contrib/dataimporthandler/src/java/org/apache/solr/handler/dataimport/config/Entity.java
+++ /dev/null
@@ -1,228 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.solr.handler.dataimport.config;
-
-import static org.apache.solr.handler.dataimport.DataImportHandlerException.SEVERE;
-
-import java.util.ArrayList;
-import java.util.Collections;
-import java.util.HashMap;
-import java.util.HashSet;
-import java.util.List;
-import java.util.Map;
-import java.util.Set;
-
-import org.apache.solr.handler.dataimport.DataImportHandlerException;
-import org.apache.solr.handler.dataimport.DataImporter;
-import org.apache.solr.schema.SchemaField;
-import org.w3c.dom.Element;
-
-public class Entity {
-  private final String name;
-  private final String pk;
-  private final String pkMappingFromSchema;
-  private final String dataSourceName;
-  private final String processorName;
-  private final Entity parentEntity;
-  private final boolean docRoot;
-  private final boolean child;
-  private final List<Entity> children;
-  private final List<EntityField> fields;
-  private final Map<String,Set<EntityField>> colNameVsField;
-  private final Map<String,String> allAttributes;
-  private final List<Map<String,String>> allFieldAttributes;
-  private final DIHConfiguration config;
-  
-  public Entity(boolean docRootFound, Element element, DataImporter di, DIHConfiguration config, Entity parent) {
-    this.parentEntity = parent;
-    this.config = config;
-    
-    String modName = ConfigParseUtil.getStringAttribute(element, ConfigNameConstants.NAME, null);
-    if (modName == null) {
-      throw new DataImportHandlerException(SEVERE, "Entity must have a name.");
-    }
-    if (modName.indexOf(".") != -1) {
-      throw new DataImportHandlerException(SEVERE,
-          "Entity name must not have period (.): '" + modName);
-    }
-    if (ConfigNameConstants.RESERVED_WORDS.contains(modName)) {
-      throw new DataImportHandlerException(SEVERE, "Entity name : '" + modName
-          + "' is a reserved keyword. Reserved words are: " + ConfigNameConstants.RESERVED_WORDS);
-    }
-    this.name = modName;
-    this.pk = ConfigParseUtil.getStringAttribute(element, "pk", null);
-    this.processorName = ConfigParseUtil.getStringAttribute(element, ConfigNameConstants.PROCESSOR,null);
-    this.dataSourceName = ConfigParseUtil.getStringAttribute(element, DataImporter.DATA_SRC, null);
-    
-    String rawDocRootValue = ConfigParseUtil.getStringAttribute(element, ConfigNameConstants.ROOT_ENTITY, null);
-    if (!docRootFound && !"false".equals(rawDocRootValue)) {
-      // if no document root has been found yet in this chain
-      docRoot = true;
-    } else {
-      docRoot = false;
-    }
-    
-    String childValue = ConfigParseUtil.getStringAttribute(element, ConfigNameConstants.CHILD, null);
-    child = "true".equals(childValue);
-    
-    Map<String,String> modAttributes = ConfigParseUtil
-        .getAllAttributes(element);
-    modAttributes.put(ConfigNameConstants.DATA_SRC, this.dataSourceName);
-    this.allAttributes = Collections.unmodifiableMap(modAttributes);
-    
-    List<Element> n = ConfigParseUtil.getChildNodes(element, "field");
-    List<EntityField> modFields = new ArrayList<>(n.size());
-    Map<String,Set<EntityField>> modColNameVsField = new HashMap<>();
-    List<Map<String,String>> modAllFieldAttributes = new ArrayList<>();
-    for (Element elem : n) {
-      EntityField.Builder fieldBuilder = new EntityField.Builder(elem);
-      if (config.getSchema() != null) {
-        if (fieldBuilder.getNameOrColumn() != null
-            && fieldBuilder.getNameOrColumn().contains("${")) {
-          fieldBuilder.dynamicName = true;
-        } else {
-          SchemaField schemaField = config.getSchemaField
-              (fieldBuilder.getNameOrColumn());
-          if (schemaField != null) {
-            fieldBuilder.name = schemaField.getName();
-            fieldBuilder.multiValued = schemaField.multiValued();
-            fieldBuilder.allAttributes.put(DataImporter.MULTI_VALUED, Boolean
-                .toString(schemaField.multiValued()));
-            fieldBuilder.allAttributes.put(DataImporter.TYPE, schemaField
-                .getType().getTypeName());
-            fieldBuilder.allAttributes.put("indexed", Boolean
-                .toString(schemaField.indexed()));
-            fieldBuilder.allAttributes.put("stored", Boolean
-                .toString(schemaField.stored()));
-            fieldBuilder.allAttributes.put("defaultValue", schemaField
-                .getDefaultValue());
-          } else {
-            fieldBuilder.toWrite = false;
-          }
-        }
-      }
-      Set<EntityField> fieldSet = modColNameVsField.get(fieldBuilder.column);
-      if (fieldSet == null) {
-        fieldSet = new HashSet<>();
-        modColNameVsField.put(fieldBuilder.column, fieldSet);
-      }
-      fieldBuilder.allAttributes.put("boost", Float
-          .toString(fieldBuilder.boost));
-      fieldBuilder.allAttributes.put("toWrite", Boolean
-          .toString(fieldBuilder.toWrite));
-      modAllFieldAttributes.add(fieldBuilder.allAttributes);
-      fieldBuilder.entity = this;
-      EntityField field = new EntityField(fieldBuilder);
-      fieldSet.add(field);
-      modFields.add(field);
-    }
-    Map<String,Set<EntityField>> modColNameVsField1 = new HashMap<>();
-    for (Map.Entry<String,Set<EntityField>> entry : modColNameVsField
-        .entrySet()) {
-      if (entry.getValue().size() > 0) {
-        modColNameVsField1.put(entry.getKey(), Collections
-            .unmodifiableSet(entry.getValue()));
-      }
-    }
-    this.colNameVsField = Collections.unmodifiableMap(modColNameVsField1);
-    this.fields = Collections.unmodifiableList(modFields);
-    this.allFieldAttributes = Collections
-        .unmodifiableList(modAllFieldAttributes);
-    
-    String modPkMappingFromSchema = null;
-    if (config.getSchema() != null) {
-      SchemaField uniqueKey = config.getSchema().getUniqueKeyField();
-      if (uniqueKey != null) {
-        modPkMappingFromSchema = uniqueKey.getName();
-        // if no fields are mentioned, the Solr uniqueKey is the same as the DIH 'pk'
-        for (EntityField field : fields) {
-          if (field.getName().equals(modPkMappingFromSchema)) {
-            modPkMappingFromSchema = field.getColumn();
-            // get the corresponding column mapping for the Solr uniqueKey.
-            // But if there are multiple columns mapping to the Solr uniqueKey,
-            // it will fail,
-            // so in one-off cases an explicit pk may still be needed
-            break;
-          }
-        }
-      }
-    }
-    pkMappingFromSchema = modPkMappingFromSchema;
-    n = ConfigParseUtil.getChildNodes(element, "entity");
-    List<Entity> modEntities = new ArrayList<>();
-    for (Element elem : n) {
-      modEntities.add(new Entity((docRootFound || this.docRoot), elem, di, config, this));
-    }
-    this.children = Collections.unmodifiableList(modEntities);
-  }
-  
-  public String getPk() {
-    return pk == null ? pkMappingFromSchema : pk;
-  }
-  
-  public String getSchemaPk() {
-    return pkMappingFromSchema != null ? pkMappingFromSchema : pk;
-  }
-  
-  public String getName() {
-    return name;
-  }
-  
-  public String getPkMappingFromSchema() {
-    return pkMappingFromSchema;
-  }
-  
-  public String getDataSourceName() {
-    return dataSourceName;
-  }
-  
-  public String getProcessorName() {
-    return processorName;
-  }
-  
-  public Entity getParentEntity() {
-    return parentEntity;
-  }
-  
-  public boolean isDocRoot() {
-    return docRoot;
-  }
-  
-  public List<Entity> getChildren() {
-    return children;
-  }
-  
-  public List<EntityField> getFields() {
-    return fields;
-  }
-  
-  public Map<String,Set<EntityField>> getColNameVsField() {
-    return colNameVsField;
-  }
-  
-  public Map<String,String> getAllAttributes() {
-    return allAttributes;
-  }
-  
-  public List<Map<String,String>> getAllFieldsList() {
-    return allFieldAttributes;
-  }
-
-  public boolean isChild() {
-    return child;
-  }
-}
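Note on the two pk getters just removed: `Entity` resolves its primary key from either an explicit `pk="..."` attribute on the `<entity>` element or from the column mapped to the schema's `uniqueKey` (`pkMappingFromSchema`), and `getPk()`/`getSchemaPk()` fall back in opposite directions. A minimal sketch of that resolution (standalone Java, not part of this patch; the local variables merely stand in for the real fields):

```java
// Sketch only: mirrors the fallback logic of the deleted Entity.getPk()
// and Entity.getSchemaPk(); variable names are illustrative assumptions.
class PkResolutionSketch {
  public static void main(String[] args) {
    String pk = null;                  // <entity pk="..."> attribute, if present
    String pkMappingFromSchema = "ID"; // column mapped to the schema's uniqueKey

    // getPk(): an explicit pk attribute wins, otherwise fall back to the schema mapping
    String effectivePk = (pk == null) ? pkMappingFromSchema : pk;

    // getSchemaPk(): the schema mapping wins, otherwise fall back to the attribute
    String schemaPk = (pkMappingFromSchema != null) ? pkMappingFromSchema : pk;

    System.out.println(effectivePk + " / " + schemaPk); // ID / ID
  }
}
```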
diff --git a/solr/contrib/dataimporthandler/src/java/org/apache/solr/handler/dataimport/config/EntityField.java b/solr/contrib/dataimporthandler/src/java/org/apache/solr/handler/dataimport/config/EntityField.java
deleted file mode 100644
index 2b28cb7..0000000
--- a/solr/contrib/dataimporthandler/src/java/org/apache/solr/handler/dataimport/config/EntityField.java
+++ /dev/null
@@ -1,102 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.solr.handler.dataimport.config;
-
-import static org.apache.solr.handler.dataimport.DataImportHandlerException.SEVERE;
-
-import java.util.Collections;
-import java.util.HashMap;
-import java.util.Map;
-
-import org.apache.solr.handler.dataimport.ConfigParseUtil;
-import org.apache.solr.handler.dataimport.DataImportHandlerException;
-import org.apache.solr.handler.dataimport.DataImporter;
-import org.w3c.dom.Element;
-
-public class EntityField {
-  private final String column;
-  private final String name;
-  private final boolean toWrite;
-  private final boolean multiValued;
-  private final boolean dynamicName;
-  private final Entity entity;
-  private final Map<String, String> allAttributes;
-
-  public EntityField(Builder b) {
-    this.column = b.column;
-    this.name = b.name;
-    this.toWrite = b.toWrite;
-    this.multiValued = b.multiValued;
-    this.dynamicName = b.dynamicName;
-    this.entity = b.entity;
-    this.allAttributes = Collections.unmodifiableMap(new HashMap<>(b.allAttributes));
-  }
-
-  public String getName() {
-    return name == null ? column : name;
-  }
-
-  public Entity getEntity() {
-    return entity;
-  }
-
-  public String getColumn() {
-    return column;
-  }
-
-  public boolean isToWrite() {
-    return toWrite;
-  }
-
-  public boolean isMultiValued() {
-    return multiValued;
-  }
-
-  public boolean isDynamicName() {
-    return dynamicName;
-  }
-
-  public Map<String,String> getAllAttributes() {
-    return allAttributes;
-  }
-  
-  public static class Builder {    
-    public String column;
-    public String name;
-    public float boost;
-    public boolean toWrite = true;
-    public boolean multiValued = false;
-    public boolean dynamicName = false;
-    public Entity entity;
-    public Map<String, String> allAttributes = new HashMap<>();
-    
-    public Builder(Element e) {
-      this.name = ConfigParseUtil.getStringAttribute(e, DataImporter.NAME, null);
-      this.column = ConfigParseUtil.getStringAttribute(e, DataImporter.COLUMN, null);
-      if (column == null) {
-        throw new DataImportHandlerException(SEVERE, "Field must have a column attribute");
-      }
-      this.boost = Float.parseFloat(ConfigParseUtil.getStringAttribute(e, "boost", "1.0f"));
-      this.allAttributes = new HashMap<>(ConfigParseUtil.getAllAttributes(e));
-    }
-    
-    public String getNameOrColumn() {
-      return name==null ? column : name;
-    }
-  }
-
-}
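The deleted `EntityField` pairs a mutable `Builder` (public fields populated from a `<field>` element via `ConfigParseUtil`) with an immutable value object that freezes `allAttributes` into an unmodifiable map. A minimal sketch of how such a field was constructed from a data-config element (standalone Java, not part of this patch; the XML string and class name are illustrative):

```java
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import javax.xml.parsers.DocumentBuilderFactory;
import org.apache.solr.handler.dataimport.config.EntityField;
import org.w3c.dom.Element;

public class EntityFieldSketch {
  public static void main(String[] args) throws Exception {
    // Parse a single <field> element as it would appear in data-config.xml
    String xml = "<field column=\"NAME\" name=\"name_s\"/>";
    Element el = DocumentBuilderFactory.newInstance().newDocumentBuilder()
        .parse(new ByteArrayInputStream(xml.getBytes(StandardCharsets.UTF_8)))
        .getDocumentElement();

    EntityField.Builder b = new EntityField.Builder(el); // throws SEVERE if 'column' is missing
    b.multiValued = true;               // builder fields are public and mutable...
    EntityField f = new EntityField(b); // ...until frozen into the immutable value

    System.out.println(f.getName());       // "name_s" (falls back to column when name is null)
    System.out.println(f.isMultiValued()); // true
  }
}
```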
diff --git a/solr/contrib/dataimporthandler/src/java/org/apache/solr/handler/dataimport/config/Field.java b/solr/contrib/dataimporthandler/src/java/org/apache/solr/handler/dataimport/config/Field.java
deleted file mode 100644
index 7ff4832..0000000
--- a/solr/contrib/dataimporthandler/src/java/org/apache/solr/handler/dataimport/config/Field.java
+++ /dev/null
@@ -1,108 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.solr.handler.dataimport.config;
-
-import static org.apache.solr.handler.dataimport.DataImportHandlerException.SEVERE;
-
-import java.util.Collections;
-import java.util.HashMap;
-import java.util.Map;
-
-import org.apache.solr.handler.dataimport.ConfigParseUtil;
-import org.apache.solr.handler.dataimport.DataImportHandlerException;
-import org.apache.solr.handler.dataimport.DataImporter;
-import org.w3c.dom.Element;
-
-public class Field {
-  private final String column;
-  private final String name;
-  private final float boost;
-  private final boolean toWrite;
-  private final boolean multiValued;
-  private final boolean dynamicName;
-  private final Entity entity;
-  private final Map<String, String> allAttributes;
-
-  public Field(Builder b) {
-    this.column = b.column;
-    this.name = b.name;
-    this.boost = b.boost;
-    this.toWrite = b.toWrite;
-    this.multiValued = b.multiValued;
-    this.dynamicName = b.dynamicName;
-    this.entity = b.entity;
-    this.allAttributes = Collections.unmodifiableMap(new HashMap<>(b.allAttributes));
-  }
-
-  public String getName() {
-    return name == null ? column : name;
-  }
-
-  public Entity getEntity() {
-    return entity;
-  }
-
-  public String getColumn() {
-    return column;
-  }
-
-  public float getBoost() {
-    return boost;
-  }
-
-  public boolean isToWrite() {
-    return toWrite;
-  }
-
-  public boolean isMultiValued() {
-    return multiValued;
-  }
-
-  public boolean isDynamicName() {
-    return dynamicName;
-  }
-
-  public Map<String,String> getAllAttributes() {
-    return allAttributes;
-  }
-  
-  public static class Builder {    
-    public String column;
-    public String name;
-    public float boost;
-    public boolean toWrite = true;
-    public boolean multiValued = false;
-    public boolean dynamicName;
-    public Entity entity;
-    public Map<String, String> allAttributes = new HashMap<>();
-    
-    public Builder(Element e) {
-      this.name = ConfigParseUtil.getStringAttribute(e, DataImporter.NAME, null);
-      this.column = ConfigParseUtil.getStringAttribute(e, DataImporter.COLUMN, null);
-      if (column == null) {
-        throw new DataImportHandlerException(SEVERE, "Field must have a column attribute");
-      }
-      this.boost = Float.parseFloat(ConfigParseUtil.getStringAttribute(e, "boost", "1.0f"));
-      this.allAttributes = new HashMap<>(ConfigParseUtil.getAllAttributes(e));
-    }
-    
-    public String getNameOrColumn() {
-      return name==null ? column : name;
-    }
-  }
-
-}
diff --git a/solr/contrib/dataimporthandler/src/java/org/apache/solr/handler/dataimport/config/PropertyWriter.java b/solr/contrib/dataimporthandler/src/java/org/apache/solr/handler/dataimport/config/PropertyWriter.java
deleted file mode 100644
index 6e91a19..0000000
--- a/solr/contrib/dataimporthandler/src/java/org/apache/solr/handler/dataimport/config/PropertyWriter.java
+++ /dev/null
@@ -1,38 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.solr.handler.dataimport.config;
-
-import java.util.HashMap;
-import java.util.Map;
-
-public class PropertyWriter {
-  private final String type;
-  private Map<String,String> parameters;
-  
-  public PropertyWriter(String type, Map<String,String> parameters) {
-    this.type = type;
-    this.parameters = new HashMap<String,String>(parameters);
-  }
-
-  public Map<String,String> getParameters() {
-    return parameters;
-  }
-  
-  public String getType() {
-    return type;
-  }  
-}
diff --git a/solr/contrib/dataimporthandler/src/java/org/apache/solr/handler/dataimport/config/Script.java b/solr/contrib/dataimporthandler/src/java/org/apache/solr/handler/dataimport/config/Script.java
deleted file mode 100644
index 9a4bc59..0000000
--- a/solr/contrib/dataimporthandler/src/java/org/apache/solr/handler/dataimport/config/Script.java
+++ /dev/null
@@ -1,42 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.solr.handler.dataimport.config;
-
-import org.apache.solr.handler.dataimport.ConfigParseUtil;
-import org.w3c.dom.Element;
-
-public class Script {
-  private final String language;
-  private final String text;
-  
-  public Script(Element e) {
-    this.language = ConfigParseUtil.getStringAttribute(e, "language", "JavaScript");
-    StringBuilder buffer = new StringBuilder();
-    String script = ConfigParseUtil.getText(e, buffer);
-    if (script != null) {
-      this.text = script.trim();
-    } else {
-      this.text = null;
-    }
-  }  
-  public String getLanguage() {
-    return language;
-  }  
-  public String getText() {
-    return text;
-  }
-}
diff --git a/solr/contrib/dataimporthandler/src/java/org/apache/solr/handler/dataimport/config/package-info.java b/solr/contrib/dataimporthandler/src/java/org/apache/solr/handler/dataimport/config/package-info.java
deleted file mode 100644
index 50c6d4e..0000000
--- a/solr/contrib/dataimporthandler/src/java/org/apache/solr/handler/dataimport/config/package-info.java
+++ /dev/null
@@ -1,24 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
- 
-/** 
- * Utility classes for parsing &amp; modeling DIH configuration.
- */
-package org.apache.solr.handler.dataimport.config;
-
-
-
diff --git a/solr/contrib/dataimporthandler/src/java/org/apache/solr/handler/dataimport/package-info.java b/solr/contrib/dataimporthandler/src/java/org/apache/solr/handler/dataimport/package-info.java
deleted file mode 100644
index 4a69d23..0000000
--- a/solr/contrib/dataimporthandler/src/java/org/apache/solr/handler/dataimport/package-info.java
+++ /dev/null
@@ -1,25 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
- 
-/** 
- * {@link org.apache.solr.handler.dataimport.DataImportHandler} and related code.
- */
-package org.apache.solr.handler.dataimport;
-
-
-
-
diff --git a/solr/contrib/dataimporthandler/src/java/overview.html b/solr/contrib/dataimporthandler/src/java/overview.html
deleted file mode 100644
index 4c2d595..0000000
--- a/solr/contrib/dataimporthandler/src/java/overview.html
+++ /dev/null
@@ -1,21 +0,0 @@
-<!--
- Licensed to the Apache Software Foundation (ASF) under one or more
- contributor license agreements.  See the NOTICE file distributed with
- this work for additional information regarding copyright ownership.
- The ASF licenses this file to You under the Apache License, Version 2.0
- (the "License"); you may not use this file except in compliance with
- the License.  You may obtain a copy of the License at
-
-     http://www.apache.org/licenses/LICENSE-2.0
-
- Unless required by applicable law or agreed to in writing, software
- distributed under the License is distributed on an "AS IS" BASIS,
- WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- See the License for the specific language governing permissions and
- limitations under the License.
--->
-<html>
-<body>
-Apache Solr Search Server: DataImportHandler contrib. <b>This contrib module is deprecated as of 8.6.</b>
-</body>
-</html>
diff --git a/solr/contrib/dataimporthandler/src/test-files/dih/solr/collection1/conf/contentstream-solrconfig.xml b/solr/contrib/dataimporthandler/src/test-files/dih/solr/collection1/conf/contentstream-solrconfig.xml
deleted file mode 100644
index d3ee34c..0000000
--- a/solr/contrib/dataimporthandler/src/test-files/dih/solr/collection1/conf/contentstream-solrconfig.xml
+++ /dev/null
@@ -1,287 +0,0 @@
-<?xml version="1.0" encoding="UTF-8" ?>
-<!--
- Licensed to the Apache Software Foundation (ASF) under one or more
- contributor license agreements.  See the NOTICE file distributed with
- this work for additional information regarding copyright ownership.
- The ASF licenses this file to You under the Apache License, Version 2.0
- (the "License"); you may not use this file except in compliance with
- the License.  You may obtain a copy of the License at
-
-     http://www.apache.org/licenses/LICENSE-2.0
-
- Unless required by applicable law or agreed to in writing, software
- distributed under the License is distributed on an "AS IS" BASIS,
- WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- See the License for the specific language governing permissions and
- limitations under the License.
--->
-
-<config>
-  <luceneMatchVersion>${tests.luceneMatchVersion:LATEST}</luceneMatchVersion>
-  <indexConfig>
-    <useCompoundFile>${useCompoundFile:false}</useCompoundFile>
-  </indexConfig>
-
-  <!-- Used to specify an alternate directory to hold all index data
-       other than the default ./data under the Solr home.
-       If replication is in use, this should match the replication configuration. -->
-       <dataDir>${solr.data.dir:}</dataDir>
-
-  <directoryFactory name="DirectoryFactory" class="${solr.directoryFactory:solr.RAMDirectoryFactory}"/>
-  <schemaFactory class="ClassicIndexSchemaFactory"/>
-
-  <!-- the default high-performance update handler -->
-  <updateHandler class="solr.DirectUpdateHandler2">
-
-    <!-- A prefix of "solr." for class names is an alias that
-         causes solr to search appropriate packages, including
-         org.apache.solr.(search|update|request|core|analysis)
-     -->
-
-    <!-- Limit the number of deletions Solr will buffer during doc updating.
-        
-        Setting this lower can help bound memory use during indexing.
-    -->
-    <maxPendingDeletes>100000</maxPendingDeletes>
-
-  </updateHandler>
-
-
-  <query>
-    <!-- Maximum number of clauses in a boolean query... can affect
-        range or prefix queries that expand to big boolean
-        queries.  An exception is thrown if exceeded.  -->
-    <maxBooleanClauses>${solr.max.booleanClauses:1024}</maxBooleanClauses>
-
-    
-    <!-- Cache used by SolrIndexSearcher for filters (DocSets),
-         unordered sets of *all* documents that match a query.
-         When a new searcher is opened, its caches may be prepopulated
-         or "autowarmed" using data from caches in the old searcher.
-         autowarmCount is the number of items to prepopulate.  For CaffeineCache,
-         the autowarmed items will be the most recently accessed items.
-       Parameters:
-         class - the SolrCache implementation (currently only CaffeineCache)
-         size - the maximum number of entries in the cache
-         initialSize - the initial capacity (number of entries) of
-           the cache.  (see java.util.HashMap)
-         autowarmCount - the number of entries to prepopulate from
-           an old cache.
-         -->
-    <filterCache
-      class="solr.CaffeineCache"
-      size="512"
-      initialSize="512"
-      autowarmCount="256"/>
-
-   <!-- queryResultCache caches results of searches - ordered lists of
-         document ids (DocList) based on a query, a sort, and the range
-         of documents requested.  -->
-    <queryResultCache
-      class="solr.CaffeineCache"
-      size="512"
-      initialSize="512"
-      autowarmCount="256"/>
-
-  <!-- documentCache caches Lucene Document objects (the stored fields for each document).
-       Since Lucene internal document ids are transient, this cache will not be autowarmed.  -->
-    <documentCache
-      class="solr.CaffeineCache"
-      size="512"
-      initialSize="512"
-      autowarmCount="0"/>
-
-    <!-- If true, stored fields that are not requested will be loaded lazily.
-
-    This can result in a significant speed improvement if the usual case is to
-    not load all stored fields, especially if the skipped fields are large compressed
-    text fields.
-    -->
-    <enableLazyFieldLoading>true</enableLazyFieldLoading>
-
-    <!-- Example of a generic cache.  These caches may be accessed by name
-         through SolrIndexSearcher.getCache(), cacheLookup(), and cacheInsert().
-         The purpose is to enable easy caching of user/application level data.
-         The regenerator argument should be specified as an implementation
-         of solr.search.CacheRegenerator if autowarming is desired.  -->
-    <!--
-    <cache name="myUserCache"
-      class="solr.CaffeineCache"
-      size="4096"
-      initialSize="1024"
-      autowarmCount="1024"
-      regenerator="org.mycompany.mypackage.MyRegenerator"
-      />
-    -->
-
-   <!-- An optimization that attempts to use a filter to satisfy a search.
-         If the requested sort does not include score, then the filterCache
-         will be checked for a filter matching the query. If found, the filter
-         will be used as the source of document ids, and then the sort will be
-         applied to that.
-    <useFilterForSortedQuery>true</useFilterForSortedQuery>
-   -->
-
-   <!-- An optimization for use with the queryResultCache.  When a search
-         is requested, a superset of the requested number of document ids
-         are collected.  For example, if a search for a particular query
-         requests matching documents 10 through 19, and queryWindowSize is 50,
-         then documents 0 through 49 will be collected and cached.  Any further
-         requests in that range can be satisfied via the cache.  -->
-    <queryResultWindowSize>50</queryResultWindowSize>
-    
-    <!-- Maximum number of documents to cache for any entry in the
-         queryResultCache. -->
-    <queryResultMaxDocsCached>200</queryResultMaxDocsCached>
-
-    <!-- a newSearcher event is fired whenever a new searcher is being prepared
-         and there is a current searcher handling requests (aka registered). -->
-    <!-- QuerySenderListener takes an array of NamedList and executes a
-         local query request for each NamedList in sequence. -->
-    <!--<listener event="newSearcher" class="solr.QuerySenderListener">-->
-      <!--<arr name="queries">-->
-        <!--<lst> <str name="q">solr</str> <str name="start">0</str> <str name="rows">10</str> </lst>-->
-        <!--<lst> <str name="q">rocks</str> <str name="start">0</str> <str name="rows">10</str> </lst>-->
-        <!--<lst><str name="q">static newSearcher warming query from solrconfig.xml</str></lst>-->
-      <!--</arr>-->
-    <!--</listener>-->
-
-    <!-- a firstSearcher event is fired whenever a new searcher is being
-         prepared but there is no current registered searcher to handle
-         requests or to gain autowarming data from. -->
-    <!--<listener event="firstSearcher" class="solr.QuerySenderListener">-->
-      <!--<arr name="queries">-->
-      <!--</arr>-->
-    <!--</listener>-->
-
-    <!-- If a search request comes in and there is no current registered searcher,
-         then immediately register the still warming searcher and use it.  If
-         "false" then all requests will block until the first searcher is done
-         warming. -->
-    <useColdSearcher>false</useColdSearcher>
-
-    <!-- Maximum number of searchers that may be warming in the background
-      concurrently.  An error is returned if this limit is exceeded. Recommend
-      1-2 for read-only slaves, higher for masters w/o cache warming. -->
-    <maxWarmingSearchers>4</maxWarmingSearchers>
-
-  </query>
-
-  <requestDispatcher>
-    <!--Make sure your system has some authentication before enabling remote streaming!
-    <requestParsers enableRemoteStreaming="false" multipartUploadLimitInKB="-1" />
-    -->
-
-    <!-- Set HTTP caching related parameters (for proxy caches and clients).
-          
-         To get the behaviour of Solr 1.2 (ie: no caching related headers)
-         use the never304="true" option and do not specify a value for
-         <cacheControl>
-    -->
-    <httpCaching never304="true">
-    <!--httpCaching lastModifiedFrom="openTime"
-                 etagSeed="Solr"-->
-       <!-- lastModFrom="openTime" is the default, the Last-Modified value
-            (and validation against If-Modified-Since requests) will all be
-            relative to when the current Searcher was opened.
-            You can change it to lastModFrom="dirLastMod" if you want the
-            value to exactly correspond to when the physical index was last
-            modified.
-               
-            etagSeed="..." is an option you can change to force the ETag
-            header (and validation against If-None-Match requests) to be
-            different even if the index has not changed (ie: when making
-            significant changes to your config file)
-
-            lastModifiedFrom and etagSeed are both ignored if you use the
-            never304="true" option.
-       -->
-       <!-- If you include a <cacheControl> directive, it will be used to
-            generate a Cache-Control header, as well as an Expires header
-            if the value contains "max-age="
-               
-            By default, no Cache-Control header is generated.
-
-            You can use the <cacheControl> option even if you have set
-            never304="true"
-       -->
-       <!-- <cacheControl>max-age=30, public</cacheControl> -->
-    </httpCaching>
-  </requestDispatcher>
-
-  <requestHandler name="/select" class="solr.SearchHandler">
-    <!-- default values for query parameters -->
-     <lst name="defaults">
-       <str name="echoParams">explicit</str>
-       <!-- 
-       <int name="rows">10</int>
-       <str name="fl">*</str>
-       <str name="version">2.1</str>
-        -->
-     </lst>
-  </requestHandler>
-  
-  <requestHandler name="/dataimport" class="org.apache.solr.handler.dataimport.DataImportHandler">
-    <lst name="defaults">
-      <str name="config">data-config.xml</str>
-
-    </lst>
-  </requestHandler>
-    
-  <!--
-   
-   Search components are registered to SolrCore and used by Search Handlers
-   
-   By default, the following components are available:
-    
-   <searchComponent name="query"     class="org.apache.solr.handler.component.QueryComponent" />
-   <searchComponent name="facet"     class="org.apache.solr.handler.component.FacetComponent" />
-   <searchComponent name="mlt"       class="org.apache.solr.handler.component.MoreLikeThisComponent" />
-   <searchComponent name="highlight" class="org.apache.solr.handler.component.HighlightComponent" />
-   <searchComponent name="debug"     class="org.apache.solr.handler.component.DebugComponent" />
-  
-   If you register a searchComponent to one of the standard names, that will be used instead.
-  
-   -->
- 
-  <requestHandler name="/search" class="org.apache.solr.handler.component.SearchHandler">
-    <lst name="defaults">
-      <str name="echoParams">explicit</str>
-    </lst>
-    <!--
-    By default, this will register the following components:
-    
-    <arr name="components">
-      <str>query</str>
-      <str>facet</str>
-      <str>mlt</str>
-      <str>highlight</str>
-      <str>debug</str>
-    </arr>
-    
-    To insert handlers before or after the 'standard' components, use:
-    
-    <arr name="first-components">
-      <str>first</str>
-    </arr>
-    
-    <arr name="last-components">
-      <str>last</str>
-    </arr>
-    
-    -->
-  </requestHandler>
-
-  <!-- config for the admin interface --> 
-  <admin>
-    <defaultQuery>*:*</defaultQuery>
-  </admin>
-
-  <updateRequestProcessorChain key="contentstream" default="true">
-    <processor class="org.apache.solr.handler.dataimport.AbstractDataImportHandlerTestCase$TestUpdateRequestProcessorFactory"/>
-    <processor class="solr.RunUpdateProcessorFactory"/>
-    <processor class="solr.LogUpdateProcessorFactory"/>
-  </updateRequestProcessorChain>
-
-</config>
-
diff --git a/solr/contrib/dataimporthandler/src/test-files/dih/solr/collection1/conf/data-config-end-to-end.xml b/solr/contrib/dataimporthandler/src/test-files/dih/solr/collection1/conf/data-config-end-to-end.xml
deleted file mode 100644
index a582112..0000000
--- a/solr/contrib/dataimporthandler/src/test-files/dih/solr/collection1/conf/data-config-end-to-end.xml
+++ /dev/null
@@ -1,41 +0,0 @@
-<dataConfig>
-  <dataSource name="hsqldb" driver="org.hsqldb.jdbcDriver" url="jdbc:hsqldb:mem:." />
-  <document name="dih_end_to_end">
-    <entity 
-      name="People" 
-      processor="SqlEntityProcessor"
-      dataSource="hsqldb" 
-      query="SELECT ID, NAME, COUNTRY_CODES FROM PEOPLE"
-      transformer="RegexTransformer"
-    >
-      <field column="ID" name="id" />
-      <field column="COUNTRY_CODE" sourceColName="COUNTRY_CODES" splitBy="," />
- 
-<!-- 
- Instead of using 'cacheKey'/'cacheLookup' as done below, we could have done:
-  where="CODE=People.COUNTRY_CODE"
---> 
-      <entity 
-        name="Countries"
-        processor="SqlEntityProcessor"
-        dataSource="hsqldb" 
-        cacheImpl="SortedMapBackedCache"
-        cacheKey="CODE"
-        cacheLookup="People.COUNTRY_CODE"
-        
-        query="SELECT CODE, COUNTRY_NAME FROM COUNTRIES"
-      >
-        <field column="CODE" name="DO_NOT_INDEX" />
-      </entity>
-         
-      <entity 
-        name="Sports"
-        processor="SqlEntityProcessor"
-        dataSource="hsqldb"               
-        query="SELECT PERSON_ID, SPORT_NAME FROM PEOPLE_SPORTS WHERE PERSON_ID=${People.ID}"
-      />
-
-    </entity>
-  </document>
-</dataConfig>
-         
\ No newline at end of file
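The `Countries` entity above uses `cacheImpl`/`cacheKey`/`cacheLookup` so the child query runs once and each parent row joins against an in-memory map instead of issuing a per-row SQL query. A rough conceptual sketch of that lookup join (plain Java, not DIH's actual implementation; names and data are illustrative):

```java
import java.util.HashMap;
import java.util.Map;

class CacheJoinSketch {
  public static void main(String[] args) {
    // Child entity rows loaded once, keyed by the cacheKey column ("CODE")
    Map<String, Map<String, Object>> countriesByCode = new HashMap<>();
    countriesByCode.put("NL", Map.of("CODE", "NL", "COUNTRY_NAME", "Netherlands"));

    // One parent row; cacheLookup names the parent column to join on
    Map<String, Object> person = Map.of("ID", 1, "NAME", "Jan", "COUNTRY_CODE", "NL");

    // The "join" is a map lookup rather than a per-row SQL query
    Map<String, Object> match = countriesByCode.get(person.get("COUNTRY_CODE"));
    System.out.println(match.get("COUNTRY_NAME")); // Netherlands
  }
}
```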
diff --git a/solr/contrib/dataimporthandler/src/test-files/dih/solr/collection1/conf/data-config-with-datasource.xml b/solr/contrib/dataimporthandler/src/test-files/dih/solr/collection1/conf/data-config-with-datasource.xml
deleted file mode 100644
index 46a6603..0000000
--- a/solr/contrib/dataimporthandler/src/test-files/dih/solr/collection1/conf/data-config-with-datasource.xml
+++ /dev/null
@@ -1,9 +0,0 @@
-<dataConfig>
-  <dataSource type="MockDataSource" />
-  <document>
-    <entity name="x" query="select * from x">
-      <field column="id" />
-      <field column="desc" />
-    </entity>
-  </document>
-</dataConfig>
diff --git a/solr/contrib/dataimporthandler/src/test-files/dih/solr/collection1/conf/data-config-with-transformer.xml b/solr/contrib/dataimporthandler/src/test-files/dih/solr/collection1/conf/data-config-with-transformer.xml
deleted file mode 100644
index 925e6c2..0000000
--- a/solr/contrib/dataimporthandler/src/test-files/dih/solr/collection1/conf/data-config-with-transformer.xml
+++ /dev/null
@@ -1,10 +0,0 @@
-<dataConfig>
-  <dataSource  type="MockDataSource" />
-  <dataSource name="mockDs" type="TestDocBuilder2$MockDataSource2" />
-  <document>
-    <entity name="x" query="select * from x" transformer="TestDocBuilder2$MockTransformer">
-      <field column="id" />
-      <field column="desc" />
-    </entity>
-  </document>
-</dataConfig>
diff --git a/solr/contrib/dataimporthandler/src/test-files/dih/solr/collection1/conf/dataconfig-contentstream.xml b/solr/contrib/dataimporthandler/src/test-files/dih/solr/collection1/conf/dataconfig-contentstream.xml
deleted file mode 100644
index 7520e74..0000000
--- a/solr/contrib/dataimporthandler/src/test-files/dih/solr/collection1/conf/dataconfig-contentstream.xml
+++ /dev/null
@@ -1,10 +0,0 @@
-<dataConfig>
-  <dataSource type="ContentStreamDataSource" name="c"/>
-  <document>
-    <entity name="b" dataSource="c" processor="XPathEntityProcessor"
-            forEach="/root/b">
-      <field column="desc" xpath="/root/b/c"/>
-      <field column="id" xpath="/root/b/id"/>
-    </entity>
-  </document>
-</dataConfig>
diff --git a/solr/contrib/dataimporthandler/src/test-files/dih/solr/collection1/conf/dataimport-nodatasource-solrconfig.xml b/solr/contrib/dataimporthandler/src/test-files/dih/solr/collection1/conf/dataimport-nodatasource-solrconfig.xml
deleted file mode 100644
index 2fd15b9..0000000
--- a/solr/contrib/dataimporthandler/src/test-files/dih/solr/collection1/conf/dataimport-nodatasource-solrconfig.xml
+++ /dev/null
@@ -1,279 +0,0 @@
-<?xml version="1.0" encoding="UTF-8" ?>
-<!--
- Licensed to the Apache Software Foundation (ASF) under one or more
- contributor license agreements.  See the NOTICE file distributed with
- this work for additional information regarding copyright ownership.
- The ASF licenses this file to You under the Apache License, Version 2.0
- (the "License"); you may not use this file except in compliance with
- the License.  You may obtain a copy of the License at
-
-     http://www.apache.org/licenses/LICENSE-2.0
-
- Unless required by applicable law or agreed to in writing, software
- distributed under the License is distributed on an "AS IS" BASIS,
- WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- See the License for the specific language governing permissions and
- limitations under the License.
--->
-
-<config>
-  <luceneMatchVersion>${tests.luceneMatchVersion:LATEST}</luceneMatchVersion>
-
-  <!-- Used to specify an alternate directory to hold all index data
-       other than the default ./data under the Solr home.
-       If replication is in use, this should match the replication configuration. -->
-       <dataDir>${solr.data.dir:}</dataDir>
-
-  <directoryFactory name="DirectoryFactory" class="${solr.directoryFactory:solr.RAMDirectoryFactory}"/>
-  <schemaFactory class="ClassicIndexSchemaFactory"/>
-
-  <indexConfig>
-    <lockType>single</lockType>
-    <useCompoundFile>${useCompoundFile:false}</useCompoundFile>
-  </indexConfig>
-
-  <!-- the default high-performance update handler -->
-  <updateHandler class="solr.DirectUpdateHandler2">
-
-    <!-- A prefix of "solr." for class names is an alias that
-         causes solr to search appropriate packages, including
-         org.apache.solr.(search|update|request|core|analysis)
-     -->
-
-    <!-- Limit the number of deletions Solr will buffer during doc updating.
-        
-        Setting this lower can help bound memory use during indexing.
-    -->
-    <maxPendingDeletes>100000</maxPendingDeletes>
-
-  </updateHandler>
-
-
-  <query>
-    <!-- Maximum number of clauses in a boolean query... can affect
-        range or prefix queries that expand to big boolean
-        queries.  An exception is thrown if exceeded.  -->
-    <maxBooleanClauses>${solr.max.booleanClauses:1024}</maxBooleanClauses>
-
-    
-    <!-- Cache used by SolrIndexSearcher for filters (DocSets),
-         unordered sets of *all* documents that match a query.
-         When a new searcher is opened, its caches may be prepopulated
-         or "autowarmed" using data from caches in the old searcher.
-         autowarmCount is the number of items to prepopulate.  For CaffeineCache,
-         the autowarmed items will be the most recently accessed items.
-       Parameters:
-         class - the SolrCache implementation (currently only CaffeineCache)
-         size - the maximum number of entries in the cache
-         initialSize - the initial capacity (number of entries) of
-           the cache.  (see java.util.HashMap)
-         autowarmCount - the number of entries to prepopulate from
-           an old cache.
-         -->
-    <filterCache
-      class="solr.CaffeineCache"
-      size="512"
-      initialSize="512"
-      autowarmCount="256"/>
-
-   <!-- queryResultCache caches results of searches - ordered lists of
-         document ids (DocList) based on a query, a sort, and the range
-         of documents requested.  -->
-    <queryResultCache
-      class="solr.CaffeineCache"
-      size="512"
-      initialSize="512"
-      autowarmCount="256"/>
-
-  <!-- documentCache caches Lucene Document objects (the stored fields for each document).
-       Since Lucene internal document ids are transient, this cache will not be autowarmed.  -->
-    <documentCache
-      class="solr.CaffeineCache"
-      size="512"
-      initialSize="512"
-      autowarmCount="0"/>
-
-    <!-- If true, stored fields that are not requested will be loaded lazily.
-
-    This can result in a significant speed improvement if the usual case is to
-    not load all stored fields, especially if the skipped fields are large compressed
-    text fields.
-    -->
-    <enableLazyFieldLoading>true</enableLazyFieldLoading>
-
-    <!-- Example of a generic cache.  These caches may be accessed by name
-         through SolrIndexSearcher.getCache(), cacheLookup(), and cacheInsert().
-         The purpose is to enable easy caching of user/application level data.
-         The regenerator argument should be specified as an implementation
-         of solr.search.CacheRegenerator if autowarming is desired.  -->
-    <!--
-    <cache name="myUserCache"
-      class="solr.CaffeineCache"
-      size="4096"
-      initialSize="1024"
-      autowarmCount="1024"
-      regenerator="org.mycompany.mypackage.MyRegenerator"
-      />
-    -->
-
-   <!-- An optimization that attempts to use a filter to satisfy a search.
-         If the requested sort does not include score, then the filterCache
-         will be checked for a filter matching the query. If found, the filter
-         will be used as the source of document ids, and then the sort will be
-         applied to that.
-    <useFilterForSortedQuery>true</useFilterForSortedQuery>
-   -->
-
-   <!-- An optimization for use with the queryResultCache.  When a search
-         is requested, a superset of the requested number of document ids
-         are collected.  For example, if a search for a particular query
-         requests matching documents 10 through 19, and queryWindowSize is 50,
-         then documents 0 through 49 will be collected and cached.  Any further
-         requests in that range can be satisfied via the cache.  -->
-    <queryResultWindowSize>50</queryResultWindowSize>
-    
-    <!-- Maximum number of documents to cache for any entry in the
-         queryResultCache. -->
-    <queryResultMaxDocsCached>200</queryResultMaxDocsCached>
-
-    <!-- a newSearcher event is fired whenever a new searcher is being prepared
-         and there is a current searcher handling requests (aka registered). -->
-    <!-- QuerySenderListener takes an array of NamedList and executes a
-         local query request for each NamedList in sequence. -->
-    <!--<listener event="newSearcher" class="solr.QuerySenderListener">-->
-      <!--<arr name="queries">-->
-        <!--<lst> <str name="q">solr</str> <str name="start">0</str> <str name="rows">10</str> </lst>-->
-        <!--<lst> <str name="q">rocks</str> <str name="start">0</str> <str name="rows">10</str> </lst>-->
-        <!--<lst><str name="q">static newSearcher warming query from solrconfig.xml</str></lst>-->
-      <!--</arr>-->
-    <!--</listener>-->
-
-    <!-- a firstSearcher event is fired whenever a new searcher is being
-         prepared but there is no current registered searcher to handle
-         requests or to gain autowarming data from. -->
-    <!--<listener event="firstSearcher" class="solr.QuerySenderListener">-->
-      <!--<arr name="queries">-->
-      <!--</arr>-->
-    <!--</listener>-->
-
-    <!-- If a search request comes in and there is no current registered searcher,
-         then immediately register the still warming searcher and use it.  If
-         "false" then all requests will block until the first searcher is done
-         warming. -->
-    <useColdSearcher>false</useColdSearcher>
-
-    <!-- Maximum number of searchers that may be warming in the background
-      concurrently.  An error is returned if this limit is exceeded. Recommend
-      1-2 for read-only slaves, higher for masters w/o cache warming. -->
-    <maxWarmingSearchers>4</maxWarmingSearchers>
-
-  </query>
-
-  <requestDispatcher>
-    <!--Make sure your system has some authentication before enabling remote streaming!
-    <requestParsers enableRemoteStreaming="false" multipartUploadLimitInKB="-1" />
-    -->
-        
-    <!-- Set HTTP caching related parameters (for proxy caches and clients).
-          
-         To get the behaviour of Solr 1.2 (ie: no caching related headers)
-         use the never304="true" option and do not specify a value for
-         <cacheControl>
-    -->
-    <httpCaching never304="true">
-    <!--httpCaching lastModifiedFrom="openTime"
-                 etagSeed="Solr"-->
-       <!-- lastModFrom="openTime" is the default, the Last-Modified value
-            (and validation against If-Modified-Since requests) will all be
-            relative to when the current Searcher was opened.
-            You can change it to lastModFrom="dirLastMod" if you want the
-            value to exactly correspond to when the physical index was last
-            modified.
-               
-            etagSeed="..." is an option you can change to force the ETag
-            header (and validation against If-None-Match requests) to be
-            different even if the index has not changed (ie: when making
-            significant changes to your config file)
-
-            lastModifiedFrom and etagSeed are both ignored if you use the
-            never304="true" option.
-       -->
-       <!-- If you include a <cacheControl> directive, it will be used to
-            generate a Cache-Control header, as well as an Expires header
-            if the value contains "max-age="
-               
-            By default, no Cache-Control header is generated.
-
-            You can use the <cacheControl> option even if you have set
-            never304="true"
-       -->
-       <!-- <cacheControl>max-age=30, public</cacheControl> -->
-    </httpCaching>
-  </requestDispatcher>
-
-  <requestHandler name="/select" class="solr.SearchHandler">
-    <!-- default values for query parameters -->
-     <lst name="defaults">
-       <str name="echoParams">explicit</str>
-       <!-- 
-       <int name="rows">10</int>
-       <str name="fl">*</str>
-       <str name="version">2.1</str>
-        -->
-     </lst>
-  </requestHandler>
-  
-  <requestHandler name="/dataimport" class="org.apache.solr.handler.dataimport.DataImportHandler">
-  </requestHandler>
-    
-  <!--
-   
-   Search components are registered to SolrCore and used by Search Handlers
-   
-   By default, the following components are available:
-    
-   <searchComponent name="query"     class="org.apache.solr.handler.component.QueryComponent" />
-   <searchComponent name="facet"     class="org.apache.solr.handler.component.FacetComponent" />
-   <searchComponent name="mlt"       class="org.apache.solr.handler.component.MoreLikeThisComponent" />
-   <searchComponent name="highlight" class="org.apache.solr.handler.component.HighlightComponent" />
-   <searchComponent name="debug"     class="org.apache.solr.handler.component.DebugComponent" />
-  
-   If you register a searchComponent to one of the standard names, that will be used instead.
-  
-   -->
- 
-  <requestHandler name="/search" class="org.apache.solr.handler.component.SearchHandler">
-    <lst name="defaults">
-      <str name="echoParams">explicit</str>
-    </lst>
-    <!--
-    By default, this will register the following components:
-    
-    <arr name="components">
-      <str>query</str>
-      <str>facet</str>
-      <str>mlt</str>
-      <str>highlight</str>
-      <str>debug</str>
-    </arr>
-    
-    To insert handlers before or after the 'standard' components, use:
-    
-    <arr name="first-components">
-      <str>first</str>
-    </arr>
-    
-    <arr name="last-components">
-      <str>last</str>
-    </arr>
-    
-    -->
-  </requestHandler>
-  
-  <!-- config for the admin interface -->
-  <admin>
-    <defaultQuery>*:*</defaultQuery>
-  </admin>
-
-</config>
-
diff --git a/solr/contrib/dataimporthandler/src/test-files/dih/solr/collection1/conf/dataimport-schema.xml b/solr/contrib/dataimporthandler/src/test-files/dih/solr/collection1/conf/dataimport-schema.xml
deleted file mode 100644
index 7187138..0000000
--- a/solr/contrib/dataimporthandler/src/test-files/dih/solr/collection1/conf/dataimport-schema.xml
+++ /dev/null
@@ -1,70 +0,0 @@
-<schema name="dih_test" version="4.0">
-
-  <fieldType name="string" class="solr.StrField" sortMissingLast="true" omitNorms="true"/>
-  <fieldType name="boolean" class="solr.BoolField" sortMissingLast="true" omitNorms="true"/>
-  <fieldType name="tint" class="${solr.tests.IntegerFieldType}" docValues="${solr.tests.numeric.dv}" precisionStep="8" positionIncrementGap="0"/>
-  <fieldType name="tfloat" class="${solr.tests.FloatFieldType}" docValues="${solr.tests.numeric.dv}" precisionStep="8" positionIncrementGap="0"/>
-  <fieldType name="tlong" class="${solr.tests.LongFieldType}" docValues="${solr.tests.numeric.dv}" precisionStep="8" positionIncrementGap="0"/>
-  <fieldType name="tdouble" class="${solr.tests.DoubleFieldType}" docValues="${solr.tests.numeric.dv}" precisionStep="8" positionIncrementGap="0"/>
-  <fieldType name="date" class="${solr.tests.DateFieldType}" docValues="${solr.tests.numeric.dv}" sortMissingLast="true" omitNorms="true"/>
-  <fieldType name="text" class="solr.TextField" positionIncrementGap="100">
-    <analyzer type="index">
-      <tokenizer class="solr.MockTokenizerFactory"/>
-      <filter class="solr.WordDelimiterGraphFilterFactory" generateWordParts="1" generateNumberParts="1" catenateWords="1"
-              catenateNumbers="1" catenateAll="0" splitOnCaseChange="1"/>
-      <filter class="solr.LowerCaseFilterFactory"/>
-      <filter class="solr.RemoveDuplicatesTokenFilterFactory"/>
-      <filter class="solr.FlattenGraphFilterFactory" />
-    </analyzer>
-    <analyzer type="query">
-      <tokenizer class="solr.MockTokenizerFactory"/>
-      <filter class="solr.WordDelimiterGraphFilterFactory" generateWordParts="1" generateNumberParts="1" catenateWords="0"
-              catenateNumbers="0" catenateAll="0" splitOnCaseChange="1"/>
-      <filter class="solr.LowerCaseFilterFactory"/>
-      <filter class="solr.RemoveDuplicatesTokenFilterFactory"/>
-    </analyzer>
-  </fieldType>
-  <fieldType name="textTight" class="solr.TextField" positionIncrementGap="100">
-    <analyzer type="index">
-      <tokenizer class="solr.MockTokenizerFactory"/>
-      <filter class="solr.WordDelimiterGraphFilterFactory" generateWordParts="0" generateNumberParts="0" catenateWords="1"
-              catenateNumbers="1" catenateAll="0"/>
-      <filter class="solr.LowerCaseFilterFactory"/>
-      <filter class="solr.RemoveDuplicatesTokenFilterFactory"/>
-      <filter class="solr.FlattenGraphFilterFactory" />
-    </analyzer>
-    <analyzer type="query">
-      <tokenizer class="solr.MockTokenizerFactory"/>
-      <filter class="solr.WordDelimiterGraphFilterFactory" generateWordParts="0" generateNumberParts="0" catenateWords="1"
-              catenateNumbers="1" catenateAll="0"/>
-      <filter class="solr.LowerCaseFilterFactory"/>
-      <filter class="solr.RemoveDuplicatesTokenFilterFactory"/>
-    </analyzer>
-  </fieldType>
-  <fieldType name="ignored" stored="false" indexed="false" class="solr.StrField"/>
-
-  <field name="id" type="string" indexed="true" stored="true" required="true"/>
-  <field name="desc" type="string" indexed="true" stored="true" multiValued="true"/>
-  <field name="date" type="date" indexed="true" stored="true"/>
-  <field name="timestamp" type="date" indexed="true" stored="true" default="NOW" multiValued="false"/>
-
-  <field name="NAME" type="text" indexed="true" stored="true" multiValued="false"/>
-  <field name="COUNTRY_NAME" type="text" indexed="true" stored="true" multiValued="true"/>
-  <field name="SPORT_NAME" type="text" indexed="true" stored="true" multiValued="true"/>
-  <field name="DO_NOT_INDEX" type="ignored"/>
-
-  <field name="_version_" type="tlong" indexed="true" stored="true" multiValued="false"/>
-  <field name="_root_" type="string" indexed="true" stored="true" multiValued="false"/>
-
-  <dynamicField name="*_i" type="tint" indexed="true" stored="true"/>
-  <dynamicField name="*_s" type="string" indexed="true" stored="true"/>
-  <dynamicField name="*_mult_s" type="string" indexed="true" stored="true" multiValued="true"/>
-  <dynamicField name="*_l" type="tlong" indexed="true" stored="true"/>
-  <dynamicField name="*_t" type="text" indexed="true" stored="true"/>
-  <dynamicField name="*_b" type="boolean" indexed="true" stored="true"/>
-  <dynamicField name="*_f" type="tfloat" indexed="true" stored="true"/>
-  <dynamicField name="*_d" type="tdouble" indexed="true" stored="true"/>
-  <dynamicField name="*_dt" type="date" indexed="true" stored="true"/>
-
-  <uniqueKey>id</uniqueKey>
-</schema>
diff --git a/solr/contrib/dataimporthandler/src/test-files/dih/solr/collection1/conf/dataimport-solr_id-schema.xml b/solr/contrib/dataimporthandler/src/test-files/dih/solr/collection1/conf/dataimport-solr_id-schema.xml
deleted file mode 100644
index e8deee4..0000000
--- a/solr/contrib/dataimporthandler/src/test-files/dih/solr/collection1/conf/dataimport-solr_id-schema.xml
+++ /dev/null
@@ -1,313 +0,0 @@
-<?xml version="1.0" encoding="UTF-8" ?>
-<!--
- Licensed to the Apache Software Foundation (ASF) under one or more
- contributor license agreements.  See the NOTICE file distributed with
- this work for additional information regarding copyright ownership.
- The ASF licenses this file to You under the Apache License, Version 2.0
- (the "License"); you may not use this file except in compliance with
- the License.  You may obtain a copy of the License at
-
-     http://www.apache.org/licenses/LICENSE-2.0
-
- Unless required by applicable law or agreed to in writing, software
- distributed under the License is distributed on an "AS IS" BASIS,
- WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- See the License for the specific language governing permissions and
- limitations under the License.
--->
-
-<!--  
- This is the Solr schema file. This file should be named "schema.xml" and
- should be in the conf directory under the solr home
- (i.e. ./solr/conf/schema.xml by default) 
- or located where the classloader for the Solr webapp can find it.
-
- This example schema is the recommended starting point for users.
- It should be kept correct and concise, usable out-of-the-box.
-
- For more information on how to customize this file, please see
- http://wiki.apache.org/solr/SchemaXml
--->
-
-<schema name="test" version="1.1">
-  <!-- attribute "name" is the name of this schema and is only used for display purposes.
-       Applications should change this to reflect the nature of the search collection.
-       version="1.1" is Solr's version number for the schema syntax and semantics.  It should
-       not normally be changed by applications.
-       1.0: multiValued attribute did not exist, all fields are multiValued by nature
-       1.1: multiValued attribute introduced, false by default -->
-
-
-  <!-- field type definitions. The "name" attribute is
-     just a label to be used by field definitions.  The "class"
-     attribute and any other attributes determine the real
-     behavior of the fieldType.
-       Class names starting with "solr" refer to java classes in the
-     org.apache.solr.analysis package.
-  -->
-
-  <!-- The StrField type is not analyzed, but indexed/stored verbatim.  
-     - StrField and TextField support an optional compressThreshold which
-     limits compression (if enabled in the derived fields) to values which
-     exceed a certain size (in characters).
-  -->
-  <fieldType name="string" class="solr.StrField" sortMissingLast="true" omitNorms="true"/>
-
-  <!-- boolean type: "true" or "false" -->
-  <fieldType name="boolean" class="solr.BoolField" sortMissingLast="true" omitNorms="true"/>
-
-  <!-- The optional sortMissingLast and sortMissingFirst attributes are
-       currently supported on types that are sorted internally as strings.
-     - If sortMissingLast="true", then a sort on this field will cause documents
-       without the field to come after documents with the field,
-       regardless of the requested sort order (asc or desc).
-     - If sortMissingFirst="true", then a sort on this field will cause documents
-       without the field to come before documents with the field,
-       regardless of the requested sort order.
-     - If sortMissingLast="false" and sortMissingFirst="false" (the default),
-       then default lucene sorting will be used which places docs without the
-       field first in an ascending sort and last in a descending sort.
-  -->
-
-
-  <!--
-    Default numeric field types. For faster range queries, consider the tint/tfloat/tlong/tdouble types.
-  -->
-  <fieldType name="int" class="${solr.tests.IntegerFieldType}" docValues="${solr.tests.numeric.dv}" precisionStep="0" positionIncrementGap="0"/>
-  <fieldType name="float" class="${solr.tests.FloatFieldType}" docValues="${solr.tests.numeric.dv}" precisionStep="0" positionIncrementGap="0"/>
-  <fieldType name="long" class="${solr.tests.LongFieldType}" docValues="${solr.tests.numeric.dv}" precisionStep="0" positionIncrementGap="0"/>
-  <fieldType name="double" class="${solr.tests.DoubleFieldType}" docValues="${solr.tests.numeric.dv}" precisionStep="0" positionIncrementGap="0"/>
-
-  <!--
-   Numeric field types that index each value at various levels of precision
-   to accelerate range queries when the number of values between the range
-   endpoints is large. See the javadoc for LegacyNumericRangeQuery for internal
-   implementation details.
-
-   Smaller precisionStep values (specified in bits) will lead to more tokens
-   indexed per value, slightly larger index size, and faster range queries.
-   A precisionStep of 0 disables indexing at different precision levels.
-  -->
-  <fieldType name="tint" class="${solr.tests.IntegerFieldType}" docValues="${solr.tests.numeric.dv}" precisionStep="8" positionIncrementGap="0"/>
-  <fieldType name="tfloat" class="${solr.tests.FloatFieldType}" docValues="${solr.tests.numeric.dv}" precisionStep="8" positionIncrementGap="0"/>
-  <fieldType name="tlong" class="${solr.tests.LongFieldType}" docValues="${solr.tests.numeric.dv}" precisionStep="8" positionIncrementGap="0"/>
-  <fieldType name="tdouble" class="${solr.tests.DoubleFieldType}" docValues="${solr.tests.numeric.dv}" precisionStep="8" positionIncrementGap="0"/>
-
-
-  <!-- The format for this date field is of the form 1995-12-31T23:59:59Z, and
-       is a more restricted form of the canonical representation of dateTime
-       http://www.w3.org/TR/xmlschema-2/#dateTime    
-       The trailing "Z" designates UTC time and is mandatory.
-       Optional fractional seconds are allowed: 1995-12-31T23:59:59.999Z
-       All other components are mandatory.
-
-       Expressions can also be used to denote calculations that should be
-       performed relative to "NOW" to determine the value, ie...
-
-             NOW/HOUR
-                ... Round to the start of the current hour
-             NOW-1DAY
-                ... Exactly 1 day prior to now
-             NOW/DAY+6MONTHS+3DAYS
-                ... 6 months and 3 days in the future from the start of
-                    the current day
-                    
-       Consult the TrieDateField javadocs for more information.
-    -->
-  <fieldType name="date" class="${solr.tests.DateFieldType}" docValues="${solr.tests.numeric.dv}" sortMissingLast="true" omitNorms="true"/>
-
-
-  <!-- The "RandomSortField" is not used to store or search any
-       data.  You can declare fields of this type it in your schema
-       to generate psuedo-random orderings of your docs for sorting 
-       purposes.  The ordering is generated based on the field name 
-       and the version of the index, As long as the index version
-       remains unchanged, and the same field name is reused,
-       the ordering of the docs will be consistent.  
-       If you want differend psuedo-random orderings of documents,
-       for the same version of the index, use a dynamicField and
-       change the name
-   -->
-  <fieldType name="random" class="solr.RandomSortField" indexed="true"/>
-
-  <!-- solr.TextField allows the specification of custom text analyzers
-       specified as a tokenizer and a list of token filters. Different
-       analyzers may be specified for indexing and querying.
-
-       The optional positionIncrementGap puts space between multiple fields of
-       this type on the same document, with the purpose of preventing false phrase
-       matching across fields.
-
-       For more info on customizing your analyzer chain, please see
-       http://wiki.apache.org/solr/AnalyzersTokenizersTokenFilters
-   -->
-
-  <!-- One can also specify an existing Analyzer class that has a
-       default constructor via the class attribute on the analyzer element
-  <fieldType name="text_greek" class="solr.TextField">
-    <analyzer class="org.apache.lucene.analysis.el.GreekAnalyzer"/>
-  </fieldType>
-  -->
-
-  <!-- A text field that only splits on whitespace for exact matching of words -->
-  <fieldType name="text_ws" class="solr.TextField" positionIncrementGap="100">
-    <analyzer>
-      <tokenizer class="solr.MockTokenizerFactory"/>
-    </analyzer>
-  </fieldType>
-
-  <!-- A text field that uses WordDelimiterGraphFilter to enable splitting and matching of
-      words on case-change, alpha numeric boundaries, and non-alphanumeric chars,
-      so that a query of "wifi" or "wi fi" could match a document containing "Wi-Fi".
-      Synonyms and stopwords are customized by external files, and stemming is enabled.
-      Duplicate tokens at the same position (which may result from Stemmed Synonyms or
-      WordDelim parts) are removed.
-      -->
-  <fieldType name="text" class="solr.TextField" positionIncrementGap="100">
-    <analyzer type="index">
-      <tokenizer class="solr.MockTokenizerFactory"/>
-      <!-- in this example, we will only use synonyms at query time
-      <filter class="solr.SynonymGraphFilterFactory" synonyms="index_synonyms.txt" ignoreCase="true" expand="false"/>
-      -->
-      <!--<filter class="solr.StopFilterFactory" ignoreCase="true" words="stopwords.txt"/>-->
-      <filter class="solr.WordDelimiterGraphFilterFactory" generateWordParts="1" generateNumberParts="1" catenateWords="1"
-              catenateNumbers="1" catenateAll="0" splitOnCaseChange="1"/>
-      <filter class="solr.LowerCaseFilterFactory"/>
-      <!--<filter class="solr.KeywordMarkerFilterFactory" protected="protwords.txt"/>
-      <filter class="solr.PorterStemFilterFactory"/>-->
-      <filter class="solr.RemoveDuplicatesTokenFilterFactory"/>
-      <filter class="solr.FlattenGraphFilterFactory"/>
-    </analyzer>
-    <analyzer type="query">
-      <tokenizer class="solr.MockTokenizerFactory"/>
-      <!--<filter class="solr.SynonymGraphFilterFactory" synonyms="synonyms.txt" ignoreCase="true" expand="true"/>-->
-      <!--<filter class="solr.StopFilterFactory" ignoreCase="true" words="stopwords.txt"/>-->
-      <filter class="solr.WordDelimiterGraphFilterFactory" generateWordParts="1" generateNumberParts="1" catenateWords="0"
-              catenateNumbers="0" catenateAll="0" splitOnCaseChange="1"/>
-      <filter class="solr.LowerCaseFilterFactory"/>
-      <!--<filter class="solr.KeywordMarkerFilterFactory" protected="protwords.txt"/>
-      <filter class="solr.PorterStemFilterFactory"/>-->
-      <filter class="solr.RemoveDuplicatesTokenFilterFactory"/>
-    </analyzer>
-  </fieldType>
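-  <!-- Worked example for the "text" type above: at index time "Wi-Fi"
-       splits on the case change and the hyphen into "wi"/"fi", and
-       catenateWords="1" also emits "wifi" at the same position; at query
-       time (catenation off) "wi fi" yields "wi"/"fi" and "wifi" stays a
-       single token, so both query forms can match a document containing
-       "Wi-Fi". -->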
-
-
-  <!-- Less flexible matching, but fewer false matches.  Probably not ideal for product names,
-       but may be good for SKUs.  Can insert dashes in the wrong place and still match. -->
-  <fieldType name="textTight" class="solr.TextField" positionIncrementGap="100">
-    <analyzer type="index">
-      <tokenizer class="solr.MockTokenizerFactory"/>
-      <!--<filter class="solr.SynonymGraphFilterFactory" synonyms="synonyms.txt" ignoreCase="true" expand="false"/>-->
-      <!--<filter class="solr.StopFilterFactory" ignoreCase="true" words="stopwords.txt"/>-->
-      <filter class="solr.WordDelimiterGraphFilterFactory" generateWordParts="0" generateNumberParts="0" catenateWords="1"
-              catenateNumbers="1" catenateAll="0"/>
-      <filter class="solr.LowerCaseFilterFactory"/>
-      <!--<filter class="solr.KeywordMarkerFilterFactory" protected="protwords.txt"/>
-      <filter class="solr.EnglishMinimalStemFilterFactory"/>-->
-      <filter class="solr.RemoveDuplicatesTokenFilterFactory"/>
-      <filter class="solr.FlattenGraphFilterFactory"/>
-    </analyzer>
-    <analyzer type="query">
-      <tokenizer class="solr.MockTokenizerFactory"/>
-      <!--<filter class="solr.SynonymGraphFilterFactory" synonyms="synonyms.txt" ignoreCase="true" expand="false"/>-->
-      <!--<filter class="solr.StopFilterFactory" ignoreCase="true" words="stopwords.txt"/>-->
-      <filter class="solr.WordDelimiterGraphFilterFactory" generateWordParts="0" generateNumberParts="0" catenateWords="1"
-              catenateNumbers="1" catenateAll="0"/>
-      <filter class="solr.LowerCaseFilterFactory"/>
-      <!--<filter class="solr.KeywordMarkerFilterFactory" protected="protwords.txt"/>
-      <filter class="solr.EnglishMinimalStemFilterFactory"/>-->
-      <filter class="solr.RemoveDuplicatesTokenFilterFactory"/>
-    </analyzer>
-  </fieldType>
-
-  <!-- This is an example of using the KeywordTokenizer along
-       with various TokenFilterFactories to produce a sortable field
-       that does not include some properties of the source text
-    -->
-  <fieldType name="alphaOnlySort" class="solr.TextField" sortMissingLast="true" omitNorms="true">
-    <analyzer>
-      <!-- KeywordTokenizer does no actual tokenizing, so the entire
-           input string is preserved as a single token
-        -->
-      <tokenizer class="solr.MockTokenizerFactory" pattern="keyword"/>
-      <!-- The LowerCase TokenFilter does what you expect, which can be
-           useful when you want your sorting to be case insensitive
-        -->
-      <filter class="solr.LowerCaseFilterFactory"/>
-      <!-- The TrimFilter removes any leading or trailing whitespace -->
-      <filter class="solr.TrimFilterFactory"/>
-      <!-- The PatternReplaceFilter gives you the flexibility to use
-           Java regular expressions to replace any sequence of characters
-           matching a pattern with an arbitrary replacement string,
-           which may include back references to portions of the original
-           string matched by the pattern.
-
-           See the Java Regular Expression documentation for more
-           information on pattern and replacement string syntax.
-
-           http://docs.oracle.com/javase/8/docs/api/java/util/regex/package-summary.html
-        -->
-      <filter class="solr.PatternReplaceFilterFactory"
-              pattern="([^a-z])" replacement="" replace="all"
-      />
-    </analyzer>
-  </fieldType>
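-  <!-- Worked example for the chain above: the raw value "  Mr. O'Neil  "
-       is kept as one keyword token, lowercased, trimmed, and then every
-       character outside [a-z] is removed, so the document sorts by the
-       key "mroneil". -->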
-
-  <!-- since fields of this type are by default not stored or indexed, any data added to 
-       them will be ignored outright 
-   -->
-  <fieldType name="ignored" stored="false" indexed="false" class="solr.StrField"/>
-
-  <!-- Valid attributes for fields:
-    name: mandatory - the name for the field
-    type: mandatory - the name of a previously defined type from the <fieldType>s
-    indexed: true if this field should be indexed (searchable or sortable)
-    stored: true if this field should be retrievable
-    multiValued: true if this field may contain multiple values per document
-    omitNorms: (expert) set to true to omit the norms associated with
-      this field (this disables length normalization and index-time
-      boosting for the field, and saves some memory).  Only full-text
-      fields or fields that need an index-time boost need norms.
-    termVectors: [false] set to true to store the term vector for a given field.
-      When using MoreLikeThis, fields used for similarity should be stored for 
-      best performance.
-  -->
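-  <!-- A hypothetical full-text field exercising the attributes above;
-       storing term vectors can speed up MoreLikeThis on this field:
-  <field name="body_t" type="text" indexed="true" stored="true" termVectors="true"/>
-  -->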
-
-  <field name="solr_id" type="string" indexed="true" stored="true" required="true"/>
-  <field name="desc" type="string" indexed="true" stored="true" multiValued="true"/>
-
-  <field name="date" type="date" indexed="true" stored="true"/>
-
-  <field name="timestamp" type="date" indexed="true" stored="true" default="NOW" multiValued="false"/>
-
-
-  <!-- Dynamic field definitions.  If a field name is not found, dynamicFields
-       will be used if the name matches any of the patterns.
-       RESTRICTION: the glob-like pattern in the name attribute must have
-       a "*" only at the start or the end.
-       EXAMPLE:  name="*_i" will match any field ending in _i (like myid_i, z_i)
-       Longer patterns will be matched first.  If equal-size patterns
-       both match, the first appearing in the schema will be used.  -->
-  <dynamicField name="*_i" type="int" indexed="true" stored="true"/>
-  <dynamicField name="*_s" type="string" indexed="true" stored="true"/>
-  <dynamicField name="*_l" type="long" indexed="true" stored="true"/>
-  <dynamicField name="*_t" type="text" indexed="true" stored="true"/>
-  <dynamicField name="*_b" type="boolean" indexed="true" stored="true"/>
-  <dynamicField name="*_f" type="float" indexed="true" stored="true"/>
-  <dynamicField name="*_d" type="double" indexed="true" stored="true"/>
-  <dynamicField name="*_dt" type="date" indexed="true" stored="true"/>
-
-  <dynamicField name="random*" type="random"/>
-
-  <!-- uncomment the following to ignore any fields that don't already match an existing 
-       field name or dynamic field, rather than reporting them as an error. 
-       alternatively, change the type="ignored" to some other type, e.g. "text", if you want 
-       unknown fields indexed and/or stored by default -->
-  <!--dynamicField name="*" type="ignored" /-->
-
-
-  <!-- Field to use to determine and enforce document uniqueness. 
-       Unless this field is marked with required="false", it will be a required field
-    -->
-  <uniqueKey>solr_id</uniqueKey>
-</schema>
diff --git a/solr/contrib/dataimporthandler/src/test-files/dih/solr/collection1/conf/dataimport-solrconfig.xml b/solr/contrib/dataimporthandler/src/test-files/dih/solr/collection1/conf/dataimport-solrconfig.xml
deleted file mode 100644
index ec6e6a9..0000000
--- a/solr/contrib/dataimporthandler/src/test-files/dih/solr/collection1/conf/dataimport-solrconfig.xml
+++ /dev/null
@@ -1,287 +0,0 @@
-<?xml version="1.0" encoding="UTF-8" ?>
-<!--
- Licensed to the Apache Software Foundation (ASF) under one or more
- contributor license agreements.  See the NOTICE file distributed with
- this work for additional information regarding copyright ownership.
- The ASF licenses this file to You under the Apache License, Version 2.0
- (the "License"); you may not use this file except in compliance with
- the License.  You may obtain a copy of the License at
-
-     http://www.apache.org/licenses/LICENSE-2.0
-
- Unless required by applicable law or agreed to in writing, software
- distributed under the License is distributed on an "AS IS" BASIS,
- WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- See the License for the specific language governing permissions and
- limitations under the License.
--->
-
-<config>
-  <luceneMatchVersion>${tests.luceneMatchVersion:LATEST}</luceneMatchVersion>
-  <indexConfig>
-    <useCompoundFile>${useCompoundFile:false}</useCompoundFile>
-  </indexConfig>
-
-  <!-- Used to specify an alternate directory to hold all index data
-       other than the default ./data under the Solr home.
-       If replication is in use, this should match the replication configuration. -->
-       <dataDir>${solr.data.dir:}</dataDir>
-
-  <directoryFactory name="DirectoryFactory" class="${solr.directoryFactory:solr.RAMDirectoryFactory}"/>
-  <schemaFactory class="ClassicIndexSchemaFactory"/>
-
-  <!-- the default high-performance update handler -->
-  <updateHandler class="solr.DirectUpdateHandler2">
-
-    <!-- A prefix of "solr." for class names is an alias that
-         causes solr to search appropriate packages, including
-         org.apache.solr.(search|update|request|core|analysis)
-     -->
-
-    <!-- Limit the number of deletions Solr will buffer during doc updating.
-        
-        Setting this lower can help bound memory use during indexing.
-    -->
-    <maxPendingDeletes>100000</maxPendingDeletes>
-
-  </updateHandler>
-
-
-  <query>
-    <!-- Maximum number of clauses in a boolean query... can affect
-        range or prefix queries that expand to big boolean
-        queries.  An exception is thrown if exceeded.  -->
-    <maxBooleanClauses>${solr.max.booleanClauses:1024}</maxBooleanClauses>
-
-    
-    <!-- Cache used by SolrIndexSearcher for filters (DocSets),
-         unordered sets of *all* documents that match a query.
-         When a new searcher is opened, its caches may be prepopulated
-         or "autowarmed" using data from caches in the old searcher.
-         autowarmCount is the number of items to prepopulate.  For CaffeineCache,
-         the autowarmed items will be the most recently accessed items.
-       Parameters:
-         class - the SolrCache implementation (currently only CaffeineCache)
-         size - the maximum number of entries in the cache
-         initialSize - the initial capacity (number of entries) of
-           the cache.  (see java.util.HashMap)
-         autowarmCount - the number of entries to prepopulate from
-           an old cache.
-         -->
-    <filterCache
-      class="solr.CaffeineCache"
-      size="512"
-      initialSize="512"
-      autowarmCount="256"/>
-
-   <!-- queryResultCache caches results of searches - ordered lists of
-         document ids (DocList) based on a query, a sort, and the range
-         of documents requested.  -->
-    <queryResultCache
-      class="solr.CaffeineCache"
-      size="512"
-      initialSize="512"
-      autowarmCount="256"/>
-
-  <!-- documentCache caches Lucene Document objects (the stored fields for each document).
-       Since Lucene internal document ids are transient, this cache will not be autowarmed.  -->
-    <documentCache
-      class="solr.CaffeineCache"
-      size="512"
-      initialSize="512"
-      autowarmCount="0"/>
-
-    <!-- If true, stored fields that are not requested will be loaded lazily.
-
-    This can result in a significant speed improvement if the usual case is to
-    not load all stored fields, especially if the skipped fields are large compressed
-    text fields.
-    -->
-    <enableLazyFieldLoading>true</enableLazyFieldLoading>
-
-    <!-- Example of a generic cache.  These caches may be accessed by name
-         through SolrIndexSearcher.getCache(), cacheLookup(), and cacheInsert().
-         The purpose is to enable easy caching of user/application level data.
-         The regenerator argument should be specified as an implementation
-         of solr.search.CacheRegenerator if autowarming is desired.  -->
-    <!--
-    <cache name="myUserCache"
-      class="solr.CaffeineCache"
-      size="4096"
-      initialSize="1024"
-      autowarmCount="1024"
-      regenerator="org.mycompany.mypackage.MyRegenerator"
-      />
-    -->
-
-   <!-- An optimization that attempts to use a filter to satisfy a search.
-         If the requested sort does not include score, then the filterCache
-         will be checked for a filter matching the query. If found, the filter
-         will be used as the source of document ids, and then the sort will be
-         applied to that.
-    <useFilterForSortedQuery>true</useFilterForSortedQuery>
-   -->
-
-   <!-- An optimization for use with the queryResultCache.  When a search
-         is requested, a superset of the requested number of document ids
-         is collected.  For example, if a search for a particular query
-         requests matching documents 10 through 19, and queryResultWindowSize is 50,
-         then documents 0 through 49 will be collected and cached.  Any further
-         requests in that range can be satisfied via the cache.  -->
-    <queryResultWindowSize>50</queryResultWindowSize>
-    
-    <!-- Maximum number of documents to cache for any entry in the
-         queryResultCache. -->
-    <queryResultMaxDocsCached>200</queryResultMaxDocsCached>
-
-    <!-- a newSearcher event is fired whenever a new searcher is being prepared
-         and there is a current searcher handling requests (aka registered). -->
-    <!-- QuerySenderListener takes an array of NamedList and executes a
-         local query request for each NamedList in sequence. -->
-    <!--<listener event="newSearcher" class="solr.QuerySenderListener">-->
-      <!--<arr name="queries">-->
-        <!--<lst> <str name="q">solr</str> <str name="start">0</str> <str name="rows">10</str> </lst>-->
-        <!--<lst> <str name="q">rocks</str> <str name="start">0</str> <str name="rows">10</str> </lst>-->
-        <!--<lst><str name="q">static newSearcher warming query from solrconfig.xml</str></lst>-->
-      <!--</arr>-->
-    <!--</listener>-->
-
-    <!-- a firstSearcher event is fired whenever a new searcher is being
-         prepared but there is no current registered searcher to handle
-         requests or to gain autowarming data from. -->
-    <!--<listener event="firstSearcher" class="solr.QuerySenderListener">-->
-      <!--<arr name="queries">-->
-      <!--</arr>-->
-    <!--</listener>-->
-
-    <!-- If a search request comes in and there is no current registered searcher,
-         then immediately register the still warming searcher and use it.  If
-         "false" then all requests will block until the first searcher is done
-         warming. -->
-    <useColdSearcher>false</useColdSearcher>
-
-    <!-- Maximum number of searchers that may be warming in the background
-      concurrently.  An error is returned if this limit is exceeded. Recommend
-      1-2 for read-only slaves, higher for masters w/o cache warming. -->
-    <maxWarmingSearchers>4</maxWarmingSearchers>
-
-  </query>
-
-  <requestDispatcher>
-    <!--Make sure your system has some authentication before enabling remote streaming!
-    <requestParsers enableRemoteStreaming="false" multipartUploadLimitInKB="-1" />
-    -->
-        
-    <!-- Set HTTP caching related parameters (for proxy caches and clients).
-          
-         To get the behaviour of Solr 1.2 (ie: no caching related headers)
-         use the never304="true" option and do not specify a value for
-         <cacheControl>
-    -->
-    <httpCaching never304="true">
-    <!--httpCaching lastModifiedFrom="openTime"
-                 etagSeed="Solr"-->
-       <!-- lastModFrom="openTime" is the default; the Last-Modified value
-            (and validation against If-Modified-Since requests) will be
-            relative to when the current Searcher was opened.
-            You can change it to lastModFrom="dirLastMod" if you want the
-            value to exactly correspond to when the physical index was last
-            modified.
-
-            etagSeed="..." is an option you can change to force the ETag
-            header (and validation against If-None-Match requests) to be
-            different even if the index has not changed (ie: when making
-            significant changes to your config file)
-
-            lastModifiedFrom and etagSeed are both ignored if you use the
-            never304="true" option.
-       -->
-       <!-- If you include a <cacheControl> directive, it will be used to
-            generate a Cache-Control header, as well as an Expires header
-            if the value contains "max-age="
-               
-            By default, no Cache-Control header is generated.
-
-            You can use the <cacheControl> option even if you have set
-            never304="true"
-       -->
-       <!-- <cacheControl>max-age=30, public</cacheControl> -->
-    </httpCaching>
-  </requestDispatcher>
-
-  <requestHandler name="/select" class="solr.SearchHandler">
-    <!-- default values for query parameters -->
-     <lst name="defaults">
-       <str name="echoParams">explicit</str>
-       <str name="df">desc</str>
-       <!-- 
-       <int name="rows">10</int>
-       <str name="fl">*</str>
-       <str name="version">2.1</str>
-        -->
-     </lst>
-  </requestHandler>
-  
-  <requestHandler name="/dataimport" class="org.apache.solr.handler.dataimport.DataImportHandler">
-    <lst name="defaults">
-      <str name="dots.in.hsqldb.driver">org.hsqldb.jdbcDriver</str>
-    </lst>
-  </requestHandler>
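-  <!-- A typical invocation of the handler above (hypothetical host/core):
-       http://localhost:8983/solr/collection1/dataimport?command=full-import&clean=true&commit=true
-       command=status reports progress, and command=abort stops a running import.
-  -->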
-    
-  <!--
-   
-   Search components are registered to SolrCore and used by Search Handlers
-   
-   By default, the following components are available:
-    
-   <searchComponent name="query"     class="org.apache.solr.handler.component.QueryComponent" />
-   <searchComponent name="facet"     class="org.apache.solr.handler.component.FacetComponent" />
-   <searchComponent name="mlt"       class="org.apache.solr.handler.component.MoreLikeThisComponent" />
-   <searchComponent name="highlight" class="org.apache.solr.handler.component.HighlightComponent" />
-   <searchComponent name="debug"     class="org.apache.solr.handler.component.DebugComponent" />
-  
-   If you register a searchComponent to one of the standard names, that will be used instead.
-  
-   -->
- 
-  <requestHandler name="/search" class="org.apache.solr.handler.component.SearchHandler">
-    <lst name="defaults">
-      <str name="echoParams">explicit</str>
-    </lst>
-    <!--
-    By default, this will register the following components:
-    
-    <arr name="components">
-      <str>query</str>
-      <str>facet</str>
-      <str>mlt</str>
-      <str>highlight</str>
-      <str>debug</str>
-    </arr>
-    
-    To insert handlers before or after the 'standard' components, use:
-    
-    <arr name="first-components">
-      <str>first</str>
-    </arr>
-    
-    <arr name="last-components">
-      <str>last</str>
-    </arr>
-    
-    -->
-  </requestHandler>
-
-  <!-- config for the admin interface --> 
-  <admin>
-    <defaultQuery>*:*</defaultQuery>
-  </admin>
-
-  <updateRequestProcessorChain key="dataimport" default="true">
-    <processor class="org.apache.solr.handler.dataimport.AbstractDataImportHandlerTestCase$TestUpdateRequestProcessorFactory"/>
-    <processor class="solr.RunUpdateProcessorFactory"/>
-    <processor class="solr.LogUpdateProcessorFactory"/>
-  </updateRequestProcessorChain>
-
-</config>
-
diff --git a/solr/contrib/dataimporthandler/src/test-files/dih/solr/collection1/conf/protwords.txt b/solr/contrib/dataimporthandler/src/test-files/dih/solr/collection1/conf/protwords.txt
deleted file mode 100644
index 7878147..0000000
--- a/solr/contrib/dataimporthandler/src/test-files/dih/solr/collection1/conf/protwords.txt
+++ /dev/null
@@ -1,20 +0,0 @@
-# Licensed to the Apache Software Foundation (ASF) under one or more
-# contributor license agreements.  See the NOTICE file distributed with
-# this work for additional information regarding copyright ownership.
-# The ASF licenses this file to You under the Apache License, Version 2.0
-# (the "License"); you may not use this file except in compliance with
-# the License.  You may obtain a copy of the License at
-#
-#     http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-#use a protected word file to avoid stemming two
-#unrelated words to the same base word.
-#to test, we will use words that would normally obviously be stemmed.
-cats
-ridding
diff --git a/solr/contrib/dataimporthandler/src/test-files/dih/solr/collection1/conf/single-entity-data-config.xml b/solr/contrib/dataimporthandler/src/test-files/dih/solr/collection1/conf/single-entity-data-config.xml
deleted file mode 100644
index 7375a2c..0000000
--- a/solr/contrib/dataimporthandler/src/test-files/dih/solr/collection1/conf/single-entity-data-config.xml
+++ /dev/null
@@ -1,9 +0,0 @@
-<dataConfig>
-  <dataSource type="MockDataSource"/>
-  <document>
-    <entity name="x" query="select * from x">
-      <field column="id" />
-      <field column="desc" />
-    </entity>
-  </document>
-</dataConfig>
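-<!-- In tests this config is expected to pair with MockDataSource: rows
-     registered for the query "select * from x" are returned as the
-     entity's data, and the "id" and "desc" columns map onto the matching
-     schema fields. -->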
diff --git a/solr/contrib/dataimporthandler/src/test-files/dih/solr/collection1/conf/stopwords.txt b/solr/contrib/dataimporthandler/src/test-files/dih/solr/collection1/conf/stopwords.txt
deleted file mode 100644
index 688e307..0000000
--- a/solr/contrib/dataimporthandler/src/test-files/dih/solr/collection1/conf/stopwords.txt
+++ /dev/null
@@ -1,16 +0,0 @@
-# Licensed to the Apache Software Foundation (ASF) under one or more
-# contributor license agreements.  See the NOTICE file distributed with
-# this work for additional information regarding copyright ownership.
-# The ASF licenses this file to You under the Apache License, Version 2.0
-# (the "License"); you may not use this file except in compliance with
-# the License.  You may obtain a copy of the License at
-#
-#     http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-stopworda
-stopwordb
diff --git a/solr/contrib/dataimporthandler/src/test-files/dih/solr/collection1/conf/synonyms.txt b/solr/contrib/dataimporthandler/src/test-files/dih/solr/collection1/conf/synonyms.txt
deleted file mode 100644
index a7624f0..0000000
--- a/solr/contrib/dataimporthandler/src/test-files/dih/solr/collection1/conf/synonyms.txt
+++ /dev/null
@@ -1,22 +0,0 @@
-# Licensed to the Apache Software Foundation (ASF) under one or more
-# contributor license agreements.  See the NOTICE file distributed with
-# this work for additional information regarding copyright ownership.
-# The ASF licenses this file to You under the Apache License, Version 2.0
-# (the "License"); you may not use this file except in compliance with
-# the License.  You may obtain a copy of the License at
-#
-#     http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-a => aa
-b => b1 b2
-c => c1,c2
-a\=>a => b\=>b
-a\,a => b\,b
-foo,bar,baz
-
-Television,TV,Televisions
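-# Note on the formats above: "a => aa" is an explicit one-way mapping, while
-# a bare comma list such as "foo,bar,baz" is symmetric -- with expand="true"
-# it behaves like "foo,bar,baz => foo,bar,baz", and with expand="false" all
-# three collapse to the first entry.  Escaped forms such as "a\=>a" map the
-# literal token "a=>a".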
diff --git a/solr/contrib/dataimporthandler/src/test-files/dih/solr/solr.xml b/solr/contrib/dataimporthandler/src/test-files/dih/solr/solr.xml
deleted file mode 100644
index 330eef1..0000000
--- a/solr/contrib/dataimporthandler/src/test-files/dih/solr/solr.xml
+++ /dev/null
@@ -1,27 +0,0 @@
-<?xml version="1.0" encoding="UTF-8" ?>
-<!--
- Licensed to the Apache Software Foundation (ASF) under one or more
- contributor license agreements.  See the NOTICE file distributed with
- this work for additional information regarding copyright ownership.
- The ASF licenses this file to You under the Apache License, Version 2.0
- (the "License"); you may not use this file except in compliance with
- the License.  You may obtain a copy of the License at
-
-     http://www.apache.org/licenses/LICENSE-2.0
-
- Unless required by applicable law or agreed to in writing, software
- distributed under the License is distributed on an "AS IS" BASIS,
- WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- See the License for the specific language governing permissions and
- limitations under the License.
--->
-
-<!--
- solr.xml mimicking the old default solr.xml
--->
-
-<solr>
-    <shardHandlerFactory name="shardHandlerFactory" class="HttpShardHandlerFactory">
-      <str name="urlScheme">${urlScheme:}</str>
-    </shardHandlerFactory>
-</solr>
\ No newline at end of file
diff --git a/solr/contrib/dataimporthandler/src/test-files/log4j2.xml b/solr/contrib/dataimporthandler/src/test-files/log4j2.xml
deleted file mode 100644
index 5795615..0000000
--- a/solr/contrib/dataimporthandler/src/test-files/log4j2.xml
+++ /dev/null
@@ -1,42 +0,0 @@
-<?xml version="1.0" encoding="UTF-8"?>
-<!--
-  Licensed to the Apache Software Foundation (ASF) under one or more
-  contributor license agreements.  See the NOTICE file distributed with
-  this work for additional information regarding copyright ownership.
-  The ASF licenses this file to You under the Apache License, Version 2.0
-  (the "License"); you may not use this file except in compliance with
-  the License.  You may obtain a copy of the License at
-
-      http://www.apache.org/licenses/LICENSE-2.0
-
-  Unless required by applicable law or agreed to in writing, software
-  distributed under the License is distributed on an "AS IS" BASIS,
-  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-  See the License for the specific language governing permissions and
-  limitations under the License.
-  -->
-<!-- We're configuring testing to be synchronous due to "logging pollution", see SOLR-13268 -->
-<Configuration>
-  <Appenders>
-    <Console name="STDERR" target="SYSTEM_ERR">
-      <PatternLayout>
-        <Pattern>
-          %maxLen{%-4r %-5p (%t) [%X{node_name} %X{collection} %X{shard} %X{replica} %X{core} %X{trace_id}] %c{1.} %m%notEmpty{
-          =>%ex{short}}}{10240}%n
-        </Pattern>
-      </PatternLayout>
-    </Console>
-  </Appenders>
-  <Loggers>
-    <!-- Use <AsyncLogger>/<AsyncRoot> for asynchronous logging, or <Logger>/<Root> for synchronous logging -->
-    <Logger name="org.apache.zookeeper" level="WARN"/>
-    <Logger name="org.apache.hadoop" level="WARN"/>
-    <Logger name="org.apache.directory" level="WARN"/>
-    <Logger name="org.apache.solr.hadoop" level="INFO"/>
-    <Logger name="org.eclipse.jetty" level="INFO"/>
-
-    <Root level="INFO">
-      <AppenderRef ref="STDERR"/>
-    </Root>
-  </Loggers>
-</Configuration>
diff --git a/solr/contrib/dataimporthandler/src/test-files/solr/collection1/README b/solr/contrib/dataimporthandler/src/test-files/solr/collection1/README
deleted file mode 100644
index a6f23b2..0000000
--- a/solr/contrib/dataimporthandler/src/test-files/solr/collection1/README
+++ /dev/null
@@ -1 +0,0 @@
-The collection1 directory is needed because it is used as a marker in SolrTestCaseJ4.TEST_PATH() to find the configsets
diff --git a/solr/contrib/dataimporthandler/src/test-files/solr/configsets/dihconfigset/conf/README b/solr/contrib/dataimporthandler/src/test-files/solr/configsets/dihconfigset/conf/README
deleted file mode 100644
index 559f2fe..0000000
--- a/solr/contrib/dataimporthandler/src/test-files/solr/configsets/dihconfigset/conf/README
+++ /dev/null
@@ -1,2 +0,0 @@
-The files here are copies of "dataimport-solrconfig.xml" and "dataimport-schema.xml".
-This config set is used by the test org.apache.solr.handler.dataimport.TestZKPropertiesWriter, which starts a SolrCloud mini cluster.
diff --git a/solr/contrib/dataimporthandler/src/test-files/solr/configsets/dihconfigset/conf/schema.xml b/solr/contrib/dataimporthandler/src/test-files/solr/configsets/dihconfigset/conf/schema.xml
deleted file mode 100644
index 7187138..0000000
--- a/solr/contrib/dataimporthandler/src/test-files/solr/configsets/dihconfigset/conf/schema.xml
+++ /dev/null
@@ -1,70 +0,0 @@
-<schema name="dih_test" version="4.0">
-
-  <fieldType name="string" class="solr.StrField" sortMissingLast="true" omitNorms="true"/>
-  <fieldType name="boolean" class="solr.BoolField" sortMissingLast="true" omitNorms="true"/>
-  <fieldType name="tint" class="${solr.tests.IntegerFieldType}" docValues="${solr.tests.numeric.dv}" precisionStep="8" positionIncrementGap="0"/>
-  <fieldType name="tfloat" class="${solr.tests.FloatFieldType}" docValues="${solr.tests.numeric.dv}" precisionStep="8" positionIncrementGap="0"/>
-  <fieldType name="tlong" class="${solr.tests.LongFieldType}" docValues="${solr.tests.numeric.dv}" precisionStep="8" positionIncrementGap="0"/>
-  <fieldType name="tdouble" class="${solr.tests.DoubleFieldType}" docValues="${solr.tests.numeric.dv}" precisionStep="8" positionIncrementGap="0"/>
-  <fieldType name="date" class="${solr.tests.DateFieldType}" docValues="${solr.tests.numeric.dv}" sortMissingLast="true" omitNorms="true"/>
-  <fieldType name="text" class="solr.TextField" positionIncrementGap="100">
-    <analyzer type="index">
-      <tokenizer class="solr.MockTokenizerFactory"/>
-      <filter class="solr.WordDelimiterGraphFilterFactory" generateWordParts="1" generateNumberParts="1" catenateWords="1"
-              catenateNumbers="1" catenateAll="0" splitOnCaseChange="1"/>
-      <filter class="solr.LowerCaseFilterFactory"/>
-      <filter class="solr.RemoveDuplicatesTokenFilterFactory"/>
-      <filter class="solr.FlattenGraphFilterFactory" />
-    </analyzer>
-    <analyzer type="query">
-      <tokenizer class="solr.MockTokenizerFactory"/>
-      <filter class="solr.WordDelimiterGraphFilterFactory" generateWordParts="1" generateNumberParts="1" catenateWords="0"
-              catenateNumbers="0" catenateAll="0" splitOnCaseChange="1"/>
-      <filter class="solr.LowerCaseFilterFactory"/>
-      <filter class="solr.RemoveDuplicatesTokenFilterFactory"/>
-    </analyzer>
-  </fieldType>
-  <fieldType name="textTight" class="solr.TextField" positionIncrementGap="100">
-    <analyzer type="index">
-      <tokenizer class="solr.MockTokenizerFactory"/>
-      <filter class="solr.WordDelimiterGraphFilterFactory" generateWordParts="0" generateNumberParts="0" catenateWords="1"
-              catenateNumbers="1" catenateAll="0"/>
-      <filter class="solr.LowerCaseFilterFactory"/>
-      <filter class="solr.RemoveDuplicatesTokenFilterFactory"/>
-      <filter class="solr.FlattenGraphFilterFactory" />
-    </analyzer>
-    <analyzer type="query">
-      <tokenizer class="solr.MockTokenizerFactory"/>
-      <filter class="solr.WordDelimiterGraphFilterFactory" generateWordParts="0" generateNumberParts="0" catenateWords="1"
-              catenateNumbers="1" catenateAll="0"/>
-      <filter class="solr.LowerCaseFilterFactory"/>
-      <filter class="solr.RemoveDuplicatesTokenFilterFactory"/>
-    </analyzer>
-  </fieldType>
-  <fieldType name="ignored" stored="false" indexed="false" class="solr.StrField"/>
-
-  <field name="id" type="string" indexed="true" stored="true" required="true"/>
-  <field name="desc" type="string" indexed="true" stored="true" multiValued="true"/>
-  <field name="date" type="date" indexed="true" stored="true"/>
-  <field name="timestamp" type="date" indexed="true" stored="true" default="NOW" multiValued="false"/>
-
-  <field name="NAME" type="text" indexed="true" stored="true" multiValued="false"/>
-  <field name="COUNTRY_NAME" type="text" indexed="true" stored="true" multiValued="true"/>
-  <field name="SPORT_NAME" type="text" indexed="true" stored="true" multiValued="true"/>
-  <field name="DO_NOT_INDEX" type="ignored"/>
-
-  <field name="_version_" type="tlong" indexed="true" stored="true" multiValued="false"/>
-  <field name="_root_" type="string" indexed="true" stored="true" multiValued="false"/>
-
-  <dynamicField name="*_i" type="tint" indexed="true" stored="true"/>
-  <dynamicField name="*_s" type="string" indexed="true" stored="true"/>
-  <dynamicField name="*_mult_s" type="string" indexed="true" stored="true" multiValued="true"/>
-  <dynamicField name="*_l" type="tlong" indexed="true" stored="true"/>
-  <dynamicField name="*_t" type="text" indexed="true" stored="true"/>
-  <dynamicField name="*_b" type="boolean" indexed="true" stored="true"/>
-  <dynamicField name="*_f" type="tfloat" indexed="true" stored="true"/>
-  <dynamicField name="*_d" type="tdouble" indexed="true" stored="true"/>
-  <dynamicField name="*_dt" type="date" indexed="true" stored="true"/>
-
-  <uniqueKey>id</uniqueKey>
-</schema>
diff --git a/solr/contrib/dataimporthandler/src/test-files/solr/configsets/dihconfigset/conf/solrconfig.xml b/solr/contrib/dataimporthandler/src/test-files/solr/configsets/dihconfigset/conf/solrconfig.xml
deleted file mode 100644
index ec6e6a9..0000000
--- a/solr/contrib/dataimporthandler/src/test-files/solr/configsets/dihconfigset/conf/solrconfig.xml
+++ /dev/null
@@ -1,287 +0,0 @@
-<?xml version="1.0" encoding="UTF-8" ?>
-<!--
- Licensed to the Apache Software Foundation (ASF) under one or more
- contributor license agreements.  See the NOTICE file distributed with
- this work for additional information regarding copyright ownership.
- The ASF licenses this file to You under the Apache License, Version 2.0
- (the "License"); you may not use this file except in compliance with
- the License.  You may obtain a copy of the License at
-
-     http://www.apache.org/licenses/LICENSE-2.0
-
- Unless required by applicable law or agreed to in writing, software
- distributed under the License is distributed on an "AS IS" BASIS,
- WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- See the License for the specific language governing permissions and
- limitations under the License.
--->
-
-<config>
-  <luceneMatchVersion>${tests.luceneMatchVersion:LATEST}</luceneMatchVersion>
-  <indexConfig>
-    <useCompoundFile>${useCompoundFile:false}</useCompoundFile>
-  </indexConfig>
-
-  <!-- Used to specify an alternate directory to hold all index data
-       other than the default ./data under the Solr home.
-       If replication is in use, this should match the replication configuration. -->
-       <dataDir>${solr.data.dir:}</dataDir>
-
-  <directoryFactory name="DirectoryFactory" class="${solr.directoryFactory:solr.RAMDirectoryFactory}"/>
-  <schemaFactory class="ClassicIndexSchemaFactory"/>
-
-  <!-- the default high-performance update handler -->
-  <updateHandler class="solr.DirectUpdateHandler2">
-
-    <!-- A prefix of "solr." for class names is an alias that
-         causes solr to search appropriate packages, including
-         org.apache.solr.(search|update|request|core|analysis)
-     -->
-
-    <!-- Limit the number of deletions Solr will buffer during doc updating.
-        
-        Setting this lower can help bound memory use during indexing.
-    -->
-    <maxPendingDeletes>100000</maxPendingDeletes>
-
-  </updateHandler>
-
-
-  <query>
-    <!-- Maximum number of clauses in a boolean query... can affect
-        range or prefix queries that expand to big boolean
-        queries.  An exception is thrown if exceeded.  -->
-    <maxBooleanClauses>${solr.max.booleanClauses:1024}</maxBooleanClauses>
-
-    
-    <!-- Cache used by SolrIndexSearcher for filters (DocSets),
-         unordered sets of *all* documents that match a query.
-         When a new searcher is opened, its caches may be prepopulated
-         or "autowarmed" using data from caches in the old searcher.
-         autowarmCount is the number of items to prepopulate.  For CaffeineCache,
-         the autowarmed items will be the most recently accessed items.
-       Parameters:
-         class - the SolrCache implementation (currently only CaffeineCache)
-         size - the maximum number of entries in the cache
-         initialSize - the initial capacity (number of entries) of
-           the cache.  (see java.util.HashMap)
-         autowarmCount - the number of entries to prepopulate from
-           an old cache.
-         -->
-    <filterCache
-      class="solr.CaffeineCache"
-      size="512"
-      initialSize="512"
-      autowarmCount="256"/>
-
-   <!-- queryResultCache caches results of searches - ordered lists of
-         document ids (DocList) based on a query, a sort, and the range
-         of documents requested.  -->
-    <queryResultCache
-      class="solr.CaffeineCache"
-      size="512"
-      initialSize="512"
-      autowarmCount="256"/>
-
-  <!-- documentCache caches Lucene Document objects (the stored fields for each document).
-       Since Lucene internal document ids are transient, this cache will not be autowarmed.  -->
-    <documentCache
-      class="solr.CaffeineCache"
-      size="512"
-      initialSize="512"
-      autowarmCount="0"/>
-
-    <!-- If true, stored fields that are not requested will be loaded lazily.
-
-    This can result in a significant speed improvement if the usual case is to
-    not load all stored fields, especially if the skipped fields are large compressed
-    text fields.
-    -->
-    <enableLazyFieldLoading>true</enableLazyFieldLoading>
-
-    <!-- Example of a generic cache.  These caches may be accessed by name
-         through SolrIndexSearcher.getCache(), cacheLookup(), and cacheInsert().
-         The purpose is to enable easy caching of user/application level data.
-         The regenerator argument should be specified as an implementation
-         of solr.search.CacheRegenerator if autowarming is desired.  -->
-    <!--
-    <cache name="myUserCache"
-      class="solr.CaffeineCache"
-      size="4096"
-      initialSize="1024"
-      autowarmCount="1024"
-      regenerator="org.mycompany.mypackage.MyRegenerator"
-      />
-    -->
-
-   <!-- An optimization that attempts to use a filter to satisfy a search.
-         If the requested sort does not include score, then the filterCache
-         will be checked for a filter matching the query. If found, the filter
-         will be used as the source of document ids, and then the sort will be
-         applied to that.
-    <useFilterForSortedQuery>true</useFilterForSortedQuery>
-   -->
-
-   <!-- An optimization for use with the queryResultCache.  When a search
-         is requested, a superset of the requested number of document ids
-         is collected.  For example, if a search for a particular query
-         requests matching documents 10 through 19, and queryResultWindowSize is 50,
-         then documents 0 through 49 will be collected and cached.  Any further
-         requests in that range can be satisfied via the cache.  -->
-    <queryResultWindowSize>50</queryResultWindowSize>
-    
-    <!-- Maximum number of documents to cache for any entry in the
-         queryResultCache. -->
-    <queryResultMaxDocsCached>200</queryResultMaxDocsCached>
-
-    <!-- a newSearcher event is fired whenever a new searcher is being prepared
-         and there is a current searcher handling requests (aka registered). -->
-    <!-- QuerySenderListener takes an array of NamedList and executes a
-         local query request for each NamedList in sequence. -->
-    <!--<listener event="newSearcher" class="solr.QuerySenderListener">-->
-      <!--<arr name="queries">-->
-        <!--<lst> <str name="q">solr</str> <str name="start">0</str> <str name="rows">10</str> </lst>-->
-        <!--<lst> <str name="q">rocks</str> <str name="start">0</str> <str name="rows">10</str> </lst>-->
-        <!--<lst><str name="q">static newSearcher warming query from solrconfig.xml</str></lst>-->
-      <!--</arr>-->
-    <!--</listener>-->
-
-    <!-- a firstSearcher event is fired whenever a new searcher is being
-         prepared but there is no current registered searcher to handle
-         requests or to gain autowarming data from. -->
-    <!--<listener event="firstSearcher" class="solr.QuerySenderListener">-->
-      <!--<arr name="queries">-->
-      <!--</arr>-->
-    <!--</listener>-->
-
-    <!-- If a search request comes in and there is no current registered searcher,
-         then immediately register the still warming searcher and use it.  If
-         "false" then all requests will block until the first searcher is done
-         warming. -->
-    <useColdSearcher>false</useColdSearcher>
-
-    <!-- Maximum number of searchers that may be warming in the background
-      concurrently.  An error is returned if this limit is exceeded. Recommend
-      1-2 for read-only slaves, higher for masters w/o cache warming. -->
-    <maxWarmingSearchers>4</maxWarmingSearchers>
-
-  </query>
-
-  <requestDispatcher>
-    <!--Make sure your system has some authentication before enabling remote streaming!
-    <requestParsers enableRemoteStreaming="false" multipartUploadLimitInKB="-1" />
-    -->
-        
-    <!-- Set HTTP caching related parameters (for proxy caches and clients).
-          
-         To get the behaviour of Solr 1.2 (ie: no caching related headers)
-         use the never304="true" option and do not specify a value for
-         <cacheControl>
-    -->
-    <httpCaching never304="true">
-    <!--httpCaching lastModifiedFrom="openTime"
-                 etagSeed="Solr"-->
-       <!-- lastModFrom="openTime" is the default; the Last-Modified value
-            (and validation against If-Modified-Since requests) will be
-            relative to when the current Searcher was opened.
-            You can change it to lastModFrom="dirLastMod" if you want the
-            value to exactly correspond to when the physical index was last
-            modified.
-
-            etagSeed="..." is an option you can change to force the ETag
-            header (and validation against If-None-Match requests) to be
-            different even if the index has not changed (ie: when making
-            significant changes to your config file)
-
-            lastModifiedFrom and etagSeed are both ignored if you use the
-            never304="true" option.
-       -->
-       <!-- If you include a <cacheControl> directive, it will be used to
-            generate a Cache-Control header, as well as an Expires header
-            if the value contains "max-age="
-               
-            By default, no Cache-Control header is generated.
-
-            You can use the <cacheControl> option even if you have set
-            never304="true"
-       -->
-       <!-- <cacheControl>max-age=30, public</cacheControl> -->
-    </httpCaching>
-  </requestDispatcher>
-
-  <requestHandler name="/select" class="solr.SearchHandler">
-    <!-- default values for query parameters -->
-     <lst name="defaults">
-       <str name="echoParams">explicit</str>
-       <str name="df">desc</str>
-       <!-- 
-       <int name="rows">10</int>
-       <str name="fl">*</str>
-       <str name="version">2.1</str>
-        -->
-     </lst>
-  </requestHandler>
-  
-  <requestHandler name="/dataimport" class="org.apache.solr.handler.dataimport.DataImportHandler">
-    <lst name="defaults">
-      <str name="dots.in.hsqldb.driver">org.hsqldb.jdbcDriver</str>
-    </lst>
-  </requestHandler>
-    
-  <!--
-   
-   Search components are registered to SolrCore and used by Search Handlers
-   
-   By default, the following components are available:
-    
-   <searchComponent name="query"     class="org.apache.solr.handler.component.QueryComponent" />
-   <searchComponent name="facet"     class="org.apache.solr.handler.component.FacetComponent" />
-   <searchComponent name="mlt"       class="org.apache.solr.handler.component.MoreLikeThisComponent" />
-   <searchComponent name="highlight" class="org.apache.solr.handler.component.HighlightComponent" />
-   <searchComponent name="debug"     class="org.apache.solr.handler.component.DebugComponent" />
-  
-   If you register a searchComponent to one of the standard names, that will be used instead.
-  
-   -->
- 
-  <requestHandler name="/search" class="org.apache.solr.handler.component.SearchHandler">
-    <lst name="defaults">
-      <str name="echoParams">explicit</str>
-    </lst>
-    <!--
-    By default, this will register the following components:
-    
-    <arr name="components">
-      <str>query</str>
-      <str>facet</str>
-      <str>mlt</str>
-      <str>highlight</str>
-      <str>debug</str>
-    </arr>
-    
-    To insert handlers before or after the 'standard' components, use:
-    
-    <arr name="first-components">
-      <str>first</str>
-    </arr>
-    
-    <arr name="last-components">
-      <str>last</str>
-    </arr>
-    
-    -->
-  </requestHandler>
-
-  <!-- config for the admin interface --> 
-  <admin>
-    <defaultQuery>*:*</defaultQuery>
-  </admin>
-
-  <updateRequestProcessorChain key="dataimport" default="true">
-    <processor class="org.apache.solr.handler.dataimport.AbstractDataImportHandlerTestCase$TestUpdateRequestProcessorFactory"/>
-    <processor class="solr.RunUpdateProcessorFactory"/>
-    <processor class="solr.LogUpdateProcessorFactory"/>
-  </updateRequestProcessorChain>
-
-</config>
-
diff --git a/solr/contrib/dataimporthandler/src/test-files/solr/solr.xml b/solr/contrib/dataimporthandler/src/test-files/solr/solr.xml
deleted file mode 100644
index 330eef1..0000000
--- a/solr/contrib/dataimporthandler/src/test-files/solr/solr.xml
+++ /dev/null
@@ -1,27 +0,0 @@
-<?xml version="1.0" encoding="UTF-8" ?>
-<!--
- Licensed to the Apache Software Foundation (ASF) under one or more
- contributor license agreements.  See the NOTICE file distributed with
- this work for additional information regarding copyright ownership.
- The ASF licenses this file to You under the Apache License, Version 2.0
- (the "License"); you may not use this file except in compliance with
- the License.  You may obtain a copy of the License at
-
-     http://www.apache.org/licenses/LICENSE-2.0
-
- Unless required by applicable law or agreed to in writing, software
- distributed under the License is distributed on an "AS IS" BASIS,
- WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- See the License for the specific language governing permissions and
- limitations under the License.
--->
-
-<!--
- solr.xml mimicking the old default solr.xml
--->
-
-<solr>
-    <shardHandlerFactory name="shardHandlerFactory" class="HttpShardHandlerFactory">
-      <str name="urlScheme">${urlScheme:}</str>
-    </shardHandlerFactory>
-</solr>
\ No newline at end of file
diff --git a/solr/contrib/dataimporthandler/src/test/org/apache/solr/handler/dataimport/AbstractDIHCacheTestCase.java b/solr/contrib/dataimporthandler/src/test/org/apache/solr/handler/dataimport/AbstractDIHCacheTestCase.java
deleted file mode 100644
index 7a7b3ec..0000000
--- a/solr/contrib/dataimporthandler/src/test/org/apache/solr/handler/dataimport/AbstractDIHCacheTestCase.java
+++ /dev/null
@@ -1,235 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.solr.handler.dataimport;
-
-import java.io.Reader;
-import java.math.BigDecimal;
-import java.sql.Clob;
-import java.sql.SQLException;
-import java.util.ArrayList;
-import java.util.Arrays;
-import java.util.Collections;
-import java.util.Date;
-import java.util.HashMap;
-import java.util.Iterator;
-import java.util.LinkedHashMap;
-import java.util.List;
-import java.util.Map;
-
-import javax.sql.rowset.serial.SerialClob;
-
-import org.apache.solr.handler.dataimport.AbstractDataImportHandlerTestCase.TestContext;
-import org.junit.After;
-import org.junit.Assert;
-import org.junit.Before;
-
-public class AbstractDIHCacheTestCase {
-  protected static final Date Feb21_2011 = new Date(1298268000000l);
-  protected final String[] fieldTypes = { "INTEGER", "BIGDECIMAL", "STRING", "STRING",   "FLOAT",   "DATE",   "CLOB" };
-  protected final String[] fieldNames = { "a_id",    "PI",         "letter", "examples", "a_float", "a_date", "DESCRIPTION" };
-  protected List<ControlData> data = new ArrayList<>();
-  protected Clob APPLE = null;
-
-  @Before
-  public void setup() {
-    try {
-      APPLE = new SerialClob("Apples grow on trees and they are good to eat.".toCharArray());
-    } catch (SQLException sqe) {
-      Assert.fail("Could not set up test");
-    }
-
-    // The first row needs to have all non-null fields,
-    // otherwise we would have to always send the fieldTypes & fieldNames as CacheProperties when building.
-    data = new ArrayList<>();
-    data.add(new ControlData(new Object[] {1, new BigDecimal(Math.PI), "A", "Apple", 1.11f, Feb21_2011, APPLE }));
-    data.add(new ControlData(new Object[] {2, new BigDecimal(Math.PI), "B", "Ball", 2.22f, Feb21_2011, null }));
-    data.add(new ControlData(new Object[] {4, new BigDecimal(Math.PI), "D", "Dog", 4.44f, Feb21_2011, null }));
-    data.add(new ControlData(new Object[] {3, new BigDecimal(Math.PI), "C", "Cookie", 3.33f, Feb21_2011, null }));
-    data.add(new ControlData(new Object[] {4, new BigDecimal(Math.PI), "D", "Daisy", 4.44f, Feb21_2011, null }));
-    data.add(new ControlData(new Object[] {4, new BigDecimal(Math.PI), "D", "Drawing", 4.44f, Feb21_2011, null }));
-    data.add(new ControlData(new Object[] {5, new BigDecimal(Math.PI), "E",
-        Arrays.asList("Eggplant", "Ear", "Elephant", "Engine"), 5.55f, Feb21_2011, null }));
-  }
-
-  @After
-  public void teardown() {
-    APPLE = null;
-    data = null;
-  }
-
-  //A limitation of this test class is that the primary key needs to be the first one in the list.
-  //DIHCaches, however, can handle any field being the primary key.
-  static class ControlData implements Comparable<ControlData>, Iterable<Object> {
-    Object[] data;
-
-    ControlData(Object[] data) {
-      this.data = data;
-    }
-
-    @Override
-    @SuppressWarnings({"unchecked", "rawtypes"})
-    public int compareTo(ControlData cd) {
-      Comparable c1 = (Comparable) data[0];
-      Comparable c2 = (Comparable) cd.data[0];
-      return c1.compareTo(c2);
-    }
-
-    @Override
-    public Iterator<Object> iterator() {
-      return Arrays.asList(data).iterator();
-    }
-  }
-
-  protected void loadData(DIHCache cache, List<ControlData> theData, String[] theFieldNames, boolean keepOrdered) {
-    for (ControlData cd : theData) {
-      cache.add(controlDataToMap(cd, theFieldNames, keepOrdered));
-    }
-  }
-
-  protected List<ControlData> extractDataInKeyOrder(DIHCache cache, String[] theFieldNames) {
-    List<Object[]> data = new ArrayList<>();
-    Iterator<Map<String, Object>> cacheIter = cache.iterator();
-    while (cacheIter.hasNext()) {
-      data.add(mapToObjectArray(cacheIter.next(), theFieldNames));
-    }
-    return listToControlData(data);
-  }
-
-  //This method assumes that the primary keys are integers and that the first id=1.
-  //It will look for ids sequentially until one is skipped, then will stop.
-  protected List<ControlData> extractDataByKeyLookup(DIHCache cache, String[] theFieldNames) {
-    int recId = 1;
-    List<Object[]> data = new ArrayList<>();
-    while (true) {
-      Iterator<Map<String, Object>> listORecs = cache.iterator(recId);
-      if (listORecs == null) {
-        break;
-      }
-
-      while(listORecs.hasNext()) {
-        data.add(mapToObjectArray(listORecs.next(), theFieldNames));
-      }
-      recId++;
-    }
-    return listToControlData(data);
-  }
-
-  protected List<ControlData> listToControlData(List<Object[]> data) {
-    List<ControlData> returnData = new ArrayList<>(data.size());
-    for (int i = 0; i < data.size(); i++) {
-      returnData.add(new ControlData(data.get(i)));
-    }
-    return returnData;
-  }
-
-  protected Object[] mapToObjectArray(Map<String, Object> rec, String[] theFieldNames) {
-    Object[] oos = new Object[theFieldNames.length];
-    for (int i = 0; i < theFieldNames.length; i++) {
-      oos[i] = rec.get(theFieldNames[i]);
-    }
-    return oos;
-  }
-
-  protected void compareData(List<ControlData> theControl, List<ControlData> test) {
-    // The test data should come back primarily in Key order and secondarily in insertion order.
-    List<ControlData> control = new ArrayList<>(theControl);
-    Collections.sort(control);
-
-    StringBuilder errors = new StringBuilder();
-    if (test.size() != control.size()) {
-      errors.append("-Returned data has " + test.size() + " records.  expected: " + control.size() + "\n");
-    }
-    for (int i = 0; i < control.size() && i < test.size(); i++) {
-      Object[] controlRec = control.get(i).data;
-      Object[] testRec = test.get(i).data;
-      if (testRec.length != controlRec.length) {
-        errors.append("-Record indexAt=" + i + " has " + testRec.length + " data elements.  extpected: " + controlRec.length + "\n");
-      }
-      for (int j = 0; j < controlRec.length && j < testRec.length; j++) {
-        Object controlObj = controlRec[j];
-        Object testObj = testRec[j];
-        if (controlObj == null && testObj != null) {
-          errors.append("-Record indexAt=" + i + ", Data Element indexAt=" + j + " is not NULL as expected.\n");
-        } else if (controlObj != null && testObj == null) {
-          errors.append("-Record indexAt=" + i + ", Data Element indexAt=" + j + " is NULL.  Expected: " + controlObj + " (class="
-              + controlObj.getClass().getName() + ")\n");
-        } else if (controlObj != null && testObj != null && controlObj instanceof Clob) {
-          String controlString = clobToString((Clob) controlObj);
-          String testString = clobToString((Clob) testObj);
-          if (!controlString.equals(testString)) {
-            errors.append("-Record indexAt=" + i + ", Data Element indexAt=" + j + " has: " + testString + " (class=Clob) ... Expected: " + controlString
-                + " (class=Clob)\n");
-          }
-        } else if (controlObj != null && !controlObj.equals(testObj)) {
-          errors.append("-Record indexAt=" + i + ", Data Element indexAt=" + j + " has: " + testObj + " (class=" + testObj.getClass().getName()
-              + ") ... Expected: " + controlObj + " (class=" + controlObj.getClass().getName() + ")\n");
-        }
-      }
-    }
-    if (errors.length() > 0) {
-      Assert.fail(errors.toString());
-    }
-  }
-
-  protected Map<String, Object> controlDataToMap(ControlData cd, String[] theFieldNames, boolean keepOrdered) {
-    Map<String, Object> rec = null;
-    if (keepOrdered) {
-      rec = new LinkedHashMap<>();
-    } else {
-      rec = new HashMap<>();
-    }
-    for (int i = 0; i < cd.data.length; i++) {
-      String fieldName = theFieldNames[i];
-      Object data = cd.data[i];
-      rec.put(fieldName, data);
-    }
-    return rec;
-  }
-
-  protected String stringArrayToCommaDelimitedList(String[] strs) {
-    StringBuilder sb = new StringBuilder();
-    for (String a : strs) {
-      if (sb.length() > 0) {
-        sb.append(",");
-      }
-      sb.append(a);
-    }
-    return sb.toString();
-  }
-
-  protected String clobToString(Clob cl) {
-    StringBuilder sb = new StringBuilder();
-    try {
-      Reader in = cl.getCharacterStream();
-      char[] cbuf = new char[1024];
-      int numGot = -1;
-      while ((numGot = in.read(cbuf)) != -1) {
-        sb.append(String.valueOf(cbuf, 0, numGot));
-      }
-    } catch (Exception e) {
-      Assert.fail(e.toString());
-    }
-    return sb.toString();
-  }
-
-  public static Context getContext(final Map<String, String> entityAttrs) {
-    VariableResolver resolver = new VariableResolver();
-    final Context delegate = new ContextImpl(null, resolver, null, null, new HashMap<String, Object>(), null, null);
-    return new TestContext(entityAttrs, delegate, null, true);
-  }
-
-}
diff --git a/solr/contrib/dataimporthandler/src/test/org/apache/solr/handler/dataimport/AbstractDIHJdbcTestCase.java b/solr/contrib/dataimporthandler/src/test/org/apache/solr/handler/dataimport/AbstractDIHJdbcTestCase.java
deleted file mode 100644
index 5428e18..0000000
--- a/solr/contrib/dataimporthandler/src/test/org/apache/solr/handler/dataimport/AbstractDIHJdbcTestCase.java
+++ /dev/null
@@ -1,198 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.solr.handler.dataimport;
-
-import java.io.OutputStream;
-import java.sql.Connection;
-import java.sql.DriverManager;
-import java.sql.SQLException;
-import java.sql.Statement;
-import java.util.regex.Matcher;
-import java.util.regex.Pattern;
-
-import junit.framework.Assert;
-
-import org.apache.solr.request.LocalSolrQueryRequest;
-import org.junit.After;
-import org.junit.AfterClass;
-import org.junit.Before;
-import org.junit.BeforeClass;
-
-/**
- * This sets up an in-memory SQL database with a small amount of sample data.
- */
-public abstract class AbstractDIHJdbcTestCase extends
-    AbstractDataImportHandlerTestCase {
-  
-  protected Database dbToUse;
-  
-  public enum Database {
-    RANDOM, DERBY, HSQLDB
-  }
-  
-  protected boolean skipThisTest = false;
-  
-  private static final Pattern totalRequestsPattern = Pattern
-      .compile(".str name..Total Requests made to DataSource..(\\d+)..str.");
-    
-  @BeforeClass
-  public static void beforeClassDihJdbcTest() throws Exception {
-    Class.forName("org.hsqldb.jdbcDriver").getConstructor().newInstance();
-    String oldProp = System.getProperty("derby.stream.error.field");
-    System.setProperty("derby.stream.error.field",
-        "org.apache.solr.handler.dataimport.AbstractDIHJdbcTestCase$DerbyUtil.DEV_NULL");
-    Class.forName("org.apache.derby.jdbc.EmbeddedDriver").getConstructor().newInstance();
-    if (oldProp != null) {
-      System.setProperty("derby.stream.error.field", oldProp);
-    }
-    initCore("dataimport-solrconfig.xml", "dataimport-schema.xml");
-  }
-  
-  @AfterClass
-  public static void afterClassDihJdbcTest() throws Exception {
-    try {
-      DriverManager.getConnection("jdbc:derby:;shutdown=true");
-    } catch (SQLException e) {
-      // ignore...we might not even be using derby this time...
-    }
-  }
-  
-  protected Database setAllowedDatabases() {
-    return Database.RANDOM;
-  }
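-
-  // Sketch only: a subclass that must run against one specific engine can
-  // pin it by overriding setAllowedDatabases(), e.g.
-  //
-  //   @Override
-  //   protected Database setAllowedDatabases() {
-  //     return Database.DERBY; // or Database.HSQLDB
-  //   }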
-  
-  @Before
-  public void beforeDihJdbcTest() throws Exception {  
-    skipThisTest = false;
-    dbToUse = setAllowedDatabases();
-    if (dbToUse == Database.RANDOM) {
-      if (random().nextBoolean()) {
-        dbToUse = Database.DERBY;
-      } else {
-        dbToUse = Database.HSQLDB;
-      }
-    }
-    
-    clearIndex();
-    assertU(commit());
-    buildDatabase();
-  }
-  
-  @After
-  public void afterDihJdbcTest() throws Exception {
-    Connection conn = null;
-    Statement s = null;
-    try {
-      if (dbToUse == Database.DERBY) {
-        try {
-          conn = DriverManager
-              .getConnection("jdbc:derby:memory:derbyDB;drop=true;territory=en_US");
-        } catch (SQLException e) {
-          if (!"08006".equals(e.getSQLState())) {
-            throw e;
-          }
-        }
-      } else if (dbToUse == Database.HSQLDB) {
-        conn = DriverManager.getConnection("jdbc:hsqldb:mem:.");
-        s = conn.createStatement();
-        s.executeUpdate("shutdown");
-      }
-    } catch (SQLException e) {
-      if(!skipThisTest) {
-        throw e;
-      }
-    } finally {
-      try {
-        s.close();
-      } catch (Exception ex) {}
-      try {
-        conn.close();
-      } catch (Exception ex) {}
-    }
-  }
-  
-  protected Connection newConnection() throws Exception {
-    if (dbToUse == Database.DERBY) {
-      return DriverManager.getConnection("jdbc:derby:memory:derbyDB;territory=en_US");
-    } else if (dbToUse == Database.HSQLDB) {
-      return DriverManager.getConnection("jdbc:hsqldb:mem:.");
-    }
-    throw new AssertionError("Invalid database to use: " + dbToUse);
-  }
-  
-  protected void buildDatabase() throws Exception {
-    Connection conn = null;
-    try {
-      if (dbToUse == Database.DERBY) {
-        conn = DriverManager
-            .getConnection("jdbc:derby:memory:derbyDB;create=true;territory=en_US");
-      } else if (dbToUse == Database.HSQLDB) {
-        conn = DriverManager.getConnection("jdbc:hsqldb:mem:.");
-      } else {
-        throw new AssertionError("Invalid database to use: " + dbToUse);
-      }
-      populateData(conn);
-    } catch (SQLException sqe) {
-      // Unwrap to the root cause and fail loudly instead of silently
-      // swallowing a database setup error.
-      Throwable cause = sqe;
-      while (cause.getCause() != null) {
-        cause = cause.getCause();
-      }
-      throw new RuntimeException(cause);
-    } finally {
-      try {
-        conn.close();
-      } catch (Exception e1) {}
-    }
-  }
-  
-  protected void populateData(Connection conn) throws Exception {
-    // no-op
-  }
-  
-  public int totalDatabaseRequests(String dihHandlerName) throws Exception {
-    LocalSolrQueryRequest request = lrf.makeRequest("indent", "true");
-    String response = h.query(dihHandlerName, request);
-    Matcher m = totalRequestsPattern.matcher(response);
-    Assert.assertTrue("The handler " + dihHandlerName
-        + " is not reporting any database requests. ",
-        m.find() && m.groupCount() == 1);
-    return Integer.parseInt(m.group(1));
-  }
-  
-  public int totalDatabaseRequests() throws Exception {
-    return totalDatabaseRequests("/dataimport");
-  }
-  
-  protected LocalSolrQueryRequest generateRequest() {
-    return lrf.makeRequest("command", "full-import", "dataConfig",
-        generateConfig(), "clean", "true", "commit", "true", "synchronous",
-        "true", "indent", "true");
-  }
-  
-  protected abstract String generateConfig();
-  
-  public static class DerbyUtil {
-    public static final OutputStream DEV_NULL = new OutputStream() {
-      @Override
-      public void write(int b) {}
-    };
-  }
-}
\ No newline at end of file
diff --git a/solr/contrib/dataimporthandler/src/test/org/apache/solr/handler/dataimport/AbstractDataImportHandlerTestCase.java b/solr/contrib/dataimporthandler/src/test/org/apache/solr/handler/dataimport/AbstractDataImportHandlerTestCase.java
deleted file mode 100644
index 7a31acf..0000000
--- a/solr/contrib/dataimporthandler/src/test/org/apache/solr/handler/dataimport/AbstractDataImportHandlerTestCase.java
+++ /dev/null
@@ -1,379 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.solr.handler.dataimport;
-
-import java.io.File;
-import java.io.FileOutputStream;
-import java.io.IOException;
-import java.util.HashMap;
-import java.util.List;
-import java.util.Map;
-
-import org.apache.commons.io.FileUtils;
-import org.apache.solr.SolrTestCaseJ4;
-import org.apache.solr.common.util.NamedList;
-import org.apache.solr.common.util.SuppressForbidden;
-import org.apache.solr.common.util.Utils;
-import org.apache.solr.core.SolrCore;
-import org.apache.solr.request.LocalSolrQueryRequest;
-import org.apache.solr.request.SolrQueryRequest;
-import org.apache.solr.response.SolrQueryResponse;
-import org.apache.solr.update.AddUpdateCommand;
-import org.apache.solr.update.CommitUpdateCommand;
-import org.apache.solr.update.DeleteUpdateCommand;
-import org.apache.solr.update.MergeIndexesCommand;
-import org.apache.solr.update.RollbackUpdateCommand;
-import org.apache.solr.update.processor.UpdateRequestProcessor;
-import org.apache.solr.update.processor.UpdateRequestProcessorFactory;
-import org.junit.BeforeClass;
-
-/**
- * <p>
- * Abstract base class for DataImportHandler tests
- * </p>
- * <p>
- * <b>This API is experimental and subject to change</b>
- *
- *
- * @since solr 1.3
- */
-public abstract class AbstractDataImportHandlerTestCase extends
-        SolrTestCaseJ4 {
-
-  // Note: this intentionally shadows the superclass's static initCore method.
-  public static void initCore(String config, String schema) throws Exception {
-    File testHome = createTempDir("core-home").toFile();
-    FileUtils.copyDirectory(getFile("dih/solr"), testHome);
-    initCore(config, schema, testHome.getAbsolutePath());
-  }
-
-  @BeforeClass
-  public static void baseBeforeClass() {
-    System.setProperty(DataImportHandler.ENABLE_DIH_DATA_CONFIG_PARAM, "true");
-  }
-
-  protected String loadDataConfig(String dataConfigFileName) {
-    try {
-      SolrCore core = h.getCore();
-      return SolrWriter.getResourceAsString(core.getResourceLoader()
-              .openResource(dataConfigFileName));
-    } catch (IOException e) {
-      e.printStackTrace();
-      return null;
-    }
-  }
-
-  protected String runFullImport(String dataConfig) throws Exception {
-    LocalSolrQueryRequest request = lrf.makeRequest("command", "full-import",
-            "debug", "on", "clean", "true", "commit", "true", "dataConfig",
-            dataConfig);
-    return h.query("/dataimport", request);
-  }
-
-  protected void runDeltaImport(String dataConfig) throws Exception {
-    LocalSolrQueryRequest request = lrf.makeRequest("command", "delta-import",
-            "debug", "on", "clean", "false", "commit", "true", "dataConfig",
-            dataConfig);
-    h.query("/dataimport", request);
-  }
-
-  /**
-   * Redirect {@link SimplePropertiesWriter#filename} to a temporary location 
-   * and return it.
-   */
-  protected File redirectTempProperties(DataImporter di) {
-    try {
-      File tempFile = createTempFile().toFile();
-      di.getConfig().getPropertyWriter().getParameters()
-        .put(SimplePropertiesWriter.FILENAME, tempFile.getAbsolutePath());
-      return tempFile;
-    } catch (IOException e) {
-      throw new RuntimeException(e);
-    }
-  }
-
-  /**
-   * Runs a full-import using the given dataConfig and the provided request parameters.
-   *
- * By default, debug=on, clean=true, and commit=true are passed; these can be overridden via extraParams.
-   *
-   * @param dataConfig the data-config xml as a string
- * @param extraParams any extra request parameters to pass to DataImportHandler
-   * @throws Exception in case of any error
-   */
-  @SuppressWarnings({"unchecked"})
-  protected void runFullImport(String dataConfig, Map<String, String> extraParams) throws Exception {
-    HashMap<String, String> params = new HashMap<>();
-    params.put("command", "full-import");
-    params.put("debug", "on");
-    params.put("dataConfig", dataConfig);
-    params.put("clean", "true");
-    params.put("commit", "true");
-    params.putAll(extraParams);
-    @SuppressWarnings({"rawtypes"})
-    NamedList l = new NamedList();
-    for (Map.Entry<String, String> e : params.entrySet()) {
-      l.add(e.getKey(),e.getValue());
-    }
-    LocalSolrQueryRequest request = new LocalSolrQueryRequest(h.getCore(), l);  
-    h.query("/dataimport", request);
-  }
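-
-  // Illustrative only: a caller can override any of the defaults above via
-  // extraParams, e.g. to keep existing documents ("clean" is the DIH
-  // parameter set above; the override value here is hypothetical):
-  //
-  //   Map<String, String> extra = new HashMap<>();
-  //   extra.put("clean", "false");
-  //   runFullImport(dataConfig, extra);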
-
-  /**
-   * Helper for creating a Context instance. Useful for testing Transformers
-   */
-  @SuppressWarnings("unchecked")
-  public static TestContext getContext(EntityProcessorWrapper parent,
-                                   VariableResolver resolver, @SuppressWarnings({"rawtypes"})DataSource parentDataSource,
-                                   String currProcess, final List<Map<String, String>> entityFields,
-                                   final Map<String, String> entityAttrs) {
-    if (resolver == null) resolver = new VariableResolver();
-    final Context delegate = new ContextImpl(parent, resolver,
-            parentDataSource, currProcess,
-        new HashMap<>(), null, null);
-    return new TestContext(entityAttrs, delegate, entityFields, parent == null);
-  }
-
-  /**
-   * Strings at even index are keys, odd-index strings are values in the
-   * returned map
-   */
-  @SuppressWarnings({"rawtypes"})
-  public static Map createMap(Object... args) {
-   return Utils.makeMap(args);
-  }
-
-  @SuppressForbidden(reason = "Needs currentTimeMillis to set modified time for a file")
-  public static File createFile(File tmpdir, String name, byte[] content,
-                                boolean changeModifiedTime) throws IOException {
-    File file = new File(tmpdir.getAbsolutePath() + File.separator + name);
-    file.deleteOnExit();
-    FileOutputStream f = new FileOutputStream(file);
-    f.write(content);
-    f.close();
-    if (changeModifiedTime)
-      file.setLastModified(System.currentTimeMillis() - 3600000);
-    return file;
-  }
-  
-  public static Map<String, String> getField(String col, String type,
-                                             String re, String srcCol, String splitBy) {
-    HashMap<String, String> vals = new HashMap<>();
-    vals.put("column", col);
-    vals.put("type", type);
-    vals.put("regex", re);
-    vals.put("sourceColName", srcCol);
-    vals.put("splitBy", splitBy);
-    return vals;
-  }
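-
-  // Sketch of building a single field definition for a Transformer test;
-  // the argument values here are hypothetical:
-  //
-  //   Map<String, String> nameField = getField("NAME", "string", null, null, null);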
-  
-  static class TestContext extends Context {
-    private final Map<String, String> entityAttrs;
-    private final Context delegate;
-    private final List<Map<String, String>> entityFields;
-    private final boolean root;
-    String script,scriptlang;
-
-    public TestContext(Map<String, String> entityAttrs, Context delegate,
-                       List<Map<String, String>> entityFields, boolean root) {
-      this.entityAttrs = entityAttrs;
-      this.delegate = delegate;
-      this.entityFields = entityFields;
-      this.root = root;
-    }
-
-    @Override
-    public String getEntityAttribute(String name) {
-      return entityAttrs == null ? delegate.getEntityAttribute(name) : entityAttrs.get(name);
-    }
-
-    @Override
-    public String getResolvedEntityAttribute(String name) {
-      return entityAttrs == null ? delegate.getResolvedEntityAttribute(name) :
-              delegate.getVariableResolver().replaceTokens(entityAttrs.get(name));
-    }
-
-    @Override
-    public List<Map<String, String>> getAllEntityFields() {
-      return entityFields == null ? delegate.getAllEntityFields()
-              : entityFields;
-    }
-
-    @Override
-    public VariableResolver getVariableResolver() {
-      return delegate.getVariableResolver();
-    }
-
-    @Override
-    @SuppressWarnings({"rawtypes"})
-    public DataSource getDataSource() {
-      return delegate.getDataSource();
-    }
-
-    @Override
-    public boolean isRootEntity() {
-      return root;
-    }
-
-    @Override
-    public String currentProcess() {
-      return delegate.currentProcess();
-    }
-
-    @Override
-    public Map<String, Object> getRequestParameters() {
-      return delegate.getRequestParameters();
-    }
-
-    @Override
-    public EntityProcessor getEntityProcessor() {
-      return null;
-    }
-
-    @Override
-    public void setSessionAttribute(String name, Object val, String scope) {
-      delegate.setSessionAttribute(name, val, scope);
-    }
-
-    @Override
-    public Object getSessionAttribute(String name, String scope) {
-      return delegate.getSessionAttribute(name, scope);
-    }
-
-    @Override
-    public Context getParentContext() {
-      return delegate.getParentContext();
-    }
-
-    @Override
-    @SuppressWarnings({"rawtypes"})public DataSource getDataSource(String name) {
-      return delegate.getDataSource(name);
-    }
-
-    @Override
-    public SolrCore getSolrCore() {
-      return delegate.getSolrCore();
-    }
-
-    @Override
-    public Map<String, Object> getStats() {
-      return delegate.getStats();
-    }
-
-
-    @Override
-    public String getScript() {
-      return script == null ? delegate.getScript() : script;
-    }
-
-    @Override
-    public String getScriptLanguage() {
-      return scriptlang == null ? delegate.getScriptLanguage() : scriptlang;
-    }
-
-    @Override
-    public void deleteDoc(String id) {
-
-    }
-
-    @Override
-    public void deleteDocByQuery(String query) {
-
-    }
-
-    @Override
-    public Object resolve(String var) {
-      return delegate.resolve(var);
-    }
-
-    @Override
-    public String replaceTokens(String template) {
-      return delegate.replaceTokens(template);
-    }
-  }
-
-  public static class TestUpdateRequestProcessorFactory extends UpdateRequestProcessorFactory {
-
-    @Override
-    public UpdateRequestProcessor getInstance(SolrQueryRequest req,
-        SolrQueryResponse rsp, UpdateRequestProcessor next) {
-      return new TestUpdateRequestProcessor(next);
-    }
-    
-  }
-  
-  public static class TestUpdateRequestProcessor extends UpdateRequestProcessor {
-  
-    public static boolean finishCalled = false;
-    public static boolean processAddCalled = false;
-    public static boolean processCommitCalled = false;
-    public static boolean processDeleteCalled = false;
-    public static boolean mergeIndexesCalled = false;
-    public static boolean rollbackCalled = false;
-  
-    public static void reset() {
-      finishCalled = false;
-      processAddCalled = false;
-      processCommitCalled = false;
-      processDeleteCalled = false;
-      mergeIndexesCalled = false;
-      rollbackCalled = false;
-    }
-    
-    public TestUpdateRequestProcessor(UpdateRequestProcessor next) {
-      super(next);
-      reset();
-    }
-
-    @Override
-    public void finish() throws IOException {
-      finishCalled = true;
-      super.finish();
-    }
-
-    @Override
-    public void processAdd(AddUpdateCommand cmd) throws IOException {
-      processAddCalled = true;
-      super.processAdd(cmd);
-    }
-
-    @Override
-    public void processCommit(CommitUpdateCommand cmd) throws IOException {
-      processCommitCalled = true;
-      super.processCommit(cmd);
-    }
-
-    @Override
-    public void processDelete(DeleteUpdateCommand cmd) throws IOException {
-      processDeleteCalled = true;
-      super.processDelete(cmd);
-    }
-
-    @Override
-    public void processMergeIndexes(MergeIndexesCommand cmd) throws IOException {
-      mergeIndexesCalled = true;
-      super.processMergeIndexes(cmd);
-    }
-
-    @Override
-    public void processRollback(RollbackUpdateCommand cmd) throws IOException {
-      rollbackCalled = true;
-      super.processRollback(cmd);
-    }
-    
-  }
-}
diff --git a/solr/contrib/dataimporthandler/src/test/org/apache/solr/handler/dataimport/AbstractSqlEntityProcessorTestCase.java b/solr/contrib/dataimporthandler/src/test/org/apache/solr/handler/dataimport/AbstractSqlEntityProcessorTestCase.java
deleted file mode 100644
index ee5ec82..0000000
--- a/solr/contrib/dataimporthandler/src/test/org/apache/solr/handler/dataimport/AbstractSqlEntityProcessorTestCase.java
+++ /dev/null
@@ -1,848 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.solr.handler.dataimport;
-
-import junit.framework.Assert;
-import org.apache.solr.common.util.SuppressForbidden;
-import org.junit.After;
-import org.junit.Before;
-import org.slf4j.Logger;
-import org.slf4j.LoggerFactory;
-
-import java.io.File;
-import java.lang.invoke.MethodHandles;
-import java.nio.file.Files;
-import java.sql.Connection;
-import java.sql.PreparedStatement;
-import java.sql.ResultSet;
-import java.sql.SQLException;
-import java.sql.Statement;
-import java.sql.Timestamp;
-import java.text.SimpleDateFormat;
-import java.util.ArrayList;
-import java.util.HashMap;
-import java.util.HashSet;
-import java.util.List;
-import java.util.Locale;
-import java.util.Map;
-import java.util.Set;
-
-public abstract class AbstractSqlEntityProcessorTestCase extends
-    AbstractDIHJdbcTestCase {
-
-  private static final Logger log = LoggerFactory.getLogger(MethodHandles.lookup().lookupClass());
-
-  protected boolean underlyingDataModified;  
-  protected boolean useSimpleCaches;
-  protected boolean countryEntity;
-  protected boolean countryCached;
-  protected boolean countryZipper;
-  protected boolean sportsEntity;
-  protected boolean sportsCached;
-  protected boolean sportsZipper;
-  
-  protected boolean wrongPeopleOrder;
-  protected boolean wrongSportsOrder;
-  protected boolean wrongCountryOrder;
-    
-  protected String rootTransformerName;
-  protected boolean countryTransformer;
-  protected boolean sportsTransformer;    
-  protected String fileLocation;
-  protected String fileName;
-  
-  @Before
-  public void beforeSqlEntityProcessorTestCase() throws Exception {
-    File tmpdir = createTempDir().toFile();
-    fileLocation = tmpdir.getPath();
-    fileName = "the.properties";
-  } 
-  
-  @After
-  public void afterSqlEntityProcessorTestCase() throws Exception {
-    useSimpleCaches = false;
-    countryEntity = false;
-    countryCached = false;
-    countryZipper = false;
-    sportsEntity = false;
-    sportsCached = false;
-    sportsZipper = false;
-    
-    wrongPeopleOrder = false;
-    wrongSportsOrder = false;
-    wrongCountryOrder = false;
-    
-    rootTransformerName = null;
-    countryTransformer = false;
-    sportsTransformer = false;
-    underlyingDataModified = false;
-    
-    //If an Assume was tripped while setting up the test, 
-    //the file might not ever have been created...
-    if(fileLocation!=null) {
-      Files.deleteIfExists(new File(fileLocation + File.separatorChar + fileName).toPath());
-      Files.deleteIfExists(new File(fileLocation).toPath());
-    }
-  }
-  
-  protected void logPropertiesFile() {
-    Map<String,String> init = new HashMap<>();
-    init.put("filename", fileName);
-    init.put("directory", fileLocation);
-    SimplePropertiesWriter spw = new SimplePropertiesWriter();
-    spw.init(new DataImporter(), init);
-    Map<String,Object> props = spw.readIndexerProperties();
-    if(props!=null) {
-      StringBuilder sb = new StringBuilder();
-      sb.append("\ndataimporter.properties: \n");
-      for(Map.Entry<String,Object> entry : props.entrySet()) {
-        sb.append("  > key=" + entry.getKey() + " / value=" + entry.getValue() + "\n");
-      }
-      log.debug("{}", sb);
-    }
-  }
-  
-  protected abstract String deltaQueriesCountryTable();
-  
-  protected abstract String deltaQueriesPersonTable();
-  
-  protected void singleEntity(int numToExpect) throws Exception {
-    h.query("/dataimport", generateRequest());
-    assertQ("There should be 1 document per person in the database: "
-        + totalPeople(), req("*:*"), "//*[@numFound='" + totalPeople() + "']");
-    Assert.assertTrue("Expecting " + numToExpect
-        + " database calls, but DIH reported " + totalDatabaseRequests(),
-        totalDatabaseRequests() == numToExpect);
-  }
-  
-  protected void simpleTransform(int numToExpect) throws Exception {
-    rootTransformerName = "AddAColumnTransformer";
-    h.query("/dataimport", generateRequest());
-    assertQ(
-        "There should be 1 document with a transformer-added column per person is the database: "
-            + totalPeople(), req("AddAColumn_s:Added"), "//*[@numFound='"
-            + totalPeople() + "']");
-    Assert.assertTrue("Expecting " + numToExpect
-        + " database calls, but DIH reported " + totalDatabaseRequests(),
-        totalDatabaseRequests() == numToExpect);
-  }
-  
-  /**
-   * A delta update will not clean up documents added by a transformer, even if
-   * the parent document on which the transformer based the new documents
-   * was deleted.
-   */
-  protected void complexTransform(int numToExpect, int numDeleted)
-      throws Exception {
-    rootTransformerName = "TripleThreatTransformer";
-    h.query("/dataimport", generateRequest());
-    int totalDocs = ((totalPeople() * 3) + (numDeleted * 2));
-    int totalAddedDocs = (totalPeople() + numDeleted);
-    assertQ(
-        req("q", "*:*", "rows", "" + (totalPeople() * 3), "sort", "id asc"),
-        "//*[@numFound='" + totalDocs + "']");
-    assertQ(req("id:TripleThreat-1-*"), "//*[@numFound='" + totalAddedDocs
-        + "']");
-    assertQ(req("id:TripleThreat-2-*"), "//*[@numFound='" + totalAddedDocs
-        + "']");
-    if (personNameExists("Michael") && countryCodeExists("NR")) {
-      assertQ(
-          "Michael and NR are assured to be in the database.  Therefore the transformer should have added leahciM and RN on the same document as id:TripleThreat-1-3",
-          req("+id:TripleThreat-1-3 +NAME_mult_s:Michael +NAME_mult_s:leahciM  +COUNTRY_CODES_mult_s:NR +COUNTRY_CODES_mult_s:RN"),
-          "//*[@numFound='1']");
-    }
-    assertQ(req("AddAColumn_s:Added"), "//*[@numFound='" + totalAddedDocs
-        + "']");
-    Assert.assertTrue("Expecting " + numToExpect
-        + " database calls, but DIH reported " + totalDatabaseRequests(),
-        totalDatabaseRequests() == numToExpect);
-  }
-  
-  protected void withChildEntities(boolean cached, boolean checkDatabaseRequests)
-      throws Exception {
-    rootTransformerName = random().nextBoolean() ? null
-        : "AddAColumnTransformer";
-    int numChildren = random().nextInt(1) + 1; // note: nextInt(1) always returns 0, so this is effectively always 1
-    int numDatabaseRequests = 1;
-    if (underlyingDataModified) {
-      if (countryEntity) {
-        if (cached) {
-          numDatabaseRequests++;
-        } else {
-          numDatabaseRequests += totalPeople();
-        }
-      }
-      if (sportsEntity) {
-        if (cached) {
-          numDatabaseRequests++;
-        } else {
-          numDatabaseRequests += totalPeople();
-        }
-      }
-    } else {
-      countryEntity = true;
-      sportsEntity = true;
-      if (countryZipper || sportsZipper) { // zipper tests fully cover the number of children
-        countryEntity = countryZipper;
-        sportsEntity = sportsZipper;
-      } else { // apply the default randomization to the cached cases
-        if (numChildren == 1) {
-          countryEntity = random().nextBoolean();
-          sportsEntity = !countryEntity;
-        }
-      }
-      if (countryEntity) {
-        countryTransformer = random().nextBoolean();
-        if (cached) {
-          numDatabaseRequests++;
-          countryCached = true;
-        } else {
-          numDatabaseRequests += totalPeople();
-        }
-      }
-      if (sportsEntity) {
-        sportsTransformer = random().nextBoolean();
-        if (cached) {
-          numDatabaseRequests++;
-          sportsCached = true;
-        } else {
-          numDatabaseRequests += totalPeople();
-        }
-      }
-    }
-    h.query("/dataimport", generateRequest());
-    
-    assertQ("There should be 1 document per person in the database: "
-        + totalPeople(), req("*:*"), "//*[@numFound='" + (totalPeople()) + "']");
-    if (!underlyingDataModified
-        && "AddAColumnTransformer".equals(rootTransformerName)) {
-      assertQ(
-          "There should be 1 document with a transformer-added column per person is the database: "
-              + totalPeople(), req("AddAColumn_s:Added"), "//*[@numFound='"
-              + (totalPeople()) + "']");
-    }
-    if (countryEntity) {
-      {
-        String[] people = getStringsFromQuery("SELECT NAME FROM PEOPLE WHERE DELETED != 'Y'");
-        String man = people[random().nextInt(people.length)];
-        String[] countryNames = getStringsFromQuery("SELECT C.COUNTRY_NAME FROM PEOPLE P "
-            + "INNER JOIN COUNTRIES C ON P.COUNTRY_CODE=C.CODE "
-            + "WHERE P.DELETED!='Y' AND C.DELETED!='Y' AND P.NAME='" + man + "'");
-
-        assertQ(req("{!term f=NAME_mult_s}"+ man), "//*[@numFound='1']",
-            countryNames.length>0?
-             "//doc/str[@name='COUNTRY_NAME_s']='" + countryNames[random().nextInt(countryNames.length)] + "'"
-            :"//doc[count(*[@name='COUNTRY_NAME_s'])=0]");
-      }
-      {
-        String[] countryCodes = getStringsFromQuery("SELECT CODE FROM COUNTRIES WHERE DELETED != 'Y'");
-        String theCode = countryCodes[random().nextInt(countryCodes.length)];
-        int num = numberPeopleByCountryCode(theCode);
-        if(num>0){
-          String nrName = countryNameByCode(theCode);
-          assertQ(req("COUNTRY_CODES_mult_s:"+theCode), "//*[@numFound='" + num + "']",
-              "//doc/str[@name='COUNTRY_NAME_s']='" + nrName + "'");
-        }else{ // no one lives there anyway
-          assertQ(req("COUNTRY_CODES_mult_s:"+theCode), "//*[@numFound='" + num + "']");
-        }
-      }
-      if (countryTransformer && !underlyingDataModified) {
-        assertQ(req("countryAdded_s:country_added"), "//*[@numFound='"
-            + totalPeople() + "']");
-      }
-    }
-    if (sportsEntity) {
-      if (!underlyingDataModified) {
-        assertQ(req("SPORT_NAME_mult_s:Sailing"), "//*[@numFound='2']");
-      }
-      String [] names = getStringsFromQuery("SELECT NAME FROM PEOPLE WHERE DELETED != 'Y'");
-      String name = names[random().nextInt(names.length)];
-      int personId = getIntFromQuery("SELECT ID FROM PEOPLE WHERE DELETED != 'Y' AND NAME='"+name+"'");
-      String[] michaelsSports = sportNamesByPersonId(personId);
-
-      String[] xpath = new String[michaelsSports.length + 1];
-      xpath[0] = "//*[@numFound='1']";
-      int i = 1;
-      for (String ms : michaelsSports) {
-        xpath[i] = "//doc/arr[@name='SPORT_NAME_mult_s']/str='"//[" + i + "]='" don't care about particular order
-            + ms + "'";
-        i++;
-      }
-      assertQ(req("NAME_mult_s:" + name.replaceAll("\\W", "\\\\$0")),
-            xpath);
-      if (!underlyingDataModified && sportsTransformer) {
-        assertQ(req("sportsAdded_s:sport_added"), "//*[@numFound='"
-            + (totalSportsmen()) + "']");
-      }
-      assertQ("checking orphan sport is absent",
-          req("{!term f=SPORT_NAME_mult_s}No Fishing"), "//*[@numFound='0']");
-    }
-    if (checkDatabaseRequests) {
-      Assert.assertTrue("Expecting " + numDatabaseRequests
-          + " database calls, but DIH reported " + totalDatabaseRequests(),
-          totalDatabaseRequests() == numDatabaseRequests);
-    }
-  }
-  
-  protected void simpleCacheChildEntities(boolean checkDatabaseRequests)
-      throws Exception {
-    useSimpleCaches = true;
-    countryEntity = true;
-    sportsEntity = true;
-    countryCached = true;
-    sportsCached = true;
-    int dbRequestsMoreThan = 3;
-    int dbRequestsLessThan = totalPeople() * 2 + 1;
-    h.query("/dataimport", generateRequest());
-    assertQ(req("*:*"), "//*[@numFound='" + (totalPeople()) + "']");
-    if (!underlyingDataModified
-        || (personNameExists("Samantha") && "Nauru"
-            .equals(countryNameByCode("NR")))) {
-      assertQ(req("NAME_mult_s:Samantha"), "//*[@numFound='1']",
-          "//doc/str[@name='COUNTRY_NAME_s']='Nauru'");
-    }
-    if (!underlyingDataModified) {
-      assertQ(req("COUNTRY_CODES_mult_s:NR"), "//*[@numFound='2']",
-          "//doc/str[@name='COUNTRY_NAME_s']='Nauru'");
-      assertQ(req("SPORT_NAME_mult_s:Sailing"), "//*[@numFound='2']");
-    }
-    String[] michaelsSports = sportNamesByPersonId(3);
-    if (!underlyingDataModified || michaelsSports.length > 0) {
-      String[] xpath = new String[michaelsSports.length + 1];
-      xpath[0] = "//*[@numFound='1']";
-      int i = 1;
-      for (String ms : michaelsSports) {
-        xpath[i] = "//doc/arr[@name='SPORT_NAME_mult_s']/str[" + i + "]='" + ms
-            + "'";
-        i++;
-      }
-      assertQ(req("NAME_mult_s:Michael"), xpath);
-    }
-    if (checkDatabaseRequests) {
-      Assert.assertTrue("Expecting more than " + dbRequestsMoreThan
-          + " database calls, but DIH reported " + totalDatabaseRequests(),
-          totalDatabaseRequests() > dbRequestsMoreThan);
-      Assert.assertTrue("Expecting fewer than " + dbRequestsLessThan
-          + " database calls, but DIH reported " + totalDatabaseRequests(),
-          totalDatabaseRequests() < dbRequestsLessThan);
-    }
-  }
-  
-
-  private int getIntFromQuery(String query) throws Exception {
-    Connection conn = null;
-    Statement s = null;
-    ResultSet rs = null;
-    try {
-      conn = newConnection();
-      s = conn.createStatement();
-      rs = s.executeQuery(query);
-      if (rs.next()) {
-        return rs.getInt(1);
-      }
-      return 0;
-    } catch (SQLException e) {
-      throw e;
-    } finally {
-      try {
-        rs.close();
-      } catch (Exception ex) {}
-      try {
-        s.close();
-      } catch (Exception ex) {}
-      try {
-        conn.close();
-      } catch (Exception ex) {}
-    }
-  }
-  
-  private String[] getStringsFromQuery(String query) throws Exception {
-    Connection conn = null;
-    Statement s = null;
-    ResultSet rs = null;
-    try {
-      conn = newConnection();
-      s = conn.createStatement();
-      rs = s.executeQuery(query);
-      List<String> results = new ArrayList<>();
-      while (rs.next()) {
-        results.add(rs.getString(1));
-      }
-      return results.toArray(new String[results.size()]);
-    } catch (SQLException e) {
-      throw e;
-    } finally {
-      try {
-        rs.close();
-      } catch (Exception ex) {}
-      try {
-        s.close();
-      } catch (Exception ex) {}
-      try {
-        conn.close();
-      } catch (Exception ex) {}
-    }
-  }
-  
-  public int totalCountries() throws Exception {
-    return getIntFromQuery("SELECT COUNT(1) FROM COUNTRIES WHERE DELETED != 'Y' ");
-  }
-  
-  public int totalPeople() throws Exception {
-    return getIntFromQuery("SELECT COUNT(1) FROM PEOPLE WHERE DELETED != 'Y' ");
-  }
-  
-  public int totalSportsmen() throws Exception {
-    return getIntFromQuery("SELECT COUNT(*) FROM PEOPLE WHERE "
-        + "EXISTS(SELECT ID FROM PEOPLE_SPORTS WHERE PERSON_ID=PEOPLE.ID AND PEOPLE_SPORTS.DELETED != 'Y')"
-        + " AND PEOPLE.DELETED != 'Y'");
-  }
-  
-  public boolean countryCodeExists(String cc) throws Exception {
-    return getIntFromQuery("SELECT COUNT(1) country_name FROM COUNTRIES WHERE DELETED != 'Y' AND CODE='"
-        + cc + "'") > 0;
-  }
-  
-  public String countryNameByCode(String cc) throws Exception {
-    String[] s = getStringsFromQuery("SELECT country_name FROM COUNTRIES WHERE DELETED != 'Y' AND CODE='"
-        + cc + "'");
-    return s.length == 0 ? null : s[0];
-  }
-  
-  public int numberPeopleByCountryCode(String cc) throws Exception {
-    return getIntFromQuery("Select count(1) " + "from people p "
-        + "inner join countries c on p.country_code=c.code "
-        + "where p.deleted!='Y' and c.deleted!='Y' and c.code='" + cc + "'");
-  }
-  
-  public String[] sportNamesByPersonId(int personId) throws Exception {
-    return getStringsFromQuery("SELECT ps.SPORT_NAME "
-        + "FROM people_sports ps "
-        + "INNER JOIN PEOPLE p ON p.id = ps.person_Id "
-        + "WHERE ps.DELETED != 'Y' AND p.DELETED != 'Y' " + "AND ps.person_id="
-        + personId + " " + "ORDER BY ps.id");
-  }
-  
-  public boolean personNameExists(String pn) throws Exception {
-    return getIntFromQuery("SELECT COUNT(1) FROM PEOPLE WHERE DELETED != 'Y' AND NAME='"
-        + pn + "'") > 0;
-  }
-  
-  public String personNameById(int id) throws Exception {
-    String[] nameArr = getStringsFromQuery("SELECT NAME FROM PEOPLE WHERE ID="
-        + id);
-    if (nameArr.length == 0) {
-      return null;
-    }
-    return nameArr[0];
-  }
-  
-  @SuppressForbidden(reason = "Needs currentTimeMillis to set change time for SQL query")
-  public IntChanges modifySomePeople() throws Exception {
-    underlyingDataModified = true;
-    int numberToChange = random().nextInt(people.length + 1);
-    Set<Integer> changeSet = new HashSet<>();
-    Set<Integer> deleteSet = new HashSet<>();
-    Set<Integer> addSet = new HashSet<>();
-    Connection conn = null;
-    PreparedStatement change = null;
-    PreparedStatement delete = null;
-    PreparedStatement add = null;
-    // One second in the future ensures a change time after the last import (DIH
-    // uses second precision only)
-    Timestamp theTime = new Timestamp(System.currentTimeMillis() + 1000);
-    if (log.isDebugEnabled()) {
-      log.debug("PEOPLE UPDATE USING TIMESTAMP: {}"
-          , new SimpleDateFormat("yyyy-MM-dd'T'HH:mm:ss.SSSZ", Locale.ROOT).format(theTime));
-    }
-    try {
-      conn = newConnection();
-      change = conn
-          .prepareStatement("update people set name=?, last_modified=? where id=?");
-      delete = conn
-          .prepareStatement("update people set deleted='Y', last_modified=? where id=?");
-      add = conn
-          .prepareStatement("insert into people (id,name,country_code,last_modified) values (?,?,'ZZ',?)");
-      for (int i = 0; i < numberToChange; i++) {
-        int tryIndex = random().nextInt(people.length);
-        Integer id = (Integer) people[tryIndex][0];
-        if (!changeSet.contains(id) && !deleteSet.contains(id)) {
-          boolean changeDontDelete = random().nextBoolean();
-          if (changeDontDelete) {
-            changeSet.add(id);
-            change.setString(1, "MODIFIED " + people[tryIndex][1]);
-            change.setTimestamp(2, theTime);
-            change.setInt(3, id);
-            Assert.assertEquals(1, change.executeUpdate());
-          } else {
-            deleteSet.add(id);
-            delete.setTimestamp(1, theTime);
-            delete.setInt(2, id);
-            Assert.assertEquals(1, delete.executeUpdate());
-          }
-        }
-      }
-      int numberToAdd = random().nextInt(3);
-      for (int i = 0; i < numberToAdd; i++) {
-        int tryIndex = random().nextInt(people.length);
-        Integer id = (Integer) people[tryIndex][0];
-        Integer newId = id + 1000;
-        String newDesc = "ADDED " + people[tryIndex][1];
-        if (!addSet.contains(newId)) {
-          addSet.add(newId);
-          add.setInt(1, newId);
-          add.setString(2, newDesc);
-          add.setTimestamp(3, theTime);
-          Assert.assertEquals(1, add.executeUpdate());
-        }
-      }
-      conn.commit();
-    } catch (SQLException e) {
-      throw e;
-    } finally {
-      try {
-        change.close();
-      } catch (Exception ex) {}
-      try {
-        delete.close();
-      } catch (Exception ex) {}
-      try {
-        add.close();
-      } catch (Exception ex) {}
-      try {
-        conn.close();
-      } catch (Exception ex) {}
-    }
-    IntChanges c = new IntChanges();
-    c.changedKeys = changeSet.toArray(new Integer[changeSet.size()]);
-    c.deletedKeys = deleteSet.toArray(new Integer[deleteSet.size()]);
-    c.addedKeys = addSet.toArray(new Integer[addSet.size()]);
-    return c;
-  }
-
-  @SuppressForbidden(reason = "Needs currentTimeMillis to set change time for SQL query")
-  public String[] modifySomeCountries() throws Exception {
-    underlyingDataModified = true;
-    int numberToChange = random().nextInt(countries.length + 1);
-    Set<String> changeSet = new HashSet<>();
-    Connection conn = null;
-    PreparedStatement change = null;
-    // One second in the future ensures a change time after the last import (DIH
-    // uses second precision only)
-    Timestamp theTime = new Timestamp(System.currentTimeMillis() + 1000);
-    if (log.isDebugEnabled()) {
-      log.debug("COUNTRY UPDATE USING TIMESTAMP: {}"
-          , new SimpleDateFormat("yyyy-MM-dd'T'HH:mm:ss.SSSZ", Locale.ROOT).format(theTime));
-    }
-    try {
-      conn = newConnection();
-      change = conn
-          .prepareStatement("update countries set country_name=?, last_modified=? where code=?");
-      for (int i = 0; i < numberToChange; i++) {
-        int tryIndex = random().nextInt(countries.length);
-        String code = countries[tryIndex][0];
-        if (!changeSet.contains(code)) {
-          changeSet.add(code);
-          change.setString(1, "MODIFIED " + countries[tryIndex][1]);
-          change.setTimestamp(2, theTime);
-          change.setString(3, code);
-          Assert.assertEquals(1, change.executeUpdate());
-          
-        }
-      }
-    } catch (SQLException e) {
-      throw e;
-    } finally {
-      try {
-        change.close();
-      } catch (Exception ex) {}
-      try {
-        conn.close();
-      } catch (Exception ex) {}
-    }
-    return changeSet.toArray(new String[changeSet.size()]);
-  }
-
-  static class IntChanges {
-    public Integer[] changedKeys;
-    public Integer[] deletedKeys;
-    public Integer[] addedKeys;
-    
-    @Override
-    public String toString() {
-      StringBuilder sb = new StringBuilder();
-      if(changedKeys!=null) {
-        sb.append("changes: ");
-        for(int i : changedKeys) {
-          sb.append(i).append(" ");
-        }
-      }
-      if(deletedKeys!=null) {
-        sb.append("deletes: ");
-        for(int i : deletedKeys) {
-          sb.append(i).append(" ");
-        }
-      }
-      if(addedKeys!=null) {
-        sb.append("adds: ");
-        for(int i : addedKeys) {
-          sb.append(i).append(" ");
-        }
-      }
-      return sb.toString();
-    }
-  }
-  
-  @Override
-  protected String generateConfig() {
-    String ds = null;
-    if (dbToUse == Database.DERBY) {
-      ds = "derby";
-    } else if (dbToUse == Database.HSQLDB) {
-      ds = "hsqldb";
-    } else {
-      throw new AssertionError("Invalid database to use: " + dbToUse);
-    }
-    StringBuilder sb = new StringBuilder();
-    sb.append("\n<dataConfig> \n");
-    sb.append("<propertyWriter type=''SimplePropertiesWriter'' directory=''" + fileLocation + "'' filename=''" + fileName + "'' />\n");
-    sb.append("<dataSource name=''hsqldb'' driver=''org.hsqldb.jdbcDriver'' url=''jdbc:hsqldb:mem:.'' /> \n");
-    sb.append("<dataSource name=''derby'' driver=''org.apache.derby.jdbc.EmbeddedDriver'' url=''jdbc:derby:memory:derbyDB;territory=en_US'' /> \n");
-    sb.append("<document name=''TestSqlEntityProcessor''> \n");
-    sb.append("<entity name=''People'' ");
-    sb.append("pk=''" + (random().nextBoolean() ? "ID" : "People.ID") + "'' ");
-    sb.append("processor=''SqlEntityProcessor'' ");
-    sb.append("dataSource=''" + ds + "'' ");
-    sb.append(rootTransformerName != null ? "transformer=''"
-        + rootTransformerName + "'' " : "");
-  
-    sb.append("query=''SELECT ID, NAME, COUNTRY_CODE FROM PEOPLE WHERE DELETED != 'Y' "
-                    +((sportsZipper||countryZipper?"ORDER BY ID":"")
-                     +(wrongPeopleOrder? " DESC":""))+"'' ");
-
-    sb.append(deltaQueriesPersonTable());
-    sb.append("> \n");
-    
-    sb.append("<field column=''NAME'' name=''NAME_mult_s'' /> \n");
-    sb.append("<field column=''COUNTRY_CODE'' name=''COUNTRY_CODES_mult_s'' /> \n");
-    
-    if (countryEntity) {
-      sb.append("<entity name=''Countries'' ");
-      sb.append("pk=''" + (random().nextBoolean() ? "CODE" : "Countries.CODE")
-          + "'' ");
-      sb.append("dataSource=''" + ds + "'' ");
-      sb.append(countryTransformer ? "transformer=''AddAColumnTransformer'' "
-          + "newColumnName=''countryAdded_s'' newColumnValue=''country_added'' "
-          : "");
-      if (countryCached) {
-        sb.append("processor=''SqlEntityProcessor'' cacheImpl=''SortedMapBackedCache'' ");
-        if (useSimpleCaches) {
-          sb.append("query=''SELECT CODE, COUNTRY_NAME FROM COUNTRIES WHERE DELETED != 'Y' AND CODE='${People.COUNTRY_CODE}' ''>\n");
-        } else {
-          
-          if (countryZipper) { // note: this is a really odd join; it sends duplicated countries
-            sb.append(random().nextBoolean() ? "cacheKey=''ID'' cacheLookup=''People.ID'' "
-                : "where=''ID=People.ID'' ");
-            sb.append("join=''zipper'' query=''SELECT PEOPLE.ID, CODE, COUNTRY_NAME FROM COUNTRIES"
-                + " JOIN PEOPLE ON COUNTRIES.CODE=PEOPLE.COUNTRY_CODE "
-                + "WHERE PEOPLE.DELETED != 'Y' ORDER BY PEOPLE.ID "+
-                (wrongCountryOrder ? " DESC":"")
-                + "'' ");
-          }else{
-            sb.append(random().nextBoolean() ? "cacheKey=''CODE'' cacheLookup=''People.COUNTRY_CODE'' "
-                : "where=''CODE=People.COUNTRY_CODE'' ");
-            sb.append("query=''SELECT CODE, COUNTRY_NAME FROM COUNTRIES'' ");
-          }
-          sb.append("> \n");
-        }
-      } else {
-        sb.append("processor=''SqlEntityProcessor'' query=''SELECT CODE, COUNTRY_NAME FROM COUNTRIES WHERE DELETED != 'Y' AND CODE='${People.COUNTRY_CODE}' '' ");
-        sb.append(deltaQueriesCountryTable());
-        sb.append("> \n");
-      }
-      sb.append("<field column=''CODE'' name=''COUNTRY_CODE_s'' /> \n");
-      sb.append("<field column=''COUNTRY_NAME'' name=''COUNTRY_NAME_s'' /> \n");
-      sb.append("</entity> \n");
-    }
-    if (sportsEntity) {
-      sb.append("<entity name=''Sports'' ");
-      sb.append("dataSource=''" + ds + "'' ");
-      sb.append(sportsTransformer ? "transformer=''AddAColumnTransformer'' "
-          + "newColumnName=''sportsAdded_s'' newColumnValue=''sport_added'' "
-          : "");
-      if (sportsCached) {
-        sb.append("processor=''SqlEntityProcessor'' cacheImpl=''SortedMapBackedCache'' ");
-        if (useSimpleCaches) {
-          sb.append("query=''SELECT ID, SPORT_NAME FROM PEOPLE_SPORTS WHERE DELETED != 'Y' AND PERSON_ID=${People.ID} ORDER BY ID'' ");
-        } else {
-          sb.append(random().nextBoolean() ? "cacheKey=''PERSON_ID'' cacheLookup=''People.ID'' "
-              : "where=''PERSON_ID=People.ID'' ");
-          if(sportsZipper){
-              sb.append("join=''zipper'' query=''SELECT ID, PERSON_ID, SPORT_NAME FROM PEOPLE_SPORTS ORDER BY PERSON_ID"
-                  + (wrongSportsOrder?" DESC" : "")+
-                  "'' ");
-            }
-          else{
-            sb.append("query=''SELECT ID, PERSON_ID, SPORT_NAME FROM PEOPLE_SPORTS ORDER BY ID'' ");
-          }
-        }
-      } else {
-        sb.append("processor=''SqlEntityProcessor'' query=''SELECT ID, SPORT_NAME FROM PEOPLE_SPORTS WHERE DELETED != 'Y' AND PERSON_ID=${People.ID} ORDER BY ID'' ");
-      }
-      sb.append("> \n");
-      sb.append("<field column=''SPORT_NAME'' name=''SPORT_NAME_mult_s'' /> \n");
-      sb.append("<field column=''id'' name=''SPORT_ID_mult_s'' /> \n");
-      sb.append("</entity> \n");
-    }
-    
-    sb.append("</entity> \n");
-    sb.append("</document> \n");
-    sb.append("</dataConfig> \n");
-    String config = sb.toString().replaceAll("[']{2}", "\"");
-    log.debug(config);
-    return config;
-  }
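-
-  // Note on the config built above: the doubled single quotes are a
-  // readability convention; the final replaceAll turns ''...'' into "...",
-  // so e.g. (sketch)
-  //   <dataSource name=''hsqldb'' ... />  becomes  <dataSource name="hsqldb" ... />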
-
-  @SuppressForbidden(reason = "Needs currentTimeMillis to set change time for SQL query")
-  @Override
-  protected void populateData(Connection conn) throws Exception {
-    Statement s = null;
-    PreparedStatement ps = null;
-    Timestamp theTime = new Timestamp(System.currentTimeMillis() - 10000); // 10 seconds ago
-    try {
-      s = conn.createStatement();
-      s.executeUpdate("create table countries(code varchar(3) not null primary key, country_name varchar(50), deleted char(1) default 'N', last_modified timestamp not null)");
-      s.executeUpdate("create table people(id int not null primary key, name varchar(50), country_code char(2), deleted char(1) default 'N', last_modified timestamp not null)");
-      s.executeUpdate("create table people_sports(id int not null primary key, person_id int, sport_name varchar(50), deleted char(1) default 'N', last_modified timestamp not null)");
-      if (log.isDebugEnabled()) {
-        log.debug("INSERTING DB DATA USING TIMESTAMP: {}",
-            new SimpleDateFormat("yyyy-MM-dd'T'HH:mm:ss.SSSZ", Locale.ROOT).format(theTime));
-      }
-      ps = conn
-          .prepareStatement("insert into countries (code, country_name, last_modified) values (?,?,?)");
-      for (String[] country : countries) {
-        ps.setString(1, country[0]);
-        ps.setString(2, country[1]);
-        ps.setTimestamp(3, theTime);
-        Assert.assertEquals(1, ps.executeUpdate());
-      }
-      ps.close();
-      
-      ps = conn
-          .prepareStatement("insert into people (id, name, country_code, last_modified) values (?,?,?,?)");
-      for (Object[] person : people) {
-        ps.setInt(1, (Integer) person[0]);
-        ps.setString(2, (String) person[1]);
-        ps.setString(3, (String) person[2]);
-        ps.setTimestamp(4, theTime);
-        Assert.assertEquals(1, ps.executeUpdate());
-      }
-      ps.close();
-      
-      ps = conn
-          .prepareStatement("insert into people_sports (id, person_id, sport_name, last_modified) values (?,?,?,?)");
-      for (Object[] sport : people_sports) {
-        ps.setInt(1, (Integer) sport[0]);
-        ps.setInt(2, (Integer) sport[1]);
-        ps.setString(3, (String) sport[2]);
-        ps.setTimestamp(4, theTime);
-        Assert.assertEquals(1, ps.executeUpdate());
-      }
-      ps.close();
-      conn.commit();
-      conn.close();
-    } catch (Exception e) {
-      throw e;
-    } finally {
-      try {
-        ps.close();
-      } catch (Exception ex) {}
-      try {
-        s.close();
-      } catch (Exception ex) {}
-      try {
-        conn.close();
-      } catch (Exception ex) {}
-    }
-  }
-  public static final String[][] countries = {
-    {"NA",   "Namibia"},
-    {"NC",   "New Caledonia"},
-    {"NE",   "Niger"},
-    {"NF",   "Norfolk Island"},
-    {"NG",   "Nigeria"},
-    {"NI",   "Nicaragua"},
-    {"NL",   "Netherlands"},
-    {"NO",   "Norway"},
-    {"NP",   "Nepal"},
-    {"NR",   "Nauru"},
-    {"NU",   "Niue"},
-    {"NZ",   "New Zealand"}
-  };
-  
-  public static final Object[][] people = {
-    {1,"Jacob","NZ"},
-    {2,"Ethan","NU"},
-    {3,"Michael","NR"},
-    {4,"Jayden","NP"},
-    {5,"William","NO"},
-    {6,"Alexander","NL"},
-    {7,"Noah","NI"},
-    {8,"Daniel","NG"},
-    {9,"Aiden","NF"},
-    
-    {21,"Anthony","NE"}, // there is no ID=10 anymore
-    
-    {11,"Emma","NL"},
-    {12,"Grace","NI"},
-    {13,"Hailey","NG"},
-    {14,"Isabella","NF"},
-    {15,"Lily","NE"},
-    {16,"Madison","NC"},
-    {17,"Mia","NA"},
-    {18,"Natalie","NZ"},
-    {19,"Olivia","NU"},
-    {20,"Samantha","NR"}
-  };
-  
-  public static final Object[][] people_sports = {
-    {100, 1, "Swimming"},
-    {200, 2, "Triathlon"},
-    {300, 3, "Water polo"},
-    {310, 3, "Underwater rugby"},
-    {320, 3, "Kayaking"},
-    {400, 4, "Snorkeling"},
-    {500, 5, "Synchronized diving"},
-    {600, 6, "Underwater rugby"},
-    {700, 7, "Boating"},
-    {800, 8, "Bodyboarding"},
-    {900, 9, "Canoeing"},
-    
-    {1000, 10, "No Fishing"}, // orhpaned sport
-    //
-    
-    {1100, 11, "Jet Ski"},
-    {1110, 11, "Rowing"},
-    {1120, 11, "Sailing"},
-    {1200, 12, "Kayaking"},
-    {1210, 12, "Canoeing"},
-    {1300, 13, "Kite surfing"},
-    {1400, 14, "Parasailing"},
-    {1500, 15, "Rafting"},
-    //{1600, 16, "Rowing"}, Madison has no sport
-    {1700, 17, "Sailing"},
-    {1800, 18, "White Water Rafting"},
-    {1900, 19, "Water skiing"},
-    {2000, 20, "Windsurfing"},
-    {2100, 21, "Concrete diving"},
-    {2110, 21, "Bubble rugby"}
-  }; 
-}
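The helper above populates three related tables with a single last_modified timestamp so the delta-import tests can select rows changed after a known instant. A minimal sketch of the same fixture pattern with try-with-resources (the jdbc:hsqldb:mem: URL is a placeholder requiring an HSQLDB driver on the classpath; the deleted suite obtains its JDBC connection elsewhere):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.Statement;
    import java.sql.Timestamp;

    public class DeltaImportFixture {
      public static void main(String[] args) throws Exception {
        // Placeholder URL; assumes an in-memory HSQLDB database for illustration.
        try (Connection conn = DriverManager.getConnection("jdbc:hsqldb:mem:dih")) {
          conn.setAutoCommit(false);
          Timestamp theTime = new Timestamp(System.currentTimeMillis() - 10_000); // 10 seconds ago
          try (Statement s = conn.createStatement()) {
            s.executeUpdate("create table countries(code varchar(3) not null primary key, "
                + "country_name varchar(50), last_modified timestamp not null)");
          }
          // try-with-resources closes the statement even when an insert fails,
          // which is what the hand-rolled finally block above approximates.
          try (PreparedStatement ps = conn.prepareStatement(
              "insert into countries (code, country_name, last_modified) values (?,?,?)")) {
            for (String[] country : new String[][] {{"NO", "Norway"}, {"NZ", "New Zealand"}}) {
              ps.setString(1, country[0]);
              ps.setString(2, country[1]);
              ps.setTimestamp(3, theTime);
              ps.executeUpdate();
            }
          }
          conn.commit();
        }
      }
    }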
diff --git a/solr/contrib/dataimporthandler/src/test/org/apache/solr/handler/dataimport/AddAColumnTransformer.java b/solr/contrib/dataimporthandler/src/test/org/apache/solr/handler/dataimport/AddAColumnTransformer.java
deleted file mode 100644
index 5e665d4..0000000
--- a/solr/contrib/dataimporthandler/src/test/org/apache/solr/handler/dataimport/AddAColumnTransformer.java
+++ /dev/null
@@ -1,31 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.solr.handler.dataimport;
-
-import java.util.Map;
-
-public class AddAColumnTransformer extends Transformer {
-  @Override
-  public Object transformRow(Map<String,Object> aRow, Context context) {
-    String colName = context.getEntityAttribute("newColumnName");
-    colName = colName==null ? "AddAColumn_s" : colName;
-    String colValue = context.getEntityAttribute("newColumnValue");
-    colValue = colValue==null ? "Added" : colValue;
-    aRow.put(colName, colValue);
-    return aRow;
-  }
-}
\ No newline at end of file
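AddAColumnTransformer is the simplest possible DIH transformer: mutate the row map, return it. A dependency-free sketch of the same contract using a plain function, with the transformer's default column name and value:

    import java.util.HashMap;
    import java.util.Map;
    import java.util.function.UnaryOperator;

    public class AddAColumnSketch {
      public static void main(String[] args) {
        // Dependency-free analogue of transformRow: mutate the row map and return it.
        UnaryOperator<Map<String, Object>> addAColumn = row -> {
          row.put("AddAColumn_s", "Added"); // same defaults as the deleted transformer
          return row;
        };
        Map<String, Object> row = new HashMap<>();
        row.put("id", "1");
        System.out.println(addAColumn.apply(row)); // row now carries AddAColumn_s=Added
      }
    }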
diff --git a/solr/contrib/dataimporthandler/src/test/org/apache/solr/handler/dataimport/DestroyCountCache.java b/solr/contrib/dataimporthandler/src/test/org/apache/solr/handler/dataimport/DestroyCountCache.java
deleted file mode 100644
index d14f43e..0000000
--- a/solr/contrib/dataimporthandler/src/test/org/apache/solr/handler/dataimport/DestroyCountCache.java
+++ /dev/null
@@ -1,37 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.solr.handler.dataimport;
-
-import static org.hamcrest.CoreMatchers.nullValue;
-
-import java.util.IdentityHashMap;
-import java.util.Map;
-
-import org.junit.Assert;
-
-public class DestroyCountCache extends SortedMapBackedCache {
-  static Map<DIHCache,DIHCache> destroyed = new IdentityHashMap<>();
-  
-  @Override
-  public void destroy() {
-    super.destroy();
-    Assert.assertThat(destroyed.put(this, this), nullValue());
-  }
-  
-  public DestroyCountCache() {}
-  
-}
\ No newline at end of file
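DestroyCountCache leans on IdentityHashMap to act as an identity set: put returns null only the first time a given instance is registered, so a non-null return flags a double destroy(). A standalone sketch of that behavior:

    import java.util.IdentityHashMap;
    import java.util.Map;

    public class IdentitySetDemo {
      public static void main(String[] args) {
        Map<Object, Object> destroyed = new IdentityHashMap<>();
        Object a = new String("cache");
        Object b = new String("cache"); // equals(a) but a different instance

        // put returns null on first registration; a non-null return would mean
        // destroy() ran twice on the same instance, which is what the test guards against.
        System.out.println(destroyed.put(a, a)); // null
        System.out.println(destroyed.put(b, b)); // null: identity, not equals, keys
        System.out.println(destroyed.put(a, a)); // a: second registration detected
      }
    }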
diff --git a/solr/contrib/dataimporthandler/src/test/org/apache/solr/handler/dataimport/MockInitialContextFactory.java b/solr/contrib/dataimporthandler/src/test/org/apache/solr/handler/dataimport/MockInitialContextFactory.java
deleted file mode 100644
index 5a7ea84..0000000
--- a/solr/contrib/dataimporthandler/src/test/org/apache/solr/handler/dataimport/MockInitialContextFactory.java
+++ /dev/null
@@ -1,52 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.solr.handler.dataimport;
-
-import java.util.HashMap;
-import java.util.Hashtable;
-import java.util.Map;
-
-import javax.naming.NamingException;
-import javax.naming.spi.InitialContextFactory;
-
-import static org.mockito.Mockito.*;
-
-public class MockInitialContextFactory implements InitialContextFactory {
-  private static final Map<String, Object> objects = new HashMap<>();
-  private final javax.naming.Context context;
-
-  public MockInitialContextFactory() {
-    context = mock(javax.naming.Context.class);
-
-    try {
-      when(context.lookup(anyString())).thenAnswer(invocation -> objects.get(invocation.getArgument(0)));
-
-    } catch (NamingException e) {
-      throw new RuntimeException(e);
-    }
-  }
-
-  @Override
-  @SuppressWarnings("unchecked")
-  public javax.naming.Context getInitialContext(@SuppressWarnings({"rawtypes"})Hashtable env) {
-    return context;
-  }
-
-  public static void bind(String name, Object obj) {
-    objects.put(name, obj);
-  }
-}
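JNDI discovers this factory through the java.naming.factory.initial property, so a test only has to set the property and bind objects before the code under test calls lookup. A sketch of that wiring, assuming the factory above is still on the test classpath (the JNDI name and bound value are illustrative):

    import javax.naming.Context;
    import javax.naming.InitialContext;

    import org.apache.solr.handler.dataimport.MockInitialContextFactory;

    public class JndiMockUsage {
      public static void main(String[] args) throws Exception {
        // Point JNDI at the mock factory; InitialContext instantiates it reflectively.
        System.setProperty(Context.INITIAL_CONTEXT_FACTORY,
            "org.apache.solr.handler.dataimport.MockInitialContextFactory");
        MockInitialContextFactory.bind("java:comp/env/jdbc/myDs", "the-datasource");
        Object found = new InitialContext().lookup("java:comp/env/jdbc/myDs");
        System.out.println(found); // the-datasource
      }
    }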
diff --git a/solr/contrib/dataimporthandler/src/test/org/apache/solr/handler/dataimport/MockSolrEntityProcessor.java b/solr/contrib/dataimporthandler/src/test/org/apache/solr/handler/dataimport/MockSolrEntityProcessor.java
deleted file mode 100644
index 42e5f7d..0000000
--- a/solr/contrib/dataimporthandler/src/test/org/apache/solr/handler/dataimport/MockSolrEntityProcessor.java
+++ /dev/null
@@ -1,76 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.solr.handler.dataimport;
-
-import org.apache.solr.SolrTestCaseJ4;
-import org.apache.solr.common.SolrDocument;
-import org.apache.solr.common.SolrDocumentList;
-
-import java.util.List;
-
-public class MockSolrEntityProcessor extends SolrEntityProcessor {
-
-  private final List<SolrTestCaseJ4.Doc> docsData;
-  private int queryCount = 0;
-
-  private int rows;
-  
-  private int start = 0;
-
-  public MockSolrEntityProcessor(List<SolrTestCaseJ4.Doc> docsData, int rows) {
-    this.docsData = docsData;
-    this.rows = rows;
-  }
-
-  @Override
-  protected void buildIterator() {
-    if (rowIterator==null || (!rowIterator.hasNext() && ((SolrDocumentListIterator)rowIterator).hasMoreRows())){
-      queryCount++;
-      SolrDocumentList docs = getDocs(start, rows);
-      rowIterator = new SolrDocumentListIterator(docs);
-      start += docs.size();
-    }
-  }
-
-  private SolrDocumentList getDocs(int start, int rows) {
-    SolrDocumentList docs = new SolrDocumentList();
-    docs.setNumFound(docsData.size());
-    docs.setStart(start);
-
-    int endIndex = start + rows;
-    int end = Math.min(docsData.size(), endIndex);
-    for (int i = start; i < end; i++) {
-      SolrDocument doc = new SolrDocument();
-      SolrTestCaseJ4.Doc testDoc = docsData.get(i);
-      doc.addField("id", testDoc.id);
-      doc.addField("description", testDoc.getValues("description"));
-      docs.add(doc);
-    }
-    return docs;
-  }
-
-  public int getQueryCount() {
-    return queryCount;
-  }
-}
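buildIterator() pages through the canned documents one batch at a time, issuing a new "query" only when the current page is exhausted and more rows remain. The essential arithmetic is the end-index clamp in getDocs; a generic sketch:

    import java.util.List;

    public class PageClampDemo {
      // Return one page of at most `rows` items starting at `start`,
      // clamping the end index the same way getDocs(...) above does.
      static <T> List<T> page(List<T> all, int start, int rows) {
        int end = Math.min(all.size(), start + rows);
        return all.subList(Math.min(start, end), end);
      }

      public static void main(String[] args) {
        List<Integer> docs = List.of(1, 2, 3, 4, 5);
        System.out.println(page(docs, 0, 2)); // [1, 2]
        System.out.println(page(docs, 4, 2)); // [5]
        System.out.println(page(docs, 6, 2)); // []
      }
    }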
diff --git a/solr/contrib/dataimporthandler/src/test/org/apache/solr/handler/dataimport/MockStringDataSource.java b/solr/contrib/dataimporthandler/src/test/org/apache/solr/handler/dataimport/MockStringDataSource.java
deleted file mode 100644
index 7c9a6d1..0000000
--- a/solr/contrib/dataimporthandler/src/test/org/apache/solr/handler/dataimport/MockStringDataSource.java
+++ /dev/null
@@ -1,54 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package org.apache.solr.handler.dataimport;
-
-
-import java.io.Reader;
-import java.io.StringReader;
-import java.util.HashMap;
-import java.util.Map;
-import java.util.Properties;
-
-public class MockStringDataSource extends DataSource<Reader> {
-
-  private static Map<String, String> cache = new HashMap<>();
-
-  public static void setData(String query,
-                                 String data) {
-    cache.put(query, data);
-  }
-
-  public static void clearCache() {
-    cache.clear();
-  }
-  @Override
-  public void init(Context context, Properties initProps) {
-
-  }
-
-  @Override
-  public Reader getData(String query) {
-    return new StringReader(cache.get(query));
-  }
-
-  @Override
-  public void close() {
-    cache.clear();
-
-  }
-}
diff --git a/solr/contrib/dataimporthandler/src/test/org/apache/solr/handler/dataimport/TestBuiltInEvaluators.java b/solr/contrib/dataimporthandler/src/test/org/apache/solr/handler/dataimport/TestBuiltInEvaluators.java
deleted file mode 100644
index 986a8cd..0000000
--- a/solr/contrib/dataimporthandler/src/test/org/apache/solr/handler/dataimport/TestBuiltInEvaluators.java
+++ /dev/null
@@ -1,188 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.solr.handler.dataimport;
-
-import org.junit.Before;
-import org.junit.Test;
-
-import java.net.URLEncoder;
-import java.nio.charset.StandardCharsets;
-import java.text.SimpleDateFormat;
-import java.util.*;
-
-/**
- * <p> Test for Evaluators </p>
- *
- *
- * @since solr 1.3
- */
-public class TestBuiltInEvaluators extends AbstractDataImportHandlerTestCase {
-  private static final String ENCODING = StandardCharsets.UTF_8.name();
-
-  VariableResolver resolver;
-
-  Map<String, String> sqlTests;
-
-  Map<String, String> urlTests;
-
-  @Override
-  @Before
-  public void setUp() throws Exception {
-    super.setUp();
-    resolver = new VariableResolver();
-
-    sqlTests = new HashMap<>();
-
-    sqlTests.put("foo\"", "foo\"\"");
-    sqlTests.put("foo\\", "foo\\\\");
-    sqlTests.put("foo'", "foo''");
-    sqlTests.put("foo''", "foo''''");
-    sqlTests.put("'foo\"", "''foo\"\"");
-    sqlTests.put("\"Albert D'souza\"", "\"\"Albert D''souza\"\"");
-
-    urlTests = new HashMap<>();
-
-    urlTests.put("*:*", URLEncoder.encode("*:*", ENCODING));
-    urlTests.put("price:[* TO 200]", URLEncoder.encode("price:[* TO 200]",
-            ENCODING));
-    urlTests.put("review:\"hybrid sedan\"", URLEncoder.encode(
-            "review:\"hybrid sedan\"", ENCODING));
-  }
-
-  
-  @Test
-  public void testSqlEscapingEvaluator() {
-    Evaluator sqlEscaper = new SqlEscapingEvaluator();
-    runTests(sqlTests, sqlEscaper);
-  }
-
-  
-  @Test
-  public void testUrlEvaluator() throws Exception {
-    Evaluator urlEvaluator = new UrlEvaluator();
-    runTests(urlTests, urlEvaluator);
-  }
-
-  @Test
-  public void parseParams() {
-    Map<String,Object> m = new HashMap<>();
-    m.put("b","B");
-    VariableResolver vr = new VariableResolver();
-    vr.addNamespace("a",m);
-    List<Object> l = (new Evaluator() {      
-      @Override
-      public String evaluate(String expression, Context context) {
-        return null;
-      }
-    }).parseParams(" 1 , a.b, 'hello!', 'ds,o,u\'za',",vr);
-    assertEquals(1d,l.get(0));
-    assertEquals("B",((Evaluator.VariableWrapper)l.get(1)).resolve());
-    assertEquals("hello!",l.get(2));
-    assertEquals("ds,o,u'za",l.get(3));
-  }
-
-  @Test
-  public void testEscapeSolrQueryFunction() {
-    final VariableResolver resolver = new VariableResolver();    
-    Map<String,Object> m= new HashMap<>();
-    m.put("query","c:t");
-    resolver.setEvaluators(new DataImporter().getEvaluators(Collections.<Map<String,String>>emptyList()));
-    
-    resolver.addNamespace("e",m);
-    String s = resolver
-            .replaceTokens("${dataimporter.functions.escapeQueryChars(e.query)}");
-    org.junit.Assert.assertEquals("c\\:t", s);
-    
-  }
-  
-  private Date twoDaysAgo(Locale l, TimeZone tz) {
-    Calendar calendar = Calendar.getInstance(tz, l);
-    calendar.add(Calendar.DAY_OF_YEAR, -2);
-    return calendar.getTime();
-  }
-  
-  @Test
-  public void testDateFormatEvaluator() {
-    Evaluator dateFormatEval = new DateFormatEvaluator();
-    ContextImpl context = new ContextImpl(null, resolver, null,
-        Context.FULL_DUMP, Collections.<String,Object> emptyMap(), null, null);
-    
-    Locale rootLocale = Locale.ROOT;
-    Locale defaultLocale = Locale.getDefault();
-    TimeZone defaultTz = TimeZone.getDefault();
-    
-    {
-      SimpleDateFormat sdfDate = new SimpleDateFormat("yyyy-MM-dd HH", rootLocale);
-      String sdf = sdfDate.format(twoDaysAgo(rootLocale, defaultTz));
-      String dfe = dateFormatEval.evaluate("'NOW-2DAYS','yyyy-MM-dd HH'", context);
-      assertEquals(sdf,dfe);
-    }
-    {
-      SimpleDateFormat sdfDate = new SimpleDateFormat("yyyy-MM-dd HH", defaultLocale);
-      String sdf = sdfDate.format(twoDaysAgo(defaultLocale, TimeZone.getDefault()));
-      String dfe = dateFormatEval.evaluate(
-          "'NOW-2DAYS','yyyy-MM-dd HH','" + defaultLocale.toLanguageTag() + "'", context);
-      assertEquals(sdf,dfe);
-      for(String tzStr : TimeZone.getAvailableIDs()) {  
-        TimeZone tz = TimeZone.getTimeZone(tzStr);
-        sdfDate.setTimeZone(tz);
-        sdf = sdfDate.format(twoDaysAgo(defaultLocale, tz));
-        dfe = dateFormatEval.evaluate(
-            "'NOW-2DAYS','yyyy-MM-dd HH','" + defaultLocale.toLanguageTag() + "','" + tzStr + "'", context);
-        assertEquals(sdf,dfe);          
-      }
-    }
-   
-    Date d = new Date();    
-    Map<String,Object> map = new HashMap<>();
-    map.put("key", d);
-    resolver.addNamespace("A", map);
-        
-    assertEquals(
-        new SimpleDateFormat("yyyy-MM-dd HH:mm", rootLocale).format(d),
-        dateFormatEval.evaluate("A.key, 'yyyy-MM-dd HH:mm'", context));
-    assertEquals(
-        new SimpleDateFormat("yyyy-MM-dd HH:mm", defaultLocale).format(d),
-        dateFormatEval.evaluate("A.key, 'yyyy-MM-dd HH:mm','" + defaultLocale.toLanguageTag() + "'", context));
-    SimpleDateFormat sdf = new SimpleDateFormat("yyyy-MM-dd HH:mm", defaultLocale);
-    for(String tzStr : TimeZone.getAvailableIDs()) {
-      TimeZone tz = TimeZone.getTimeZone(tzStr);
-      sdf.setTimeZone(tz);
-      assertEquals(
-          sdf.format(d),
-          dateFormatEval.evaluate(
-              "A.key, 'yyyy-MM-dd HH:mm','" + defaultLocale.toLanguageTag() + "', '" + tzStr + "'", context));     
-      
-    }
-    
-    
-  }
-
-  private void runTests(Map<String, String> tests, Evaluator evaluator) {
-    ContextImpl ctx = new ContextImpl(null, resolver, null, Context.FULL_DUMP, Collections.<String, Object>emptyMap(), null, null);    
-    for (Map.Entry<String, String> entry : tests.entrySet()) {
-      Map<String, Object> values = new HashMap<>();
-      values.put("key", entry.getKey());
-      resolver.addNamespace("A", values);
-
-      String expected = entry.getValue();
-      String actual = evaluator.evaluate("A.key", ctx);
-      assertEquals(expected, actual);
-    }
-    
-  }
-}
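The URL tests pin the evaluator's output to standard application/x-www-form-urlencoded encoding over UTF-8, i.e. exactly what java.net.URLEncoder produces:

    import java.net.URLEncoder;
    import java.nio.charset.StandardCharsets;

    public class UrlEncodeDemo {
      public static void main(String[] args) {
        // Same encoding the deleted UrlEvaluator is expected to apply.
        System.out.println(URLEncoder.encode("price:[* TO 200]", StandardCharsets.UTF_8));
        // prints price%3A%5B*+TO+200%5D
      }
    }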
diff --git a/solr/contrib/dataimporthandler/src/test/org/apache/solr/handler/dataimport/TestClobTransformer.java b/solr/contrib/dataimporthandler/src/test/org/apache/solr/handler/dataimport/TestClobTransformer.java
deleted file mode 100644
index 26478de..0000000
--- a/solr/contrib/dataimporthandler/src/test/org/apache/solr/handler/dataimport/TestClobTransformer.java
+++ /dev/null
@@ -1,64 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.solr.handler.dataimport;
-
-import org.junit.Test;
-
-import java.io.StringReader;
-import java.lang.reflect.InvocationHandler;
-import java.lang.reflect.Method;
-import java.lang.reflect.Proxy;
-import java.sql.Clob;
-import java.util.*;
-
-/**
- * Test for ClobTransformer
- *
- *
- * @see org.apache.solr.handler.dataimport.ClobTransformer
- * @since solr 1.4
- */
-@SuppressWarnings({"unchecked"})
-public class TestClobTransformer extends AbstractDataImportHandlerTestCase {
-  @Test
-  public void simple() throws Exception {
-    List<Map<String, String>> flds = new ArrayList<>();
-    Map<String, String> f = new HashMap<>();
-    // <field column="dsc" clob="true" name="description" />
-    f.put(DataImporter.COLUMN, "dsc");
-    f.put(ClobTransformer.CLOB, "true");
-    f.put(DataImporter.NAME, "description");
-    flds.add(f);
-    Context ctx = getContext(null, new VariableResolver(), null, Context.FULL_DUMP, flds, Collections.EMPTY_MAP);
-    Transformer t = new ClobTransformer();
-    Map<String, Object> row = new HashMap<>();
-    @SuppressWarnings({"rawtypes"})
-    Clob clob = (Clob) Proxy.newProxyInstance(this.getClass().getClassLoader(), new Class[]{Clob.class}, new InvocationHandler() {
-      @Override
-      public Object invoke(Object proxy, Method method, Object[] args) throws Throwable {
-        if (method.getName().equals("getCharacterStream")) {
-          return new StringReader("hello!");
-        }
-        return null;
-      }
-    });
-
-    row.put("dsc", clob);
-    t.transformRow(row, ctx);
-    assertEquals("hello!", row.get("dsc"));
-  }
-}
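The test stubs java.sql.Clob with a dynamic proxy that only answers getCharacterStream(). A sketch of the complementary half, draining that stream into a String, which is essentially what ClobTransformer does before replacing the Clob in the row:

    import java.io.Reader;
    import java.io.StringReader;
    import java.lang.reflect.Proxy;
    import java.sql.Clob;

    public class ClobReadDemo {
      // Drain a Clob's character stream into a String.
      static String readClob(Clob clob) throws Exception {
        StringBuilder sb = new StringBuilder();
        try (Reader r = clob.getCharacterStream()) {
          char[] buf = new char[1024];
          for (int n; (n = r.read(buf)) != -1; ) {
            sb.append(buf, 0, n);
          }
        }
        return sb.toString();
      }

      public static void main(String[] args) throws Exception {
        // Same dynamic-proxy trick as the test: only getCharacterStream is answered.
        Clob clob = (Clob) Proxy.newProxyInstance(ClobReadDemo.class.getClassLoader(),
            new Class<?>[] {Clob.class},
            (proxy, method, a) ->
                "getCharacterStream".equals(method.getName()) ? new StringReader("hello!") : null);
        System.out.println(readClob(clob)); // hello!
      }
    }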
diff --git a/solr/contrib/dataimporthandler/src/test/org/apache/solr/handler/dataimport/TestContentStreamDataSource.java b/solr/contrib/dataimporthandler/src/test/org/apache/solr/handler/dataimport/TestContentStreamDataSource.java
deleted file mode 100644
index 34e50e0..0000000
--- a/solr/contrib/dataimporthandler/src/test/org/apache/solr/handler/dataimport/TestContentStreamDataSource.java
+++ /dev/null
@@ -1,196 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.solr.handler.dataimport;
-
-import org.apache.commons.io.FileUtils;
-import org.apache.solr.client.solrj.embedded.JettySolrRunner;
-import org.apache.solr.client.solrj.impl.HttpSolrClient;
-import org.apache.solr.client.solrj.request.DirectXmlRequest;
-import org.apache.solr.client.solrj.response.QueryResponse;
-import org.apache.solr.common.SolrDocument;
-import org.apache.solr.common.SolrDocumentList;
-import org.apache.solr.common.params.ModifiableSolrParams;
-import org.apache.solr.common.params.UpdateParams;
-import org.junit.After;
-import org.junit.Before;
-import org.junit.Test;
-
-import java.io.File;
-import java.nio.file.Files;
-import java.util.List;
-import java.util.Properties;
-
-/**
- * Test for ContentStreamDataSource
- *
- *
- * @since solr 1.4
- */
-public class TestContentStreamDataSource extends AbstractDataImportHandlerTestCase {
-  private static final String CONF_DIR = "dih/solr/collection1/conf/";
-  private static final String ROOT_DIR = "dih/solr/";
-  SolrInstance instance = null;
-  JettySolrRunner jetty;
-
-  @Override
-  @Before
-  public void setUp() throws Exception {
-    super.setUp();
-    instance = new SolrInstance("inst", null);
-    instance.setUp();
-    jetty = createAndStartJetty(instance);
-  }
-  
-  @Override
-  @After
-  public void tearDown() throws Exception {
-    if (null != jetty) {
-      jetty.stop();
-      jetty = null;
-    }
-    super.tearDown();
-  }
-
-  @Test
-  public void testSimple() throws Exception {
-    DirectXmlRequest req = new DirectXmlRequest("/dataimport", xml);
-    ModifiableSolrParams params = new ModifiableSolrParams();
-    params.set("command", "full-import");
-    params.set("clean", "false");
-    req.setParams(params);
-    try (HttpSolrClient solrClient = getHttpSolrClient(buildUrl(jetty.getLocalPort(), "/solr/collection1"))) {
-      solrClient.request(req);
-      ModifiableSolrParams qparams = new ModifiableSolrParams();
-      qparams.add("q", "*:*");
-      QueryResponse qres = solrClient.query(qparams);
-      SolrDocumentList results = qres.getResults();
-      assertEquals(2, results.getNumFound());
-      SolrDocument doc = results.get(0);
-      assertEquals("1", doc.getFieldValue("id"));
-      assertEquals("Hello C1", ((List) doc.getFieldValue("desc")).get(0));
-    }
-  }
-
-  @Test
-  public void testCommitWithin() throws Exception {
-    DirectXmlRequest req = new DirectXmlRequest("/dataimport", xml);
-    ModifiableSolrParams params = params("command", "full-import", 
-        "clean", "false", UpdateParams.COMMIT, "false", 
-        UpdateParams.COMMIT_WITHIN, "1000");
-    req.setParams(params);
-    try (HttpSolrClient solrServer = getHttpSolrClient(buildUrl(jetty.getLocalPort(), "/solr/collection1"))) {
-      solrServer.request(req);
-      Thread.sleep(100);
-      ModifiableSolrParams queryAll = params("q", "*", "df", "desc");
-      QueryResponse qres = solrServer.query(queryAll);
-      SolrDocumentList results = qres.getResults();
-      assertEquals(0, results.getNumFound());
-      Thread.sleep(1000);
-      for (int i = 0; i < 10; i++) {
-        qres = solrServer.query(queryAll);
-        results = qres.getResults();
-        if (2 == results.getNumFound()) {
-          return;
-        }
-        Thread.sleep(500);
-      }
-    }
-    fail("Commit should have occurred but it did not");
-  }
-  
-  private static class SolrInstance {
-    String name;
-    Integer port;
-    File homeDir;
-    File confDir;
-    File dataDir;
-    
-    /**
-     * If port is null, this instance is a master; otherwise it is a slave that assumes the master
-     * is on localhost at the specified port.
-     */
-    public SolrInstance(String name, Integer port) {
-      this.name = name;
-      this.port = port;
-    }
-
-    public String getHomeDir() {
-      return homeDir.toString();
-    }
-
-    public String getSchemaFile() {
-      return CONF_DIR + "dataimport-schema.xml";
-    }
-
-    public String getConfDir() {
-      return confDir.toString();
-    }
-
-    public String getDataDir() {
-      return dataDir.toString();
-    }
-
-    public String getSolrConfigFile() {
-      return CONF_DIR + "contentstream-solrconfig.xml";
-    }
-
-    public String getSolrXmlFile() {
-      return ROOT_DIR + "solr.xml";
-    }
-
-
-    public void setUp() throws Exception {
-      homeDir = createTempDir("inst").toFile();
-      dataDir = new File(homeDir + "/collection1", "data");
-      confDir = new File(homeDir + "/collection1", "conf");
-
-      homeDir.mkdirs();
-      dataDir.mkdirs();
-      confDir.mkdirs();
-
-      FileUtils.copyFile(getFile(getSolrXmlFile()), new File(homeDir, "solr.xml"));
-      File f = new File(confDir, "solrconfig.xml");
-      FileUtils.copyFile(getFile(getSolrConfigFile()), f);
-      f = new File(confDir, "schema.xml");
-
-      FileUtils.copyFile(getFile(getSchemaFile()), f);
-      f = new File(confDir, "data-config.xml");
-      FileUtils.copyFile(getFile(CONF_DIR + "dataconfig-contentstream.xml"), f);
-
-      Files.createFile(homeDir.toPath().resolve("collection1/core.properties"));
-    }
-
-  }
-
-  private JettySolrRunner createAndStartJetty(SolrInstance instance) throws Exception {
-    Properties nodeProperties = new Properties();
-    nodeProperties.setProperty("solr.data.dir", instance.getDataDir());
-    JettySolrRunner jetty = new JettySolrRunner(instance.getHomeDir(), nodeProperties, buildJettyConfig("/solr"));
-    jetty.start();
-    return jetty;
-  }
-
-  static String xml = "<root>\n"
-          + "<b>\n"
-          + "  <id>1</id>\n"
-          + "  <c>Hello C1</c>\n"
-          + "</b>\n"
-          + "<b>\n"
-          + "  <id>2</id>\n"
-          + "  <c>Hello C2</c>\n"
-          + "</b>\n" + "</root>";
-}
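testCommitWithin avoids a single brittle sleep by polling the index until the expected documents appear. The hand-rolled loop generalizes to a small helper; a sketch:

    import java.util.function.BooleanSupplier;

    public class PollUntil {
      // Poll a condition until it holds or the deadline passes; the commitWithin
      // test above hand-rolls the same loop around solrServer.query(...).
      static boolean pollUntil(BooleanSupplier condition, long timeoutMs, long intervalMs)
          throws InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMs;
        while (System.currentTimeMillis() < deadline) {
          if (condition.getAsBoolean()) {
            return true;
          }
          Thread.sleep(intervalMs);
        }
        return condition.getAsBoolean(); // one last check at the deadline
      }

      public static void main(String[] args) throws InterruptedException {
        long start = System.currentTimeMillis();
        boolean ok = pollUntil(() -> System.currentTimeMillis() - start > 300, 2000, 50);
        System.out.println(ok); // true, after roughly 300ms of polling
      }
    }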
diff --git a/solr/contrib/dataimporthandler/src/test/org/apache/solr/handler/dataimport/TestContextImpl.java b/solr/contrib/dataimporthandler/src/test/org/apache/solr/handler/dataimport/TestContextImpl.java
deleted file mode 100644
index 0747e98..0000000
--- a/solr/contrib/dataimporthandler/src/test/org/apache/solr/handler/dataimport/TestContextImpl.java
+++ /dev/null
@@ -1,69 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.solr.handler.dataimport;
-
-import java.util.HashMap;
-
-import org.junit.Test;
-
-public class TestContextImpl extends AbstractDataImportHandlerTestCase {
-  
-  @Test
-  public void testEntityScope() {
-    ContextImpl ctx = new ContextImpl(null, new VariableResolver(), null, "something", new HashMap<String,Object>(), null, null);
-    String lala = "lala";
-    ctx.setSessionAttribute("huhu", lala, Context.SCOPE_ENTITY);
-    Object got = ctx.getSessionAttribute("huhu", Context.SCOPE_ENTITY);
-    
-    assertEquals(lala, got);
-    
-  }
-  @Test
-  public void testCoreScope() {
-    DataImporter di = new DataImporter();
-    di.loadAndInit("<dataConfig><document /></dataConfig>");
-    DocBuilder db = new DocBuilder(di, new SolrWriter(null, null),new SimplePropertiesWriter(), new RequestInfo(null, new HashMap<String,Object>(), null));
-    ContextImpl ctx = new ContextImpl(null, new VariableResolver(), null, "something", new HashMap<String,Object>(), null, db);
-    String lala = "lala";
-    ctx.setSessionAttribute("huhu", lala, Context.SCOPE_SOLR_CORE);
-    Object got = ctx.getSessionAttribute("huhu", Context.SCOPE_SOLR_CORE);
-    assertEquals(lala, got);
-    
-  }
-  @Test
-  public void testDocumentScope() {
-    ContextImpl ctx = new ContextImpl(null, new VariableResolver(), null, "something", new HashMap<String,Object>(), null, null);
-    ctx.setDoc(new DocBuilder.DocWrapper());
-    String lala = "lala";
-    ctx.setSessionAttribute("huhu", lala, Context.SCOPE_DOC);
-    Object got = ctx.getSessionAttribute("huhu", Context.SCOPE_DOC);
-    
-    assertEquals(lala, got);
-    
-  }
-  @Test
-  public void testGlobalScope() {
-    ContextImpl ctx = new ContextImpl(null, new VariableResolver(), null, "something", new HashMap<String,Object>(), null, null);
-    String lala = "lala";
-    ctx.setSessionAttribute("huhu", lala, Context.SCOPE_GLOBAL);
-    Object got = ctx.getSessionAttribute("huhu", Context.SCOPE_GLOBAL);
-    
-    assertEquals(lala, got);
-    
-  }
-  
-}
diff --git a/solr/contrib/dataimporthandler/src/test/org/apache/solr/handler/dataimport/TestDataConfig.java b/solr/contrib/dataimporthandler/src/test/org/apache/solr/handler/dataimport/TestDataConfig.java
deleted file mode 100644
index c502893..0000000
--- a/solr/contrib/dataimporthandler/src/test/org/apache/solr/handler/dataimport/TestDataConfig.java
+++ /dev/null
@@ -1,77 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.solr.handler.dataimport;
-
-import org.apache.solr.handler.dataimport.config.DIHConfiguration;
-import org.junit.BeforeClass;
-import org.junit.Test;
-import org.w3c.dom.Document;
-import org.xml.sax.InputSource;
-
-import javax.xml.parsers.DocumentBuilderFactory;
-import java.io.StringReader;
-import java.util.ArrayList;
-import java.util.List;
-
-/**
- * <p>
- * Test for DataConfig
- * </p>
- *
- *
- * @since solr 1.3
- */
-public class TestDataConfig extends AbstractDataImportHandlerTestCase {
-  @BeforeClass
-  public static void beforeClass() throws Exception {
-    initCore("dataimport-nodatasource-solrconfig.xml", "dataimport-schema.xml");
-  }
-
-  @Test
-  @SuppressWarnings("unchecked")
-  public void testDataConfigWithDataSource() throws Exception {
-    @SuppressWarnings({"rawtypes"})
-    List rows = new ArrayList();
-    rows.add(createMap("id", "1", "desc", "one"));
-    MockDataSource.setIterator("select * from x", rows.iterator());
-
-    runFullImport(loadDataConfig("data-config-with-datasource.xml"));
-
-    assertQ(req("id:1"), "//*[@numFound='1']");
-  }
-
-  @Test
-  public void testBasic() throws Exception {
-    javax.xml.parsers.DocumentBuilder builder = DocumentBuilderFactory
-            .newInstance().newDocumentBuilder();
-    Document doc = builder.parse(new InputSource(new StringReader(xml)));
-    DataImporter di = new DataImporter();
-    DIHConfiguration dc = di.readFromXml(doc);
-    assertEquals("atrimlisting", dc.getEntities().get(0).getName());
-  }
-
-  private static final String xml = "<dataConfig>\n"
-          + "\t<document name=\"autos\" >\n"
-          + "\t\t<entity name=\"atrimlisting\" pk=\"acode\"\n"
-          + "\t\t\tquery=\"select acode,make,model,year,msrp,category,image,izmo_image_url,price_range_low,price_range_high,invoice_range_low,invoice_range_high from atrimlisting\"\n"
-          + "\t\t\tdeltaQuery=\"select acode from atrimlisting where last_modified > '${indexer.last_index_time}'\">\n"
-          +
-
-          "\t\t</entity>\n" +
-
-          "\t</document>\n" + "</dataConfig>";
-}
diff --git a/solr/contrib/dataimporthandler/src/test/org/apache/solr/handler/dataimport/TestDateFormatTransformer.java b/solr/contrib/dataimporthandler/src/test/org/apache/solr/handler/dataimport/TestDateFormatTransformer.java
deleted file mode 100644
index a1e85d7..0000000
--- a/solr/contrib/dataimporthandler/src/test/org/apache/solr/handler/dataimport/TestDateFormatTransformer.java
+++ /dev/null
@@ -1,89 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.solr.handler.dataimport;
-
-import org.junit.Test;
-
-import java.text.SimpleDateFormat;
-import java.util.*;
-
-/**
- * <p>
- * Test for DateFormatTransformer
- * </p>
- *
- *
- * @since solr 1.3
- */
-public class TestDateFormatTransformer extends AbstractDataImportHandlerTestCase {
-
-  @Test
-  @SuppressWarnings("unchecked")
-  public void testTransformRow_SingleRow() throws Exception {
-    List<Map<String, String>> fields = new ArrayList<>();
-    fields.add(createMap(DataImporter.COLUMN, "lastModified"));
-    fields.add(createMap(DataImporter.COLUMN,
-            "dateAdded", RegexTransformer.SRC_COL_NAME, "lastModified",
-            DateFormatTransformer.DATE_TIME_FMT, "${xyz.myDateFormat}"));
-
-    SimpleDateFormat format = new SimpleDateFormat("MM/dd/yyyy", Locale.ROOT);
-    Date now = format.parse(format.format(new Date()));
-
-    Map<String,Object> row = createMap("lastModified", format.format(now));
-
-    VariableResolver resolver = new VariableResolver();
-    resolver.addNamespace("e", row);
-    resolver.addNamespace("xyz", createMap("myDateFormat", "MM/dd/yyyy"));
-
-    Context context = getContext(null, resolver,
-            null, Context.FULL_DUMP, fields, null);
-    new DateFormatTransformer().transformRow(row, context);
-    assertEquals(now, row.get("dateAdded"));
-  }
-
-  @Test
-  @SuppressWarnings("unchecked")
-  public void testTransformRow_MultipleRows() throws Exception {
-    List<Map<String, String>> fields = new ArrayList<>();
-    fields.add(createMap(DataImporter.COLUMN, "lastModified"));
-    fields.add(createMap(DataImporter.COLUMN,
-            "dateAdded", RegexTransformer.SRC_COL_NAME, "lastModified",
-            DateFormatTransformer.DATE_TIME_FMT, "MM/dd/yyyy hh:mm:ss.SSS"));
-
-    SimpleDateFormat format = new SimpleDateFormat("MM/dd/yyyy hh:mm:ss.SSS", Locale.ROOT);
-    Date now1 = format.parse(format.format(new Date()));
-    Date now2 = format.parse(format.format(new Date()));
-
-    Map<String,Object> row = new HashMap<>();
-    List<String> list = new ArrayList<>();
-    list.add(format.format(now1));
-    list.add(format.format(now2));
-    row.put("lastModified", list);
-
-    VariableResolver resolver = new VariableResolver();
-    resolver.addNamespace("e", row);
-
-    Context context = getContext(null, resolver,
-            null, Context.FULL_DUMP, fields, null);
-    new DateFormatTransformer().transformRow(row, context);
-    List<Object> output = new ArrayList<>();
-    output.add(now1);
-    output.add(now2);
-    assertEquals(output, row.get("dateAdded"));
-  }
-
-}
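Both tests normalize "now" by round-tripping it through the target pattern, so the expected Date carries exactly the precision the pattern can express and compares equal to whatever the transformer parses back:

    import java.text.ParseException;
    import java.text.SimpleDateFormat;
    import java.util.Date;
    import java.util.Locale;

    public class FormatTruncationDemo {
      public static void main(String[] args) throws ParseException {
        SimpleDateFormat format = new SimpleDateFormat("MM/dd/yyyy", Locale.ROOT);
        Date now = new Date();
        // parse(format(now)) drops everything the pattern cannot express, so the
        // expected Date matches what a later parse of the formatted string yields.
        Date truncated = format.parse(format.format(now));
        System.out.println(now);       // full precision
        System.out.println(truncated); // midnight of the same day
      }
    }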
diff --git a/solr/contrib/dataimporthandler/src/test/org/apache/solr/handler/dataimport/TestDocBuilder.java b/solr/contrib/dataimporthandler/src/test/org/apache/solr/handler/dataimport/TestDocBuilder.java
deleted file mode 100644
index 6ee2432..0000000
--- a/solr/contrib/dataimporthandler/src/test/org/apache/solr/handler/dataimport/TestDocBuilder.java
+++ /dev/null
@@ -1,341 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.solr.handler.dataimport;
-
-import org.apache.solr.common.SolrInputDocument;
-import org.apache.solr.handler.dataimport.config.DIHConfiguration;
-import org.apache.solr.handler.dataimport.config.Entity;
-
-import org.junit.After;
-import org.junit.Test;
-
-import java.util.*;
-
-/**
- * <p>
- * Test for DocBuilder
- * </p>
- *
- *
- * @since solr 1.3
- */
-// See https://issues.apache.org/jira/browse/SOLR-12028: tests occasionally cannot remove files on Windows machines
-public class TestDocBuilder extends AbstractDataImportHandlerTestCase {
-
-  @Override
-  @After
-  public void tearDown() throws Exception {
-    MockDataSource.clearCache();
-    MockStringDataSource.clearCache();
-    super.tearDown();
-  }
-
-  @Test
-  public void loadClass() throws Exception {
-    @SuppressWarnings("unchecked")
-    Class<Transformer> clz = DocBuilder.loadClass("RegexTransformer", null);
-    assertNotNull(clz);
-  }
-
-  @Test
-  @SuppressWarnings({"unchecked"})
-  public void singleEntityNoRows() {
-    DataImporter di = new DataImporter();
-    di.loadAndInit(dc_singleEntity);
-    DIHConfiguration cfg = di.getConfig();
-    Entity ent = cfg.getEntities().get(0);
-    MockDataSource.setIterator("select * from x", new ArrayList<Map<String, Object>>().iterator());
-    RequestInfo rp = new RequestInfo(null, createMap("command", "full-import"), null);
-    SolrWriterImpl swi = new SolrWriterImpl();
-    di.runCmd(rp, swi);
-    assertEquals(Boolean.TRUE, swi.deleteAllCalled);
-    assertEquals(Boolean.TRUE, swi.commitCalled);
-    assertEquals(Boolean.TRUE, swi.finishCalled);
-    assertEquals(0, swi.docs.size());
-    assertEquals(1, di.getDocBuilder().importStatistics.queryCount.get());
-    assertEquals(0, di.getDocBuilder().importStatistics.docCount.get());
-    assertEquals(0, di.getDocBuilder().importStatistics.rowsCount.get());
-  }
-
-  @Test
-  @SuppressWarnings({"unchecked"})
-  public void testDeltaImportNoRows_MustNotCommit() {
-    DataImporter di = new DataImporter();
-    di.loadAndInit(dc_deltaConfig);
-    redirectTempProperties(di);
-
-    DIHConfiguration cfg = di.getConfig();
-    Entity ent = cfg.getEntities().get(0);
-    MockDataSource.setIterator("select * from x", new ArrayList<Map<String, Object>>().iterator());
-    MockDataSource.setIterator("select id from x", new ArrayList<Map<String, Object>>().iterator());
-    RequestInfo rp = new RequestInfo(null, createMap("command", "delta-import"), null);
-    SolrWriterImpl swi = new SolrWriterImpl();
-    di.runCmd(rp, swi);
-    assertEquals(Boolean.FALSE, swi.deleteAllCalled);
-    assertEquals(Boolean.FALSE, swi.commitCalled);
-    assertEquals(Boolean.TRUE, swi.finishCalled);
-    assertEquals(0, swi.docs.size());
-    assertEquals(1, di.getDocBuilder().importStatistics.queryCount.get());
-    assertEquals(0, di.getDocBuilder().importStatistics.docCount.get());
-    assertEquals(0, di.getDocBuilder().importStatistics.rowsCount.get());
-  }
-
-  @Test
-  @SuppressWarnings({"unchecked"})
-  public void singleEntityOneRow() {
-    DataImporter di = new DataImporter();
-    di.loadAndInit(dc_singleEntity);
-    DIHConfiguration cfg = di.getConfig();
-    Entity ent = cfg.getEntities().get(0);
-    List<Map<String, Object>> l = new ArrayList<>();
-    l.add(createMap("id", 1, "desc", "one"));
-    MockDataSource.setIterator("select * from x", l.iterator());
-    RequestInfo rp = new RequestInfo(null, createMap("command", "full-import"), null);
-    SolrWriterImpl swi = new SolrWriterImpl();
-    di.runCmd(rp, swi);
-    assertEquals(Boolean.TRUE, swi.deleteAllCalled);
-    assertEquals(Boolean.TRUE, swi.commitCalled);
-    assertEquals(Boolean.TRUE, swi.finishCalled);
-    assertEquals(1, swi.docs.size());
-    assertEquals(1, di.getDocBuilder().importStatistics.queryCount.get());
-    assertEquals(1, di.getDocBuilder().importStatistics.docCount.get());
-    assertEquals(1, di.getDocBuilder().importStatistics.rowsCount.get());
-
-    for (int i = 0; i < l.size(); i++) {
-      Map<String, Object> map = l.get(i);
-      SolrInputDocument doc = swi.docs.get(i);
-      for (Map.Entry<String, Object> entry : map.entrySet()) {
-        assertEquals(entry.getValue(), doc.getFieldValue(entry.getKey()));
-      }
-    }
-  }
-
-  @Test
-  @SuppressWarnings({"unchecked"})
-  public void testImportCommand() {
-    DataImporter di = new DataImporter();
-    di.loadAndInit(dc_singleEntity);
-    DIHConfiguration cfg = di.getConfig();
-    Entity ent = cfg.getEntities().get(0);
-    List<Map<String, Object>> l = new ArrayList<>();
-    l.add(createMap("id", 1, "desc", "one"));
-    MockDataSource.setIterator("select * from x", l.iterator());
-    RequestInfo rp = new RequestInfo(null, createMap("command", "import"), null);
-    SolrWriterImpl swi = new SolrWriterImpl();
-    di.runCmd(rp, swi);
-    assertEquals(Boolean.FALSE, swi.deleteAllCalled);
-    assertEquals(Boolean.TRUE, swi.commitCalled);
-    assertEquals(Boolean.TRUE, swi.finishCalled);
-    assertEquals(1, swi.docs.size());
-    assertEquals(1, di.getDocBuilder().importStatistics.queryCount.get());
-    assertEquals(1, di.getDocBuilder().importStatistics.docCount.get());
-    assertEquals(1, di.getDocBuilder().importStatistics.rowsCount.get());
-
-    for (int i = 0; i < l.size(); i++) {
-      Map<String, Object> map = l.get(i);
-      SolrInputDocument doc = swi.docs.get(i);
-      for (Map.Entry<String, Object> entry : map.entrySet()) {
-        assertEquals(entry.getValue(), doc.getFieldValue(entry.getKey()));
-      }
-    }
-  }
-
-  @Test
-  @SuppressWarnings({"unchecked"})
-  public void singleEntityMultipleRows() {
-    DataImporter di = new DataImporter();
-    di.loadAndInit(dc_singleEntity);
-    DIHConfiguration cfg = di.getConfig();
-    Entity ent = cfg.getEntities().get(0);
-    RequestInfo rp = new RequestInfo(null, createMap("command", "full-import"), null);
-    List<Map<String, Object>> l = new ArrayList<>();
-    l.add(createMap("id", 1, "desc", "one"));
-    l.add(createMap("id", 2, "desc", "two"));
-    l.add(createMap("id", 3, "desc", "three"));
-
-    MockDataSource.setIterator("select * from x", l.iterator());
-    SolrWriterImpl swi = new SolrWriterImpl();
-    di.runCmd(rp, swi);
-    assertEquals(Boolean.TRUE, swi.deleteAllCalled);
-    assertEquals(Boolean.TRUE, swi.commitCalled);
-    assertEquals(Boolean.TRUE, swi.finishCalled);
-    assertEquals(3, swi.docs.size());
-    for (int i = 0; i < l.size(); i++) {
-      Map<String, Object> map = l.get(i);
-      SolrInputDocument doc = swi.docs.get(i);
-      for (Map.Entry<String, Object> entry : map.entrySet()) {
-        assertEquals(entry.getValue(), doc.getFieldValue(entry.getKey()));
-      }
-      assertEquals(map.get("desc"), doc.getFieldValue("desc_s"));
-    }
-    assertEquals(1, di.getDocBuilder().importStatistics.queryCount.get());
-    assertEquals(3, di.getDocBuilder().importStatistics.docCount.get());
-    assertEquals(3, di.getDocBuilder().importStatistics.rowsCount.get());
-  }
-
-  @Test
-  @SuppressWarnings({"unchecked"})
-  public void templateXPath() {
-    DataImporter di = new DataImporter();
-    di.loadAndInit(dc_variableXpath);
-    DIHConfiguration cfg = di.getConfig();
-
-    RequestInfo rp = new RequestInfo(null, createMap("command", "full-import"), null);
-    List<Map<String, Object>> l = new ArrayList<>();
-    l.add(createMap("id", 1, "name", "iphone", "manufacturer", "Apple"));
-    l.add(createMap("id", 2, "name", "ipad", "manufacturer", "Apple"));
-    l.add(createMap("id", 3, "name", "pixel", "manufacturer", "Google"));
-
-    MockDataSource.setIterator("select * from x", l.iterator());
-
-    List<Map<String,Object>> nestedData = new ArrayList<>();
-    nestedData.add(createMap("founded", "Cupertino, California, U.S", "year", "1976", "year2", "1976"));
-    nestedData.add(createMap("founded", "Cupertino, California, U.S", "year", "1976", "year2", "1976"));
-    nestedData.add(createMap("founded", "Menlo Park, California, U.S", "year", "1998", "year2", "1998"));
-
-    MockStringDataSource.setData("companies.xml", xml_attrVariableXpath);
-    MockStringDataSource.setData("companies2.xml", xml_variableXpath);
-    MockStringDataSource.setData("companies3.xml", xml_variableForEach);
-
-    SolrWriterImpl swi = new SolrWriterImpl();
-    di.runCmd(rp, swi);
-    assertEquals(Boolean.TRUE, swi.deleteAllCalled);
-    assertEquals(Boolean.TRUE, swi.commitCalled);
-    assertEquals(Boolean.TRUE, swi.finishCalled);
-    assertEquals(3, swi.docs.size());
-    for (int i = 0; i < l.size(); i++) {
-      SolrInputDocument doc = swi.docs.get(i);
-
-      Map<String, Object> map = l.get(i);
-      for (Map.Entry<String, Object> entry : map.entrySet()) {
-        assertEquals(entry.getValue(), doc.getFieldValue(entry.getKey()));
-      }
-
-      map = nestedData.get(i);
-      for (Map.Entry<String, Object> entry : map.entrySet()) {
-        assertEquals(entry.getValue(), doc.getFieldValue(entry.getKey()));
-      }
-    }
-    assertEquals(1, di.getDocBuilder().importStatistics.queryCount.get());
-    assertEquals(3, di.getDocBuilder().importStatistics.docCount.get());
-  }
-
-  static class SolrWriterImpl extends SolrWriter {
-    List<SolrInputDocument> docs = new ArrayList<>();
-
-    Boolean deleteAllCalled = Boolean.FALSE;
-
-    Boolean commitCalled = Boolean.FALSE;
-
-    Boolean finishCalled = Boolean.FALSE;
-
-    public SolrWriterImpl() {
-      super(null, null);
-    }
-
-    @Override
-    public boolean upload(SolrInputDocument doc) {
-      return docs.add(doc);
-    }
-
-    @Override
-    public void doDeleteAll() {
-      deleteAllCalled = Boolean.TRUE;
-    }
-
-    @Override
-    public void commit(boolean b) {
-      commitCalled = Boolean.TRUE;
-    }
-    
-    @Override
-    public void close() {
-      finishCalled = Boolean.TRUE;
-    }
-  }
-
-  public static final String dc_singleEntity = "<dataConfig>\n"
-      + "<dataSource  type=\"MockDataSource\"/>\n"
-      + "    <document name=\"X\" >\n"
-      + "        <entity name=\"x\" query=\"select * from x\">\n"
-      + "          <field column=\"id\"/>\n"
-      + "          <field column=\"desc\"/>\n"
-      + "          <field column=\"desc\" name=\"desc_s\" />" + "        </entity>\n"
-      + "    </document>\n" + "</dataConfig>";
-
-  public static final String dc_deltaConfig = "<dataConfig>\n"
-      + "<dataSource  type=\"MockDataSource\"/>\n"
-      + "    <document name=\"X\" >\n"
-      + "        <entity name=\"x\" query=\"select * from x\" deltaQuery=\"select id from x\">\n"
-      + "          <field column=\"id\"/>\n"
-      + "          <field column=\"desc\"/>\n"
-      + "          <field column=\"desc\" name=\"desc_s\" />" + "        </entity>\n"
-      + "    </document>\n" + "</dataConfig>";
-
-  public static final String dc_variableXpath = "<dataConfig>\n"
-      + "<dataSource type=\"MockDataSource\"/>\n"
-      + "<dataSource name=\"xml\" type=\"MockStringDataSource\"/>\n"
-      + "    <document name=\"X\" >\n"
-      + "        <entity name=\"x\" query=\"select * from x\">\n"
-      + "          <field column=\"id\"/>\n"
-      + "          <field column=\"name\"/>\n"
-      + "          <field column=\"manufacturer\"/>"
-      + "          <entity name=\"c1\" url=\"companies.xml\" dataSource=\"xml\" forEach=\"/companies/company\" processor=\"XPathEntityProcessor\">"
-      + "            <field column=\"year\" xpath=\"/companies/company/year[@name='p_${x.manufacturer}_s']\" />"
-      + "          </entity>"
-      + "          <entity name=\"c2\" url=\"companies2.xml\" dataSource=\"xml\" forEach=\"/companies/company\" processor=\"XPathEntityProcessor\">"
-      + "            <field column=\"founded\" xpath=\"/companies/company/p_${x.manufacturer}_s/founded\" />"
-      + "          </entity>"
-      + "          <entity name=\"c3\" url=\"companies3.xml\" dataSource=\"xml\" forEach=\"/companies/${x.manufacturer}\" processor=\"XPathEntityProcessor\">"
-      + "            <field column=\"year2\" xpath=\"/companies/${x.manufacturer}/year\" />"
-      + "          </entity>"
-      + "        </entity>\n"
-      + "    </document>\n" + "</dataConfig>";
-
-
-  public static final String xml_variableForEach = "<companies>\n" +
-      "\t<Apple>\n" +
-      "\t\t<year>1976</year>\n" +
-      "\t</Apple>\n" +
-      "\t<Google>\n" +
-      "\t\t<year>1998</year>\n" +
-      "\t</Google>\n" +
-      "</companies>";
-
-  public static final String xml_variableXpath = "<companies>\n" +
-      "\t<company>\n" +
-      "\t\t<p_Apple_s>\n" +
-      "\t\t\t<founded>Cupertino, California, U.S</founded>\n" +
-      "\t\t</p_Apple_s>\t\t\n" +
-      "\t</company>\n" +
-      "\t<company>\n" +
-      "\t\t<p_Google_s>\n" +
-      "\t\t\t<founded>Menlo Park, California, U.S</founded>\n" +
-      "\t\t</p_Google_s>\n" +
-      "\t</company>\n" +
-      "</companies>";
-
-  public static final String xml_attrVariableXpath = "<companies>\n" +
-      "\t<company>\n" +
-      "\t\t<year name='p_Apple_s'>1976</year>\n" +
-      "\t</company>\n" +
-      "\t<company>\n" +
-      "\t\t<year name='p_Google_s'>1998</year>\t\t\n" +
-      "\t</company>\n" +
-      "</companies>";
-
-}
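The variable-XPath config above works because ${x.manufacturer} is substituted into the XPath (and even into forEach) before evaluation. A dependency-free sketch of that token replacement over a flat map, standing in for VariableResolver.replaceTokens:

    import java.util.Map;
    import java.util.regex.Matcher;
    import java.util.regex.Pattern;

    public class TokenReplaceDemo {
      private static final Pattern TOKEN = Pattern.compile("\\$\\{([^}]+)}");

      // Minimal stand-in for VariableResolver.replaceTokens over a flat map.
      static String replaceTokens(String template, Map<String, String> vars) {
        Matcher m = TOKEN.matcher(template);
        StringBuilder sb = new StringBuilder();
        while (m.find()) {
          m.appendReplacement(sb, Matcher.quoteReplacement(vars.getOrDefault(m.group(1), "")));
        }
        m.appendTail(sb);
        return sb.toString();
      }

      public static void main(String[] args) {
        String xpath = "/companies/company/year[@name='p_${x.manufacturer}_s']";
        System.out.println(replaceTokens(xpath, Map.of("x.manufacturer", "Apple")));
        // /companies/company/year[@name='p_Apple_s']
      }
    }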
diff --git a/solr/contrib/dataimporthandler/src/test/org/apache/solr/handler/dataimport/TestDocBuilder2.java b/solr/contrib/dataimporthandler/src/test/org/apache/solr/handler/dataimport/TestDocBuilder2.java
deleted file mode 100644
index 2941f58..0000000
--- a/solr/contrib/dataimporthandler/src/test/org/apache/solr/handler/dataimport/TestDocBuilder2.java
+++ /dev/null
@@ -1,445 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.solr.handler.dataimport;
-
-import java.io.File;
-import java.util.ArrayList;
-import java.util.List;
-import java.util.Map;
-
-import org.apache.solr.request.LocalSolrQueryRequest;
-import org.junit.BeforeClass;
-import org.junit.Ignore;
-import org.junit.Test;
-
-import java.nio.charset.StandardCharsets;
-
-/**
- * <p>
- * Test for DocBuilder using the test harness
- * </p>
- *
- *
- * @since solr 1.3
- */
-public class TestDocBuilder2 extends AbstractDataImportHandlerTestCase {
-
-  @BeforeClass
-  public static void beforeClass() throws Exception {
-    initCore("dataimport-solrconfig.xml", "dataimport-schema.xml");
-  }
-
-  @Test
-  @SuppressWarnings("unchecked")
-  public void testSingleEntity() throws Exception {
-    @SuppressWarnings({"rawtypes"})
-    List rows = new ArrayList();
-    rows.add(createMap("id", "1", "desc", "one"));
-    MockDataSource.setIterator("select * from x", rows.iterator());
-
-    runFullImport(loadDataConfig("single-entity-data-config.xml"));
-
-    assertQ(req("id:1"), "//*[@numFound='1']");
-    
-    assertTrue("Update request processor processAdd was not called", TestUpdateRequestProcessor.processAddCalled);
-    assertTrue("Update request processor processCommit was not callled", TestUpdateRequestProcessor.processCommitCalled);
-    assertTrue("Update request processor finish was not called", TestUpdateRequestProcessor.finishCalled);
-  }
-
-  @Test
-  @SuppressWarnings("unchecked")
-  public void testSingleEntity_CaseInsensitive() throws Exception {
-    @SuppressWarnings({"rawtypes"})
-    List rows = new ArrayList();
-    rows.add(createMap("id", "1", "desC", "one"));
-    MockDataSource.setIterator("select * from x", rows.iterator());
-
-    runFullImport(DATA_CONFIG_WITH_CASE_INSENSITIVE_FIELDS);
-
-    assertQ(req("id:1"), "//*[@numFound='1']");
-    assertTrue("Start event listener was not called", StartEventListener.executed);
-    assertTrue("End event listener was not called", EndEventListener.executed);
-    assertTrue("Update request processor processAdd was not called", TestUpdateRequestProcessor.processAddCalled);
-    assertTrue("Update request processor finish was not called", TestUpdateRequestProcessor.finishCalled);
-  }
-
-  @Test
-  @SuppressWarnings({"unchecked"})
-  public void testErrorHandler() throws Exception {
-    @SuppressWarnings({"rawtypes"})
-    List rows = new ArrayList();
-    rows.add(createMap("id", "1", "FORCE_ERROR", "true"));
-    MockDataSource.setIterator("select * from x", rows.iterator());
-
-    runFullImport(DATA_CONFIG_WITH_ERROR_HANDLER);
-
-    assertTrue("Error event listener was not called", ErrorEventListener.executed);
-    assertTrue(ErrorEventListener.lastException.getMessage().contains("ForcedException"));
-  }
-
-  @Test
-  @SuppressWarnings("unchecked")
-  public void testDynamicFields() throws Exception {
-    @SuppressWarnings({"rawtypes"})
-    List rows = new ArrayList();
-    rows.add(createMap("id", "1", "desc", "one"));
-    MockDataSource.setIterator("select * from x", rows.iterator());
-
-    runFullImport(DATA_CONFIG_WITH_DYNAMIC_TRANSFORMER);
-
-    assertQ(req("id:1"), "//*[@numFound='1']");
-    assertQ(req("dynamic_s:test"), "//*[@numFound='1']");
-  }
-
-  @Test
-  @SuppressWarnings("unchecked")
-  public void testRequestParamsAsVariable() throws Exception {
-    @SuppressWarnings({"rawtypes"})
-    List rows = new ArrayList();
-    rows.add(createMap("id", "101", "desc", "ApacheSolr"));
-    MockDataSource.setIterator("select * from books where category='search'", rows.iterator());
-
-    LocalSolrQueryRequest request = lrf.makeRequest("command", "full-import",
-            "debug", "on", "clean", "true", "commit", "true",
-            "category", "search",
-            "dataConfig", REQUEST_PARAM_AS_VARIABLE);
-    h.query("/dataimport", request);
-    assertQ(req("desc:ApacheSolr"), "//*[@numFound='1']");
-  }
-
-  @Test
-  @SuppressWarnings("unchecked")
-  public void testDynamicFieldNames() throws Exception {
-    @SuppressWarnings({"rawtypes"})
-    List rows = new ArrayList();
-    rows.add(createMap("mypk", "101", "text", "ApacheSolr"));
-    MockDataSource.setIterator("select * from x", rows.iterator());
-
-    LocalSolrQueryRequest request = lrf.makeRequest("command", "full-import",
-        "debug", "on", "clean", "true", "commit", "true",
-        "dataConfig", DATA_CONFIG_WITH_DYNAMIC_FIELD_NAMES);
-    h.query("/dataimport", request);
-    assertQ(req("id:101"), "//*[@numFound='1']", "//*[@name='101_s']");
-  }
-
-  @Test
-  @SuppressWarnings("unchecked")
-  public void testRequestParamsAsFieldName() throws Exception {
-    @SuppressWarnings({"rawtypes"})
-    List rows = new ArrayList();
-    rows.add(createMap("mypk", "101", "text", "ApacheSolr"));
-    MockDataSource.setIterator("select * from x", rows.iterator());
-
-    LocalSolrQueryRequest request = lrf.makeRequest("command", "full-import",
-            "debug", "on", "clean", "true", "commit", "true",
-            "mypk", "id", "text", "desc",
-            "dataConfig", DATA_CONFIG_WITH_TEMPLATIZED_FIELD_NAMES);
-    h.query("/dataimport", request);
-    assertQ(req("id:101"), "//*[@numFound='1']");
-  }
-
-  @Test
-  @SuppressWarnings("unchecked")
-  public void testContext() throws Exception {
-    @SuppressWarnings({"rawtypes"})
-    List rows = new ArrayList();
-    rows.add(createMap("id", "1", "desc", "one"));
-    MockDataSource.setIterator("select * from x", rows.iterator());
-
-    runFullImport(loadDataConfig("data-config-with-transformer.xml"));
-  }
-
-  @Test
-  @SuppressWarnings("unchecked")
-  public void testSkipDoc() throws Exception {
-    @SuppressWarnings({"rawtypes"})
-    List rows = new ArrayList();
-    rows.add(createMap("id", "1", "desc", "one"));
-    rows.add(createMap("id", "2", "desc", "two", DocBuilder.SKIP_DOC, "true"));
-    MockDataSource.setIterator("select * from x", rows.iterator());
-
-    runFullImport(DATA_CONFIG_WITH_DYNAMIC_TRANSFORMER);
-
-    assertQ(req("id:1"), "//*[@numFound='1']");
-    assertQ(req("id:2"), "//*[@numFound='0']");
-  }
-
-  @Test
-  @SuppressWarnings({"unchecked", "rawtypes"})
-  public void testSkipRow() throws Exception {
-    List rows = new ArrayList();
-    rows.add(createMap("id", "1", "desc", "one"));
-    rows.add(createMap("id", "2", "desc", "two", DocBuilder.SKIP_ROW, "true"));
-    MockDataSource.setIterator("select * from x", rows.iterator());
-
-    runFullImport(DATA_CONFIG_WITH_DYNAMIC_TRANSFORMER);
-
-    assertQ(req("id:1"), "//*[@numFound='1']");
-    assertQ(req("id:2"), "//*[@numFound='0']");
-
-    MockDataSource.clearCache();
-
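-    // With nested entities, $skipRow drops only the offending child row; the parent document is still indexed.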
-    rows = new ArrayList();
-    rows.add(createMap("id", "3", "desc", "one"));
-    rows.add(createMap("id", "4", "desc", "two"));
-    MockDataSource.setIterator("select * from x", rows.iterator());
-
-    rows = new ArrayList();
-    rows.add(createMap("name_s", "abcd"));
-    MockDataSource.setIterator("3", rows.iterator());
-
-    rows = new ArrayList();
-    rows.add(createMap("name_s", "xyz", DocBuilder.SKIP_ROW, "true"));
-    MockDataSource.setIterator("4", rows.iterator());
-
-    runFullImport(DATA_CONFIG_WITH_TWO_ENTITIES);
-    assertQ(req("id:3"), "//*[@numFound='1']");
-    assertQ(req("id:4"), "//*[@numFound='1']");
-    assertQ(req("name_s:abcd"), "//*[@numFound='1']");
-    assertQ(req("name_s:xyz"), "//*[@numFound='0']");
-  }
-
-  @Test
-  @SuppressWarnings("unchecked")
-  public void testStopTransform() throws Exception {
-    @SuppressWarnings({"rawtypes"})
-    List rows = new ArrayList();
-    rows.add(createMap("id", "1", "desc", "one"));
-    rows.add(createMap("id", "2", "desc", "two", "$stopTransform", "true"));
-    MockDataSource.setIterator("select * from x", rows.iterator());
-
-    runFullImport(DATA_CONFIG_FOR_SKIP_TRANSFORM);
-
-    assertQ(req("id:1"), "//*[@numFound='1']");
-    assertQ(req("id:2"), "//*[@numFound='1']");
-    assertQ(req("name_s:xyz"), "//*[@numFound='1']");
-  }
-
-  @Test
-  @SuppressWarnings({"unchecked", "rawtypes"})
-  public void testDeleteDocs() throws Exception {
-    List rows = new ArrayList();
-    rows.add(createMap("id", "1", "desc", "one"));
-    rows.add(createMap("id", "2", "desc", "two"));
-    rows.add(createMap("id", "3", "desc", "two", DocBuilder.DELETE_DOC_BY_ID, "2"));
-    MockDataSource.setIterator("select * from x", rows.iterator());
-
-    runFullImport(DATA_CONFIG_FOR_SKIP_TRANSFORM);
-
-    assertQ(req("id:1"), "//*[@numFound='1']");
-    assertQ(req("id:2"), "//*[@numFound='0']");
-    assertQ(req("id:3"), "//*[@numFound='1']");
-
-    assertTrue("Update request processor processDelete was not called", TestUpdateRequestProcessor.processDeleteCalled);
-    assertTrue("Update request processor finish was not called", TestUpdateRequestProcessor.finishCalled);
-    
-    MockDataSource.clearCache();
-    rows = new ArrayList();
-    rows.add(createMap("id", "1", "desc", "one"));
-    rows.add(createMap("id", "2", "desc", "one"));
-    rows.add(createMap("id", "3", "desc", "two", DocBuilder.DELETE_DOC_BY_QUERY, "desc:one"));
-    MockDataSource.setIterator("select * from x", rows.iterator());
-
-    runFullImport(DATA_CONFIG_FOR_SKIP_TRANSFORM);
-
-    assertQ(req("id:1"), "//*[@numFound='0']");
-    assertQ(req("id:2"), "//*[@numFound='0']");
-    assertQ(req("id:3"), "//*[@numFound='1']");
-    
-    assertTrue("Update request processor processDelete was not called", TestUpdateRequestProcessor.processDeleteCalled);
-    assertTrue("Update request processor finish was not called", TestUpdateRequestProcessor.finishCalled);
-    
-    MockDataSource.clearCache();
-    rows = new ArrayList();
-    rows.add(createMap(DocBuilder.DELETE_DOC_BY_ID, "3"));
-    MockDataSource.setIterator("select * from x", rows.iterator());
-    runFullImport(DATA_CONFIG_FOR_SKIP_TRANSFORM, createMap("clean","false"));
-    assertQ(req("id:3"), "//*[@numFound='0']");
-    
-    assertTrue("Update request processor processDelete was not called", TestUpdateRequestProcessor.processDeleteCalled);
-    assertTrue("Update request processor finish was not called", TestUpdateRequestProcessor.finishCalled);
-    
-  }
-
-  @Test
-  @Ignore("Fix Me. See SOLR-4103.")
-  public void testFileListEntityProcessor_lastIndexTime() throws Exception  {
-    File tmpdir = createTempDir().toFile();
-
-    @SuppressWarnings({"unchecked"})
-    Map<String, String> params = createMap("baseDir", tmpdir.getAbsolutePath());
-
-    createFile(tmpdir, "a.xml", "a.xml".getBytes(StandardCharsets.UTF_8), true);
-    createFile(tmpdir, "b.xml", "b.xml".getBytes(StandardCharsets.UTF_8), true);
-    createFile(tmpdir, "c.props", "c.props".getBytes(StandardCharsets.UTF_8), true);
-    runFullImport(DATA_CONFIG_FILE_LIST, params);
-    assertQ(req("*:*"), "//*[@numFound='3']");
-
-    // Add a new file after a full index is done
-    createFile(tmpdir, "t.xml", "t.xml".getBytes(StandardCharsets.UTF_8), false);
-    runFullImport(DATA_CONFIG_FILE_LIST, params);
-    // we should find only 1 doc because clean=true is passed by default,
-    // and only the newly created t.xml is newer than the last index time
-    assertQ(req("*:*"), "//*[@numFound='1']");
-  }
-
-  public static class MockTransformer extends Transformer {
-    @Override
-    public Object transformRow(Map<String, Object> row, Context context) {
-      assertTrue("Context gave incorrect data source", context.getDataSource("mockDs") instanceof MockDataSource2);
-      return row;
-    }
-  }
-
-  public static class AddDynamicFieldTransformer extends Transformer  {
-    @Override
-    public Object transformRow(Map<String, Object> row, Context context) {
-      // Add a dynamic field
-      row.put("dynamic_s", "test");
-      return row;
-    }
-  }
-
-  public static class ForcedExceptionTransformer extends Transformer {
-    @Override
-    public Object transformRow(Map<String, Object> row, Context context) {
-      throw new DataImportHandlerException(DataImportHandlerException.SEVERE, "ForcedException");
-    }
-  }
-
-  public static class MockDataSource2 extends MockDataSource  {
-
-  }
-
-  public static class StartEventListener implements EventListener {
-    public static boolean executed = false;
-
-    @Override
-    public void onEvent(Context ctx) {
-      executed = true;
-    }
-  }
-
-  public static class EndEventListener implements EventListener {
-    public static boolean executed = false;
-
-    @Override
-    public void onEvent(Context ctx) {
-      executed = true;
-    }
-  }
-
-  public static class ErrorEventListener implements EventListener {
-    public static boolean executed = false;
-    public static Exception lastException = null;
-
-    @Override
-    public void onEvent(Context ctx) {
-      executed = true;
-      lastException = ((ContextImpl) ctx).getLastException();
-    }
-  }
-
-  private static final String REQUEST_PARAM_AS_VARIABLE = "<dataConfig>\n" +
-          "    <dataSource type=\"MockDataSource\" />\n" +
-          "    <document>\n" +
-          "        <entity name=\"books\" query=\"select * from books where category='${dataimporter.request.category}'\">\n" +
-          "            <field column=\"id\" />\n" +
-          "            <field column=\"desc\" />\n" +
-          "        </entity>\n" +
-          "    </document>\n" +
-          "</dataConfig>";
-
-  private static final String DATA_CONFIG_WITH_DYNAMIC_TRANSFORMER = "<dataConfig> <dataSource type=\"MockDataSource\"/>\n" +
-          "    <document>\n" +
-          "        <entity name=\"books\" query=\"select * from x\"" +
-           "                transformer=\"TestDocBuilder2$AddDynamicFieldTransformer\">\n" +
-          "            <field column=\"id\" />\n" +
-          "            <field column=\"desc\" />\n" +
-          "        </entity>\n" +
-          "    </document>\n" +
-          "</dataConfig>";
-
-  private static final String DATA_CONFIG_FOR_SKIP_TRANSFORM = "<dataConfig> <dataSource  type=\"MockDataSource\"/>\n" +
-          "    <document>\n" +
-          "        <entity name=\"books\" query=\"select * from x\"" +
-           "                transformer=\"TemplateTransformer\">\n" +
-          "            <field column=\"id\" />\n" +
-          "            <field column=\"desc\" />\n" +
-          "            <field column=\"name_s\" template=\"xyz\" />\n" +
-          "        </entity>\n" +
-          "    </document>\n" +
-          "</dataConfig>";
-
-  private static final String DATA_CONFIG_WITH_TWO_ENTITIES = "<dataConfig><dataSource type=\"MockDataSource\"/>\n" +
-          "    <document>\n" +
-          "        <entity name=\"books\" query=\"select * from x\">" +
-          "            <field column=\"id\" />\n" +
-          "            <field column=\"desc\" />\n" +
-          "            <entity name=\"authors\" query=\"${books.id}\">" +
-          "               <field column=\"name_s\" />" +
-          "            </entity>" +
-          "        </entity>\n" +
-          "    </document>\n" +
-          "</dataConfig>";
-
-  private static final String DATA_CONFIG_WITH_CASE_INSENSITIVE_FIELDS = "<dataConfig> <dataSource  type=\"MockDataSource\"/>\n" +
-          "    <document onImportStart=\"TestDocBuilder2$StartEventListener\" onImportEnd=\"TestDocBuilder2$EndEventListener\">\n" +
-          "        <entity name=\"books\" query=\"select * from x\">\n" +
-          "            <field column=\"ID\" />\n" +
-          "            <field column=\"Desc\" />\n" +
-          "        </entity>\n" +
-          "    </document>\n" +
-          "</dataConfig>";
-
-  private static final String DATA_CONFIG_WITH_ERROR_HANDLER = "<dataConfig> <dataSource  type=\"MockDataSource\"/>\n" +
-          "    <document onError=\"TestDocBuilder2$ErrorEventListener\">\n" +
-          "        <entity name=\"books\" query=\"select * from x\" transformer=\"TestDocBuilder2$ForcedExceptionTransformer\">\n" +
-          "            <field column=\"id\" />\n" +
-          "            <field column=\"FORCE_ERROR\" />\n" +
-          "        </entity>\n" +
-          "    </document>\n" +
-          "</dataConfig>";
-
-  private static final String DATA_CONFIG_WITH_TEMPLATIZED_FIELD_NAMES = "<dataConfig><dataSource  type=\"MockDataSource\"/>\n" +
-          "    <document>\n" +
-          "        <entity name=\"books\" query=\"select * from x\">\n" +
-          "            <field column=\"mypk\" name=\"${dih.request.mypk}\" />\n" +
-          "            <field column=\"text\" name=\"${dih.request.text}\" />\n" +
-          "        </entity>\n" +
-          "    </document>\n" +
-          "</dataConfig>";
-
-  private static final String DATA_CONFIG_WITH_DYNAMIC_FIELD_NAMES = "<dataConfig><dataSource  type=\"MockDataSource\"/>\n" +
-      "    <document>\n" +
-      "        <entity name=\"books\" query=\"select * from x\">\n" +
-      "            <field column=\"mypk\" name=\"id\" />\n" +
-      "            <field column=\"text\" name=\"${books.mypk}_s\" />\n" +
-      "        </entity>\n" +
-      "    </document>\n" +
-      "</dataConfig>";
-
-  private static final String DATA_CONFIG_FILE_LIST = "<dataConfig>\n" +
-          "\t<document>\n" +
-          "\t\t<entity name=\"x\" processor=\"FileListEntityProcessor\" \n" +
-          "\t\t\t\tfileName=\".*\" newerThan=\"${dih.last_index_time}\" \n" +
-          "\t\t\t\tbaseDir=\"${dih.request.baseDir}\" transformer=\"TemplateTransformer\">\n" +
-          "\t\t\t<field column=\"id\" template=\"${x.file}\" />\n" +
-          "\t\t</entity>\n" +
-          "\t</document>\n" +
-          "</dataConfig>";
-}
diff --git a/solr/contrib/dataimporthandler/src/test/org/apache/solr/handler/dataimport/TestEntityProcessorBase.java b/solr/contrib/dataimporthandler/src/test/org/apache/solr/handler/dataimport/TestEntityProcessorBase.java
deleted file mode 100644
index 75ec2f5..0000000
--- a/solr/contrib/dataimporthandler/src/test/org/apache/solr/handler/dataimport/TestEntityProcessorBase.java
+++ /dev/null
@@ -1,84 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.solr.handler.dataimport;
-
-import org.junit.Test;
-
-import java.util.ArrayList;
-import java.util.HashMap;
-import java.util.List;
-import java.util.Map;
-
-/**
- * <p>
- * Test for EntityProcessorBase
- * </p>
- *
- *
- * @since solr 1.3
- */
-public class TestEntityProcessorBase extends AbstractDataImportHandlerTestCase {
-
-  @Test
-  public void multiTransformer() {
-    List<Map<String, String>> fields = new ArrayList<>();
-    Map<String, String> entity = new HashMap<>();
-    entity.put("transformer", T1.class.getName() + "," + T2.class.getName()
-            + "," + T3.class.getName());
-    fields.add(getField("A", null, null, null, null));
-    fields.add(getField("B", null, null, null, null));
-
-    Context context = getContext(null, null, new MockDataSource(), Context.FULL_DUMP,
-            fields, entity);
-    Map<String, Object> src = new HashMap<>();
-    src.put("A", "NA");
-    src.put("B", "NA");
-    EntityProcessorWrapper sep = new EntityProcessorWrapper(new SqlEntityProcessor(), null, null);
-    sep.init(context);
-    Map<String, Object> res = sep.applyTransformer(src);
-    assertNotNull(res.get("T1"));
-    assertNotNull(res.get("T2"));
-    assertNotNull(res.get("T3"));
-  }
-
-  public static class T1 extends Transformer {
-
-    @Override
-    public Object transformRow(Map<String, Object> aRow, Context context) {
-      aRow.put("T1", "T1 called");
-      return aRow;
-    }
-  }
-
-  public static class T2 extends Transformer {
-
-    @Override
-    public Object transformRow(Map<String, Object> aRow, Context context) {
-      aRow.put("T2", "T2 called");
-      return aRow;
-    }
-  }
-
-  public static class T3 {
-
-    public Object transformRow(Map<String, Object> aRow) {
-      aRow.put("T3", "T3 called");
-      return aRow;
-    }
-  }
-}
diff --git a/solr/contrib/dataimporthandler/src/test/org/apache/solr/handler/dataimport/TestEphemeralCache.java b/solr/contrib/dataimporthandler/src/test/org/apache/solr/handler/dataimport/TestEphemeralCache.java
deleted file mode 100644
index b5b3c33..0000000
--- a/solr/contrib/dataimporthandler/src/test/org/apache/solr/handler/dataimport/TestEphemeralCache.java
+++ /dev/null
@@ -1,143 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.solr.handler.dataimport;
-
-import java.math.BigDecimal;
-import java.util.ArrayList;
-import java.util.List;
-
-import static org.hamcrest.CoreMatchers.*;
-import org.junit.Before;
-import org.junit.BeforeClass;
-import org.junit.Test;
-
-public class TestEphemeralCache extends AbstractDataImportHandlerTestCase {
-  
-  
-  @BeforeClass
-  public static void beforeClass() throws Exception {
-    initCore("dataimport-solrconfig.xml", "dataimport-schema.xml");
-  }
-  
-  @Before
-  public void reset() {
-    DestroyCountCache.destroyed.clear();
-    setupMockData();
-  }
-  
-  @Test
-  public void test() throws Exception {
-    assertFullImport(getDataConfigDotXml());
-  }
-
-  @SuppressWarnings("unchecked")
-  private void setupMockData() {
-    @SuppressWarnings({"rawtypes"})
-    List parentRows = new ArrayList();
-    parentRows.add(createMap("id", new BigDecimal("1"), "parent_s", "one"));
-    parentRows.add(createMap("id", new BigDecimal("2"), "parent_s", "two"));
-    parentRows.add(createMap("id", new BigDecimal("3"), "parent_s", "three"));
-    parentRows.add(createMap("id", new BigDecimal("4"), "parent_s", "four"));
-    parentRows.add(createMap("id", new BigDecimal("5"), "parent_s", "five"));
-    
-    @SuppressWarnings({"rawtypes"})
-    List child1Rows = new ArrayList();
-    child1Rows.add(createMap("id", new BigDecimal("6"), "child1a_mult_s", "this is the number six."));
-    child1Rows.add(createMap("id", new BigDecimal("5"), "child1a_mult_s", "this is the number five."));
-    child1Rows.add(createMap("id", new BigDecimal("6"), "child1a_mult_s", "let's sing a song of six."));
-    child1Rows.add(createMap("id", new BigDecimal("3"), "child1a_mult_s", "three"));
-    child1Rows.add(createMap("id", new BigDecimal("3"), "child1a_mult_s", "III"));
-    child1Rows.add(createMap("id", new BigDecimal("3"), "child1a_mult_s", "3"));
-    child1Rows.add(createMap("id", new BigDecimal("3"), "child1a_mult_s", "|||"));
-    child1Rows.add(createMap("id", new BigDecimal("1"), "child1a_mult_s", "one"));
-    child1Rows.add(createMap("id", new BigDecimal("1"), "child1a_mult_s", "uno"));
-    child1Rows.add(createMap("id", new BigDecimal("2"), "child1b_s", "CHILD1B", "child1a_mult_s", "this is the number two."));
-    
-    @SuppressWarnings({"rawtypes"})
-    List child2Rows = new ArrayList();
-    child2Rows.add(createMap("id", new BigDecimal("6"), "child2a_mult_s", "Child 2 says, 'this is the number six.'"));
-    child2Rows.add(createMap("id", new BigDecimal("5"), "child2a_mult_s", "Child 2 says, 'this is the number five.'"));
-    child2Rows.add(createMap("id", new BigDecimal("6"), "child2a_mult_s", "Child 2 says, 'let's sing a song of six.'"));
-    child2Rows.add(createMap("id", new BigDecimal("3"), "child2a_mult_s", "Child 2 says, 'three'"));
-    child2Rows.add(createMap("id", new BigDecimal("3"), "child2a_mult_s", "Child 2 says, 'III'"));
-    child2Rows.add(createMap("id", new BigDecimal("3"), "child2b_s", "CHILD2B", "child2a_mult_s", "Child 2 says, '3'"));
-    child2Rows.add(createMap("id", new BigDecimal("3"), "child2a_mult_s", "Child 2 says, '|||'"));
-    child2Rows.add(createMap("id", new BigDecimal("1"), "child2a_mult_s", "Child 2 says, 'one'"));
-    child2Rows.add(createMap("id", new BigDecimal("1"), "child2a_mult_s", "Child 2 says, 'uno'"));
-    child2Rows.add(createMap("id", new BigDecimal("2"), "child2a_mult_s", "Child 2 says, 'this is the number two.'"));
-    
-    MockDataSource.setIterator("SELECT * FROM PARENT", parentRows.iterator());
-    MockDataSource.setIterator("SELECT * FROM CHILD_1", child1Rows.iterator());
-    MockDataSource.setIterator("SELECT * FROM CHILD_2", child2Rows.iterator());
-    
-  }
-  private String getDataConfigDotXml() {
-    return
-      "<dataConfig>" +
-      " <dataSource type=\"MockDataSource\" />" +
-      " <document>" +
-      "   <entity " +
-      "     name=\"PARENT\"" +
-      "     processor=\"SqlEntityProcessor\"" +
-      "     cacheImpl=\"org.apache.solr.handler.dataimport.DestroyCountCache\"" +
-      "     cacheName=\"PARENT\"" +
-      "     query=\"SELECT * FROM PARENT\"  " +
-      "   >" +
-      "     <entity" +
-      "       name=\"CHILD_1\"" +
-      "       processor=\"SqlEntityProcessor\"" +
-      "       cacheImpl=\"org.apache.solr.handler.dataimport.DestroyCountCache\"" +
-      "       cacheName=\"CHILD\"" +
-      "       cacheKey=\"id\"" +
-      "       cacheLookup=\"PARENT.id\"" +
-      "       fieldNames=\"id,         child1a_mult_s, child1b_s\"" +
-      "       fieldTypes=\"BIGDECIMAL, STRING,         STRING\"" +
-      "       query=\"SELECT * FROM CHILD_1\"       " +
-      "     />" +
-      "     <entity" +
-      "       name=\"CHILD_2\"" +
-      "       processor=\"SqlEntityProcessor\"" +
-      "       cacheImpl=\"org.apache.solr.handler.dataimport.DestroyCountCache\"" +
-      "       cacheKey=\"id\"" +
-      "       cacheLookup=\"PARENT.id\"" +
-      "       query=\"SELECT * FROM CHILD_2\"       " +
-      "     />" +
-      "   </entity>" +
-      " </document>" +
-      "</dataConfig>"
-    ;
-  }
-  
-  private void assertFullImport(String dataConfig) throws Exception {
-    runFullImport(dataConfig);
-    
-    assertQ(req("*:*"), "//*[@numFound='5']");
-    assertQ(req("id:1"), "//*[@numFound='1']");
-    assertQ(req("id:6"), "//*[@numFound='0']");
-    assertQ(req("parent_s:four"), "//*[@numFound='1']");
-    assertQ(req("child1a_mult_s:this\\ is\\ the\\ numbe*"), "//*[@numFound='2']");
-    assertQ(req("child2a_mult_s:Child\\ 2\\ say*"), "//*[@numFound='4']");
-    assertQ(req("child1b_s:CHILD1B"), "//*[@numFound='1']");
-    assertQ(req("child2b_s:CHILD2B"), "//*[@numFound='1']");
-    assertQ(req("child1a_mult_s:one"), "//*[@numFound='1']");
-    assertQ(req("child1a_mult_s:uno"), "//*[@numFound='1']");
-    assertQ(req("child1a_mult_s:(uno OR one)"), "//*[@numFound='1']");
-    
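-    // One DestroyCountCache instance is created per cached entity (PARENT, CHILD_1, CHILD_2), and all must be destroyed.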
-    assertThat(DestroyCountCache.destroyed.size(), is(3));
-  }
-  
-}
diff --git a/solr/contrib/dataimporthandler/src/test/org/apache/solr/handler/dataimport/TestErrorHandling.java b/solr/contrib/dataimporthandler/src/test/org/apache/solr/handler/dataimport/TestErrorHandling.java
deleted file mode 100644
index 2391ae8..0000000
--- a/solr/contrib/dataimporthandler/src/test/org/apache/solr/handler/dataimport/TestErrorHandling.java
+++ /dev/null
@@ -1,210 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.solr.handler.dataimport;
-import java.io.Reader;
-import java.io.StringReader;
-import java.util.ArrayList;
-import java.util.List;
-import java.util.Map;
-import java.util.Properties;
-
-import org.junit.AfterClass;
-import org.junit.Before;
-import org.junit.BeforeClass;
-
-/**
- * Tests exception handling during imports in DataImportHandler
- *
- *
- * @since solr 1.4
- */
-public class TestErrorHandling extends AbstractDataImportHandlerTestCase {
-
-  //TODO: fix this test to not require FSDirectory.
-  static String savedFactory;
-  @BeforeClass
-  public static void beforeClass() throws Exception {
-    savedFactory = System.getProperty("solr.DirectoryFactory");
-    System.setProperty("solr.directoryFactory", "solr.MockFSDirectoryFactory");
-    initCore("dataimport-solrconfig.xml", "dataimport-schema.xml");
-    ignoreException("Unexpected close tag");
-  }
-  
-  @AfterClass
-  public static void afterClass() {
-    if (savedFactory == null) {
-      System.clearProperty("solr.directoryFactory");
-    } else {
-      System.setProperty("solr.directoryFactory", savedFactory);
-    }
-  }
-  
-  @Before @Override
-  public void setUp() throws Exception {
-    super.setUp();
-    clearIndex();
-    assertU(commit());
-  }
-  
-  public void testMalformedStreamingXml() throws Exception {
-    StringDataSource.xml = malformedXml;
-    runFullImport(dataConfigWithStreaming);
-    assertQ(req("id:1"), "//*[@numFound='1']");
-    assertQ(req("id:2"), "//*[@numFound='1']");
-  }
-
-  public void testMalformedNonStreamingXml() throws Exception {
-    StringDataSource.xml = malformedXml;
-    runFullImport(dataConfigWithoutStreaming);
-    assertQ(req("id:1"), "//*[@numFound='1']");
-    assertQ(req("id:2"), "//*[@numFound='1']");
-  }
-
-  public void testAbortOnError() throws Exception {
-    StringDataSource.xml = malformedXml;
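-    // With onError="abort" the first parse error aborts the whole import, so no documents are committed.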
-    runFullImport(dataConfigAbortOnError);
-    assertQ(req("*:*"), "//*[@numFound='0']");
-  }
-
-  @SuppressWarnings({"unchecked"})
-  public void testTransformerErrorContinue() throws Exception {
-    StringDataSource.xml = wellformedXml;
-    List<Map<String, Object>> rows = new ArrayList<>();
-    rows.add(createMap("id", "3", "desc", "exception-transformer"));
-    MockDataSource.setIterator("select * from foo", rows.iterator());
-    runFullImport(dataConfigWithTransformer);
-    assertQ(req("*:*"), "//*[@numFound='3']");
-  }
-
-  public void testExternalEntity() throws Exception {
-    StringDataSource.xml = wellformedXml;
-    // This should not fail as external entities are replaced by an empty string during parsing:
-    runFullImport(dataConfigWithEntity);
-    assertQ(req("*:*"), "//*[@numFound='3']");
-  }
-
-  public static class StringDataSource extends DataSource<Reader> {
-    public static String xml = "";
-
-    @Override
-    public void init(Context context, Properties initProps) {
-    }
-
-    @Override
-    public Reader getData(String query) {
-      return new StringReader(xml);
-    }
-
-    @Override
-    public void close() {
-
-    }
-  }
-
-  public static class ExceptionTransformer extends Transformer {
-    @Override
-    public Object transformRow(Map<String, Object> row, Context context) {
-      throw new RuntimeException("Test exception");
-    }
-  }
-
-  private String dataConfigWithStreaming = "<dataConfig>\n" +
-          "        <dataSource name=\"str\" type=\"TestErrorHandling$StringDataSource\" />" +
-          "    <document>\n" +
-          "        <entity name=\"node\" dataSource=\"str\" processor=\"XPathEntityProcessor\" url=\"test\" stream=\"true\" forEach=\"/root/node\" onError=\"skip\">\n" +
-          "            <field column=\"id\" xpath=\"/root/node/id\" />\n" +
-          "            <field column=\"desc\" xpath=\"/root/node/desc\" />\n" +
-          "        </entity>\n" +
-          "    </document>\n" +
-          "</dataConfig>";
-
-  private String dataConfigWithoutStreaming = "<dataConfig>\n" +
-          "        <dataSource name=\"str\" type=\"TestErrorHandling$StringDataSource\" />" +
-          "    <document>\n" +
-          "        <entity name=\"node\" dataSource=\"str\" processor=\"XPathEntityProcessor\" url=\"test\" forEach=\"/root/node\" onError=\"skip\">\n" +
-          "            <field column=\"id\" xpath=\"/root/node/id\" />\n" +
-          "            <field column=\"desc\" xpath=\"/root/node/desc\" />\n" +
-          "        </entity>\n" +
-          "    </document>\n" +
-          "</dataConfig>";
-
-  private String dataConfigAbortOnError = "<dataConfig>\n" +
-          "        <dataSource name=\"str\" type=\"TestErrorHandling$StringDataSource\" />" +
-          "    <document>\n" +
-          "        <entity name=\"node\" dataSource=\"str\" processor=\"XPathEntityProcessor\" url=\"test\" forEach=\"/root/node\" onError=\"abort\">\n" +
-          "            <field column=\"id\" xpath=\"/root/node/id\" />\n" +
-          "            <field column=\"desc\" xpath=\"/root/node/desc\" />\n" +
-          "        </entity>\n" +
-          "    </document>\n" +
-          "</dataConfig>";
-
-  private String dataConfigWithTransformer = "<dataConfig>\n" +
-          "        <dataSource name=\"str\" type=\"TestErrorHandling$StringDataSource\" />" +
-          "<dataSource  type=\"MockDataSource\"/>" +
-          "    <document>\n" +
-          "        <entity name=\"node\" dataSource=\"str\" processor=\"XPathEntityProcessor\" url=\"test\" forEach=\"/root/node\">\n" +
-          "            <field column=\"id\" xpath=\"/root/node/id\" />\n" +
-          "            <field column=\"desc\" xpath=\"/root/node/desc\" />\n" +
-          "            <entity name=\"child\" query=\"select * from foo\" transformer=\"TestErrorHandling$ExceptionTransformer\" onError=\"continue\">\n" +
-          "            </entity>" +
-          "        </entity>\n" +
-          "    </document>\n" +
-          "</dataConfig>";
-
-  private String dataConfigWithEntity = "<!DOCTYPE dataConfig [\n" + 
-          "  <!ENTITY internalTerm \"node\">\n" + 
-          "  <!ENTITY externalTerm SYSTEM \"foo://bar.xyz/external\">\n" + 
-          "]><dataConfig>\n" +
-          "    <dataSource name=\"str\" type=\"TestErrorHandling$StringDataSource\" />" +
-          "    <document>\n" +
-          "        <entity name=\"&internalTerm;\" dataSource=\"str\" processor=\"XPathEntityProcessor\" url=\"test\" forEach=\"/root/node\" onError=\"skip\">\n" +
-          "            <field column=\"id\" xpath=\"/root/node/id\">&externalTerm;</field>\n" +
-          "            <field column=\"desc\" xpath=\"/root/node/desc\" />\n" +
-          "        </entity>\n" +
-          "    </document>\n" +
-          "</dataConfig>";
-
-  private String malformedXml = "<root>\n" +
-          "    <node>\n" +
-          "        <id>1</id>\n" +
-          "        <desc>test1</desc>\n" +
-          "    </node>\n" +
-          "    <node>\n" +
-          "        <id>2</id>\n" +
-          "        <desc>test2</desc>\n" +
-          "    </node>\n" +
-          "    <node>\n" +
-          "        <id/>3</id>\n" +
-          "        <desc>test3</desc>\n" +
-          "    </node>\n" +
-          "</root>";
-
-  private String wellformedXml = "<root>\n" +
-          "    <node>\n" +
-          "        <id>1</id>\n" +
-          "        <desc>test1</desc>\n" +
-          "    </node>\n" +
-          "    <node>\n" +
-          "        <id>2</id>\n" +
-          "        <desc>test2</desc>\n" +
-          "    </node>\n" +
-          "    <node>\n" +
-          "        <id>3</id>\n" +
-          "        <desc>test3</desc>\n" +
-          "    </node>\n" +
-          "</root>";
-}
diff --git a/solr/contrib/dataimporthandler/src/test/org/apache/solr/handler/dataimport/TestFieldReader.java b/solr/contrib/dataimporthandler/src/test/org/apache/solr/handler/dataimport/TestFieldReader.java
deleted file mode 100644
index 3203bda..0000000
--- a/solr/contrib/dataimporthandler/src/test/org/apache/solr/handler/dataimport/TestFieldReader.java
+++ /dev/null
@@ -1,66 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.solr.handler.dataimport;
-
-import org.junit.Test;
-
-import java.util.ArrayList;
-import java.util.List;
-import java.util.Map;
-
-/**
- * Test for FieldReaderDataSource
- *
- *
- * @see org.apache.solr.handler.dataimport.FieldReaderDataSource
- * @since 1.4
- */
-public class TestFieldReader extends AbstractDataImportHandlerTestCase {
-
-  @Test
-  @SuppressWarnings({"unchecked"})
-  public void simple() {
-    DataImporter di = new DataImporter();
-    di.loadAndInit(config);
-    redirectTempProperties(di);
-
-    TestDocBuilder.SolrWriterImpl sw = new TestDocBuilder.SolrWriterImpl();
-    RequestInfo rp = new RequestInfo(null, createMap("command", "full-import"), null);
-    List<Map<String, Object>> l = new ArrayList<>();
-    l.add(createMap("xml", xml));
-    MockDataSource.setIterator("select * from a", l.iterator());
-    di.runCmd(rp, sw);
-    assertEquals(sw.docs.get(0).getFieldValue("y"), "Hello");
-    MockDataSource.clearCache();
-  }
-
-  String config = "<dataConfig>\n" +
-          "  <dataSource type=\"FieldReaderDataSource\" name=\"f\"/>\n" +
-          "  <dataSource type=\"MockDataSource\"/>\n" +
-          "  <document>\n" +
-          "    <entity name=\"a\" query=\"select * from a\" >\n" +
-          "      <entity name=\"b\" dataSource=\"f\" processor=\"XPathEntityProcessor\" forEach=\"/x\" dataField=\"a.xml\">\n" +
-          "        <field column=\"y\" xpath=\"/x/y\"/>\n" +
-          "      </entity>\n" +
-          "    </entity>\n" +
-          "  </document>\n" +
-          "</dataConfig>";
-
-  String xml = "<x>\n" +
-          " <y>Hello</y>\n" +
-          "</x>";
-}
diff --git a/solr/contrib/dataimporthandler/src/test/org/apache/solr/handler/dataimport/TestFileListEntityProcessor.java b/solr/contrib/dataimporthandler/src/test/org/apache/solr/handler/dataimport/TestFileListEntityProcessor.java
deleted file mode 100644
index cf0f3a3..0000000
--- a/solr/contrib/dataimporthandler/src/test/org/apache/solr/handler/dataimport/TestFileListEntityProcessor.java
+++ /dev/null
@@ -1,194 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.solr.handler.dataimport;
-
-import java.io.File;
-import java.io.IOException;
-import java.nio.charset.StandardCharsets;
-import java.text.SimpleDateFormat;
-import java.util.ArrayList;
-import java.util.Collections;
-import java.util.Date;
-import java.util.HashSet;
-import java.util.List;
-import java.util.Locale;
-import java.util.Map;
-import java.util.Set;
-
-import org.apache.solr.common.util.SuppressForbidden;
-import org.junit.Test;
-
-/**
- * <p>
- * Test for FileListEntityProcessor
- * </p>
- *
- *
- * @since solr 1.3
- */
-public class TestFileListEntityProcessor extends AbstractDataImportHandlerTestCase {
-
-  @Test
-  @SuppressWarnings("unchecked")
-  public void testSimple() throws IOException {
-    File tmpdir = createTempDir().toFile();
-
-    createFile(tmpdir, "a.xml", "a.xml".getBytes(StandardCharsets.UTF_8), false);
-    createFile(tmpdir, "b.xml", "b.xml".getBytes(StandardCharsets.UTF_8), false);
-    createFile(tmpdir, "c.props", "c.props".getBytes(StandardCharsets.UTF_8), false);
-    @SuppressWarnings({"rawtypes"})
-    Map attrs = createMap(
-            FileListEntityProcessor.FILE_NAME, "xml$",
-            FileListEntityProcessor.BASE_DIR, tmpdir.getAbsolutePath());
-    Context c = getContext(null,
-            new VariableResolver(), null, Context.FULL_DUMP, Collections.emptyList(), attrs);
-    FileListEntityProcessor fileListEntityProcessor = new FileListEntityProcessor();
-    fileListEntityProcessor.init(c);
-    List<String> fList = new ArrayList<>();
-    while (true) {
-      Map<String, Object> f = fileListEntityProcessor.nextRow();
-      if (f == null)
-        break;
-      fList.add((String) f.get(FileListEntityProcessor.ABSOLUTE_FILE));
-    }
-    assertEquals(2, fList.size());
-  }
-  
-  @Test
-  @SuppressWarnings({"unchecked"})
-  public void testBiggerSmallerFiles() throws IOException {
-    File tmpdir = createTempDir().toFile();
-
-    long minLength = Long.MAX_VALUE;
-    String smallestFile = "";
-    byte[] content = "abcdefgij".getBytes(StandardCharsets.UTF_8);
-    createFile(tmpdir, "a.xml", content, false);
-    if (minLength > content.length) {
-      minLength = content.length;
-      smallestFile = "a.xml";
-    }
-    content = "abcdefgij".getBytes(StandardCharsets.UTF_8);
-    createFile(tmpdir, "b.xml", content, false);
-    if (minLength > content.length) {
-      minLength = content.length;
-      smallestFile = "b.xml";
-    }
-    content = "abc".getBytes(StandardCharsets.UTF_8);
-    createFile(tmpdir, "c.props", content, false);
-    if (minLength > content.length) {
-      minLength = content.length;
-      smallestFile = "c.props";
-    }
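-    // biggerThan=minLength should keep only the two files larger than the smallest one (c.props).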
-    @SuppressWarnings({"rawtypes"})
-    Map attrs = createMap(
-            FileListEntityProcessor.FILE_NAME, ".*",
-            FileListEntityProcessor.BASE_DIR, tmpdir.getAbsolutePath(),
-            FileListEntityProcessor.BIGGER_THAN, String.valueOf(minLength));
-    List<String> fList = getFiles(null, attrs);
-    assertEquals(2, fList.size());
-    Set<String> l = new HashSet<>();
-    l.add(new File(tmpdir, "a.xml").getAbsolutePath());
-    l.add(new File(tmpdir, "b.xml").getAbsolutePath());
-    assertEquals(l, new HashSet<>(fList));
-    attrs = createMap(
-            FileListEntityProcessor.FILE_NAME, ".*",
-            FileListEntityProcessor.BASE_DIR, tmpdir.getAbsolutePath(),
-            FileListEntityProcessor.SMALLER_THAN, String.valueOf(minLength+1));
-    fList = getFiles(null, attrs);
-    l.clear();
-    l.add(new File(tmpdir, smallestFile).getAbsolutePath());
-    assertEquals(l, new HashSet<>(fList));
-    attrs = createMap(
-            FileListEntityProcessor.FILE_NAME, ".*",
-            FileListEntityProcessor.BASE_DIR, tmpdir.getAbsolutePath(),
-            FileListEntityProcessor.SMALLER_THAN, "${a.x}");
-    VariableResolver resolver = new VariableResolver();
-    resolver.addNamespace("a", createMap("x", "4"));
-    fList = getFiles(resolver, attrs);
-    assertEquals(l, new HashSet<>(fList));
-  }
-
-  static List<String> getFiles(VariableResolver resolver, @SuppressWarnings({"rawtypes"})Map attrs) {
-    @SuppressWarnings({"unchecked"})
-    Context c = getContext(null,
-            resolver, null, Context.FULL_DUMP, Collections.emptyList(), attrs);
-    FileListEntityProcessor fileListEntityProcessor = new FileListEntityProcessor();
-    fileListEntityProcessor.init(c);
-    List<String> fList = new ArrayList<>();
-    while (true) {
-      Map<String, Object> f = fileListEntityProcessor.nextRow();
-      if (f == null)
-        break;
-      fList.add((String) f.get(FileListEntityProcessor.ABSOLUTE_FILE));
-    }
-    return fList;
-  }
-
-  @SuppressForbidden(reason = "Needs currentTimeMillis to set last modified time")
-  @Test
-  @SuppressWarnings({"unchecked"})
-  public void testNTOT() throws IOException {
-    File tmpdir = createTempDir().toFile();
-
-    createFile(tmpdir, "a.xml", "a.xml".getBytes(StandardCharsets.UTF_8), true);
-    createFile(tmpdir, "b.xml", "b.xml".getBytes(StandardCharsets.UTF_8), true);
-    createFile(tmpdir, "c.props", "c.props".getBytes(StandardCharsets.UTF_8), true);
-    @SuppressWarnings({"rawtypes"})
-    Map attrs = createMap(
-            FileListEntityProcessor.FILE_NAME, "xml$",
-            FileListEntityProcessor.BASE_DIR, tmpdir.getAbsolutePath(),
-            FileListEntityProcessor.OLDER_THAN, "'NOW'");
-    List<String> fList = getFiles(null, attrs);
-    assertEquals(2, fList.size());
-    attrs = createMap(
-            FileListEntityProcessor.FILE_NAME, ".xml$",
-            FileListEntityProcessor.BASE_DIR, tmpdir.getAbsolutePath(),
-            FileListEntityProcessor.NEWER_THAN, "'NOW-2HOURS'");
-    fList = getFiles(null, attrs);
-    assertEquals(2, fList.size());
-
-    // Use a variable for newerThan
-    attrs = createMap(
-            FileListEntityProcessor.FILE_NAME, ".xml$",
-            FileListEntityProcessor.BASE_DIR, tmpdir.getAbsolutePath(),
-            FileListEntityProcessor.NEWER_THAN, "${a.x}");
-    VariableResolver resolver = new VariableResolver();
-    String lastMod = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss", Locale.ROOT).format(new Date(System.currentTimeMillis() - 50000));
-    resolver.addNamespace("a", createMap("x", lastMod));
-    createFile(tmpdir, "t.xml", "t.xml".getBytes(StandardCharsets.UTF_8), false);
-    fList = getFiles(resolver, attrs);
-    assertEquals(1, fList.size());
-    assertEquals("File name must be t.xml", new File(tmpdir, "t.xml").getAbsolutePath(), fList.get(0));
-  }
-
-  @Test
-  public void testRECURSION() throws IOException {
-    File tmpdir = createTempDir().toFile();
-    File childdir = new File(tmpdir, "child");
-    childdir.mkdir();
-    createFile(childdir, "a.xml", "a.xml".getBytes(StandardCharsets.UTF_8), true);
-    createFile(childdir, "b.xml", "b.xml".getBytes(StandardCharsets.UTF_8), true);
-    createFile(childdir, "c.props", "c.props".getBytes(StandardCharsets.UTF_8), true);
-    @SuppressWarnings({"rawtypes"})
-    Map attrs = createMap(
-            FileListEntityProcessor.FILE_NAME, "^.*\\.xml$",
-            FileListEntityProcessor.BASE_DIR, childdir.getAbsolutePath(),
-            FileListEntityProcessor.RECURSIVE, "true");
-    List<String> fList = getFiles(null, attrs);
-    assertEquals(2, fList.size());
-  }
-}
diff --git a/solr/contrib/dataimporthandler/src/test/org/apache/solr/handler/dataimport/TestFileListWithLineEntityProcessor.java b/solr/contrib/dataimporthandler/src/test/org/apache/solr/handler/dataimport/TestFileListWithLineEntityProcessor.java
deleted file mode 100644
index aad8e30..0000000
--- a/solr/contrib/dataimporthandler/src/test/org/apache/solr/handler/dataimport/TestFileListWithLineEntityProcessor.java
+++ /dev/null
@@ -1,64 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.solr.handler.dataimport;
-
-import java.io.File;
-import java.nio.charset.StandardCharsets;
-
-import org.apache.lucene.util.LuceneTestCase;
-import org.apache.solr.request.LocalSolrQueryRequest;
-import org.junit.BeforeClass;
-
-public class TestFileListWithLineEntityProcessor extends AbstractDataImportHandlerTestCase {
-  @BeforeClass
-  public static void beforeClass() throws Exception {
-    initCore("dataimport-solrconfig.xml", "dataimport-schema.xml");
-  }
-  
-  public void test() throws Exception {
-    File tmpdir = createTempDir(LuceneTestCase.getTestClass().getSimpleName()).toFile();
-    createFile(tmpdir, "a.txt", "a line one\na line two\na line three".getBytes(StandardCharsets.UTF_8), false);
-    createFile(tmpdir, "b.txt", "b line one\nb line two".getBytes(StandardCharsets.UTF_8), false);
-    createFile(tmpdir, "c.txt", "c line one\nc line two\nc line three\nc line four".getBytes(StandardCharsets.UTF_8), false);
-    
-    String config = generateConfig(tmpdir);
-    LocalSolrQueryRequest request = lrf.makeRequest(
-        "command", "full-import", "dataConfig", config,
-        "clean", "true", "commit", "true", "synchronous", "true", "indent", "true");
-    h.query("/dataimport", request);
-    
-    assertQ(req("*:*"), "//*[@numFound='9']");
-    assertQ(req("id:?\\ line\\ one"), "//*[@numFound='3']");
-    assertQ(req("id:a\\ line*"), "//*[@numFound='3']");
-    assertQ(req("id:b\\ line*"), "//*[@numFound='2']");
-    assertQ(req("id:c\\ line*"), "//*[@numFound='4']");    
-  }
-  
-  private String generateConfig(File dir) {
-    return
-    "<dataConfig> \n"+
-    "<dataSource type=\"FileDataSource\" encoding=\"UTF-8\" name=\"fds\"/> \n"+
-    "    <document> \n"+
-    "       <entity name=\"f\" processor=\"FileListEntityProcessor\" fileName=\".*[.]txt\" baseDir=\"" + dir.getAbsolutePath() + "\" recursive=\"false\" rootEntity=\"false\"  transformer=\"TemplateTransformer\"> \n" +
-    "             <entity name=\"jc\" processor=\"LineEntityProcessor\" url=\"${f.fileAbsolutePath}\" dataSource=\"fds\"  rootEntity=\"true\" transformer=\"TemplateTransformer\"> \n" +
-    "              <field column=\"rawLine\" name=\"id\" /> \n" +
-    "             </entity> \n"+              
-    "        </entity> \n"+
-    "    </document> \n"+
-    "</dataConfig> \n";
-  }  
-}
diff --git a/solr/contrib/dataimporthandler/src/test/org/apache/solr/handler/dataimport/TestHierarchicalDocBuilder.java b/solr/contrib/dataimporthandler/src/test/org/apache/solr/handler/dataimport/TestHierarchicalDocBuilder.java
deleted file mode 100644
index 2c7a32a..0000000
--- a/solr/contrib/dataimporthandler/src/test/org/apache/solr/handler/dataimport/TestHierarchicalDocBuilder.java
+++ /dev/null
@@ -1,483 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.solr.handler.dataimport;
-
-import java.io.IOException;
-import java.util.ArrayList;
-import java.util.Collections;
-import java.util.HashMap;
-import java.util.List;
-import java.util.Locale;
-import java.util.Map;
-
-import org.apache.lucene.document.Document;
-import org.apache.lucene.index.Term;
-import org.apache.lucene.search.BooleanClause.Occur;
-import org.apache.lucene.search.BooleanQuery;
-import org.apache.lucene.search.Query;
-import org.apache.lucene.search.TermQuery;
-import org.apache.lucene.search.TopDocs;
-import org.apache.lucene.search.join.BitSetProducer;
-import org.apache.lucene.search.join.QueryBitSetProducer;
-import org.apache.lucene.search.join.ScoreMode;
-import org.apache.lucene.search.join.ToParentBlockJoinQuery;
-import org.apache.solr.common.util.StrUtils;
-import org.apache.solr.handler.dataimport.config.ConfigNameConstants;
-import org.apache.solr.request.SolrQueryRequest;
-import org.apache.solr.search.SolrIndexSearcher;
-import org.apache.solr.util.TestHarness;
-import org.junit.After;
-import org.junit.Before;
-import org.junit.BeforeClass;
-import org.junit.Test;
-
-/**
- * Test for DocBuilder using the test harness. 
- * <b>Documents are hierarchical in this test, i.e. each document has nested child documents.</b>
- */
-public class TestHierarchicalDocBuilder extends AbstractDataImportHandlerTestCase {
-
-  private static final String FIELD_ID = "id";
-  private int id = 0; //unique id
-  private SolrQueryRequest req;
-  
-  /**
-   * Holds the data related to randomly created index.
-   * It is used for making assertions.
-   */
-  private static class ContextHolder {
-    /** Overall documents number **/
-    int counter = 0;
-    
-    /**
-     * Each Hierarchy object represents nested documents with a parent at the root of hierarchy
-     */
-    List<Hierarchy> hierarchies = new ArrayList<Hierarchy>();
-  }
-  
-  /**
-   * Represents a hierarchical document structure
-   */
-  private static class Hierarchy {
-    
-    /**
-     * Type of element, i.e. parent, child, grandchild, etc..
-     */
-    String elementType;
-    
-    /**
-     * Fields of a current element
-     */
-    Map<String, Object> elementData = new HashMap<String,Object>();
-    
-    /**
-     * Nested elements/documents hierarchies. 
-     */
-    List<Hierarchy> elements = new ArrayList<Hierarchy>();
-  }
-  
-  @BeforeClass
-  public static void beforeClass() throws Exception {
-    initCore("dataimport-solrconfig.xml", "dataimport-schema.xml");    
-  }
-  
-  @Before
-  public void before() {
-    req = req("*:*"); // don't really care about query
-    MockDataSource.clearCache();
-  }
-  
-  @After
-  public void after() {
-    if (null != req) {
-      req.close();
-      req = null;
-    }
-    MockDataSource.clearCache();
-  }
-
-  @Test
-  @AwaitsFix(bugUrl = "https://issues.apache.org/jira/browse/SOLR-12801") // this test fails easily under beasting
-  public void testThreeLevelHierarchy() throws Exception {
-    int parentsNum = 3; //fixed for simplicity of test
-    int childrenNum = 0;
-    int grandChildrenNum = 0;
-    
-    final String parentType = "parent";
-    final String childType = "child";
-    final String grandChildType = "grand_child";
-
-    List<String> parentIds = createDataIterator("select * from PARENT", parentType, parentType, parentsNum);
-    Collections.shuffle(parentIds, random());
-    final String parentId1 = parentIds.get(0);
-    String parentId2 = parentIds.get(1);
-    
-    //parent 1 children
-    int firstParentChildrenNum = 3; //fixed for simplicity of test
-    String select = "select * from CHILD where parent_id='" + parentId1 + "'";
-    List<String> childrenIds = createDataIterator(select, childType, "child of first parent", firstParentChildrenNum);
-    List<String> firstParentChildrenIds = new ArrayList<String>(childrenIds);
-    childrenNum += childrenIds.size();
-    
-    // grand children of first parent first child
-    final String childId = childrenIds.get(0);
-    String description = "grandchild of first parent, child of " + childId + " child";
-    select = "select * from GRANDCHILD where parent_id='" + childId + "'";
-    List<String> grandChildrenIds = createDataIterator(select, grandChildType, description, atLeast(2));
-    grandChildrenNum += grandChildrenIds.size();
-    
-    // grand children of first parent second child
-    {
-      String childId2 = childrenIds.get(1);
-      description = "grandchild of first parent, child of " + childId2 + " child";
-      select = "select * from GRANDCHILD where parent_id='" + childId2 + "'";
-    }
-    final List<String> grandChildrenIds2 = createDataIterator(select, grandChildType, description, atLeast(2));
-    grandChildrenNum += grandChildrenIds2.size();
-    
-    List<String> allGrandChildrenIds = new ArrayList<>(grandChildrenIds);
-    allGrandChildrenIds.addAll(grandChildrenIds2);
-        
-    // the third child of the first parent has no grandchildren
-    
-    // parent 2 children (no grandchildren)
-    select = "select * from CHILD where parent_id='" + parentId2 + "'";
-    childrenIds = createDataIterator(select, childType, "child of second parent", atLeast(2));
-    childrenNum += childrenIds.size();
-    
-    // parent 3 has no children or grandchildren
-    
-    int totalDocsNum = parentsNum + childrenNum + grandChildrenNum;
-    
-    String resp = runFullImport(THREE_LEVEL_HIERARCHY_CONFIG);
-    String xpath = "//arr[@name='documents']/lst[arr[@name='id']/str='"+parentId1+"']/"+
-      "arr[@name='_childDocuments_']/lst[arr[@name='id']/str='"+childId+"']/"+
-      "arr[@name='_childDocuments_']/lst[arr[@name='id']/str='"+grandChildrenIds.get(0)+"']";
-    String results = TestHarness.validateXPath(resp, xpath);
-    assertTrue("Debug documents do not contain child documents\n" + resp + "\n" + xpath + "\n" + results,
-        results == null);
-    
-    assertTrue("Update request processor processAdd was not called", TestUpdateRequestProcessor.processAddCalled);
-    assertTrue("Update request processor processCommit was not callled", TestUpdateRequestProcessor.processCommitCalled);
-    assertTrue("Update request processor finish was not called", TestUpdateRequestProcessor.finishCalled);
-    
-    // very simple asserts to check that we at least have correct num of docs indexed
-    assertQ(req("*:*"), "//*[@numFound='" + totalDocsNum + "']");
-    assertQ(req("type_s:parent"), "//*[@numFound='" + parentsNum + "']");
-    assertQ(req("type_s:child"), "//*[@numFound='" + childrenNum + "']");
-    assertQ(req("type_s:grand_child"), "//*[@numFound='" + grandChildrenNum + "']");
-
-    // let's check BlockJoin
-    // get first parent by any grand children
-    String randomGrandChildId = allGrandChildrenIds.get(random().nextInt(allGrandChildrenIds.size()));
-    Query query = createToParentQuery(parentType, FIELD_ID, randomGrandChildId);
-    assertSearch(query, FIELD_ID, parentId1);
-
-    // get first parent by any children 
-    String randomChildId = firstParentChildrenIds.get(random().nextInt(firstParentChildrenIds.size()));
-    query = createToParentQuery(parentType, FIELD_ID, randomChildId);
-    assertSearch(query, FIELD_ID, parentId1);
-    
-    // get parent by children by grand children
-    randomGrandChildId = grandChildrenIds.get(random().nextInt(grandChildrenIds.size()));
-    ToParentBlockJoinQuery childBlockJoinQuery = createToParentQuery(childType, FIELD_ID, randomGrandChildId);
-    ToParentBlockJoinQuery blockJoinQuery = new ToParentBlockJoinQuery(childBlockJoinQuery, createParentFilter(parentType), ScoreMode.Avg);
-    assertSearch(blockJoinQuery, FIELD_ID, parentId1);
-  }
-
-  @Test
-  public void testRandomDepthHierarchy() throws Exception {
-    final String parentType = "parent";
-    
-    // Be aware that hierarchies grow exponentially, thus
-    // numbers bigger than 6 may lead to significant memory usage
-    // and cause an OOME
-    int parentsNum = 2 + random().nextInt(3);
-    int depth = 2 + random().nextInt(3);
-    
-    ContextHolder holder = new ContextHolder();
-    
-    String config = createRandomizedConfig(depth, parentType, parentsNum, holder);
-    runFullImport(config);
-    
-    assertTrue("Update request processor processAdd was not called", TestUpdateRequestProcessor.processAddCalled);
-    assertTrue("Update request processor processCommit was not callled", TestUpdateRequestProcessor.processCommitCalled);
-    assertTrue("Update request processor finish was not called", TestUpdateRequestProcessor.finishCalled);
-    
-    assertQ(req("type_s:" + parentType), "//*[@numFound='" + parentsNum + "']");
-    assertQ(req("-type_s:"+ parentType), "//*[@numFound='" + (holder.counter - parentsNum) + "']");
-    
-    // let's check BlockJoin
-    Hierarchy randomHierarchy = holder.hierarchies.get(random().nextInt(holder.hierarchies.size()));
-       
-    Query deepestQuery = createBlockJoinQuery(randomHierarchy);
-    assertSearch(deepestQuery, FIELD_ID, (String) randomHierarchy.elementData.get(FIELD_ID));
-  }
-  
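-  /**
-   * Recursively builds a block-join query along a randomly chosen path of the
-   * hierarchy: the selected leaf contributes a TermQuery on its id, and each
-   * ancestor level wraps the child query in a ToParentBlockJoinQuery whose
-   * parent filter matches that level's type_s value.
-   */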
-  private Query createBlockJoinQuery(Hierarchy hierarchy) {
-    List<Hierarchy> elements = hierarchy.elements;
-    if (elements.isEmpty()) {
-      BooleanQuery.Builder childQuery = new BooleanQuery.Builder();
-      childQuery.add(new TermQuery(new Term(FIELD_ID, (String) hierarchy.elementData.get(FIELD_ID))), Occur.MUST);
-      return childQuery.build();
-    }
-    
-    Query childQuery = createBlockJoinQuery(elements.get(random().nextInt(elements.size())));
-    return createToParentQuery(hierarchy.elementType, childQuery);
-  }
-
-  private ToParentBlockJoinQuery createToParentQuery(String parentType, String childField, String childFieldValue) {
-    BooleanQuery.Builder childQuery = new BooleanQuery.Builder();
-    childQuery.add(new TermQuery(new Term(childField, childFieldValue)), Occur.MUST);
-    ToParentBlockJoinQuery result = createToParentQuery(parentType, childQuery.build());
-    
-    return result;
-  }
-  
-  private ToParentBlockJoinQuery createToParentQuery(String parentType, Query childQuery) {
-    ToParentBlockJoinQuery blockJoinQuery = new ToParentBlockJoinQuery(childQuery, createParentFilter(parentType), ScoreMode.Avg);
-    
-    return blockJoinQuery;
-  }
-  
-  private void assertSearch(Query query, String field, String... values) throws IOException {
-    /* The search limit is doubled to catch the case where, for some reason, more docs match than expected */
-    SolrIndexSearcher searcher = req.getSearcher();
-    TopDocs result = searcher.search(query, values.length * 2);
-    assertEquals(values.length, result.totalHits.value);
-    List<String> actualValues = new ArrayList<String>();
-    for (int index = 0; index < values.length; ++index) {
-      Document doc = searcher.doc(result.scoreDocs[index].doc);
-      actualValues.add(doc.get(field));
-    }
-    
-    for (String expectedValue: values) {
-      boolean removed = actualValues.remove(expectedValue);
-      if (!removed) {
-        fail("Search result does not contain expected values");
-      }
-    }
-  }
-  
-  @SuppressWarnings("unchecked")
-  private List<String> createDataIterator(String query, String type, String description, int count) {
-    List<Map<String, Object>> data = new ArrayList<Map<String, Object>>();
-    List<String> ids = new ArrayList<String>(count);
-    for (int index = 0; index < count; ++index) {
-      String docId = nextId();
-      ids.add(docId);
-      Map<String, Object> doc = createMap(FIELD_ID, docId, "desc", docId + " " + description, "type_s", type);
-      data.add(doc);
-    }
-    Collections.shuffle(data, random());
-    MockDataSource.setIterator(query, data.iterator());
-    
-    return ids;
-  }
-  
-  /**
-   * Creates a randomized configuration of the specified depth. A simple configuration example:
-   * 
-   * <pre>
-   * 
-   * &lt;dataConfig>
-   *   &lt;dataSource type="MockDataSource" />
-   *   &lt;document>
-   *     &lt;entity name="parent" query="SELECT * FROM parent">
-   *       &lt;field column="id" />
-   *       &lt;field column="desc" />
-   *       &lt;field column="type_s" />
-   *       &lt;entity child="true" name="parentChild0" query="select * from parentChild0 where parentChild0_parent_id='${parent.id}'">
-   *         &lt;field column="id" />
-   *         &lt;field column="desc" />
-   *         &lt;field column="type_s" />
-   *         &lt;entity child="true" name="parentChild0Child0" query="select * from parentChild0Child0 where parentChild0Child0_parent_id='${parentChild0.id}'">
-   *           &lt;field column="id" />
-   *           &lt;field column="desc" />
-   *           &lt;field column="type_s" />
-   *         &lt;/entity>
-   *         &lt;entity child="true" name="parentChild0Child1" query="select * from parentChild0Child1 where parentChild0Child1_parent_id='${parentChild0.id}'">
-   *           &lt;field column="id" />
-   *           &lt;field column="desc" />
-   *           &lt;field column="type_s" />
-   *         &lt;/entity>
-   *       &lt;/entity>
-   *       &lt;entity child="true" name="parentChild1" query="select * from parentChild1 where parentChild1_parent_id='${parent.id}'">
-   *         &lt;field column="id" />
-   *         &lt;field column="desc" />
-   *         &lt;field column="type_s" />
-   *         &lt;entity child="true" name="parentChild1Child0" query="select * from parentChild1Child0 where parentChild1Child0_parent_id='${parentChild1.id}'">
-   *           &lt;field column="id" />
-   *           &lt;field column="desc" />
-   *           &lt;field column="type_s" />
-   *         &lt;/entity>
-   *         &lt;entity child="true" name="parentChild1Child1" query="select * from parentChild1Child1 where parentChild1Child1_parent_id='${parentChild1.id}'">
-   *           &lt;field column="id" />
-   *           &lt;field column="desc" />
-   *           &lt;field column="type_s" />
-   *         &lt;/entity>
-   *       &lt;/entity>
-   *     &lt;/entity>
-   *   &lt;/document>
-   * &lt;/dataConfig>
-   * 
-   * </pre>
-   * 
-   * Internally configures MockDataSource.
-   **/
-  private String createRandomizedConfig(int depth, String parentType, int parentsNum, ContextHolder holder) {
-    List<Hierarchy> parentData = createMockedIterator(parentType, "SELECT * FROM " + parentType, parentsNum, holder);
-    
-    holder.hierarchies = parentData;
-    
-    String children = createChildren(parentType, 0, depth, parentData, holder);
-    
-    String rootFields = createFieldsList(FIELD_ID, "desc", "type_s");
-    String rootEntity = StrUtils.formatString(ROOT_ENTITY_TEMPLATE, parentType, "SELECT * FROM " + parentType, rootFields, children);
-
-    String config = StrUtils.formatString(DATA_CONFIG_TEMPLATE, rootEntity);
-    return config;
-  }
-  
-  @SuppressWarnings("unchecked")
-  private List<Hierarchy> createMockedIterator(String type, String query, int amount, ContextHolder holder) {
-    List<Hierarchy> hierarchies = new ArrayList<Hierarchy>();
-    List<Map<String, Object>> data = new ArrayList<Map<String, Object>>();
-    for (int index = 0; index < amount; ++index) {
-      holder.counter++;      
-      String idStr = String.valueOf(holder.counter);
-      Map<String, Object> element = createMap(FIELD_ID, idStr, "desc", type + "_" + holder.counter, "type_s", type);
-      data.add(element);
-      
-      Hierarchy hierarchy = new Hierarchy();
-      hierarchy.elementType = type;
-      hierarchy.elementData = element;
-      hierarchies.add(hierarchy);
-    }
-    
-    MockDataSource.setIterator(query, data.iterator());
-    
-    return hierarchies;
-  }
-  
-  private List<Hierarchy> createMockedIterator(String type, List<Hierarchy> parentData, ContextHolder holder) {
-    List<Hierarchy> result = new ArrayList<Hierarchy>();
-    for (Hierarchy parentHierarchy: parentData) {
-      Map<String, Object> data = parentHierarchy.elementData;
-      String id = (String) data.get(FIELD_ID);
-      String select = String.format(Locale.ROOT, "select * from %s where %s='%s'", type, type + "_parent_id", id);
-      
-      // Number of actual children documents
-      int childrenNum = 1 + random().nextInt(3);
-      List<Hierarchy> childHierarchies = createMockedIterator(type, select, childrenNum, holder);
-      parentHierarchy.elements.addAll(childHierarchies);
-      result.addAll(childHierarchies);
-    }
-    return result;
-  }
-
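-  /**
-   * Recursively emits one &lt;entity child="true" ...&gt; block per child entity
-   * type (two to four types per level, down to maxLevel) and registers a mocked
-   * iterator for each type, so the generated config can actually be imported.
-   */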
-  private String createChildren(String parentName, int currentLevel, int maxLevel,
-      List<Hierarchy> parentData, ContextHolder holder) {
-    
-    if (currentLevel == maxLevel) { //recursion base
-      return "";
-    }
-    
-    // number of different child entity types per parent, i.e. parentChild0, parentChild1
-    // @see #createMockedIterator for the actual number of children of each type
-    int childrenNumber = 2 + random().nextInt(3);
-    StringBuilder builder = new StringBuilder();
-    for (int childIndex = 0; childIndex < childrenNumber; ++childIndex) {
-      String childName = parentName + "Child" + childIndex;
-      String fields = createFieldsList(FIELD_ID, "desc", "type_s");
-      String select = String.format(Locale.ROOT, "select * from %s where %s='%s'", childName, childName + "_parent_id", "${" + parentName + ".id}");
-      
-      //for each child entity create several iterators
-      List<Hierarchy> childData = createMockedIterator(childName, parentData, holder);
-      
-      String subChildren = createChildren(childName, currentLevel + 1, maxLevel, childData, holder);
-      String child = StrUtils.formatString(CHILD_ENTITY_TEMPLATE, childName, select, fields, subChildren);
-      builder.append(child);
-      builder.append('\n');
-    }
-    
-    return builder.toString();
-  }
-  
-  private String createFieldsList(String... fields) {
-    StringBuilder builder = new StringBuilder();
-    for (String field: fields) {
-      String text = String.format(Locale.ROOT, "<field column='%s' />", field);
-      builder.append(text);
-      builder.append('\n');
-    }
-    return builder.toString();
-  }
-
-  private static final String THREE_LEVEL_HIERARCHY_CONFIG = "<dataConfig>\n" +
-      "  <dataSource type='MockDataSource' />\n" +
-      "  <document>\n" +
-      "    <entity name='PARENT' query='select * from PARENT'>\n" +
-      "      <field column='id' />\n" +
-      "      <field column='desc' />\n" +
-      "      <field column='type_s' />\n" +
-      "      <entity child='true' name='CHILD' query=\"select * from CHILD where parent_id='${PARENT.id}'\">\n" +
-      "        <field column='id' />\n" +
-      "        <field column='desc' />\n" +
-      "        <field column='type_s' />\n" +
-      "          <entity child='true' name='GRANDCHILD' query=\"select * from GRANDCHILD where parent_id='${CHILD.id}'\">\n" +
-      "            <field column='id' />\n" +
-      "            <field column='desc' />\n" +
-      "            <field column='type_s' />\n" +
-      "          </entity>\n" +
-      "      </entity>\n" +
-      "    </entity>\n" +
-      "  </document>\n" +
-      "</dataConfig>";
-  
-  /** {0} is rootEntity block **/
-  private static final String DATA_CONFIG_TEMPLATE = "<dataConfig><dataSource type=\"MockDataSource\" />\n<document>\n {0}</document></dataConfig>";
-  
-  /** 
-   * {0} - entityName, 
-   * {1} - select query
-   * {2} - fieldsList
-   * {3} - childEntitiesList 
-   **/
-  private static final String ROOT_ENTITY_TEMPLATE = "<entity name=\"{0}\" query=\"{1}\">\n{2} {3}\n</entity>\n";
-  
-  /** 
-   * {0} - entityName, 
-   * {1} - select query
-   * {2} - fieldsList
-   * {3} - childEntitiesList 
-   **/
-  private static final String CHILD_ENTITY_TEMPLATE = "<entity " + ConfigNameConstants.CHILD + "=\"true\" name=\"{0}\" query=\"{1}\">\n {2} {3} </entity>\n";
-  
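-  /**
-   * Builds the parent filter required by ToParentBlockJoinQuery: a bitset producer
-   * matching every document whose type_s equals the given parent type.
-   */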
-  private BitSetProducer createParentFilter(String type) {
-    BooleanQuery.Builder parentQuery = new BooleanQuery.Builder();
-    parentQuery.add(new TermQuery(new Term("type_s", type)), Occur.MUST);
-    return new QueryBitSetProducer(parentQuery.build());
-  }
-  
-  private String nextId() {
-    ++id;
-    return String.valueOf(id);
-  }
-  
-}
diff --git a/solr/contrib/dataimporthandler/src/test/org/apache/solr/handler/dataimport/TestJdbcDataSource.java b/solr/contrib/dataimporthandler/src/test/org/apache/solr/handler/dataimport/TestJdbcDataSource.java
deleted file mode 100644
index 158bbc6..0000000
--- a/solr/contrib/dataimporthandler/src/test/org/apache/solr/handler/dataimport/TestJdbcDataSource.java
+++ /dev/null
@@ -1,663 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.solr.handler.dataimport;
-
-import javax.sql.DataSource;
-import java.io.File;
-import java.io.IOException;
-import java.nio.charset.StandardCharsets;
-import java.sql.Connection;
-import java.sql.Driver;
-import java.sql.DriverManager;
-import java.sql.ResultSet;
-import java.sql.ResultSetMetaData;
-import java.sql.SQLException;
-import java.sql.Statement;
-import java.util.ArrayList;
-import java.util.HashMap;
-import java.util.Iterator;
-import java.util.List;
-import java.util.Map;
-import java.util.Properties;
-
-import org.apache.solr.common.util.SuppressForbidden;
-import org.apache.solr.handler.dataimport.JdbcDataSource.ResultSetIterator;
-import org.junit.After;
-import org.junit.Before;
-import org.junit.BeforeClass;
-import org.junit.Ignore;
-import org.junit.Test;
-
-import static org.mockito.Mockito.doThrow;
-import static org.mockito.Mockito.mock;
-import static org.mockito.Mockito.reset;
-import static org.mockito.Mockito.times;
-import static org.mockito.Mockito.verify;
-import static org.mockito.Mockito.when;
-
-/**
- * <p>
- * Test for JdbcDataSource
- * </p>
- * <p>
- * Note: Tests that require a real database are ignored for lack of DB support in the test environment
- * </p>
- *
- *
- * @since solr 1.3
- */
-public class TestJdbcDataSource extends AbstractDataImportHandlerTestCase {
-  private Driver driver;
-  private DataSource dataSource;
-  private Connection connection;
-  private JdbcDataSource jdbcDataSource = new JdbcDataSource();
-  List<Map<String, String>> fields = new ArrayList<>();
-
-  Context context = AbstractDataImportHandlerTestCase.getContext(null, null,
-          jdbcDataSource, Context.FULL_DUMP, fields, null);
-
-  Properties props = new Properties();
-
-  String sysProp = System.getProperty("java.naming.factory.initial");
-
-  @BeforeClass
-  public static void beforeClass() {
-    assumeWorkingMockito();
-  }
-  
-  @Override
-  @Before
-  public void setUp() throws Exception {
-    super.setUp();
-    System.setProperty("java.naming.factory.initial",
-            MockInitialContextFactory.class.getName());
-    
-    driver = mock(Driver.class);
-    dataSource = mock(DataSource.class);
-    connection = mock(Connection.class);
-    props.clear();
-  }
-
-  @Override
-  @After
-  public void tearDown() throws Exception {
-    if (sysProp == null) {
-      System.getProperties().remove("java.naming.factory.initial");
-    } else {
-      System.setProperty("java.naming.factory.initial", sysProp);
-    }
-    super.tearDown();
-    if (null != driver) {
-      reset(driver, dataSource, connection);
-    }
-  }
-
-  @Test
-  public void testRetrieveFromJndi() throws Exception {
-    MockInitialContextFactory.bind("java:comp/env/jdbc/JndiDB", dataSource);
-
-    props.put(JdbcDataSource.JNDI_NAME, "java:comp/env/jdbc/JndiDB");
-
-    when(dataSource.getConnection()).thenReturn(connection);
-
-    Connection conn = jdbcDataSource.createConnectionFactory(context, props)
-            .call();
-
-    verify(connection).setAutoCommit(false);
-    verify(dataSource).getConnection();
-
-    assertSame("connection", conn, connection);
-  }
-
-  @Test
-  public void testRetrieveFromJndiWithCredentials() throws Exception {
-    MockInitialContextFactory.bind("java:comp/env/jdbc/JndiDB", dataSource);
-
-    props.put(JdbcDataSource.JNDI_NAME, "java:comp/env/jdbc/JndiDB");
-    props.put("user", "Fred");
-    props.put("password", "4r3d");
-    props.put("holdability", "HOLD_CURSORS_OVER_COMMIT");
-
-    when(dataSource.getConnection("Fred", "4r3d")).thenReturn(
-            connection);
-
-    Connection conn = jdbcDataSource.createConnectionFactory(context, props)
-            .call();
-
-    verify(connection).setAutoCommit(false);
-    verify(connection).setHoldability(1);
-    verify(dataSource).getConnection("Fred", "4r3d");
-
-    assertSame("connection", conn, connection);
-  }
-
-  @Test
-  public void testRetrieveFromJndiWithCredentialsEncryptedAndResolved() throws Exception {
-    MockInitialContextFactory.bind("java:comp/env/jdbc/JndiDB", dataSource);
-
-    String user = "Fred";
-    String plainPassword = "MyPassword";
-    String encryptedPassword = "U2FsdGVkX18QMjY0yfCqlfBMvAB4d3XkwY96L7gfO2o=";
-    String propsNamespace = "exampleNamespace";
-
-    props.put(JdbcDataSource.JNDI_NAME, "java:comp/env/jdbc/JndiDB");
-
-    props.put("user", "${" +propsNamespace +".user}");
-    props.put("encryptKeyFile", "${" +propsNamespace +".encryptKeyFile}");
-    props.put("password", "${" +propsNamespace +".password}");
-
-    when(dataSource.getConnection(user, plainPassword)).thenReturn(
-             connection);
-
-    Map<String,Object> values = new HashMap<>();
-    values.put("user", user);
-    values.put("encryptKeyFile", createEncryptionKeyFile());
-    values.put("password", encryptedPassword);
-    context.getVariableResolver().addNamespace(propsNamespace, values);
-
-    jdbcDataSource.init(context, props);
-    Connection conn = jdbcDataSource.getConnection();
-
-    verify(connection).setAutoCommit(false);
-    verify(dataSource).getConnection(user, plainPassword);
-
-    assertSame("connection", conn, connection);
-  }
-
-  @Test
-  public void testRetrieveFromJndiWithCredentialsWithEncryptedAndResolvedPwd() throws Exception {
-    MockInitialContextFactory.bind("java:comp/env/jdbc/JndiDB", dataSource);
-
-    Properties properties = new Properties();
-    properties.put(JdbcDataSource.JNDI_NAME, "java:comp/env/jdbc/JndiDB");
-    properties.put("user", "Fred");
-    properties.put("encryptKeyFile", "${foo.bar}");
-    properties.put("password", "U2FsdGVkX18QMjY0yfCqlfBMvAB4d3XkwY96L7gfO2o=");
-    when(dataSource.getConnection("Fred", "MyPassword")).thenReturn(
-        connection);
-
-    Map<String,Object> values = new HashMap<>();
-    values.put("bar", createEncryptionKeyFile());
-    context.getVariableResolver().addNamespace("foo", values);
-
-    jdbcDataSource.init(context, properties);
-    jdbcDataSource.getConnection();
-
-    verify(connection).setAutoCommit(false);
-    verify(dataSource).getConnection("Fred", "MyPassword");
-  }
-
-  @Test
-  public void testRetrieveFromJndiFailureNotHidden() throws Exception {
-    MockInitialContextFactory.bind("java:comp/env/jdbc/JndiDB", dataSource);
-
-    props.put(JdbcDataSource.JNDI_NAME, "java:comp/env/jdbc/JndiDB");
-
-    SQLException sqlException = new SQLException("fake");
-    when(dataSource.getConnection()).thenThrow(sqlException);
-
-    SQLException ex = expectThrows(SQLException.class,
-        () -> jdbcDataSource.createConnectionFactory(context, props).call());
-    assertSame(sqlException, ex);
-
-    verify(dataSource).getConnection();
-  }
-
-  @Test
-  public void testClosesConnectionWhenExceptionThrownOnSetAutocommit() throws Exception {
-    MockInitialContextFactory.bind("java:comp/env/jdbc/JndiDB", dataSource);
-
-    props.put(JdbcDataSource.JNDI_NAME, "java:comp/env/jdbc/JndiDB");
-
-    SQLException sqlException = new SQLException("fake");
-    when(dataSource.getConnection()).thenReturn(connection);
-    doThrow(sqlException).when(connection).setAutoCommit(false);
-
-    DataImportHandlerException ex = expectThrows(DataImportHandlerException.class,
-        () -> jdbcDataSource.createConnectionFactory(context, props).call());
-    assertSame(sqlException, ex.getCause());
-
-    verify(dataSource).getConnection();
-    verify(connection).setAutoCommit(false);
-    verify(connection).close();
-  }
-
-  @Test
-  public void testClosesStatementWhenExceptionThrownOnExecuteQuery() throws Exception {
-    MockInitialContextFactory.bind("java:comp/env/jdbc/JndiDB", dataSource);
-
-    props.put(JdbcDataSource.JNDI_NAME, "java:comp/env/jdbc/JndiDB");
-    when(dataSource.getConnection()).thenReturn(connection);
-
-    jdbcDataSource.init(context, props);
-
-    SQLException sqlException = new SQLException("fake");
-    Statement statement = mock(Statement.class);
-    when(connection.createStatement(ResultSet.TYPE_FORWARD_ONLY, ResultSet.CONCUR_READ_ONLY))
-        .thenReturn(statement);
-    when(statement.execute("query")).thenThrow(sqlException);
-
-    DataImportHandlerException ex = expectThrows(DataImportHandlerException.class,
-        () -> jdbcDataSource.getData("query"));
-    assertSame(sqlException, ex.getCause());
-
-    verify(dataSource).getConnection();
-    verify(connection).setAutoCommit(false);
-    verify(connection).createStatement(ResultSet.TYPE_FORWARD_ONLY, ResultSet.CONCUR_READ_ONLY);
-    verify(statement).setFetchSize(500);
-    verify(statement).setMaxRows(0);
-    verify(statement).execute("query");
-    verify(statement).close();
-  }
-
-  @Test
-  public void testClosesStatementWhenResultSetNull() throws Exception {
-    MockInitialContextFactory.bind("java:comp/env/jdbc/JndiDB", dataSource);
-
-    props.put(JdbcDataSource.JNDI_NAME, "java:comp/env/jdbc/JndiDB");
-    when(dataSource.getConnection()).thenReturn(connection);
-
-    jdbcDataSource.init(context, props);
-
-    Statement statement = mock(Statement.class);
-    when(connection.createStatement(ResultSet.TYPE_FORWARD_ONLY, ResultSet.CONCUR_READ_ONLY))
-        .thenReturn(statement);
-    when(statement.execute("query")).thenReturn(false);
-    when(statement.getUpdateCount()).thenReturn(-1);
-
-    jdbcDataSource.getData("query");
-
-    verify(dataSource).getConnection();
-    verify(connection).setAutoCommit(false);
-    verify(connection).createStatement(ResultSet.TYPE_FORWARD_ONLY, ResultSet.CONCUR_READ_ONLY);
-    verify(statement).setFetchSize(500);
-    verify(statement).setMaxRows(0);
-    verify(statement).execute("query");
-    verify(statement).getUpdateCount();
-    verify(statement).close();
-  }
-
-  @Test
-  public void testClosesStatementWhenHasNextCalledAndResultSetNull() throws Exception {
-
-    MockInitialContextFactory.bind("java:comp/env/jdbc/JndiDB", dataSource);
-
-    props.put(JdbcDataSource.JNDI_NAME, "java:comp/env/jdbc/JndiDB");
-    when(dataSource.getConnection()).thenReturn(connection);
-
-    jdbcDataSource.init(context, props);
-
-    Statement statement = mock(Statement.class);
-    when(connection.createStatement(ResultSet.TYPE_FORWARD_ONLY, ResultSet.CONCUR_READ_ONLY))
-        .thenReturn(statement);
-    when(statement.execute("query")).thenReturn(true);
-    ResultSet resultSet = mock(ResultSet.class);
-    when(statement.getResultSet()).thenReturn(resultSet);
-    ResultSetMetaData metaData = mock(ResultSetMetaData.class);
-    when(resultSet.getMetaData()).thenReturn(metaData);
-    when(metaData.getColumnCount()).thenReturn(0);
-
-    Iterator<Map<String,Object>> data = jdbcDataSource.getData("query");
-
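-    // Reflection hack: "this$1" is the synthetic field of the anonymous iterator
-    // class pointing at its enclosing ResultSetIterator, which lets the test null
-    // out the ResultSet and exercise hasNext()'s null handling.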
-    ResultSetIterator resultSetIterator = (ResultSetIterator) data.getClass().getDeclaredField("this$1").get(data);
-    resultSetIterator.setResultSet(null);
-
-    data.hasNext();
-
-    verify(dataSource).getConnection();
-    verify(connection).setAutoCommit(false);
-    verify(connection).createStatement(ResultSet.TYPE_FORWARD_ONLY, ResultSet.CONCUR_READ_ONLY);
-    verify(statement).setFetchSize(500);
-    verify(statement).setMaxRows(0);
-    verify(statement).execute("query");
-    verify(statement).getResultSet();
-    verify(statement).close();
-    verify(resultSet).getMetaData();
-    verify(metaData).getColumnCount();
-  }
-
-  @Test
-  public void testClosesResultSetAndStatementWhenDataSourceIsClosed() throws Exception {
-
-    MockInitialContextFactory.bind("java:comp/env/jdbc/JndiDB", dataSource);
-
-    props.put(JdbcDataSource.JNDI_NAME, "java:comp/env/jdbc/JndiDB");
-    when(dataSource.getConnection()).thenReturn(connection);
-
-    jdbcDataSource.init(context, props);
-
-    Statement statement = mock(Statement.class);
-    when(connection.createStatement(ResultSet.TYPE_FORWARD_ONLY, ResultSet.CONCUR_READ_ONLY))
-        .thenReturn(statement);
-    when(statement.execute("query")).thenReturn(true);
-    ResultSet resultSet = mock(ResultSet.class);
-    when(statement.getResultSet()).thenReturn(resultSet);
-    ResultSetMetaData metaData = mock(ResultSetMetaData.class);
-    when(resultSet.getMetaData()).thenReturn(metaData);
-    when(metaData.getColumnCount()).thenReturn(0);
-
-    jdbcDataSource.getData("query");
-    jdbcDataSource.close();
-
-    verify(dataSource).getConnection();
-    verify(connection).setAutoCommit(false);
-    verify(connection).createStatement(ResultSet.TYPE_FORWARD_ONLY, ResultSet.CONCUR_READ_ONLY);
-    verify(statement).setFetchSize(500);
-    verify(statement).setMaxRows(0);
-    verify(statement).execute("query");
-    verify(statement).getResultSet();
-    verify(resultSet).getMetaData();
-    verify(metaData).getColumnCount();
-    verify(resultSet).close();
-    verify(statement).close();
-    verify(connection).commit();
-    verify(connection).close();
-  }
-
-  @Test
-  public void testClosesCurrentResultSetIteratorWhenNewOneIsCreated() throws Exception {
-
-    MockInitialContextFactory.bind("java:comp/env/jdbc/JndiDB", dataSource);
-
-    props.put(JdbcDataSource.JNDI_NAME, "java:comp/env/jdbc/JndiDB");
-    when(dataSource.getConnection()).thenReturn(connection);
-
-    jdbcDataSource.init(context, props);
-
-    Statement statement = mock(Statement.class);
-    when(connection.createStatement(ResultSet.TYPE_FORWARD_ONLY, ResultSet.CONCUR_READ_ONLY))
-        .thenReturn(statement);
-    when(statement.execute("query")).thenReturn(true);
-    ResultSet resultSet = mock(ResultSet.class);
-    when(statement.getResultSet()).thenReturn(resultSet);
-    ResultSetMetaData metaData = mock(ResultSetMetaData.class);
-    when(resultSet.getMetaData()).thenReturn(metaData);
-    when(metaData.getColumnCount()).thenReturn(0);
-    when(statement.execute("other query")).thenReturn(false);
-    when(statement.getUpdateCount()).thenReturn(-1);
-
-    jdbcDataSource.getData("query");
-    jdbcDataSource.getData("other query");
-
-    verify(dataSource).getConnection();
-    verify(connection).setAutoCommit(false);
-    verify(connection, times(2)).createStatement(ResultSet.TYPE_FORWARD_ONLY, ResultSet.CONCUR_READ_ONLY);
-    verify(statement, times(2)).setFetchSize(500);
-    verify(statement, times(2)).setMaxRows(0);
-    verify(statement).execute("query");
-    verify(statement).getResultSet();
-    verify(resultSet).getMetaData();
-    verify(metaData).getColumnCount();
-    verify(resultSet).close();
-    verify(statement, times(2)).close();
-    verify(statement).execute("other query");
-  }
-  
-  @Test
-  public void testMultipleResultsSets_UpdateCountUpdateCountResultSet() throws Exception {
-    MockInitialContextFactory.bind("java:comp/env/jdbc/JndiDB", dataSource);
-    props.put(JdbcDataSource.JNDI_NAME, "java:comp/env/jdbc/JndiDB");
-    when(dataSource.getConnection()).thenReturn(connection);
-    jdbcDataSource.init(context, props);
-
-    Statement statement = mock(Statement.class);
-    when(connection.createStatement(ResultSet.TYPE_FORWARD_ONLY, ResultSet.CONCUR_READ_ONLY))
-        .thenReturn(statement);
-    when(statement.execute("query")).thenReturn(false);
-    when(statement.getUpdateCount()).thenReturn(1);
-    when(statement.getMoreResults()).thenReturn(false).thenReturn(true);
-    ResultSet resultSet = mock(ResultSet.class);
-    when(statement.getResultSet()).thenReturn(resultSet);
-    ResultSetMetaData metaData = mock(ResultSetMetaData.class);
-    when(resultSet.getMetaData()).thenReturn(metaData);
-    when(metaData.getColumnCount()).thenReturn(0);
-
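-    // Scenario: execute() reports an update count rather than a ResultSet, so the
-    // iterator must call getMoreResults()/getUpdateCount() to skip past two update
-    // counts before landing on the real ResultSet.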
-    final ResultSetIterator resultSetIterator = jdbcDataSource.new ResultSetIterator("query");
-    assertSame(resultSet, resultSetIterator.getResultSet());
-
-    verify(connection).setAutoCommit(false);
-    verify(connection).createStatement(ResultSet.TYPE_FORWARD_ONLY, ResultSet.CONCUR_READ_ONLY);
-    verify(statement).setFetchSize(500);
-    verify(statement).setMaxRows(0);
-    verify(statement).execute("query");
-    verify(statement, times(2)).getUpdateCount();
-    verify(statement, times(2)).getMoreResults();
-  }
-
-  @Test
-  public void testMultipleResultsSets_ResultSetResultSet() throws Exception {
-    MockInitialContextFactory.bind("java:comp/env/jdbc/JndiDB", dataSource);
-    props.put(JdbcDataSource.JNDI_NAME, "java:comp/env/jdbc/JndiDB");
-    when(dataSource.getConnection()).thenReturn(connection);
-    jdbcDataSource.init(context, props);
-    connection.setAutoCommit(false);
-
-    Statement statement = mock(Statement.class);
-    when(connection.createStatement(ResultSet.TYPE_FORWARD_ONLY, ResultSet.CONCUR_READ_ONLY))
-        .thenReturn(statement);
-    when(statement.execute("query")).thenReturn(true);
-    ResultSet resultSet1 = mock(ResultSet.class);
-    ResultSet resultSet2 = mock(ResultSet.class);
-    when(statement.getResultSet()).thenReturn(resultSet1).thenReturn(resultSet2).thenReturn(null);
-    when(statement.getMoreResults()).thenReturn(true).thenReturn(false);
-    ResultSetMetaData metaData1 = mock(ResultSetMetaData.class);
-    when(resultSet1.getMetaData()).thenReturn(metaData1);
-    when(metaData1.getColumnCount()).thenReturn(0);
-    when(resultSet1.next()).thenReturn(false);
-    ResultSetMetaData metaData2 = mock(ResultSetMetaData.class);
-    when(resultSet2.getMetaData()).thenReturn(metaData2);
-    when(metaData2.getColumnCount()).thenReturn(0);
-    when(resultSet2.next()).thenReturn(true).thenReturn(false);
-    when(statement.getUpdateCount()).thenReturn(-1);
-
-    final ResultSetIterator resultSetIterator = jdbcDataSource.new ResultSetIterator("query");
-    assertSame(resultSet1, resultSetIterator.getResultSet());
-    assertTrue(resultSetIterator.hasnext());
-    assertSame(resultSet2, resultSetIterator.getResultSet());
-    assertFalse(resultSetIterator.hasnext());
-
-    verify(dataSource).getConnection();
-    verify(connection, times(2)).setAutoCommit(false);
-    verify(connection).createStatement(ResultSet.TYPE_FORWARD_ONLY, ResultSet.CONCUR_READ_ONLY);
-    verify(statement).setFetchSize(500);
-    verify(statement).setMaxRows(0);
-    verify(statement).execute("query");
-    verify(statement, times(2)).getResultSet();
-    verify(resultSet1).getMetaData();
-    verify(metaData1).getColumnCount();
-    verify(resultSet1).next();
-    verify(resultSet1).close();
-    verify(resultSet2).getMetaData();
-    verify(metaData2).getColumnCount();
-    verify(resultSet2, times(2)).next();
-    verify(resultSet2).close();
-    verify(statement, times(2)).getMoreResults();
-    verify(statement).getUpdateCount();
-    verify(statement).close();
-  }
-
-  @Test
-  public void testRetrieveFromDriverManager() throws Exception {
-    // we're not (directly) using a Mockito-based mock class here because it won't have a consistent class name
-    // that will work with DriverManager's class bindings
-    MockDriver mockDriver = new MockDriver(connection);
-    DriverManager.registerDriver(mockDriver);
-    try {
-      props.put(JdbcDataSource.DRIVER, MockDriver.class.getName());
-      props.put(JdbcDataSource.URL, MockDriver.MY_JDBC_URL);
-      props.put("holdability", "HOLD_CURSORS_OVER_COMMIT");
-
-      Connection conn = jdbcDataSource.createConnectionFactory(context, props).call();
-
-      verify(connection).setAutoCommit(false);
-      verify(connection).setHoldability(1);
-
-      assertSame("connection", conn, connection);
-    } finally {
-      DriverManager.deregisterDriver(mockDriver);
-    }
-  }
-
-
-  @Test
-  public void testEmptyResultSet() throws Exception {
-      MockInitialContextFactory.bind("java:comp/env/jdbc/JndiDB", dataSource);
-
-      props.put(JdbcDataSource.JNDI_NAME, "java:comp/env/jdbc/JndiDB");
-      when(dataSource.getConnection()).thenReturn(connection);
-
-      jdbcDataSource.init(context, props);
-
-      Statement statement = mock(Statement.class);
-      when(connection.createStatement(ResultSet.TYPE_FORWARD_ONLY, ResultSet.CONCUR_READ_ONLY))
-          .thenReturn(statement);
-      when(statement.execute("query")).thenReturn(true);
-      ResultSet resultSet = mock(ResultSet.class);
-      when(statement.getResultSet()).thenReturn(resultSet);
-      ResultSetMetaData metaData = mock(ResultSetMetaData.class);
-      when(resultSet.getMetaData()).thenReturn(metaData);
-      when(metaData.getColumnCount()).thenReturn(0);
-      when(resultSet.next()).thenReturn(false);
-      when(statement.getMoreResults()).thenReturn(false);
-      when(statement.getUpdateCount()).thenReturn(-1);
-
-      Iterator<Map<String,Object>> resultSetIterator = jdbcDataSource.getData("query");
-      resultSetIterator.hasNext();
-      resultSetIterator.hasNext();
-
-      verify(connection).setAutoCommit(false);
-      verify(connection).createStatement(ResultSet.TYPE_FORWARD_ONLY, ResultSet.CONCUR_READ_ONLY);
-      verify(statement).setFetchSize(500);
-      verify(statement).setMaxRows(0);
-      verify(statement).execute("query");
-      verify(statement).getResultSet();
-      verify(resultSet).getMetaData();
-      verify(metaData).getColumnCount();
-      verify(resultSet).next();
-      verify(resultSet).close();
-      verify(statement).getMoreResults();
-      verify(statement).getUpdateCount();
-      verify(statement).close();
-  }
-
-  @Test
-  @Ignore("Needs a Mock database server to work")
-  public void testBasic() throws Exception {
-    JdbcDataSource dataSource = new JdbcDataSource();
-    Properties p = new Properties();
-    p.put("driver", "com.mysql.jdbc.Driver");
-    p.put("url", "jdbc:mysql://127.0.0.1/autos");
-    p.put("user", "root");
-    p.put("password", "");
-
-    List<Map<String, String>> flds = new ArrayList<>();
-    Map<String, String> f = new HashMap<>();
-    f.put("column", "trim_id");
-    f.put("type", "long");
-    flds.add(f);
-    f = new HashMap<>();
-    f.put("column", "msrp");
-    f.put("type", "float");
-    flds.add(f);
-
-    Context c = getContext(null, null,
-            dataSource, Context.FULL_DUMP, flds, null);
-    dataSource.init(c, p);
-    Iterator<Map<String, Object>> i = dataSource
-            .getData("select make,model,year,msrp,trim_id from atrimlisting where make='Acura'");
-    int count = 0;
-    Object msrp = null;
-    Object trim_id = null;
-    while (i.hasNext()) {
-      Map<String, Object> map = i.next();
-      msrp = map.get("msrp");
-      trim_id = map.get("trim_id");
-      count++;
-    }
-    assertEquals(5, count);
-    assertEquals(Float.class, msrp.getClass());
-    assertEquals(Long.class, trim_id.getClass());
-  }
-  
-  private String createEncryptionKeyFile() throws IOException {
-    File tmpdir = createTempDir().toFile();
-    byte[] content = "secret".getBytes(StandardCharsets.UTF_8);
-    createFile(tmpdir, "enckeyfile.txt", content, false);
-    return new File(tmpdir, "enckeyfile.txt").getAbsolutePath();
-  }
-
-  /**
-   * A stub driver that returns our mocked connection for connection URL {@link #MY_JDBC_URL}.
-   * <p>
- * This class is used instead of a Mockito mock because {@link DriverManager} uses the class
- * name to look up the driver and also requires the driver to behave sanely if other
- * drivers are registered in the runtime. A simple Mockito mock is likely to break
- * depending on the JVM runtime version. This class therefore implements a full {@link Driver},
- * so {@code DriverManager} can do whatever it needs to find the correct driver for a URL.
-   */
-  public static final class MockDriver implements Driver {
-    public static final String MY_JDBC_URL = "jdbc:fakedb";
-    private final Connection conn;
-    
-    public MockDriver() throws SQLException {
-      throw new AssertionError("The driver should never be directly instantiated by DIH's JdbcDataSource");
-    }
-    
-    MockDriver(Connection conn) throws SQLException {
-      this.conn = conn;
-    }
-    
-    @Override
-    public boolean acceptsURL(String url) throws java.sql.SQLException {
-      return MY_JDBC_URL.equals(url);
-    }
-    
-    @Override
-    public Connection connect(String url, Properties info) throws java.sql.SQLException {
-      return acceptsURL(url) ? conn : null;
-    }
-    
-    @Override
-    public int getMajorVersion() {
-      return 1;
-    }
-    
-    @Override
-    public int getMinorVersion() {
-      return 0;
-    }
-    
-    @SuppressForbidden(reason="Required by JDBC")
-    @Override
-    public java.util.logging.Logger getParentLogger() throws java.sql.SQLFeatureNotSupportedException {
-      throw new java.sql.SQLFeatureNotSupportedException();
-    }
-    
-    @Override
-    public java.sql.DriverPropertyInfo[] getPropertyInfo(String url, Properties info) throws SQLException {
-      return new java.sql.DriverPropertyInfo[0];
-    }
-    
-    @Override
-    public boolean jdbcCompliant() {
-      // we are not fully compliant:
-      return false;
-    }
-  }
-}
diff --git a/solr/contrib/dataimporthandler/src/test/org/apache/solr/handler/dataimport/TestJdbcDataSourceConvertType.java b/solr/contrib/dataimporthandler/src/test/org/apache/solr/handler/dataimport/TestJdbcDataSourceConvertType.java
deleted file mode 100644
index ef1cc7b..0000000
--- a/solr/contrib/dataimporthandler/src/test/org/apache/solr/handler/dataimport/TestJdbcDataSourceConvertType.java
+++ /dev/null
@@ -1,76 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.solr.handler.dataimport;
-
-import com.carrotsearch.randomizedtesting.annotations.ThreadLeakAction;
-import com.carrotsearch.randomizedtesting.annotations.ThreadLeakLingering;
-import com.carrotsearch.randomizedtesting.annotations.ThreadLeakScope;
-import com.carrotsearch.randomizedtesting.annotations.ThreadLeakZombies;
-
-import java.util.ArrayList;
-import java.util.HashMap;
-import java.util.Iterator;
-import java.util.List;
-import java.util.Locale;
-import java.util.Map;
-import java.util.Objects;
-import java.util.Properties;
-
-@ThreadLeakAction({ThreadLeakAction.Action.WARN})
-@ThreadLeakLingering(linger = 0)
-@ThreadLeakZombies(ThreadLeakZombies.Consequence.CONTINUE)
-@ThreadLeakScope(ThreadLeakScope.Scope.NONE)
-public class TestJdbcDataSourceConvertType extends AbstractDataImportHandlerTestCase {
-  public void testConvertType() throws Throwable {
-    final Locale loc = Locale.getDefault();
-    assumeFalse("Derby is not happy with locale sr-Latn-*",
-        Objects.equals(new Locale("sr").getLanguage(), loc.getLanguage()) &&
-        Objects.equals("Latn", loc.getScript()));
-
-    // ironically convertType=false causes BigDecimal to String conversion
-    convertTypeTest("false", String.class);
-
-    // convertType=true uses the "long" conversion (see mapping of some_i to "long")
-    convertTypeTest("true", Long.class);
-  }
-
-  private void convertTypeTest(String convertType, @SuppressWarnings({"rawtypes"})Class resultClass) throws Throwable {
-    JdbcDataSource dataSource = new JdbcDataSource();
-    Properties p = new Properties();
-    p.put("driver", "org.apache.derby.jdbc.EmbeddedDriver");
-    p.put("url", "jdbc:derby:memory:tempDB;create=true;territory=en_US");
-    p.put("convertType", convertType);
-
-    List<Map<String, String>> flds = new ArrayList<>();
-    Map<String, String> f = new HashMap<>();
-    f.put("column", "some_i");
-    f.put("type", "long");
-    flds.add(f);
-
-    Context c = getContext(null, null,
-        dataSource, Context.FULL_DUMP, flds, null);
-    dataSource.init(c, p);
-    Iterator<Map<String, Object>> i = dataSource
-        .getData("select 1 as id, CAST(9999 AS DECIMAL) as \"some_i\" from sysibm.sysdummy1");
-    assertTrue(i.hasNext());
-    Map<String, Object> map = i.next();
-    Object val = map.get("some_i");
-    assertEquals(resultClass, val.getClass());
-
-    dataSource.close();
-  }
-}
diff --git a/solr/contrib/dataimporthandler/src/test/org/apache/solr/handler/dataimport/TestLineEntityProcessor.java b/solr/contrib/dataimporthandler/src/test/org/apache/solr/handler/dataimport/TestLineEntityProcessor.java
deleted file mode 100644
index 3563e6a..0000000
--- a/solr/contrib/dataimporthandler/src/test/org/apache/solr/handler/dataimport/TestLineEntityProcessor.java
+++ /dev/null
@@ -1,259 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.solr.handler.dataimport;
-import org.junit.Test;
-
-import java.io.IOException;
-import java.io.Reader;
-import java.io.StringReader;
-import java.util.*;
-
-
-/**
- * <p> Test for LineEntityProcessor </p>
- *
- *
- * @since solr 1.4
- */
-public class TestLineEntityProcessor extends AbstractDataImportHandlerTestCase {
-
-  @Test
-  /************************************************************************/
-  public void testSimple() throws IOException {
-
-    /* we want to create the equivalent of:
-     *  <entity name="list_all_files" 
-     *           processor="LineEntityProcessor"
-     *           fileName="dummy.lis"
-     *           />
-     */
-
-    @SuppressWarnings({"rawtypes"})
-    Map attrs = createMap(
-            LineEntityProcessor.URL, "dummy.lis",
-            LineEntityProcessor.ACCEPT_LINE_REGEX, null,
-            LineEntityProcessor.SKIP_LINE_REGEX, null
-    );
-
-    @SuppressWarnings({"unchecked"})
-    Context c = getContext(
-            null,                          //parentEntity
-            new VariableResolver(),  //resolver
-            getDataSource(filecontents),   //parentDataSource
-            Context.FULL_DUMP,                             //currProcess
-            Collections.EMPTY_LIST,        //entityFields
-            attrs                          //entityAttrs
-    );
-    LineEntityProcessor ep = new LineEntityProcessor();
-    ep.init(c);
-
-    /// call the entity processor to walk the list of lines
-    if (VERBOSE) System.out.print("\n");
-    List<String> fList = new ArrayList<>();
-    while (true) {
-      Map<String, Object> f = ep.nextRow();
-      if (f == null) break;
-      fList.add((String) f.get("rawLine"));
-      if (VERBOSE) System.out.print("     rawLine='" + f.get("rawLine") + "'\n");
-    }
-    assertEquals(24, fList.size());
-  }
-
-  @Test
-  /************************************************************************/
-  public void testOnly_xml_files() throws IOException {
-
-    /* we want to create the equivalent of:
-     *  <entity name="list_all_files" 
-     *           processor="LineEntityProcessor"
-     *           fileName="dummy.lis"
-     *           acceptLineRegex="xml"
-     *           />
-     */
-    @SuppressWarnings({"rawtypes"})
-    Map attrs = createMap(
-            LineEntityProcessor.URL, "dummy.lis",
-            LineEntityProcessor.ACCEPT_LINE_REGEX, "xml",
-            LineEntityProcessor.SKIP_LINE_REGEX, null
-    );
-
-    @SuppressWarnings({"unchecked"})
-    Context c = getContext(
-            null,                          //parentEntity
-            new VariableResolver(),  //resolver
-            getDataSource(filecontents),   //parentDataSource
-            Context.FULL_DUMP,                             //currProcess
-            Collections.emptyList(),        //entityFields
-            attrs                          //entityAttrs
-    );
-    LineEntityProcessor ep = new LineEntityProcessor();
-    ep.init(c);
-
-    /// call the entity processor to walk the list of lines
-    List<String> fList = new ArrayList<>();
-    while (true) {
-      Map<String, Object> f = ep.nextRow();
-      if (f == null) break;
-      fList.add((String) f.get("rawLine"));
-    }
-    assertEquals(5, fList.size());
-  }
-
-  @Test
-  /************************************************************************/
-  public void testOnly_xml_files_no_xsd() throws IOException {
-    /* we want to create the equivalent of:
-     *  <entity name="list_all_files" 
-     *           processor="LineEntityProcessor"
-     *           fileName="dummy.lis"
-     *           acceptLineRegex="\\.xml"
-     *           omitLineRegex="\\.xsd"
-     *           />
-     */
-    @SuppressWarnings({"rawtypes"})
-    Map attrs = createMap(
-            LineEntityProcessor.URL, "dummy.lis",
-            LineEntityProcessor.ACCEPT_LINE_REGEX, "\\.xml",
-            LineEntityProcessor.SKIP_LINE_REGEX, "\\.xsd"
-    );
-
-    @SuppressWarnings({"unchecked"})
-    Context c = getContext(
-            null,                          //parentEntity
-            new VariableResolver(),  //resolver
-            getDataSource(filecontents),   //parentDataSource
-            Context.FULL_DUMP,                             //currProcess
-            Collections.emptyList(),        //entityFields
-            attrs                          //entityAttrs
-    );
-    LineEntityProcessor ep = new LineEntityProcessor();
-    ep.init(c);
-
-    /// call the entity processor to walk the directory
-    List<String> fList = new ArrayList<>();
-    while (true) {
-      Map<String, Object> f = ep.nextRow();
-      if (f == null) break;
-      fList.add((String) f.get("rawLine"));
-    }
-    assertEquals(4, fList.size());
-  }
-
-  @Test
-  /************************************************************************/
-  public void testNo_xsd_files() throws IOException {
-    /* we want to create the equivalent of:
-     *  <entity name="list_all_files" 
-     *           processor="LineEntityProcessor"
-     *           fileName="dummy.lis"
-     *           omitLineRegex="\\.xsd"
-     *           />
-     */
-    @SuppressWarnings({"rawtypes"})
-    Map attrs = createMap(
-            LineEntityProcessor.URL, "dummy.lis",
-            LineEntityProcessor.SKIP_LINE_REGEX, "\\.xsd"
-    );
-
-    @SuppressWarnings({"unchecked"})
-    Context c = getContext(
-            null,                          //parentEntity
-            new VariableResolver(),  //resolver
-            getDataSource(filecontents),   //parentDataSource
-            Context.FULL_DUMP,                             //currProcess
-            Collections.emptyList(),        //entityFields
-            attrs                          //entityAttrs
-    );
-    LineEntityProcessor ep = new LineEntityProcessor();
-    ep.init(c);
-
-    /// call the entity processor to walk the directory
-    List<String> fList = new ArrayList<>();
-    while (true) {
-      Map<String, Object> f = ep.nextRow();
-      if (f == null) break;
-      fList.add((String) f.get("rawLine"));
-    }
-    assertEquals(18, fList.size());
-  }
-
-  /**
-   * ********************************************************************
-   */
-  public static Map<String, String> createField(
-          String col,   // DIH column name
-          String type,  // field type from schema.xml
-          String srcCol,  // DIH transformer attribute 'sourceColName'
-          String re,  // DIH regex attribute 'regex'
-          String rw,  // DIH regex attribute 'replaceWith'
-          String gn    // DIH regex attribute 'groupNames'
-  ) {
-    HashMap<String, String> vals = new HashMap<>();
-    vals.put("column", col);
-    vals.put("type", type);
-    vals.put("sourceColName", srcCol);
-    vals.put("regex", re);
-    vals.put("replaceWith", rw);
-    vals.put("groupNames", gn);
-    return vals;
-  }
-
-  private DataSource<Reader> getDataSource(final String xml) {
-    return new DataSource<Reader>() {
-      @Override
-      public void init(Context context, Properties initProps) {
-      }
-
-      @Override
-      public void close() {
-      }
-
-      @Override
-      public Reader getData(String query) {
-        return new StringReader(xml);
-      }
-    };
-  }
-
-  private static final String filecontents =
-          "\n" +
-                  "# this is what the output from 'find . -ls; looks like, athough the format\n" +
-                  "# of the time stamp varies depending on the age of the file and your LANG \n" +
-                  "# env setting\n" +
-                  "412577   0 drwxr-xr-x  6 user group    204 1 Apr 10:53 /Volumes/spare/ts\n" +
-                  "412582   0 drwxr-xr-x 13 user group    442 1 Apr 10:18 /Volumes/spare/ts/config\n" +
-                  "412583  24 -rwxr-xr-x  1 user group   8318 1 Apr 11:10 /Volumes/spare/ts/config/dc.xsd\n" +
-                  "412584  32 -rwxr-xr-x  1 user group  12847 1 Apr 11:10 /Volumes/spare/ts/config/dcterms.xsd\n" +
-                  "412585   8 -rwxr-xr-x  1 user group   3156 1 Apr 11:10 /Volumes/spare/ts/config/s-deliver.css\n" +
-                  "412586 192 -rwxr-xr-x  1 user group  97764 1 Apr 11:10 /Volumes/spare/ts/config/s-deliver.xsl\n" +
-                  "412587 224 -rwxr-xr-x  1 user group 112700 1 Apr 11:10 /Volumes/spare/ts/config/sml-delivery-2.1.xsd\n" +
-                  "412588 208 -rwxr-xr-x  1 user group 103419 1 Apr 11:10 /Volumes/spare/ts/config/sml-delivery-norm-2.0.dtd\n" +
-                  "412589 248 -rwxr-xr-x  1 user group 125296 1 Apr 11:10 /Volumes/spare/ts/config/sml-delivery-norm-2.1.dtd\n" +
-                  "412590  72 -rwxr-xr-x  1 user group  36256 1 Apr 11:10 /Volumes/spare/ts/config/jm.xsd\n" +
-                  "412591   8 -rwxr-xr-x  1 user group    990 1 Apr 11:10 /Volumes/spare/ts/config/video.gif\n" +
-                  "412592   8 -rwxr-xr-x  1 user group   1498 1 Apr 11:10 /Volumes/spare/ts/config/xlink.xsd\n" +
-                  "412593   8 -rwxr-xr-x  1 user group   1155 1 Apr 11:10 /Volumes/spare/ts/config/xml.xsd\n" +
-                  "412594   0 drwxr-xr-x  4 user group    136 1 Apr 10:18 /Volumes/spare/ts/acm19\n" +
-                  "412621   0 drwxr-xr-x 57 user group   1938 1 Apr 10:18 /Volumes/spare/ts/acm19/data\n" +
-                  "412622  24 -rwxr-xr-x  1 user group   8894 1 Apr 11:09 /Volumes/spare/ts/acm19/data/00000510.xml\n" +
-                  "412623  32 -rwxr-xr-x  1 user group  14124 1 Apr 11:09 /Volumes/spare/ts/acm19/data/00000603.xml\n" +
-                  "412624  24 -rwxr-xr-x  1 user group  11976 1 Apr 11:09 /Volumes/spare/ts/acm19/data/00001292.xml\n" +
-                  "# tacked on an extra line to cause a file to be deleted.\n" +
-                  "DELETE /Volumes/spare/ts/acm19/data/00001292old.xml\n" +
-                  "";
-
-}
diff --git a/solr/contrib/dataimporthandler/src/test/org/apache/solr/handler/dataimport/TestNestedChildren.java b/solr/contrib/dataimporthandler/src/test/org/apache/solr/handler/dataimport/TestNestedChildren.java
deleted file mode 100644
index ca1bfda..0000000
--- a/solr/contrib/dataimporthandler/src/test/org/apache/solr/handler/dataimport/TestNestedChildren.java
+++ /dev/null
@@ -1,65 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.solr.handler.dataimport;
-
-import java.lang.invoke.MethodHandles;
-
-import org.junit.Test;
-import org.slf4j.Logger;
-import org.slf4j.LoggerFactory;
-
-public class TestNestedChildren extends AbstractDIHJdbcTestCase {
-
-  private static final Logger log = LoggerFactory.getLogger(MethodHandles.lookup().lookupClass());
-
-  @Test
-  public void test() throws Exception {
-    h.query("/dataimport", generateRequest());
-    assertQ(req("*:*"), "//*[@numFound='1']");
-    assertQ(req("third_s:CHICKEN"), "//*[@numFound='1']");
-  } 
-  
-  @Override
-  protected String generateConfig() {
-    StringBuilder sb = new StringBuilder();
-    sb.append("<dataConfig> \n");
-    sb.append("<dataSource name=\"derby\" driver=\"org.apache.derby.jdbc.EmbeddedDriver\" url=\"jdbc:derby:memory:derbyDB;territory=en_US\" /> \n");
-    sb.append("<document name=\"TestSimplePropertiesWriter\"> \n");
-    sb.append("<entity name=\"FIRST\" processor=\"SqlEntityProcessor\" dataSource=\"derby\" ");
-    sb.append(" query=\"select 1 as id, 'PORK' as FIRST_S from sysibm.sysdummy1 \" >\n");
-    sb.append("  <field column=\"FIRST_S\" name=\"first_s\" /> \n");
-    sb.append("  <entity name=\"SECOND\" processor=\"SqlEntityProcessor\" dataSource=\"derby\" ");
-    sb.append("   query=\"select 1 as id, 2 as SECOND_ID, 'BEEF' as SECOND_S from sysibm.sysdummy1 WHERE 1=${FIRST.ID}\" >\n");
-    sb.append("   <field column=\"SECOND_S\" name=\"second_s\" /> \n");
-    sb.append("   <entity name=\"THIRD\" processor=\"SqlEntityProcessor\" dataSource=\"derby\" ");
-    sb.append("    query=\"select 1 as id, 'CHICKEN' as THIRD_S from sysibm.sysdummy1 WHERE 2=${SECOND.SECOND_ID}\" >\n");
-    sb.append("    <field column=\"THIRD_S\" name=\"third_s\" /> \n");
-    sb.append("   </entity>\n");
-    sb.append("  </entity>\n");
-    sb.append("</entity>\n");
-    sb.append("</document> \n");
-    sb.append("</dataConfig> \n");
-    String config = sb.toString();
-    log.debug(config); 
-    return config;
-  }
-  
-  @Override
-  protected Database setAllowedDatabases() {
-    return Database.DERBY;
-  }   
-}
diff --git a/solr/contrib/dataimporthandler/src/test/org/apache/solr/handler/dataimport/TestNonWritablePersistFile.java b/solr/contrib/dataimporthandler/src/test/org/apache/solr/handler/dataimport/TestNonWritablePersistFile.java
deleted file mode 100644
index 1307927..0000000
--- a/solr/contrib/dataimporthandler/src/test/org/apache/solr/handler/dataimport/TestNonWritablePersistFile.java
+++ /dev/null
@@ -1,102 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.solr.handler.dataimport;
-
-import java.io.File;
-import java.util.ArrayList;
-import java.util.List;
-
-import org.apache.commons.io.FileUtils;
-import org.junit.AfterClass;
-import org.junit.BeforeClass;
-import org.junit.Test;
-
-public class TestNonWritablePersistFile extends AbstractDataImportHandlerTestCase {
-  private static final String FULLIMPORT_QUERY = "select * from x";
-
-  private static final String DELTA_QUERY = "select id from x where last_modified > NOW";
-
-  private static final String DELETED_PK_QUERY = "select id from x where last_modified > NOW AND deleted='true'";
-
-  private static final String dataConfig_delta =
-    "<dataConfig>" +
-    "  <dataSource  type=\"MockDataSource\"/>\n" +
-    "  <document>\n" +
-    "    <entity name=\"x\" transformer=\"TemplateTransformer\"" +
-    "            query=\"" + FULLIMPORT_QUERY + "\"" +
-    "            deletedPkQuery=\"" + DELETED_PK_QUERY + "\"" +
-    "            deltaImportQuery=\"select * from x where id='${dih.delta.id}'\"" +
-    "            deltaQuery=\"" + DELTA_QUERY + "\">\n" +
-    "      <field column=\"id\" name=\"id\"/>\n" +
-    "      <entity name=\"y\" query=\"select * from y where y.A='${x.id}'\">\n" +
-    "        <field column=\"desc\" />\n" +
-    "      </entity>\n" +
-    "    </entity>\n" +
-    "  </document>\n" +
-    "</dataConfig>\n";
-  private static String tmpSolrHome;
-
-  private static File f;
-
-  @BeforeClass
-  public static void createTempSolrHomeAndCore() throws Exception {
-    tmpSolrHome = createTempDir().toFile().getAbsolutePath();
-    FileUtils.copyDirectory(getFile("dih/solr"), new File(tmpSolrHome).getAbsoluteFile());
-    initCore("dataimport-solrconfig.xml", "dataimport-schema.xml", 
-             new File(tmpSolrHome).getAbsolutePath());
-    
-    // See SOLR-2551
-    String configDir = h.getCore().getResourceLoader().getConfigDir();
-    String filePath = configDir;
-    if (configDir != null && !configDir.endsWith(File.separator))
-      filePath += File.separator;
-    filePath += "dataimport.properties";
-    f = new File(filePath);
-    // execute the test only if we are able to set file to read only mode
-    assumeTrue("No dataimport.properties file", f.exists() || f.createNewFile());
-    assumeTrue("dataimport.properties can't be set read only", f.setReadOnly());
-    assumeFalse("dataimport.properties is still writable even though " + 
-                "marked readonly - test running as superuser?", f.canWrite());
-  }
-  
-  @AfterClass
-  public static void afterClass() throws Exception {
-    if (f != null) {
-      f.setWritable(true);
-    }
-  }
-
-  @Test
-  @SuppressWarnings("unchecked")
-  public void testNonWritablePersistFile() throws Exception {
-    ignoreException("Properties is not writable");
-
-    @SuppressWarnings("rawtypes")
-    List parentRow = new ArrayList();
-    parentRow.add(createMap("id", "1"));
-    MockDataSource.setIterator(FULLIMPORT_QUERY, parentRow.iterator());
-      
-    @SuppressWarnings("rawtypes")
-    List childRow = new ArrayList();
-    childRow.add(createMap("desc", "hello"));
-    MockDataSource.setIterator("select * from y where y.A='1'",
-                                 childRow.iterator());
-      
-    runFullImport(dataConfig_delta);
-    assertQ(req("id:1"), "//*[@numFound='0']");
-  }  
-}
diff --git a/solr/contrib/dataimporthandler/src/test/org/apache/solr/handler/dataimport/TestNumberFormatTransformer.java b/solr/contrib/dataimporthandler/src/test/org/apache/solr/handler/dataimport/TestNumberFormatTransformer.java
deleted file mode 100644
index 91bdd00..0000000
--- a/solr/contrib/dataimporthandler/src/test/org/apache/solr/handler/dataimport/TestNumberFormatTransformer.java
+++ /dev/null
@@ -1,160 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.solr.handler.dataimport;
-
-import org.junit.Test;
-
-import java.text.DecimalFormatSymbols;
-import java.util.ArrayList;
-import java.util.List;
-import java.util.Locale;
-import java.util.Map;
-
-/**
- * <p>
- * Test for NumberFormatTransformer
- * </p>
- *
- *
- * @since solr 1.3
- */
-public class TestNumberFormatTransformer extends AbstractDataImportHandlerTestCase {
-  private char GROUPING_SEP = new DecimalFormatSymbols(Locale.ROOT).getGroupingSeparator();
-  private char DECIMAL_SEP = new DecimalFormatSymbols(Locale.ROOT).getDecimalSeparator();
-
-  @SuppressWarnings("unchecked")
-  @Test
-  public void testTransformRow_SingleNumber() {
-    char GERMAN_GROUPING_SEP = new DecimalFormatSymbols(Locale.GERMANY).getGroupingSeparator();
-    List<Map<String, String>> l = new ArrayList<>();
-    l.add(createMap("column", "num",
-            NumberFormatTransformer.FORMAT_STYLE, NumberFormatTransformer.NUMBER));
-    l.add(createMap("column", "localizedNum",
-            NumberFormatTransformer.FORMAT_STYLE, NumberFormatTransformer.NUMBER, NumberFormatTransformer.LOCALE, "de-DE"));
-    Context c = getContext(null, null, null, Context.FULL_DUMP, l, null);
-    Map<String,Object> m = createMap("num", "123" + GROUPING_SEP + "567", "localizedNum", "123" + GERMAN_GROUPING_SEP + "567");
-    new NumberFormatTransformer().transformRow(m, c);
-    assertEquals(123567L, m.get("num"));
-    assertEquals(123567L, m.get("localizedNum"));
-  }
-
-  @Test
-  @SuppressWarnings("unchecked")
-  public void testTransformRow_MultipleNumbers() throws Exception {
-    List<Map<String, String>> fields = new ArrayList<>();
-    fields.add(createMap(DataImporter.COLUMN, "inputs"));
-    fields.add(createMap(DataImporter.COLUMN,
-            "outputs", RegexTransformer.SRC_COL_NAME, "inputs",
-            NumberFormatTransformer.FORMAT_STYLE, NumberFormatTransformer.NUMBER));
-
-    List<String> inputs = new ArrayList<>();
-    inputs.add("123" + GROUPING_SEP + "567");
-    inputs.add("245" + GROUPING_SEP + "678");
-    Map<String, Object> row = createMap("inputs", inputs);
-
-    VariableResolver resolver = new VariableResolver();
-    resolver.addNamespace("e", row);
-
-    Context context = getContext(null, resolver, null, Context.FULL_DUMP, fields, null);
-    new NumberFormatTransformer().transformRow(row, context);
-
-    List<Long> output = new ArrayList<>();
-    output.add(123567L);
-    output.add(245678L);
-    Map<String, Object> outputRow = createMap("inputs", inputs, "outputs", output);
-
-    assertEquals(outputRow, row);
-  }
-
-  @Test(expected = DataImportHandlerException.class)
-  @SuppressWarnings("unchecked")
-  public void testTransformRow_InvalidInput1_Number() {
-    List<Map<String, String>> l = new ArrayList<>();
-    l.add(createMap("column", "num",
-            NumberFormatTransformer.FORMAT_STYLE, NumberFormatTransformer.NUMBER));
-    Context c = getContext(null, null, null, Context.FULL_DUMP, l, null);
-    Map<String, Object> m = createMap("num", "123" + GROUPING_SEP + "5a67");
-    new NumberFormatTransformer().transformRow(m, c);
-  }
-
-  @Test(expected = DataImportHandlerException.class)
-  @SuppressWarnings("unchecked")
-  public void testTransformRow_InvalidInput2_Number() {
-    List<Map<String, String>> l = new ArrayList<>();
-    l.add(createMap("column", "num",
-            NumberFormatTransformer.FORMAT_STYLE, NumberFormatTransformer.NUMBER));
-    Context c = getContext(null, null, null, Context.FULL_DUMP, l, null);
-    Map<String, Object> m = createMap("num", "123" + GROUPING_SEP + "567b");
-    new NumberFormatTransformer().transformRow(m, c);
-  }
-
-  @Test(expected = DataImportHandlerException.class)
-  @SuppressWarnings("unchecked")
-  public void testTransformRow_InvalidInput2_Currency() {
-    List<Map<String, String>> l = new ArrayList<>();
-    l.add(createMap("column", "num",
-            NumberFormatTransformer.FORMAT_STYLE, NumberFormatTransformer.CURRENCY));
-    Context c = getContext(null, null, null, Context.FULL_DUMP, l, null);
-    Map<String, Object> m = createMap("num", "123" + GROUPING_SEP + "567b");
-    new NumberFormatTransformer().transformRow(m, c);
-  }
-
-  @Test(expected = DataImportHandlerException.class)
-  @SuppressWarnings("unchecked")
-  public void testTransformRow_InvalidInput1_Percent() {
-    List<Map<String, String>> l = new ArrayList<>();
-    l.add(createMap("column", "num",
-            NumberFormatTransformer.FORMAT_STYLE, NumberFormatTransformer.PERCENT));
-    Context c = getContext(null, null, null, Context.FULL_DUMP, l, null);
-    Map<String, Object> m = createMap("num", "123" + GROUPING_SEP + "5a67");
-    new NumberFormatTransformer().transformRow(m, c);
-  }
-
-  @Test(expected = DataImportHandlerException.class)
-  @SuppressWarnings("unchecked")
-  public void testTransformRow_InvalidInput3_Currency() {
-    List<Map<String, String>> l = new ArrayList<>();
-    l.add(createMap("column", "num",
-            NumberFormatTransformer.FORMAT_STYLE, NumberFormatTransformer.CURRENCY));
-    Context c = getContext(null, null, null, Context.FULL_DUMP, l, null);
-    Map<String, Object> m = createMap("num", "123" + DECIMAL_SEP + "456" + DECIMAL_SEP + "789");
-    new NumberFormatTransformer().transformRow(m, c);
-  }
-
-  @Test(expected = DataImportHandlerException.class)
-  @SuppressWarnings("unchecked")
-  public void testTransformRow_InvalidInput3_Number() {
-    List<Map<String, String>> l = new ArrayList<>();
-    l.add(createMap("column", "num",
-            NumberFormatTransformer.FORMAT_STYLE, NumberFormatTransformer.NUMBER));
-    Context c = getContext(null, null, null, Context.FULL_DUMP, l, null);
-    Map<String, Object> m = createMap("num", "123" + DECIMAL_SEP + "456" + DECIMAL_SEP + "789");
-    new NumberFormatTransformer().transformRow(m, c);
-  }
-
-  @Test
-  @SuppressWarnings("unchecked")
-  public void testTransformRow_MalformedInput_Number() {
-    List<Map<String, String>> l = new ArrayList<>();
-    l.add(createMap("column", "num",
-            NumberFormatTransformer.FORMAT_STYLE, NumberFormatTransformer.NUMBER));
-    Context c = getContext(null, null, null, Context.FULL_DUMP, l, null);
-    Map<String, Object> m = createMap("num", "123" + GROUPING_SEP + GROUPING_SEP + "789");
-    new NumberFormatTransformer().transformRow(m, c);
-    assertEquals(123789L, m.get("num"));
-  }
-}
diff --git a/solr/contrib/dataimporthandler/src/test/org/apache/solr/handler/dataimport/TestPlainTextEntityProcessor.java b/solr/contrib/dataimporthandler/src/test/org/apache/solr/handler/dataimport/TestPlainTextEntityProcessor.java
deleted file mode 100644
index 007e63f..0000000
--- a/solr/contrib/dataimporthandler/src/test/org/apache/solr/handler/dataimport/TestPlainTextEntityProcessor.java
+++ /dev/null
@@ -1,182 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.solr.handler.dataimport;
-
-import java.io.ByteArrayInputStream;
-import java.io.IOException;
-import java.io.InputStream;
-import java.io.OutputStream;
-import java.io.StringReader;
-import java.sql.Blob;
-import java.sql.SQLException;
-import java.util.Collections;
-import java.util.Properties;
-
-import org.apache.solr.common.util.Utils;
-import org.junit.Test;
-
-import static java.nio.charset.StandardCharsets.UTF_8;
-
-/**
- * Test for PlainTextEntityProcessor
- *
- *
- * @see org.apache.solr.handler.dataimport.PlainTextEntityProcessor
- * @since solr 1.4
- */
-public class TestPlainTextEntityProcessor extends AbstractDataImportHandlerTestCase {
-  @Test
-  public void testSimple() throws IOException {
-    DataImporter di = new DataImporter();
-    di.loadAndInit(DATA_CONFIG);
-    redirectTempProperties(di);
-
-    TestDocBuilder.SolrWriterImpl sw = new TestDocBuilder.SolrWriterImpl();
-    @SuppressWarnings({"unchecked"})
-    RequestInfo rp = new RequestInfo(null, createMap("command", "full-import"), null);
-    di.runCmd(rp, sw);
-    assertEquals(DS.s, sw.docs.get(0).getFieldValue("x"));
-  }
-
-  static class BlobImpl implements Blob{
-    private final byte[] bytes;
-
-    BlobImpl(byte[] bytes) {
-      this.bytes = bytes;
-    }
-
-    @Override
-    public long length() throws SQLException {
-      return 0;
-    }
-
-    @Override
-    public byte[] getBytes(long pos, int length) throws SQLException {
-      return bytes;
-    }
-
-    @Override
-    public InputStream getBinaryStream() throws SQLException {
-      return new ByteArrayInputStream(bytes);
-    }
-
-    @Override
-    public long position(byte[] pattern, long start) throws SQLException {
-      return 0;
-    }
-
-    @Override
-    public long position(Blob pattern, long start) throws SQLException {
-      return 0;
-    }
-
-    @Override
-    public int setBytes(long pos, byte[] bytes) throws SQLException {
-      return 0;
-    }
-
-    @Override
-    public int setBytes(long pos, byte[] bytes, int offset, int len) throws SQLException {
-      return 0;
-    }
-
-    @Override
-    public OutputStream setBinaryStream(long pos) throws SQLException {
-      return null;
-    }
-
-    @Override
-    public void truncate(long len) throws SQLException {
-
-    }
-
-    @Override
-    public void free() throws SQLException {
-
-    }
-
-    @Override
-    public InputStream getBinaryStream(long pos, long length) throws SQLException {
-      return new ByteArrayInputStream(bytes);
-    }
-  }
-
-  @Test
-  public void testSimple2() throws IOException {
-    DataImporter di = new DataImporter();
-    MockDataSource.setIterator("select id, name, blob_field from lw_table4", Collections.singletonList(Utils.makeMap("blob_field",new BlobImpl(DS.s.getBytes(UTF_8)) ) ).iterator());
-
-    String dc =
-
-        " <dataConfig>" +
-            "<dataSource name=\"ds1\" type=\"MockDataSource\"/>\n" +
-        " <!-- dataSource for FieldReaderDataSource -->\n" +
-        " <dataSource dataField=\"root.blob_field\" name=\"fr\" type=\"FieldReaderDataSource\"/>\n" +
-        "\n" +
-        " <document name=\"items\">\n" +
-        "   <entity dataSource=\"ds1\" name=\"root\" pk=\"id\"  query=\"select id, name, blob_field from lw_table4\" transformer=\"TemplateTransformer\">\n" +
-        "           <field column=\"id\" name=\"id\"/>\n" +
-        "\n" +
-        "        <entity dataField=\"root.blob_field\" dataSource=\"fr\" format=\"text\" name=\"n1\" processor=\"PlainTextEntityProcessor\" url=\"blob_field\">\n" +
-        "                       <field column=\"plainText\" name=\"plainText\"/>\n" +
-        "           </entity>\n" +
-        "\n" +
-        "   </entity>\n" +
-        " </document>\n" +
-        "</dataConfig>";
-    System.out.println(dc);
-    di.loadAndInit(dc);
-    redirectTempProperties(di);
-
-    TestDocBuilder.SolrWriterImpl sw = new TestDocBuilder.SolrWriterImpl();
-    @SuppressWarnings({"unchecked"})
-    RequestInfo rp = new RequestInfo(null, createMap("command", "full-import"), null);
-    di.runCmd(rp, sw);
-    assertEquals(DS.s, sw.docs.get(0).getFieldValue("plainText"));
-  }
-
-
-  @SuppressWarnings({"rawtypes"})
-  public static class DS extends DataSource {
-    static String s = "hello world";
-
-    @Override
-    public void init(Context context, Properties initProps) {
-
-    }
-
-    @Override
-    public Object getData(String query) {
-
-      return new StringReader(s);
-    }
-
-    @Override
-    public void close() {
-
-    }
-  }
-
-  static String DATA_CONFIG = "<dataConfig>\n" +
-          "\t<dataSource type=\"TestPlainTextEntityProcessor$DS\" />\n" +
-          "\t<document>\n" +
-          "\t\t<entity processor=\"PlainTextEntityProcessor\" name=\"x\" query=\"x\">\n" +
-          "\t\t\t<field column=\"plainText\" name=\"x\" />\n" +
-          "\t\t</entity>\n" +
-          "\t</document>\n" +
-          "</dataConfig>";
-}
diff --git a/solr/contrib/dataimporthandler/src/test/org/apache/solr/handler/dataimport/TestRegexTransformer.java b/solr/contrib/dataimporthandler/src/test/org/apache/solr/handler/dataimport/TestRegexTransformer.java
deleted file mode 100644
index 9af9b29..0000000
--- a/solr/contrib/dataimporthandler/src/test/org/apache/solr/handler/dataimport/TestRegexTransformer.java
+++ /dev/null
@@ -1,213 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.solr.handler.dataimport;
-
-import static org.apache.solr.handler.dataimport.RegexTransformer.REGEX;
-import static org.apache.solr.handler.dataimport.RegexTransformer.GROUP_NAMES;
-import static org.apache.solr.handler.dataimport.RegexTransformer.REPLACE_WITH;
-import static org.apache.solr.handler.dataimport.DataImporter.COLUMN;
-
-import org.junit.Test;
-
-import java.util.ArrayList;
-import java.util.HashMap;
-import java.util.List;
-import java.util.Map;
-
-/**
- * <p> Test for RegexTransformer </p>
- *
- *
- * @since solr 1.3
- */
-public class TestRegexTransformer extends AbstractDataImportHandlerTestCase {
-
-  @Test
-  public void testCommaSeparated() {
-    List<Map<String, String>> fields = new ArrayList<>();
-    // <field column="col1" sourceColName="a" splitBy="," />
-    fields.add(getField("col1", "string", null, "a", ","));
-    Context context = getContext(null, null, null, Context.FULL_DUMP, fields, null);
-
-    Map<String, Object> src = new HashMap<>();
-    src.put("a", "a,bb,cc,d");
-
-    Map<String, Object> result = new RegexTransformer().transformRow(src, context);
-    assertEquals(2, result.size());
-    assertEquals(4, ((List) result.get("col1")).size());
-  }
-
-
-  @Test
-  public void testGroupNames() {
-    List<Map<String, String>> fields = new ArrayList<>();
-    // <field column="col1" regex="(\w*)(\w*) (\w*)" groupNames=",firstName,lastName"/>
-    Map<String, String> m = new HashMap<>();
-    m.put(COLUMN,"fullName");
-    m.put(GROUP_NAMES,",firstName,lastName");
-    m.put(REGEX,"(\\w*) (\\w*) (\\w*)");
-    fields.add(m);
-    Context context = getContext(null, null, null, Context.FULL_DUMP, fields, null);
-    Map<String, Object> src = new HashMap<>();
-    src.put("fullName", "Mr Noble Paul");
-
-    Map<String, Object> result = new RegexTransformer().transformRow(src, context);
-    assertEquals("Noble", result.get("firstName"));
-    assertEquals("Paul", result.get("lastName"));
-    src = new HashMap<>();
-    @SuppressWarnings({"unchecked", "rawtypes"})
-    List<String> l = new ArrayList<>();
-    l.add("Mr Noble Paul");
-    l.add("Mr Shalin Mangar");
-    src.put("fullName", l);
-    result = new RegexTransformer().transformRow(src, context);
-    @SuppressWarnings({"rawtypes"})
-    List l1 = (List) result.get("firstName");
-    @SuppressWarnings({"rawtypes"})
-    List l2 = (List) result.get("lastName");
-    assertEquals("Noble", l1.get(0));
-    assertEquals("Shalin", l1.get(1));
-    assertEquals("Paul", l2.get(0));
-    assertEquals("Mangar", l2.get(1));
-  }
-
-  @Test
-  public void testReplaceWith() {
-    List<Map<String, String>> fields = new ArrayList<>();
-    // <field column="name" regexp="'" replaceWith="''" />
-    Map<String, String> fld = getField("name", "string", "'", null, null);
-    fld.put(REPLACE_WITH, "''");
-    fields.add(fld);
-    Context context = getContext(null, null, null, Context.FULL_DUMP, fields, null);
-
-    Map<String, Object> src = new HashMap<>();
-    String s = "D'souza";
-    src.put("name", s);
-
-    Map<String, Object> result = new RegexTransformer().transformRow(src,
-            context);
-    assertEquals("D''souza", result.get("name"));
-
-    fld = getField("title_underscore", "string", "\\s+", "title", null);
-    fld.put(REPLACE_WITH, "_");
-    fields.clear();
-    fields.add(fld);
-    context = getContext(null, null, null, Context.FULL_DUMP, fields, null);
-    src.clear();
-    src.put("title", "value with spaces"); // a value which will match the regex
-    result = new RegexTransformer().transformRow(src, context);
-    assertEquals("value_with_spaces", result.get("title_underscore"));
-    src.clear();
-    src.put("title", "valueWithoutSpaces"); // value which will not match regex
-    result = new RegexTransformer().transformRow(src, context);
-    assertEquals("valueWithoutSpaces", result.get("title_underscore")); // value should be returned as-is
-  }
-
-  @Test
-  public void testMileage() {
-    // init a whole pile of fields
-    List<Map<String, String>> fields = getFields();
-
-    // add another regex which reuses the result from a previous regex
-    // <field column="hltCityMPG" sourceColName="rowdata" regexp=".*(${e.city_mileage})" replaceWith="*** $1 ***" />
-    Map<String, String> fld = getField("hltCityMPG", "string",
-            ".*(${e.city_mileage})", "rowdata", null);
-    fld.put(REPLACE_WITH, "*** $1 ***");
-    fields.add(fld);
-
-    //  **ATTEMPTS** a match WITHOUT a replaceWith
-    // <field column="t1" sourceColName="rowdata" regexp="duff" />
-    fld = getField("t1", "string","duff", "rowdata", null);
-    fields.add(fld);
-
-    //  **ATTEMPTS** a match WITH a replaceWith (should return original data)
-    // <field column="t2" sourceColName="rowdata" regexp="duff" replaceWith="60"/>
-    fld = getField("t2", "string","duff", "rowdata", null);
-    fld.put(REPLACE_WITH, "60");
-    fields.add(fld);
-
-    //  regex WITH both replaceWith and groupName (groupName ignored!)
-    // <field column="t3" sourceColName="rowdata" regexp="(Range)" />
-    fld = getField("t3", "string","(Range)", "rowdata", null);
-    fld.put(REPLACE_WITH, "range");
-    fld.put(GROUP_NAMES,"t4,t5");
-    fields.add(fld);
-
-    Map<String, Object> row = new HashMap<>();
-    String s = "Fuel Economy Range: 26 mpg Hwy, 19 mpg City";
-    row.put("rowdata", s);
-
-    VariableResolver resolver = new VariableResolver();
-    resolver.addNamespace("e", row);
-    @SuppressWarnings({"unchecked"})
-    Map<String, String> eAttrs = createMap("name", "e");
-    Context context = getContext(null, resolver, null, Context.FULL_DUMP, fields, eAttrs);
-
-    Map<String, Object> result = new RegexTransformer().transformRow(row, context);
-    assertEquals(6, result.size());
-    assertEquals(s, result.get("t2"));
-    assertEquals(s, result.get("rowdata"));
-    assertEquals("26", result.get("highway_mileage"));
-    assertEquals("19", result.get("city_mileage"));
-    assertEquals("*** 19 *** mpg City", result.get("hltCityMPG"));
-    assertEquals("Fuel Economy range: 26 mpg Hwy, 19 mpg City", result.get("t3"));
-  }
-
-  @Test
-  public void testMultiValuedRegex() {
-    List<Map<String, String>> fields = new ArrayList<>();
-    // <field column="participant" sourceColName="person" regex="(.*)" />
-    Map<String, String> fld = getField("participant", null, "(.*)", "person", null);
-    fields.add(fld);
-    Context context = getContext(null, null,
-            null, Context.FULL_DUMP, fields, null);
-
-    ArrayList<String> strings = new ArrayList<>();
-    strings.add("hello");
-    strings.add("world");
-    @SuppressWarnings({"unchecked"})
-    Map<String, Object> result = new RegexTransformer().transformRow(createMap("person", strings), context);
-    assertEquals(strings,result.get("participant"));
-  }
-
-  public static List<Map<String, String>> getFields() {
-    List<Map<String, String>> fields = new ArrayList<>();
-
-    // <field column="city_mileage" sourceColName="rowdata" regexp=
-    //    "Fuel Economy Range:\\s*?\\d*?\\s*?mpg Hwy,\\s*?(\\d*?)\\s*?mpg City"
-    fields.add(getField("city_mileage", "sint",
-            "Fuel Economy Range:\\s*?\\d*?\\s*?mpg Hwy,\\s*?(\\d*?)\\s*?mpg City",
-            "rowdata", null));
-
-    // <field column="highway_mileage" sourceColName="rowdata" regexp=
-    //    "Fuel Economy Range:\\s*?(\\d*?)\\s*?mpg Hwy,\\s*?\\d*?\\s*?mpg City"
-    fields.add(getField("highway_mileage", "sint",
-            "Fuel Economy Range:\\s*?(\\d*?)\\s*?mpg Hwy,\\s*?\\d*?\\s*?mpg City",
-            "rowdata", null));
-
-    // <field column="seating_capacity" sourceColName="rowdata" regexp="Seating capacity:(.*)"
-    fields.add(getField("seating_capacity", "sint", "Seating capacity:(.*)",
-            "rowdata", null));
-
-    // <field column="warranty" sourceColName="rowdata" regexp="Warranty:(.*)" />
-    fields.add(getField("warranty", "string", "Warranty:(.*)", "rowdata", null));
-
-    // <field column="rowdata" sourceColName="rowdata" />
-    fields.add(getField("rowdata", "string", null, "rowdata", null));
-    return fields;
-  }
-}
diff --git a/solr/contrib/dataimporthandler/src/test/org/apache/solr/handler/dataimport/TestScriptTransformer.java b/solr/contrib/dataimporthandler/src/test/org/apache/solr/handler/dataimport/TestScriptTransformer.java
deleted file mode 100644
index 2000231..0000000
--- a/solr/contrib/dataimporthandler/src/test/org/apache/solr/handler/dataimport/TestScriptTransformer.java
+++ /dev/null
@@ -1,173 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.solr.handler.dataimport;
-
-import org.apache.solr.handler.dataimport.config.DIHConfiguration;
-import org.junit.Test;
-import org.w3c.dom.Document;
-import org.xml.sax.InputSource;
-
-import javax.xml.parsers.DocumentBuilder;
-import javax.xml.parsers.DocumentBuilderFactory;
-import java.io.StringReader;
-import java.util.ArrayList;
-import java.util.HashMap;
-import java.util.List;
-import java.util.Map;
-
-/**
- * Test for ScriptTransformer
- *
- *
- * @since solr 1.3
- */
-public class TestScriptTransformer extends AbstractDataImportHandlerTestCase {
-
-  @Test
-  public void testBasic() {
-    try {
-      String script = "function f1(row,context){"
-              + "row.put('name','Hello ' + row.get('name'));" + "return row;\n" + "}";
-      Context context = getContext("f1", script);
-      Map<String, Object> map = new HashMap<>();
-      map.put("name", "Scott");
-      EntityProcessorWrapper sep = new EntityProcessorWrapper(new SqlEntityProcessor(), null, null);
-      sep.init(context);
-      sep.applyTransformer(map);
-      assertEquals("Hello Scott", map.get("name").toString());
-    } catch (DataImportHandlerException e) {    
-      assumeFalse("This JVM does not have JavaScript installed.  Test Skipped.", e
-          .getMessage().startsWith("Cannot load Script Engine for language"));
-      throw e;
-    }
-  }
-
-  @Test
-  public void testEvil() {
-    assumeTrue("This test only works with security manager", System.getSecurityManager() != null);
-    String script = "function f1(row) {"
-            + "var os = Packages.java.lang.System.getProperty('os.name');"
-            + "row.put('name', os);"
-            + "return row;\n"
-            + "}";
-
-    Context context = getContext("f1", script);
-    Map<String, Object> map = new HashMap<>();
-    map.put("name", "Scott");
-    EntityProcessorWrapper sep = new EntityProcessorWrapper(new SqlEntityProcessor(), null, null);
-    sep.init(context);
-    DataImportHandlerException expected = expectThrows(DataImportHandlerException.class, () -> {
-      sep.applyTransformer(map);
-    });
-    assumeFalse("This JVM does not have JavaScript installed.  Test Skipped.",
-        expected.getMessage().startsWith("Cannot load Script Engine for language"));
-    assertTrue(expected.getCause().toString(), SecurityException.class.isAssignableFrom(expected.getCause().getClass()));
-  }
-
-  private Context getContext(String funcName, String script) {
-    List<Map<String, String>> fields = new ArrayList<>();
-    Map<String, String> entity = new HashMap<>();
-    entity.put("name", "hello");
-    entity.put("transformer", "script:" + funcName);
-
-    TestContext context = getContext(null, null, null,
-            Context.FULL_DUMP, fields, entity);
-    context.script = script;
-    context.scriptlang = "JavaScript";
-    return context;
-  }
-
-  @Test
-  public void testOneparam() {
-    try {
-      String script = "function f1(row){"
-              + "row.put('name','Hello ' + row.get('name'));" + "return row;\n" + "}";
-
-      Context context = getContext("f1", script);
-      Map<String, Object> map = new HashMap<>();
-      map.put("name", "Scott");
-      EntityProcessorWrapper sep = new EntityProcessorWrapper(new SqlEntityProcessor(), null, null);
-      sep.init(context);
-      sep.applyTransformer(map);
-      assertEquals("Hello Scott", map.get("name").toString());
-    } catch (DataImportHandlerException e) {   
-      assumeFalse("This JVM does not have JavaScript installed.  Test Skipped.", e
-          .getMessage().startsWith("Cannot load Script Engine for language"));
-      throw e;
-    }
-  }
-
-  @Test
-  public void testReadScriptTag() throws Exception {
-    try {
-      DocumentBuilder builder = DocumentBuilderFactory.newInstance()
-              .newDocumentBuilder();
-      Document document = builder.parse(new InputSource(new StringReader(xml)));
-      DataImporter di = new DataImporter();
-      DIHConfiguration dc = di.readFromXml(document);
-      assertTrue(dc.getScript().getText().indexOf("checkNextToken") > -1);
-    } catch (DataImportHandlerException e) {    
-      assumeFalse("This JVM does not have JavaScript installed.  Test Skipped.", e
-          .getMessage().startsWith("Cannot load Script Engine for language"));
-      throw e;
-    }
-  }
-
-  @Test
-  @SuppressWarnings({"unchecked"})
-  public void testCheckScript() throws Exception {
-    try {
-      DocumentBuilder builder = DocumentBuilderFactory.newInstance()
-              .newDocumentBuilder();
-      Document document = builder.parse(new InputSource(new StringReader(xml)));
-      DataImporter di = new DataImporter();
-      DIHConfiguration dc = di.readFromXml(document);
-      Context c = getContext("checkNextToken", dc.getScript().getText());
-
-      @SuppressWarnings({"rawtypes"})
-      Map map = new HashMap();
-      map.put("nextToken", "hello");
-      EntityProcessorWrapper sep = new EntityProcessorWrapper(new SqlEntityProcessor(), null, null);
-      sep.init(c);
-      sep.applyTransformer(map);
-      assertEquals("true", map.get("$hasMore"));
-      map = new HashMap<>();
-      map.put("nextToken", "");
-      sep.applyTransformer(map);
-      assertNull(map.get("$hasMore"));
-    } catch (DataImportHandlerException e) {    
-      assumeFalse("This JVM does not have JavaScript installed.  Test Skipped.", e
-          .getMessage().startsWith("Cannot load Script Engine for language"));
-      throw e;
-    }
-  }
-
-  static String xml = "<dataConfig>\n"
-          + "<script><![CDATA[\n"
-          + "function checkNextToken(row)\t{\n"
-          + " var nt = row.get('nextToken');"
-          + " if (nt && nt !='' ){ "
-          + "    row.put('$hasMore', 'true');}\n"
-          + "    return row;\n"
-          + "}]]></script>\t<document>\n"
-          + "\t\t<entity name=\"mbx\" pk=\"articleNumber\" processor=\"XPathEntityProcessor\"\n"
-          + "\t\t\turl=\"?boardId=${dataimporter.defaults.boardId}&amp;maxRecords=20&amp;includeBody=true&amp;startDate=${dataimporter.defaults.startDate}&amp;guid=:autosearch001&amp;reqId=1&amp;transactionId=stringfortracing&amp;listPos=${mbx.nextToken}\"\n"
-          + "\t\t\tforEach=\"/mbmessage/articles/navigation | /mbmessage/articles/article\" transformer=\"script:checkNextToken\">\n"
-          + "\n" + "\t\t\t<field column=\"nextToken\"\n"
-          + "\t\t\t\txpath=\"/mbmessage/articles/navigation/nextToken\" />\n"
-          + "\n" + "\t\t</entity>\n" + "\t</document>\n" + "</dataConfig>";
-}
diff --git a/solr/contrib/dataimporthandler/src/test/org/apache/solr/handler/dataimport/TestSimplePropertiesWriter.java b/solr/contrib/dataimporthandler/src/test/org/apache/solr/handler/dataimport/TestSimplePropertiesWriter.java
deleted file mode 100644
index 74e04c9..0000000
--- a/solr/contrib/dataimporthandler/src/test/org/apache/solr/handler/dataimport/TestSimplePropertiesWriter.java
+++ /dev/null
@@ -1,135 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.solr.handler.dataimport;
-
-import java.lang.invoke.MethodHandles;
-import java.sql.Connection;
-import java.sql.ResultSet;
-import java.sql.Statement;
-import java.text.SimpleDateFormat;
-import java.util.Date;
-import java.util.HashMap;
-import java.util.Locale;
-import java.util.Map;
-
-import org.apache.solr.common.util.SuppressForbidden;
-import org.junit.Before;
-import org.junit.Test;
-import org.slf4j.Logger;
-import org.slf4j.LoggerFactory;
-
-
-public class TestSimplePropertiesWriter extends AbstractDIHJdbcTestCase {
-
-  private static final Logger log = LoggerFactory.getLogger(MethodHandles.lookup().lookupClass());
-  
-  private boolean useJdbcEscapeSyntax;
-  private String dateFormat;
-  private String fileLocation;
-  private String fileName;
-  
-  @Before
-  public void spwBefore() throws Exception {
-    fileLocation = createTempDir().toFile().getAbsolutePath();
-    fileName = "the.properties";
-  }
-
-  @SuppressForbidden(reason = "Needs currentTimeMillis to construct date stamps")
-  @Test
-  public void testSimplePropertiesWriter() throws Exception { 
-    
-    SimpleDateFormat errMsgFormat = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss.SSSSSS", Locale.ROOT);
-    
-    String[] d = { 
-        "{'ts' ''yyyy-MM-dd HH:mm:ss.SSSSSS''}",
-        "{'ts' ''yyyy-MM-dd HH:mm:ss''}",
-        "yyyy-MM-dd HH:mm:ss", 
-        "yyyy-MM-dd HH:mm:ss.SSSSSS"
-    };
-    for(int i=0 ; i<d.length ; i++) {
-      delQ("*:*");
-      commit();
-      if(i<2) {
-        useJdbcEscapeSyntax = true;
-      } else {
-        useJdbcEscapeSyntax = false;
-      }
-      dateFormat = d[i];
-      SimpleDateFormat df = new SimpleDateFormat(dateFormat, Locale.ROOT);
-      Date oneSecondAgo = new Date(System.currentTimeMillis() - 1000);
-      
-      Map<String,String> init = new HashMap<>();
-      init.put("dateFormat", dateFormat);
-      init.put("filename", fileName);
-      init.put("directory", fileLocation);
-      SimplePropertiesWriter spw = new SimplePropertiesWriter();
-      spw.init(new DataImporter(), init);
-      Map<String, Object> props = new HashMap<>();
-      props.put("SomeDates.last_index_time", oneSecondAgo);
-      props.put("last_index_time", oneSecondAgo);
-      spw.persist(props);
-      
-      h.query("/dataimport", generateRequest());  
-      props = spw.readIndexerProperties();
-      Date entityDate = df.parse((String) props.get("SomeDates.last_index_time"));
-      Date docDate= df.parse((String) props.get("last_index_time"));
-      int year = currentYearFromDatabase();
-      
-      assertTrue("This date: " + errMsgFormat.format(oneSecondAgo) + " should be prior to the document date: " + errMsgFormat.format(docDate), docDate.getTime() - oneSecondAgo.getTime() > 0);
-      assertTrue("This date: " + errMsgFormat.format(oneSecondAgo) + " should be prior to the entity date: " + errMsgFormat.format(entityDate), entityDate.getTime() - oneSecondAgo.getTime() > 0);
-      assertQ(req("*:*"), "//*[@numFound='1']", "//doc/str[@name=\"ayear_s\"]=\"" + year + "\"");    
-    }
-  }
-  
-  private int currentYearFromDatabase() throws Exception {
-    try (
-        Connection conn = newConnection();
-        Statement s = conn.createStatement();
-        ResultSet rs = s.executeQuery("select year(current_timestamp) from sysibm.sysdummy1");
-    ){
-      if (rs.next()) {
-        return rs.getInt(1);
-      }
-      fail("We should have gotten a row from the db.");
-    }
-    return 0;
-  }
-  
-  @Override
-  protected Database setAllowedDatabases() {
-    return Database.DERBY;
-  }  
-  @Override
-  protected String generateConfig() {
-    StringBuilder sb = new StringBuilder();
-    String q = useJdbcEscapeSyntax ? "" : "'";
-    sb.append("<dataConfig> \n");
-    sb.append("<propertyWriter dateFormat=\"" + dateFormat + "\" type=\"SimplePropertiesWriter\" directory=\"" + fileLocation + "\" filename=\"" + fileName + "\" />\n");
-    sb.append("<dataSource name=\"derby\" driver=\"org.apache.derby.jdbc.EmbeddedDriver\" url=\"jdbc:derby:memory:derbyDB;territory=en_US\" /> \n");
-    sb.append("<document name=\"TestSimplePropertiesWriter\"> \n");
-    sb.append("<entity name=\"SomeDates\" processor=\"SqlEntityProcessor\" dataSource=\"derby\" ");
-    sb.append("query=\"select 1 as id, YEAR(" + q + "${dih.last_index_time}" + q + ") as AYEAR_S from sysibm.sysdummy1 \" >\n");
-    sb.append("<field column=\"AYEAR_S\" name=\"ayear_s\" /> \n");
-    sb.append("</entity>\n");
-    sb.append("</document> \n");
-    sb.append("</dataConfig> \n");
-    String config = sb.toString();
-    log.debug(config); 
-    return config;
-  }
-    
-}
diff --git a/solr/contrib/dataimporthandler/src/test/org/apache/solr/handler/dataimport/TestSolrEntityProcessorEndToEnd.java b/solr/contrib/dataimporthandler/src/test/org/apache/solr/handler/dataimport/TestSolrEntityProcessorEndToEnd.java
deleted file mode 100644
index b552d01..0000000
--- a/solr/contrib/dataimporthandler/src/test/org/apache/solr/handler/dataimport/TestSolrEntityProcessorEndToEnd.java
+++ /dev/null
@@ -1,374 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.solr.handler.dataimport;
-
-import java.io.File;
-import java.io.IOException;
-import java.lang.invoke.MethodHandles;
-import java.nio.file.Files;
-import java.util.ArrayList;
-import java.util.Arrays;
-import java.util.Collections;
-import java.util.HashMap;
-import java.util.List;
-import java.util.Map;
-import java.util.Map.Entry;
-import java.util.Properties;
-
-import org.apache.commons.io.FileUtils;
-import org.apache.lucene.util.IOUtils;
-import org.apache.solr.client.solrj.SolrServerException;
-import org.apache.solr.client.solrj.embedded.JettySolrRunner;
-import org.apache.solr.client.solrj.impl.HttpSolrClient;
-import org.apache.solr.common.SolrInputDocument;
-import org.junit.After;
-import org.junit.AfterClass;
-import org.junit.Before;
-import org.junit.BeforeClass;
-import org.slf4j.Logger;
-import org.slf4j.LoggerFactory;
-
-/**
- * End-to-end test of SolrEntityProcessor. "Real" test using embedded Solr
- */
-public class TestSolrEntityProcessorEndToEnd extends AbstractDataImportHandlerTestCase {
-  
-  private static final Logger log = LoggerFactory.getLogger(MethodHandles.lookup().lookupClass());
-  
-  private static final String SOLR_CONFIG = "dataimport-solrconfig.xml";
-  private static final String SOLR_SCHEMA = "dataimport-schema.xml";
-  private static final String SOURCE_CONF_DIR = "dih" + File.separator + "solr" + File.separator + "collection1" + File.separator + "conf" + File.separator;
-  private static final String ROOT_DIR = "dih" + File.separator + "solr" + File.separator;
-
-  private static final String DEAD_SOLR_SERVER = "http://" + DEAD_HOST_1 + "/solr";
-  
-  private static final List<Map<String,Object>> DB_DOCS = new ArrayList<>();
-  private static final List<Map<String,Object>> SOLR_DOCS = new ArrayList<>();
-  
-  static {
-    // dynamic fields in the destination schema
-    Map<String,Object> dbDoc = new HashMap<>();
-    dbDoc.put("dbid_s", "1");
-    dbDoc.put("dbdesc_s", "DbDescription");
-    DB_DOCS.add(dbDoc);
-
-    Map<String,Object> solrDoc = new HashMap<>();
-    solrDoc.put("id", "1");
-    solrDoc.put("desc", "SolrDescription");
-    SOLR_DOCS.add(solrDoc);
-  }
-
-  
-  private SolrInstance instance = null;
-  private JettySolrRunner jetty;
-  
-  private String getDihConfigTagsInnerEntity() {
-    return  "<dataConfig>\r\n"
-        + "  <dataSource type='MockDataSource' />\r\n"
-        + "  <document>\r\n"
-        + "    <entity name='db' query='select * from x'>\r\n"
-        + "      <field column='dbid_s' />\r\n"
-        + "      <field column='dbdesc_s' />\r\n"
-        + "      <entity name='se' processor='SolrEntityProcessor' query='id:${db.dbid_s}'\n"
-        + "     url='" + getSourceUrl() + "' fields='id,desc'>\r\n"
-        + "        <field column='id' />\r\n"
-        + "        <field column='desc' />\r\n" + "      </entity>\r\n"
-        + "    </entity>\r\n" + "  </document>\r\n" + "</dataConfig>\r\n";
-  }
-  
-  private String generateDIHConfig(String options, boolean useDeadServer) {
-    return "<dataConfig>\r\n" + "  <document>\r\n"
-        + "    <entity name='se' processor='SolrEntityProcessor'" + "   url='"
-        + (useDeadServer ? DEAD_SOLR_SERVER : getSourceUrl()) + "' " + options + " />\r\n" + "  </document>\r\n"
-        + "</dataConfig>\r\n";
-  }
-  
-  private String getSourceUrl() {
-    return buildUrl(jetty.getLocalPort(), "/solr/collection1");
-  }
-  
-  //TODO: fix this test to close its directories
-  static String savedFactory;
-  @BeforeClass
-  public static void beforeClass() {
-    savedFactory = System.getProperty("solr.directoryFactory");
-    System.setProperty("solr.directoryFactory", "solr.StandardDirectoryFactory");
-  }
-  
-  @AfterClass
-  public static void afterClass() {
-    if (savedFactory == null) {
-      System.clearProperty("solr.directoryFactory");
-    } else {
-      System.setProperty("solr.directoryFactory", savedFactory);
-    }
-  }
-
-  @Override
-  @Before
-  public void setUp() throws Exception {
-    super.setUp();
-    // destination solr core
-    initCore(SOLR_CONFIG, SOLR_SCHEMA);
-    // data source solr instance
-    instance = new SolrInstance();
-    instance.setUp();
-    jetty = createAndStartJetty(instance);
-  }
-  
-  @Override
-  @After
-  public void tearDown() throws Exception {
-    try {
-      deleteCore();
-    } catch (Exception e) {
-      log.error("Error deleting core", e);
-    }
-    if (null != jetty) {
-      jetty.stop();
-      jetty = null;
-    }
-    if (null != instance) {
-      instance.tearDown();
-      instance = null;
-    }
-    super.tearDown();
-  }
-
-  //commented 23-AUG-2018  @BadApple(bugUrl="https://issues.apache.org/jira/browse/SOLR-12028") // added 20-Jul-2018
-  public void testFullImport() {
-    assertQ(req("*:*"), "//result[@numFound='0']");
-    
-    try {
-      addDocumentsToSolr(SOLR_DOCS);
-      runFullImport(generateDIHConfig("query='*:*' rows='2' fl='id,desc' onError='skip'", false));
-    } catch (Exception e) {
-      log.error("Exception running full import", e);
-      fail(e.getMessage());
-    }
-    
-    assertQ(req("*:*"), "//result[@numFound='1']");
-    assertQ(req("id:1"), "//result/doc/str[@name='id'][.='1']",
-        "//result/doc/arr[@name='desc'][.='SolrDescription']");
-  }
-  
-  public void testFullImportFqParam() {
-    assertQ(req("*:*"), "//result[@numFound='0']");
-    
-    try {
-      addDocumentsToSolr(generateSolrDocuments(30));
-      Map<String,String> map = new HashMap<>();
-      map.put("rows", "50");
-      runFullImport(generateDIHConfig("query='*:*' fq='desc:Description1*,desc:Description*2' rows='2'", false), map);
-    } catch (Exception e) {
-      log.error("Exception running full import", e);
-      fail(e.getMessage());
-    }
-    
-    assertQ(req("*:*"), "//result[@numFound='1']");
-    assertQ(req("id:12"), "//result[@numFound='1']", "//result/doc/arr[@name='desc'][.='Description12']");
-  }
-  
-  public void testFullImportFieldsParam() {
-    assertQ(req("*:*"), "//result[@numFound='0']");
-    
-    try {
-      addDocumentsToSolr(generateSolrDocuments(7));
-      runFullImport(generateDIHConfig("query='*:*' fl='id' rows='2'"+(random().nextBoolean() ?" cursorMark='true' sort='id asc'":""), false));
-    } catch (Exception e) {
-      log.error("Exception running full import", e);
-      fail(e.getMessage());
-    }
-    
-    assertQ(req("*:*"), "//result[@numFound='7']");
-    assertQ(req("id:1"), "//result[@numFound='1']");
-    assertQ(req("id:1"), "count(//result/doc/arr[@name='desc'])=0");
-  }
-  
-  /**
-   * Receive a row from SQL (Mock) and fetch a row from Solr
-   */
-  public void testFullImportInnerEntity() {
-    assertQ(req("*:*"), "//result[@numFound='0']");
-    
-    try {
-      List<Map<String,Object>> DOCS = new ArrayList<>(DB_DOCS);
-      Map<String, Object> doc = new HashMap<>();
-      doc.put("dbid_s", "2");
-      doc.put("dbdesc_s", "DbDescription2");
-      DOCS.add(doc);
-      MockDataSource.setIterator("select * from x", DOCS.iterator());
-
-      DOCS = new ArrayList<>(SOLR_DOCS);
-      Map<String,Object> solrDoc = new HashMap<>();
-      solrDoc.put("id", "2");
-      solrDoc.put("desc", "SolrDescription2");
-      DOCS.add(solrDoc);
-      addDocumentsToSolr(DOCS);
-      runFullImport(getDihConfigTagsInnerEntity());
-    } catch (Exception e) {
-      log.error("Exception running full import", e);
-      fail(e.getMessage());
-    } finally {
-      MockDataSource.clearCache();
-    }
-    
-    assertQ(req("*:*"), "//result[@numFound='2']");
-    assertQ(req("id:1"), "//result/doc/str[@name='id'][.='1']",
-        "//result/doc/str[@name='dbdesc_s'][.='DbDescription']",
-        "//result/doc/str[@name='dbid_s'][.='1']",
-        "//result/doc/arr[@name='desc'][.='SolrDescription']");
-    assertQ(req("id:2"), "//result/doc/str[@name='id'][.='2']",
-        "//result/doc/str[@name='dbdesc_s'][.='DbDescription2']",
-        "//result/doc/str[@name='dbid_s'][.='2']",
-        "//result/doc/arr[@name='desc'][.='SolrDescription2']");
-  }
-  
-  public void testFullImportWrongSolrUrl() {
-    assertQ(req("*:*"), "//result[@numFound='0']");
-    
-    try {
-      runFullImport(generateDIHConfig("query='*:*' rows='2' fl='id,desc' onError='skip'", true /* use dead server */));
-    } catch (Exception e) {
-      log.error("Exception running full import", e);
-      fail(e.getMessage());
-    }
-    
-    assertQ(req("*:*"), "//result[@numFound='0']");
-  }
-  
-  public void testFullImportBadConfig() {
-    assertQ(req("*:*"), "//result[@numFound='0']");
-    
-    try {
-      runFullImport(generateDIHConfig("query='bogus:3' rows='2' fl='id,desc' onError='"+
-            (random().nextBoolean() ? "abort" : "justtogetcoverage")+"'", false));
-    } catch (Exception e) {
-      log.error("Exception running full import", e);
-      fail(e.getMessage());
-    }
-    
-    assertQ(req("*:*"), "//result[@numFound='0']");
-  }
-  
-  public void testCursorMarkNoSort() throws SolrServerException, IOException {
-    assertQ(req("*:*"), "//result[@numFound='0']");
-    addDocumentsToSolr(generateSolrDocuments(7));
-    try {
-      List<String> errors = Arrays.asList("sort='id'", //wrong sort spec
-          "", //no sort spec
-          "sort='id asc' timeout='12345'"); // sort is fine, but set timeout
-      Collections.shuffle(errors, random());
-      String attrs = "query='*:*' rows='2' fl='id,desc' cursorMark='true' "
-                                                            + errors.get(0);
-      runFullImport(generateDIHConfig(attrs,
-            false));
-    } catch (Exception e) {
-      log.error("Exception running full import", e);
-      fail(e.getMessage());
-    }
-    
-    assertQ(req("*:*"), "//result[@numFound='0']");
-  }
-  
-  private static List<Map<String,Object>> generateSolrDocuments(int num) {
-    List<Map<String,Object>> docList = new ArrayList<>();
-    for (int i = 1; i <= num; i++) {
-      Map<String,Object> map = new HashMap<>();
-      map.put("id", i);
-      map.put("desc", "Description" + i);
-      docList.add(map);
-    }
-    return docList;
-  }
-  
-  private void addDocumentsToSolr(List<Map<String,Object>> docs) throws SolrServerException, IOException {
-    List<SolrInputDocument> sidl = new ArrayList<>();
-    for (Map<String,Object> doc : docs) {
-      SolrInputDocument sd = new SolrInputDocument();
-      for (Entry<String,Object> entry : doc.entrySet()) {
-        sd.addField(entry.getKey(), entry.getValue());
-      }
-      sidl.add(sd);
-    }
-
-    try (HttpSolrClient solrServer = getHttpSolrClient(getSourceUrl(), 15000, 30000)) {
-      solrServer.add(sidl);
-      solrServer.commit(true, true);
-    }
-  }
-  
-  private static class SolrInstance {
-    File homeDir;
-    File confDir;
-    File dataDir;
-    
-    public String getHomeDir() {
-      return homeDir.toString();
-    }
-    
-    public String getSchemaFile() {
-      return SOURCE_CONF_DIR + "dataimport-schema.xml";
-    }
-    
-    public String getDataDir() {
-      return dataDir.toString();
-    }
-    
-    public String getSolrConfigFile() {
-      return SOURCE_CONF_DIR + "dataimport-solrconfig.xml";
-    }
-
-    public String getSolrXmlFile() {
-      return ROOT_DIR + "solr.xml";
-    }
-
-    public void setUp() throws Exception {
-      homeDir = createTempDir().toFile();
-      dataDir = new File(homeDir + "/collection1", "data");
-      confDir = new File(homeDir + "/collection1", "conf");
-      
-      homeDir.mkdirs();
-      dataDir.mkdirs();
-      confDir.mkdirs();
-
-      FileUtils.copyFile(getFile(getSolrXmlFile()), new File(homeDir, "solr.xml"));
-      File f = new File(confDir, "solrconfig.xml");
-      FileUtils.copyFile(getFile(getSolrConfigFile()), f);
-      f = new File(confDir, "schema.xml");
-      
-      FileUtils.copyFile(getFile(getSchemaFile()), f);
-      f = new File(confDir, "data-config.xml");
-      FileUtils.copyFile(getFile(SOURCE_CONF_DIR + "dataconfig-contentstream.xml"), f);
-
-      Files.createFile(confDir.toPath().resolve("../core.properties"));
-    }
-
-    public void tearDown() throws Exception {
-      IOUtils.rm(homeDir.toPath());
-    }
-  }
-  
-  private JettySolrRunner createAndStartJetty(SolrInstance instance) throws Exception {
-    Properties nodeProperties = new Properties();
-    nodeProperties.setProperty("solr.data.dir", instance.getDataDir());
-    JettySolrRunner jetty = new JettySolrRunner(instance.getHomeDir(), nodeProperties, buildJettyConfig("/solr"));
-    jetty.start();
-    return jetty;
-  }
-  
-}
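The cursor-mark cases above lean on Solr's deep-paging contract: a cursorMark is only honoured when the query sorts on the uniqueKey field, which is why a missing or wrong sort spec must leave the target index empty. A minimal sketch of the client-side loop using the standard SolrJ cursor API (the client wiring, query, and page size are illustrative):

    import org.apache.solr.client.solrj.SolrClient;
    import org.apache.solr.client.solrj.SolrQuery;
    import org.apache.solr.client.solrj.response.QueryResponse;
    import org.apache.solr.common.params.CursorMarkParams;

    // Sketch: drain all results via cursor paging; a sort on the uniqueKey is mandatory.
    static void drainWithCursor(SolrClient client) throws Exception {
      SolrQuery q = new SolrQuery("*:*");
      q.setRows(2);
      q.setSort("id", SolrQuery.ORDER.asc);               // uniqueKey sort enables the cursor
      String cursor = CursorMarkParams.CURSOR_MARK_START; // "*"
      while (true) {
        q.set(CursorMarkParams.CURSOR_MARK_PARAM, cursor);
        QueryResponse rsp = client.query(q);
        // consume rsp.getResults() here
        String next = rsp.getNextCursorMark();
        if (cursor.equals(next)) break;                   // unchanged cursor: no more pages
        cursor = next;
      }
    }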
diff --git a/solr/contrib/dataimporthandler/src/test/org/apache/solr/handler/dataimport/TestSolrEntityProcessorUnit.java b/solr/contrib/dataimporthandler/src/test/org/apache/solr/handler/dataimport/TestSolrEntityProcessorUnit.java
deleted file mode 100644
index 1753b81..0000000
--- a/solr/contrib/dataimporthandler/src/test/org/apache/solr/handler/dataimport/TestSolrEntityProcessorUnit.java
+++ /dev/null
@@ -1,188 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.solr.handler.dataimport;
-
-import java.util.*;
-
-import org.apache.solr.client.solrj.SolrQuery;
-import org.apache.solr.common.params.CommonParams;
-import org.apache.solr.common.params.CursorMarkParams;
-import org.apache.solr.handler.dataimport.SolrEntityProcessor.SolrDocumentListIterator;
-import org.junit.Test;
-
-/**
- * Unit test of SolrEntityProcessor. A very basic test outside of the DIH.
- */
-public class TestSolrEntityProcessorUnit extends AbstractDataImportHandlerTestCase {
-
-  private static final class NoNextMockProcessor extends SolrEntityProcessor {
-    @Override
-    protected void nextPage() {
-    }
-  }
-
-  private static final String ID = "id";
-
-  public void testQuery() {
-    List<Doc> docs = generateUniqueDocs(2);
-
-    MockSolrEntityProcessor processor = createAndInit(docs);
-    try {
-      assertExpectedDocs(docs, processor);
-      assertEquals(1, processor.getQueryCount());
-    } finally {
-      processor.destroy();
-    }
-  }
-
-  private MockSolrEntityProcessor createAndInit(List<Doc> docs) {
-    return createAndInit(docs, SolrEntityProcessor.ROWS_DEFAULT);
-  }
-
-  public void testNumDocsGreaterThanRows() {
-    List<Doc> docs = generateUniqueDocs(44);
-
-    int rowsNum = 10;
-    MockSolrEntityProcessor processor = createAndInit(docs, rowsNum);
-    try {
-      assertExpectedDocs(docs, processor);
-      assertEquals(5, processor.getQueryCount());
-    } finally {
-      processor.destroy();
-    }
-  }
-
-  private MockSolrEntityProcessor createAndInit(List<Doc> docs, int rowsNum) {
-    MockSolrEntityProcessor processor = new MockSolrEntityProcessor(docs, rowsNum);
-    HashMap<String,String> entityAttrs = new HashMap<String,String>(){{put(SolrEntityProcessor.SOLR_SERVER,"http://route:66/no");}};
-    processor.init(getContext(null, null, null, null, Collections.emptyList(), 
-        entityAttrs));
-    return processor;
-  }
-
-  public void testMultiValuedFields() {
-    List<Doc> docs = new ArrayList<>();
-    List<FldType> types = new ArrayList<>();
-    types.add(new FldType(ID, ONE_ONE, new SVal('A', 'Z', 4, 4)));
-    types.add(new FldType("description", new IRange(3, 3), new SVal('a', 'c', 1, 1)));
-    Doc testDoc = createDoc(types);
-    docs.add(testDoc);
-
-    MockSolrEntityProcessor processor = createAndInit(docs);
-    try {
-      Map<String, Object> next = processor.nextRow();
-      assertNotNull(next);
-  
-      @SuppressWarnings({"unchecked", "rawtypes"})
-      List<Comparable> multiField = (List<Comparable>) next.get("description");
-      assertEquals(testDoc.getValues("description").size(), multiField.size());
-      assertEquals(testDoc.getValues("description"), multiField);
-      assertEquals(1, processor.getQueryCount());
-      assertNull(processor.nextRow());
-    } finally {
-      processor.destroy();
-    }
-  }
-  @Test (expected = DataImportHandlerException.class)
-  public void testNoQuery() {
-    SolrEntityProcessor processor = new SolrEntityProcessor();
-    
-    HashMap<String,String> entityAttrs = new HashMap<String,String>(){{put(SolrEntityProcessor.SOLR_SERVER,"http://route:66/no");}};
-    processor.init(getContext(null, null, null, null, Collections.emptyList(), 
-        entityAttrs));
-    try {
-      processor.buildIterator();
-    } finally {
-      processor.destroy();
-    }
-  }
-  
-  public void testPagingQuery() {
-    SolrEntityProcessor processor = new NoNextMockProcessor() ;
-    
-    HashMap<String,String> entityAttrs = new HashMap<String,String>(){{
-      put(SolrEntityProcessor.SOLR_SERVER,"http://route:66/no");
-      if (random().nextBoolean()) {
-        List<String> noCursor = Arrays.asList("", "false", CursorMarkParams.CURSOR_MARK_START); // anything but 'true' leaves the cursor off, even '*'
-        Collections.shuffle(noCursor, random());
-        put(CursorMarkParams.CURSOR_MARK_PARAM,  noCursor.get(0));
-      }}};
-    processor.init(getContext(null, null, null, null, Collections.emptyList(), 
-        entityAttrs));
-    try {
-      processor.buildIterator();
-      SolrQuery query = new SolrQuery();
-      ((SolrDocumentListIterator) processor.rowIterator).passNextPage(query);
-      assertEquals("0", query.get(CommonParams.START));
-      assertNull(query.get(CursorMarkParams.CURSOR_MARK_PARAM));
-      assertNotNull(query.get(CommonParams.TIME_ALLOWED));
-    } finally {
-      processor.destroy();
-    }
-  }
-  
-  public void testCursorQuery() {
-    SolrEntityProcessor processor = new NoNextMockProcessor() ;
-    
-    HashMap<String,String> entityAttrs = new HashMap<String,String>(){{
-      put(SolrEntityProcessor.SOLR_SERVER,"http://route:66/no");
-      put(CursorMarkParams.CURSOR_MARK_PARAM,"true");
-      }};
-    processor.init(getContext(null, null, null, null, Collections.emptyList(), 
-        entityAttrs));
-    try {
-      processor.buildIterator();
-      SolrQuery query = new SolrQuery();
-      ((SolrDocumentListIterator) processor.rowIterator).passNextPage(query);
-      assertNull(query.get(CommonParams.START));
-      assertEquals(CursorMarkParams.CURSOR_MARK_START, query.get(CursorMarkParams.CURSOR_MARK_PARAM));
-      assertNull(query.get(CommonParams.TIME_ALLOWED));
-    } finally {
-      processor.destroy();
-    }
-  }
-
-  private List<Doc> generateUniqueDocs(int numDocs) {
-    List<FldType> types = new ArrayList<>();
-    types.add(new FldType(ID, ONE_ONE, new SVal('A', 'Z', 4, 40)));
-    types.add(new FldType("description", new IRange(1, 3), new SVal('a', 'c', 1, 1)));
-
-    @SuppressWarnings({"rawtypes"})
-    Set<Comparable> previousIds = new HashSet<>();
-    List<Doc> docs = new ArrayList<>(numDocs);
-    for (int i = 0; i < numDocs; i++) {
-      Doc doc = createDoc(types);
-      while (previousIds.contains(doc.id)) {
-        doc = createDoc(types);
-      }
-      previousIds.add(doc.id);
-      docs.add(doc);
-    }
-    return docs;
-  }
-
-  private static void assertExpectedDocs(List<Doc> expectedDocs, SolrEntityProcessor processor) {
-    for (Doc expectedDoc : expectedDocs) {
-      Map<String, Object> next = processor.nextRow();
-      assertNotNull(next);
-      assertEquals(expectedDoc.id, next.get("id"));
-      assertEquals(expectedDoc.getValues("description"), next.get("description"));
-    }
-    assertNull(processor.nextRow());
-  }
-
-}
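The page-count assertions above follow directly from the paging arithmetic: with numFound known up front, the processor issues one query per page of `rows` documents, so 44 documents at 10 rows per page cost 5 queries. A one-line helper capturing that expectation (a sketch, assuming the processor stops once numFound documents have been returned):

    // ceil(numDocs / rows): e.g. expectedQueries(44, 10) == 5, matching testNumDocsGreaterThanRows.
    static int expectedQueries(int numDocs, int rows) {
      return (numDocs + rows - 1) / rows;
    }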
diff --git a/solr/contrib/dataimporthandler/src/test/org/apache/solr/handler/dataimport/TestSortedMapBackedCache.java b/solr/contrib/dataimporthandler/src/test/org/apache/solr/handler/dataimport/TestSortedMapBackedCache.java
deleted file mode 100644
index 8dd1b55..0000000
--- a/solr/contrib/dataimporthandler/src/test/org/apache/solr/handler/dataimport/TestSortedMapBackedCache.java
+++ /dev/null
@@ -1,192 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.solr.handler.dataimport;
-
-import java.lang.invoke.MethodHandles;
-
-import java.math.BigDecimal;
-import java.util.ArrayList;
-import java.util.HashMap;
-import java.util.Iterator;
-import java.util.List;
-import java.util.Map;
-
-import org.junit.Assert;
-import org.junit.Test;
-import org.slf4j.Logger;
-import org.slf4j.LoggerFactory;
-
-public class TestSortedMapBackedCache extends AbstractDIHCacheTestCase {
-  
-  private static final Logger log = LoggerFactory.getLogger(MethodHandles.lookup().lookupClass());
-  
-  @Test
-  public void testCacheWithKeyLookup() {
-    DIHCache cache = null;
-    try {
-      cache = new SortedMapBackedCache();
-      cache.open(getContext(new HashMap<String,String>()));
-      loadData(cache, data, fieldNames, true);
-      List<ControlData> testData = extractDataByKeyLookup(cache, fieldNames);
-      compareData(data, testData);
-    } catch (Exception e) {
-      log.warn("Exception thrown: {}", e);
-      Assert.fail();
-    } finally {
-      try {
-        cache.destroy();
-      } catch (Exception ex) {
-      }
-    }
-  }
-
-  @Test
-  public void testCacheWithOrderedLookup() {
-    DIHCache cache = null;
-    try {
-      cache = new SortedMapBackedCache();
-      cache.open(getContext(new HashMap<String,String>()));
-      loadData(cache, data, fieldNames, true);
-      List<ControlData> testData = extractDataInKeyOrder(cache, fieldNames);
-      compareData(data, testData);
-    } catch (Exception e) {
-      log.warn("Exception thrown: {}", e);
-      Assert.fail();
-    } finally {
-      try {
-        cache.destroy();
-      } catch (Exception ex) {
-      }
-    }
-  }
-  
-  @Test
-  public void testNullKeys() throws Exception {
-    //A null key should just be ignored, but not throw an exception
-    DIHCache cache = null;
-    try {
-      cache = new SortedMapBackedCache();
-      Map<String, String> cacheProps = new HashMap<>();
-      cacheProps.put(DIHCacheSupport.CACHE_PRIMARY_KEY, "a_id");
-      cache.open(getContext(cacheProps));
-      
-      Map<String,Object> data = new HashMap<>();
-      data.put("a_id", null);
-      data.put("bogus", "data");
-      cache.add(data);
-      
-      Iterator<Map<String, Object>> cacheIter = cache.iterator();
-      Assert.assertFalse("cache should be empty.", cacheIter.hasNext());
-      Assert.assertNull(cache.iterator(null));
-      cache.delete(null);      
-    } finally {
-      try {
-        cache.destroy();
-      } catch (Exception ex) {
-      }
-    }    
-  }
-
-  @Test
-  public void testCacheReopensWithUpdate() {
-    DIHCache cache = null;
-    try {      
-      Map<String, String> cacheProps = new HashMap<>();
-      cacheProps.put(DIHCacheSupport.CACHE_PRIMARY_KEY, "a_id");
-      
-      cache = new SortedMapBackedCache();
-      cache.open(getContext(cacheProps));
-      // We can let the data hit the cache with the fields out of order because
-      // we've identified the pk up-front.
-      loadData(cache, data, fieldNames, false);
-
-      // Close the cache.
-      cache.close();
-
-      List<ControlData> newControlData = new ArrayList<>();
-      Object[] newIdEqualsThree = null;
-      for (int i = 0; i < data.size(); i++) {
-        // We'll be deleting a_id=1 so remove it from the control data.
-        if (data.get(i).data[0].equals(1)) {
-          continue;
-        }
-
-        // We'll be changing "Cookie" to "Carrot" in a_id=3 so change it in the control data.
-        if (data.get(i).data[0].equals(3)) {
-          newIdEqualsThree = new Object[data.get(i).data.length];
-          System.arraycopy(data.get(i).data, 0, newIdEqualsThree, 0, newIdEqualsThree.length);
-          newIdEqualsThree[3] = "Carrot";
-          newControlData.add(new ControlData(newIdEqualsThree));
-        }
-        // Everything else can just be copied over.
-        else {
-          newControlData.add(data.get(i));
-        }
-      }
-
-      // These new rows of data will get added to the cache, so add them to the control data too.
-      Object[] newDataRow1 = new Object[] {99, new BigDecimal(Math.PI), "Z", "Zebra", 99.99f, Feb21_2011, null };
-      Object[] newDataRow2 = new Object[] {2, new BigDecimal(Math.PI), "B", "Ballerina", 2.22f, Feb21_2011, null };
-
-      newControlData.add(new ControlData(newDataRow1));
-      newControlData.add(new ControlData(newDataRow2));
-
-      // Re-open the cache
-      cache.open(getContext(new HashMap<String,String>()));
-
-      // Delete a_id=1 from the cache.
-      cache.delete(1);
-
-      // Because the cache allows duplicates, the only way to update is to
-      // delete first then add.
-      cache.delete(3);
-      cache.add(controlDataToMap(new ControlData(newIdEqualsThree), fieldNames, false));
-
-      // Add this row with a new Primary key.
-      cache.add(controlDataToMap(new ControlData(newDataRow1), fieldNames, false));
-
-      // Add this row, creating two records in the cache with a_id=2.
-      cache.add(controlDataToMap(new ControlData(newDataRow2), fieldNames, false));
-
-      // Read the cache back and compare to the newControlData
-      List<ControlData> testData = extractDataInKeyOrder(cache, fieldNames);
-      compareData(newControlData, testData);
-
-      // Now try reading the cache read-only.
-      cache.close();
-      cache.open(getContext(new HashMap<String,String>()));
-      testData = extractDataInKeyOrder(cache, fieldNames);
-      compareData(newControlData, testData);
-
-    } catch (Exception e) {
-      log.warn("Exception thrown: {}", e);
-      Assert.fail();
-    } finally {
-      try {
-        cache.destroy();
-      } catch (Exception ex) {
-      }
-    }
-  }
-}
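The delete-then-add dance in testCacheReopensWithUpdate is the whole update protocol for this cache: duplicates are allowed, so there is no in-place replace, and an update means removing every row stored under the key before adding the replacement. A sketch of that upsert against the DIHCache-style API exercised above (the helper itself is illustrative, not part of DIH):

    import java.util.Map;

    // Sketch: upsert for a duplicate-tolerant cache -- delete all rows under the
    // primary key, then add the new row (which must itself carry the key field).
    static void upsert(DIHCache cache, Object primaryKey, Map<String, Object> newRow) {
      cache.delete(primaryKey);
      cache.add(newRow);
    }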
diff --git a/solr/contrib/dataimporthandler/src/test/org/apache/solr/handler/dataimport/TestSqlEntityProcessor.java b/solr/contrib/dataimporthandler/src/test/org/apache/solr/handler/dataimport/TestSqlEntityProcessor.java
deleted file mode 100644
index f1277c9..0000000
--- a/solr/contrib/dataimporthandler/src/test/org/apache/solr/handler/dataimport/TestSqlEntityProcessor.java
+++ /dev/null
@@ -1,115 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.solr.handler.dataimport;
-
-import org.junit.Ignore;
-import org.junit.Test;
-
-/**
- * Test with various combinations of parameters, child entities, caches, transformers.
- */
-public class TestSqlEntityProcessor extends AbstractSqlEntityProcessorTestCase { 
-   
-  @Test
-  public void testSingleEntity() throws Exception {
-    singleEntity(1);
-  }  
-  @Test
-  public void testWithSimpleTransformer() throws Exception {
-    simpleTransform(1);   
-  }
-  @Test
-  public void testWithComplexTransformer() throws Exception {
-    complexTransform(1, 0);
-  }
-  @Test
-  public void testChildEntities() throws Exception {
-    withChildEntities(false, true);
-  }
-  @Test
-  public void testCachedChildEntities() throws Exception {
-    withChildEntities(true, true);
-  }
-  
-  @Test
-  public void testSportZipperChildEntities() throws Exception {
-    sportsZipper = true;
-    withChildEntities(true, true);
-  }
-
-  @Test
-  public void testCountryZipperChildEntities() throws Exception {
-    countryZipper = true;
-    withChildEntities(true, true);
-  }
-  
-  @Test
-  public void testBothZipperChildEntities() throws Exception {
-    countryZipper = true;
-    sportsZipper = true;
-    withChildEntities(true, true);
-  }
-  
-  @Test(expected = RuntimeException.class /* DIH exceptions are not propagated; here we are capturing assertQ failures */)
-  public void testSportZipperChildEntitiesWrongOrder() throws Exception {
-    if (random().nextBoolean()) {
-      wrongPeopleOrder = true;
-    } else {
-      wrongSportsOrder = true;
-    }
-    testSportZipperChildEntities();
-  }
-
-  @Test(expected = RuntimeException.class)
-  public void testCountryZipperChildEntitiesWrongOrder() throws Exception {
-    if (random().nextBoolean()) {
-      wrongPeopleOrder = true;
-    } else {
-      wrongCountryOrder = true;
-    }
-    testCountryZipperChildEntities();
-  }
-  
-  @Test(expected=RuntimeException.class)
-  public void testBothZipperChildEntitiesWrongOrder() throws Exception {
-    if (random().nextBoolean()) {
-      wrongPeopleOrder = true;
-    } else if (random().nextBoolean()) {
-      wrongSportsOrder = true;
-    } else {
-      wrongCountryOrder = true;
-    }
-    testBothZipperChildEntities();
-  }
-  
-  @Test
-  @Ignore("broken see SOLR-3857")
-  public void testSimpleCacheChildEntities() throws Exception {
-    simpleCacheChildEntities(true);
-  }
-   
-  @Override
-  protected String deltaQueriesCountryTable() {
-    return "";
-  }
-  @Override
-  protected String deltaQueriesPersonTable() {
-    return "";
-  }  
-}
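The zipper tests encode a merge-join contract: parent and child result sets must both arrive sorted by the join key, because the join only ever moves forward through each stream, and the WrongOrder variants assert that unsorted input fails loudly instead of mis-joining. A generic sketch of the lockstep advance (iterator types and the failure message are illustrative, not the DIH implementation):

    import java.util.Iterator;
    import java.util.Map;

    // Sketch: zipper (merge) join of two streams sorted ascending by key.
    static void zipper(Iterator<Map.Entry<Integer, String>> parents,
                       Iterator<Map.Entry<Integer, String>> children) {
      Map.Entry<Integer, String> p = parents.hasNext() ? parents.next() : null;
      Map.Entry<Integer, String> c = children.hasNext() ? children.next() : null;
      while (p != null && c != null) {
        int cmp = p.getKey().compareTo(c.getKey());
        if (cmp == 0) {
          // emit joined row; advance the child side (a parent may have many children)
          c = children.hasNext() ? children.next() : null;
        } else if (cmp < 0) {
          p = parents.hasNext() ? parents.next() : null;  // childless parent
        } else {
          // child key behind the current parent: impossible with sorted input
          throw new IllegalStateException("input not sorted by join key: " + c.getKey());
        }
      }
    }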
diff --git a/solr/contrib/dataimporthandler/src/test/org/apache/solr/handler/dataimport/TestSqlEntityProcessorDelta.java b/solr/contrib/dataimporthandler/src/test/org/apache/solr/handler/dataimport/TestSqlEntityProcessorDelta.java
deleted file mode 100644
index 9708cdc..0000000
--- a/solr/contrib/dataimporthandler/src/test/org/apache/solr/handler/dataimport/TestSqlEntityProcessorDelta.java
+++ /dev/null
@@ -1,209 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.solr.handler.dataimport;
-
-import java.lang.invoke.MethodHandles;
-
-import org.apache.solr.request.LocalSolrQueryRequest;
-import org.junit.Before;
-import org.junit.Test;
-import org.slf4j.Logger;
-import org.slf4j.LoggerFactory;
-
-/**
- * Test with various combinations of parameters, child entities, transformers.
- */
-public class TestSqlEntityProcessorDelta extends AbstractSqlEntityProcessorTestCase {
-
-  private static final Logger log = LoggerFactory.getLogger(MethodHandles.lookup().lookupClass());
-
-  private boolean delta = false;
-  private boolean useParentDeltaQueryParam = false;
-  private IntChanges personChanges = null;
-  private String[] countryChanges = null;
-  
-  @Before
-  public void setupDeltaTest() {
-    delta = false;
-    personChanges = null;
-    countryChanges = null;
-  }
-  @Test
-  public void testSingleEntity() throws Exception {
-    log.debug("testSingleEntity full-import...");
-    singleEntity(1);
-    logPropertiesFile();
-    changeStuff();
-    int c = calculateDatabaseCalls();
-    log.debug("testSingleEntity delta-import ({} database calls expected)...", c);
-    singleEntity(c);
-    validateChanges();
-  }
-  
-  @Test
-  public void testDeltaImportWithoutInitialFullImport() throws Exception {
-    log.debug("testDeltaImportWithoutInitialFullImport delta-import...");
-    countryEntity = false;
-    delta = true;
-    /*
-     * Two extra database calls are expected:
-     * +1 for the deltaQuery, i.e. identifying the ids of items to update,
-     * +1 for the deletedPkQuery, i.e. identifying the ids of items to delete
-     */
-    singleEntity(totalPeople() + 2);
-    validateChanges();
-  }
-
-  @Test
-  public void testWithSimpleTransformer() throws Exception {
-    log.debug("testWithSimpleTransformer full-import...");    
-    simpleTransform(1); 
-    logPropertiesFile(); 
-    changeStuff();
-    int c = calculateDatabaseCalls();
-    log.debug("testWithSimpleTransformer delta-import ({} database calls expected)...", c);
-    simpleTransform(c);
-    validateChanges(); 
-  }
-  @Test
-  public void testWithComplexTransformer() throws Exception {
-    log.debug("testWithComplexTransformer full-import...");     
-    complexTransform(1, 0);
-    logPropertiesFile();
-    changeStuff();
-    int c = calculateDatabaseCalls();
-    log.debug("testWithComplexTransformer delta-import ({} database calls expected)...", c);
-    complexTransform(c, personChanges.deletedKeys.length);
-    validateChanges();  
-  }
-  @Test
-  public void testChildEntities() throws Exception {
-    log.debug("testChildEntities full-import...");
-    useParentDeltaQueryParam = random().nextBoolean();
-    log.debug("using parent delta? {}", useParentDeltaQueryParam);
-    withChildEntities(false, true);
-    logPropertiesFile();
-    changeStuff();
-    log.debug("testChildEntities delta-import...");
-    withChildEntities(false, false);
-    validateChanges();
-  }
-    
-  
-  private int calculateDatabaseCalls() {
-    // The main query generates 1 call.
-    // Deletes generate 1 call.
-    // Each add/mod generates 1 call.
-    int c = 1;
-    if (countryChanges != null) {
-      c += countryChanges.length + 1;
-    }
-    if (personChanges != null) {
-      c += personChanges.addedKeys.length + personChanges.changedKeys.length + 1;
-    }
-    return c;    
-  }
-  private void validateChanges() throws Exception {
-    if (personChanges != null) {
-      for (int id : personChanges.addedKeys) {
-        assertQ(req("id:" + id), "//*[@numFound='1']");
-      }
-      for (int id : personChanges.deletedKeys) {
-        assertQ(req("id:" + id), "//*[@numFound='0']");
-      }
-      for (int id : personChanges.changedKeys) {
-        assertQ(req("id:" + id), "//*[@numFound='1']", "substring(//doc/arr[@name='NAME_mult_s']/str[1], 1, 8)='MODIFIED'");
-      }
-    }
-    if (countryChanges != null) {
-      for (String code : countryChanges) {
-        assertQ(req("COUNTRY_CODE_s:" + code), "//*[@numFound='" + numberPeopleByCountryCode(code) + "']", "substring((//doc/str[@name='COUNTRY_NAME_s'])[1], 1, 8)='MODIFIED'");
-      }
-    }
-  }
-  private void changeStuff() throws Exception {
-    if (countryEntity) {
-      int n = random().nextInt(3);
-      switch (n) {
-        case 0:
-          personChanges = modifySomePeople();
-          break;
-        case 1:
-          countryChanges = modifySomeCountries();  
-          break;
-        case 2:
-          personChanges = modifySomePeople();
-          countryChanges = modifySomeCountries();
-          break;
-      }
-    } else {
-      personChanges = modifySomePeople();
-    }
-    countryChangesLog();
-    personChangesLog();
-    delta = true;
-  }
-  private void countryChangesLog() {
-    if (countryChanges != null) {
-      StringBuilder sb = new StringBuilder();
-      sb.append("country changes { ");
-      for(String s : countryChanges) {
-        sb.append(s).append(" ");
-      }
-      sb.append(" }");    
-      log.debug("{}", sb);
-    }
-  }
-  private void personChangesLog() {
-    if (personChanges != null) {
-      log.debug("person changes [ {} ] ", personChanges);
-    }
-  }
-  @Override
-  protected LocalSolrQueryRequest generateRequest() {
-    return lrf.makeRequest("command", (delta ? "delta-import" : "full-import"), "dataConfig", generateConfig(), 
-        "clean", (delta ? "false" : "true"), "commit", "true", "synchronous", "true", "indent", "true");
-  }
-  @Override
-  protected String deltaQueriesPersonTable() {
-    return 
-        "deletedPkQuery=''SELECT ID FROM PEOPLE WHERE DELETED='Y' AND last_modified &gt;='${dih.People.last_index_time}' '' " +
-        "deltaImportQuery=''SELECT ID, NAME, COUNTRY_CODE FROM PEOPLE where ID=${dih.delta.ID} '' " +
-        "deltaQuery=''" +
-        "SELECT ID FROM PEOPLE WHERE DELETED!='Y' AND last_modified &gt;='${dih.People.last_index_time}' " +
-        (useParentDeltaQueryParam ? "" : 
-        "UNION DISTINCT " +
-        "SELECT ID FROM PEOPLE WHERE DELETED!='Y' AND COUNTRY_CODE IN (SELECT CODE FROM COUNTRIES WHERE last_modified &gt;='${dih.People.last_index_time}') "
-        ) + "'' "
-    ;
-  }
-  @Override
-  protected String deltaQueriesCountryTable() {
-    if(useParentDeltaQueryParam) {
-      return 
-          "deltaQuery=''SELECT CODE FROM COUNTRIES WHERE DELETED != 'Y' AND last_modified &gt;='${dih.last_index_time}' ''  " +
-          "parentDeltaQuery=''SELECT ID FROM PEOPLE WHERE DELETED != 'Y' AND COUNTRY_CODE='${Countries.CODE}' '' "
-      ;
-          
-    }
-    return "";
-  }
-}
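calculateDatabaseCalls above is a small cost model for delta-import: one call for the deltaQuery that finds changed ids, one for the deletedPkQuery, and one deltaImportQuery re-fetch per added or modified row. Restated as a sketch (parameter names are illustrative; the real count also folds in the country entity when it is active):

    // Sketch of the delta-import call count the tests above expect.
    static int expectedDeltaCalls(int added, int changed) {
      int calls = 1;                  // deltaQuery: find ids of changed rows
      calls += 1;                     // deletedPkQuery: find ids of deleted rows
      return calls + added + changed; // one deltaImportQuery per touched row
    }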
diff --git a/solr/contrib/dataimporthandler/src/test/org/apache/solr/handler/dataimport/TestTemplateTransformer.java b/solr/contrib/dataimporthandler/src/test/org/apache/solr/handler/dataimport/TestTemplateTransformer.java
deleted file mode 100644
index 11ea30b..0000000
--- a/solr/contrib/dataimporthandler/src/test/org/apache/solr/handler/dataimport/TestTemplateTransformer.java
+++ /dev/null
@@ -1,115 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.solr.handler.dataimport;
-
-import org.junit.Test;
-
-import java.util.ArrayList;
-import java.util.List;
-import java.util.Map;
-import java.util.Arrays;
-
-/**
- * <p>
- * Test for TemplateTransformer
- * </p>
- *
- *
- * @since solr 1.3
- */
-public class TestTemplateTransformer extends AbstractDataImportHandlerTestCase {
-
-  @Test
-  @SuppressWarnings("unchecked")
-  public void testTransformRow() {
-    @SuppressWarnings({"rawtypes"})
-    List fields = new ArrayList();
-    fields.add(createMap("column", "firstName"));
-    fields.add(createMap("column", "lastName"));
-    fields.add(createMap("column", "middleName"));
-    fields.add(createMap("column", "name",
-            TemplateTransformer.TEMPLATE,
-            "${e.lastName}, ${e.firstName} ${e.middleName}"));
-    fields.add(createMap("column", "emails",
-            TemplateTransformer.TEMPLATE,
-            "${e.mail}"));
-
-    // test reuse of template output in another template 
-    fields.add(createMap("column", "mrname",
-            TemplateTransformer.TEMPLATE,"Mr ${e.name}"));
-
-    List<String> mails = Arrays.asList("a@b.com", "c@d.com");
-    @SuppressWarnings({"rawtypes"})
-    Map row = createMap(
-            "firstName", "Shalin",
-            "middleName", "Shekhar", 
-            "lastName", "Mangar",
-            "mail", mails);
-
-    VariableResolver resolver = new VariableResolver();
-    resolver.addNamespace("e", row);
-    Map<String, String> entityAttrs = createMap("name", "e");
-
-    Context context = getContext(null, resolver,
-            null, Context.FULL_DUMP, fields, entityAttrs);
-    new TemplateTransformer().transformRow(row, context);
-    assertEquals("Mangar, Shalin Shekhar", row.get("name"));
-    assertEquals("Mr Mangar, Shalin Shekhar", row.get("mrname"));
-    assertEquals(mails,row.get("emails"));
-  }
-    
-  @Test
-  @SuppressWarnings("unchecked")
-  public void testTransformRowMultiValue() {
-    @SuppressWarnings({"rawtypes"})
-    List fields = new ArrayList();
-    fields.add(createMap("column", "year"));
-    fields.add(createMap("column", "month"));
-    fields.add(createMap("column", "day"));
-      
-    // create three variations of date format
-    fields.add(createMap( "column", "date",
-                          TemplateTransformer.TEMPLATE,
-                          "${e.day} ${e.month}, ${e.year}" ));
-    fields.add(createMap( "column", "date",
-                          TemplateTransformer.TEMPLATE,
-                          "${e.month} ${e.day}, ${e.year}" ));
-    fields.add(createMap("column", "date",
-                          TemplateTransformer.TEMPLATE,
-                          "${e.year}-${e.month}-${e.day}" ));
-      
-    @SuppressWarnings({"rawtypes"})
-    Map row = createMap( "year", "2016",
-                         "month", "Apr",
-                         "day", "30" );
-    VariableResolver resolver = new VariableResolver();
-    resolver.addNamespace("e", row);
-    Map<String, String> entityAttrs = createMap("date", "e");
-      
-    Context context = getContext(null, resolver,
-                                 null, Context.FULL_DUMP, fields, entityAttrs);
-    new TemplateTransformer().transformRow(row, context);
-    assertTrue( row.get( "date" ) instanceof List );
-    
-    List<Object> dates = (List<Object>)row.get( "date" );
-    assertEquals( dates.size(), 3 );
-    assertEquals( dates.get(0).toString(), "30 Apr, 2016" );
-    assertEquals( dates.get(1).toString(), "Apr 30, 2016" );
-    assertEquals( dates.get(2).toString(), "2016-Apr-30" );
-  }
-
-}
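TemplateTransformer reduces to variable interpolation: every ${namespace.field} token in the template is replaced by its resolved value, and because columns are processed in order, a template may reference the output of an earlier one (the 'mrname' case above). A toy interpolator, assuming values in a flat map keyed by "namespace.field" (a sketch, not the DIH implementation):

    import java.util.Map;
    import java.util.regex.Matcher;
    import java.util.regex.Pattern;

    // Sketch: replace each ${key} with vars.get(key); unknown keys become "".
    static String interpolate(String template, Map<String, Object> vars) {
      Matcher m = Pattern.compile("\\$\\{([^}]+)}").matcher(template);
      StringBuffer out = new StringBuffer();
      while (m.find()) {
        Object v = vars.get(m.group(1)); // e.g. "e.lastName"
        m.appendReplacement(out, Matcher.quoteReplacement(v == null ? "" : v.toString()));
      }
      return m.appendTail(out).toString();
    }

With the row from testTransformRow, interpolate("${e.lastName}, ${e.firstName} ${e.middleName}", vars) would yield "Mangar, Shalin Shekhar".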
diff --git a/solr/contrib/dataimporthandler/src/test/org/apache/solr/handler/dataimport/TestURLDataSource.java b/solr/contrib/dataimporthandler/src/test/org/apache/solr/handler/dataimport/TestURLDataSource.java
deleted file mode 100644
index c1acc54..0000000
--- a/solr/contrib/dataimporthandler/src/test/org/apache/solr/handler/dataimport/TestURLDataSource.java
+++ /dev/null
@@ -1,45 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.solr.handler.dataimport;
-
-import java.util.ArrayList;
-import java.util.Collections;
-import java.util.List;
-import java.util.Map;
-import java.util.Properties;
-
-import org.junit.Test;
-
-public class TestURLDataSource extends AbstractDataImportHandlerTestCase {
-  private List<Map<String, String>> fields = new ArrayList<>();
-  private URLDataSource dataSource = new URLDataSource();
-  private VariableResolver variableResolver = new VariableResolver();
-  private Context context = AbstractDataImportHandlerTestCase.getContext(null, variableResolver,
-      dataSource, Context.FULL_DUMP, fields, null);
-  private Properties initProps = new Properties();
-  
-  @Test
-  public void substitutionsOnBaseUrl() throws Exception {
-    String url = "http://example.com/";
-    
-    variableResolver.addNamespace("dataimporter.request", Collections.<String,Object>singletonMap("baseurl", url));
-    
-    initProps.setProperty(URLDataSource.BASE_URL, "${dataimporter.request.baseurl}");
-    dataSource.init(context, initProps);
-    assertEquals(url, dataSource.getBaseUrl());
-  }
-}
diff --git a/solr/contrib/dataimporthandler/src/test/org/apache/solr/handler/dataimport/TestVariableResolver.java b/solr/contrib/dataimporthandler/src/test/org/apache/solr/handler/dataimport/TestVariableResolver.java
deleted file mode 100644
index ef88fff..0000000
--- a/solr/contrib/dataimporthandler/src/test/org/apache/solr/handler/dataimport/TestVariableResolver.java
+++ /dev/null
@@ -1,173 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.solr.handler.dataimport;
-
-import java.text.SimpleDateFormat;
-import java.util.ArrayList;
-import java.util.Collections;
-import java.util.Date;
-import java.util.HashMap;
-import java.util.List;
-import java.util.Locale;
-import java.util.Map;
-import java.util.Properties;
-import java.util.TimeZone;
-
-import org.apache.solr.util.DateMathParser;
-import org.junit.Test;
-
-/**
- * <p>
- * Test for VariableResolver
- * </p>
- * 
- * 
- * @since solr 1.3
- */
-public class TestVariableResolver extends AbstractDataImportHandlerTestCase {
-  
-  @Test
-  public void testSimpleNamespace() {
-    VariableResolver vri = new VariableResolver();
-    Map<String,Object> ns = new HashMap<>();
-    ns.put("world", "WORLD");
-    vri.addNamespace("hello", ns);
-    assertEquals("WORLD", vri.resolve("hello.world"));
-  }
-  
-  @Test
-  public void testDefaults() {
-    System.setProperty(TestVariableResolver.class.getName(), "hello");
-    Properties p = new Properties();
-    p.put("hello", "world");
-    VariableResolver vri = new VariableResolver(p);
-    Object val = vri.resolve(TestVariableResolver.class.getName());
-    assertEquals("hello", val);
-    assertEquals("world", vri.resolve("hello"));
-  }
-  
-  @Test
-  public void testNestedNamespace() {
-    VariableResolver vri = new VariableResolver();
-    Map<String,Object> ns = new HashMap<>();
-    ns.put("world", "WORLD");
-    vri.addNamespace("hello", ns);
-    ns = new HashMap<>();
-    ns.put("world1", "WORLD1");
-    vri.addNamespace("hello.my", ns);
-    assertEquals("WORLD1", vri.resolve("hello.my.world1"));
-  }
-  
-  @Test
-  public void test3LevelNestedNamespace() {
-    VariableResolver vri = new VariableResolver();
-    Map<String,Object> ns = new HashMap<>();
-    ns.put("world", "WORLD");
-    vri.addNamespace("hello", ns);
-    ns = new HashMap<>();
-    ns.put("world1", "WORLD1");
-    vri.addNamespace("hello.my.new", ns);
-    assertEquals("WORLD1", vri.resolve("hello.my.new.world1"));
-  }
-  
-  @Test
-  public void dateNamespaceWithValue() {
-    VariableResolver vri = new VariableResolver();
-    vri.setEvaluators(new DataImporter().getEvaluators(Collections
-        .<Map<String,String>> emptyList()));
-    Map<String,Object> ns = new HashMap<>();
-    Date d = new Date();
-    ns.put("dt", d);
-    vri.addNamespace("A", ns);
-    assertEquals(
-        new SimpleDateFormat("yyyy-MM-dd HH:mm:ss", Locale.ROOT).format(d),
-        vri.replaceTokens("${dataimporter.functions.formatDate(A.dt,'yyyy-MM-dd HH:mm:ss')}"));
-  }
-  
-  @Test
-  public void dateNamespaceWithExpr() throws Exception {
-    VariableResolver vri = new VariableResolver();
-    vri.setEvaluators(new DataImporter().getEvaluators(Collections
-        .<Map<String,String>> emptyList()));
-    DateMathParser dmp = new DateMathParser(TimeZone.getDefault());
-    
-    String s = vri
-        .replaceTokens("${dataimporter.functions.formatDate('NOW/DAY','yyyy-MM-dd HH:mm')}");
-    assertEquals(
-        new SimpleDateFormat("yyyy-MM-dd HH:mm", Locale.ROOT).format(dmp.parseMath("/DAY")),
-        s);
-  }
-  
-  @Test
-  public void testDefaultNamespace() {
-    VariableResolver vri = new VariableResolver();
-    Map<String,Object> ns = new HashMap<>();
-    ns.put("world", "WORLD");
-    vri.addNamespace(null, ns);
-    assertEquals("WORLD", vri.resolve("world"));
-  }
-  
-  @Test
-  public void testDefaultNamespace1() {
-    VariableResolver vri = new VariableResolver();
-    Map<String,Object> ns = new HashMap<>();
-    ns.put("world", "WORLD");
-    vri.addNamespace(null, ns);
-    assertEquals("WORLD", vri.resolve("world"));
-  }
-  
-  @Test
-  public void testFunctionNamespace1() throws Exception {
-    VariableResolver resolver = new VariableResolver();
-    final List<Map<String,String>> l = new ArrayList<>();
-    Map<String,String> m = new HashMap<>();
-    m.put("name", "test");
-    m.put("class", E.class.getName());
-    l.add(m);
-    resolver.setEvaluators(new DataImporter().getEvaluators(l));
-    @SuppressWarnings({"unchecked"})
-    ContextImpl context = new ContextImpl(null, resolver, null,
-        Context.FULL_DUMP, Collections.EMPTY_MAP, null, null);
-    
-    DateMathParser dmp = new DateMathParser(TimeZone.getDefault());
-    
-    String s = resolver
-        .replaceTokens("${dataimporter.functions.formatDate('NOW/DAY','yyyy-MM-dd HH:mm')}");
-    assertEquals(
-        new SimpleDateFormat("yyyy-MM-dd HH:mm", Locale.ROOT).format(dmp.parseMath("/DAY")),
-        s);
-    assertEquals("Hello World",
-        resolver.replaceTokens("${dataimporter.functions.test('TEST')}"));
-  }
-  
-  public static class E extends Evaluator {
-    @Override
-    public String evaluate(String expression, Context context) {
-      return "Hello World";
-    }
-  }
-  
-}
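The namespace tests pin down the resolution rule: a name like hello.my.world1 is matched by treating a dot-separated prefix as the namespace and the remainder as the key inside that namespace's map, with a null namespace acting as the default. A simplified lookup, assuming namespaces live in a flat map from name to value map (a sketch of the behavior tested above, not the VariableResolver internals):

    import java.util.Map;

    // Sketch: try the longest dotted prefix as namespace name, remainder as key;
    // fall back to the default (null) namespace for bare names.
    static Object resolve(Map<String, Map<String, Object>> namespaces, String name) {
      for (int dot = name.lastIndexOf('.'); dot > 0; dot = name.lastIndexOf('.', dot - 1)) {
        Map<String, Object> ns = namespaces.get(name.substring(0, dot));
        if (ns != null && ns.containsKey(name.substring(dot + 1))) {
          return ns.get(name.substring(dot + 1));
        }
      }
      Map<String, Object> root = namespaces.get(null);
      return root == null ? null : root.get(name);
    }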
diff --git a/solr/contrib/dataimporthandler/src/test/org/apache/solr/handler/dataimport/TestVariableResolverEndToEnd.java b/solr/contrib/dataimporthandler/src/test/org/apache/solr/handler/dataimport/TestVariableResolverEndToEnd.java
deleted file mode 100644
index 8ee6878..0000000
--- a/solr/contrib/dataimporthandler/src/test/org/apache/solr/handler/dataimport/TestVariableResolverEndToEnd.java
+++ /dev/null
@@ -1,141 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.solr.handler.dataimport;
-
-import java.lang.invoke.MethodHandles;
-import java.sql.Connection;
-import java.sql.Statement;
-import java.util.Locale;
-import java.util.regex.Matcher;
-import java.util.regex.Pattern;
-
-import org.junit.Assert;
-
-import org.apache.solr.request.SolrQueryRequest;
-import org.junit.Test;
-import org.slf4j.Logger;
-import org.slf4j.LoggerFactory;
-
-public class TestVariableResolverEndToEnd extends AbstractDIHJdbcTestCase {
-
-  private static final Logger log = LoggerFactory.getLogger(MethodHandles.lookup().lookupClass());
-
-  @Test
-  public void test() throws Exception {
-    h.query("/dataimport", generateRequest());
-    SolrQueryRequest req = null;
-    try {
-      req = req("q", "*:*", "wt", "json", "indent", "true");
-      String response = h.query(req);
-      log.debug(response);
-      response = response.replaceAll("\\s","");
-      Assert.assertTrue(response.contains("\"numFound\":1"));
-      Pattern p = Pattern.compile("[\"]second1_s[\"][:][\"](.*?)[\"]");
-      Matcher m = p.matcher(response);
-      Assert.assertTrue(m.find());
-      String yearStr = m.group(1);
-      Assert.assertTrue(response.contains("\"second1_s\":\"" + yearStr + "\""));
-      Assert.assertTrue(response.contains("\"second2_s\":\"" + yearStr + "\""));
-      Assert.assertTrue(response.contains("\"second3_s\":\"" + yearStr + "\""));
-      Assert.assertTrue(response.contains("\"PORK_s\":\"GRILL\""));
-      Assert.assertTrue(response.contains("\"FISH_s\":\"FRY\""));
-      Assert.assertTrue(response.contains("\"BEEF_CUTS_mult_s\":[\"ROUND\",\"SIRLOIN\"]"));
-    } finally {
-      if (req != null) req.close();
-    }
-  } 
-  
-  @Override
-  protected String generateConfig() {
-    String thirdLocaleParam = random().nextBoolean() ? "" : (", '" + Locale.getDefault().toLanguageTag() + "'");
-    StringBuilder sb = new StringBuilder();
-    sb.append("<dataConfig> \n");
-    sb.append("<dataSource name=\"hsqldb\" driver=\"${dataimporter.request.dots.in.hsqldb.driver}\" url=\"jdbc:hsqldb:mem:.\" /> \n");
-    sb.append("<document name=\"TestEvaluators\"> \n");
-    sb.append("<entity name=\"FIRST\" processor=\"SqlEntityProcessor\" dataSource=\"hsqldb\" ");
-    sb.append(" query=\"" +
-        "select " +
-        " 1 as id, " +
-        " 'SELECT' as SELECT_KEYWORD, " +
-        " {ts '2017-02-18 12:34:56'} as FIRST_TS " +
-        "from DUAL \" >\n");
-    sb.append("  <field column=\"SELECT_KEYWORD\" name=\"select_keyword_s\" /> \n");
-    sb.append("  <entity name=\"SECOND\" processor=\"SqlEntityProcessor\" dataSource=\"hsqldb\" transformer=\"TemplateTransformer\" ");
-    sb.append("   query=\"" +
-        "${dataimporter.functions.encodeUrl(FIRST.SELECT_KEYWORD)} " +
-        " 1 as SORT, " +
-        " {ts '2017-02-18 12:34:56'} as SECOND_TS, " +
-        " '${dataimporter.functions.formatDate(FIRST.FIRST_TS, 'yyyy'" + thirdLocaleParam + ")}' as SECOND1_S,  " +
-        " 'PORK' AS MEAT, " +
-        " 'GRILL' AS METHOD, " +
-        " 'ROUND' AS CUTS, " +
-        " 'BEEF_CUTS' AS WHATKIND " +
-        "from DUAL " +
-        "WHERE 1=${FIRST.ID} " +
-        "UNION " +        
-        "${dataimporter.functions.encodeUrl(FIRST.SELECT_KEYWORD)} " +
-        " 2 as SORT, " +
-        " {ts '2017-02-18 12:34:56'} as SECOND_TS, " +
-        " '${dataimporter.functions.formatDate(FIRST.FIRST_TS, 'yyyy'" + thirdLocaleParam + ")}' as SECOND1_S,  " +
-        " 'FISH' AS MEAT, " +
-        " 'FRY' AS METHOD, " +
-        " 'SIRLOIN' AS CUTS, " +
-        " 'BEEF_CUTS' AS WHATKIND " +
-        "from DUAL " +
-        "WHERE 1=${FIRST.ID} " +
-        "ORDER BY SORT \"" +
-        ">\n");
-    sb.append("   <field column=\"SECOND_S\" name=\"second_s\" /> \n");
-    sb.append("   <field column=\"SECOND1_S\" name=\"second1_s\" /> \n");
-    sb.append("   <field column=\"second2_s\" template=\"${dataimporter.functions.formatDate(SECOND.SECOND_TS, 'yyyy'" + thirdLocaleParam + ")}\" /> \n");
-    sb.append("   <field column=\"second3_s\" template=\"${dih.functions.formatDate(SECOND.SECOND_TS, 'yyyy'" + thirdLocaleParam + ")}\" /> \n");
-    sb.append("   <field column=\"METHOD\" name=\"${SECOND.MEAT}_s\"/>\n");
-    sb.append("   <field column=\"CUTS\" name=\"${SECOND.WHATKIND}_mult_s\"/>\n");
-    sb.append("  </entity>\n");
-    sb.append("</entity>\n");
-    sb.append("</document> \n");
-    sb.append("</dataConfig> \n");
-    String config = sb.toString();
-    log.info(config); 
-    return config;
-  }
-  @Override
-  protected void populateData(Connection conn) throws Exception {
-    Statement s = null;
-    try {
-      s = conn.createStatement();
-      s.executeUpdate("create table dual(dual char(1) not null)");
-      s.executeUpdate("insert into dual values('Y')");
-      conn.commit();
-    } finally {
-      try {
-        s.close();
-      } catch (Exception ex) {}
-      try {
-        conn.close();
-      } catch (Exception ex) {}
-    }
-  }
-  @Override
-  protected Database setAllowedDatabases() {
-    return Database.HSQLDB;
-  }  
-}
diff --git a/solr/contrib/dataimporthandler/src/test/org/apache/solr/handler/dataimport/TestWriterImpl.java b/solr/contrib/dataimporthandler/src/test/org/apache/solr/handler/dataimport/TestWriterImpl.java
deleted file mode 100644
index 24eb28b..0000000
--- a/solr/contrib/dataimporthandler/src/test/org/apache/solr/handler/dataimport/TestWriterImpl.java
+++ /dev/null
@@ -1,83 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.solr.handler.dataimport;
-
-import org.apache.solr.common.SolrInputDocument;
-import org.apache.solr.request.SolrQueryRequest;
-import org.apache.solr.update.processor.UpdateRequestProcessor;
-
-import org.junit.BeforeClass;
-import org.junit.Test;
-
-import java.util.*;
-
-/**
- * <p>
- * Test for the writerImpl parameter (to provide one's own SolrWriter)
- * </p>
- * 
- * 
- * @since solr 4.0
- */
-public class TestWriterImpl extends AbstractDataImportHandlerTestCase {
-  
-  @BeforeClass
-  public static void beforeClass() throws Exception {
-    initCore("dataimport-nodatasource-solrconfig.xml", "dataimport-schema.xml");
-  }
-  
-  @Test
-  @SuppressWarnings("unchecked")
-  public void testDataConfigWithDataSource() throws Exception {
-    @SuppressWarnings({"rawtypes"})
-    List rows = new ArrayList();
-    rows.add(createMap("id", "1", "desc", "one"));
-    rows.add(createMap("id", "2", "desc", "two"));
-    rows.add(createMap("id", "3", "desc", "break"));
-    rows.add(createMap("id", "4", "desc", "four"));
-    
-    MockDataSource.setIterator("select * from x", rows.iterator());
-    
-    @SuppressWarnings({"rawtypes"})
-    Map extraParams = createMap("writerImpl", TestSolrWriter.class.getName(),
-        "commit", "true");
-    runFullImport(loadDataConfig("data-config-with-datasource.xml"),
-        extraParams);
-    
-    assertQ(req("id:1"), "//*[@numFound='1']");
-    assertQ(req("id:2"), "//*[@numFound='1']");
-    assertQ(req("id:3"), "//*[@numFound='0']");
-    assertQ(req("id:4"), "//*[@numFound='1']");
-  }
-  
-  public static class TestSolrWriter extends SolrWriter {
-    
-    public TestSolrWriter(UpdateRequestProcessor processor, SolrQueryRequest req) {
-      super(processor, req);
-    }
-    
-    @Override
-    public boolean upload(SolrInputDocument doc) {
-      if (doc.getField("desc").getFirstValue().equals("break")) {
-        return false;
-      }
-      return super.upload(doc);
-    }
-    
-  }
-  
-}
diff --git a/solr/contrib/dataimporthandler/src/test/org/apache/solr/handler/dataimport/TestXPathEntityProcessor.java b/solr/contrib/dataimporthandler/src/test/org/apache/solr/handler/dataimport/TestXPathEntityProcessor.java
deleted file mode 100644
index e2200ea..0000000
--- a/solr/contrib/dataimporthandler/src/test/org/apache/solr/handler/dataimport/TestXPathEntityProcessor.java
+++ /dev/null
@@ -1,506 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.solr.handler.dataimport;
-
-import java.io.File;
-import java.io.Reader;
-import java.io.StringReader;
-import java.nio.charset.StandardCharsets;
-import java.util.ArrayList;
-import java.util.List;
-import java.util.Map;
-import java.util.Properties;
-import java.util.concurrent.TimeUnit;
-
-import org.junit.Test;
-
-/**
- * <p>
- * Test for XPathEntityProcessor
- * </p>
- *
- *
- * @since solr 1.3
- */
-public class TestXPathEntityProcessor extends AbstractDataImportHandlerTestCase {
-  boolean simulateSlowReader;
-  boolean simulateSlowResultProcessor;
-  int rowsToRead = -1;
-  
-  @Test
-  @SuppressWarnings({"unchecked"})
-  public void withFieldsAndXpath() throws Exception {
-    File tmpdir = createTempDir().toFile();
-    
-    createFile(tmpdir, "x.xsl", xsl.getBytes(StandardCharsets.UTF_8), false);
-    @SuppressWarnings({"rawtypes"})
-    Map entityAttrs = createMap("name", "e", "url", "cd.xml",
-            XPathEntityProcessor.FOR_EACH, "/catalog/cd");
-    @SuppressWarnings({"rawtypes"})
-    List fields = new ArrayList();
-    fields.add(createMap("column", "title", "xpath", "/catalog/cd/title"));
-    fields.add(createMap("column", "artist", "xpath", "/catalog/cd/artist"));
-    fields.add(createMap("column", "year", "xpath", "/catalog/cd/year"));
-    Context c = getContext(null,
-            new VariableResolver(), getDataSource(cdData), Context.FULL_DUMP, fields, entityAttrs);
-    XPathEntityProcessor xPathEntityProcessor = new XPathEntityProcessor();
-    xPathEntityProcessor.init(c);
-    List<Map<String, Object>> result = new ArrayList<>();
-    while (true) {
-      Map<String, Object> row = xPathEntityProcessor.nextRow();
-      if (row == null)
-        break;
-      result.add(row);
-    }
-    assertEquals(3, result.size());
-    assertEquals("Empire Burlesque", result.get(0).get("title"));
-    assertEquals("Bonnie Tyler", result.get(1).get("artist"));
-    assertEquals("1982", result.get(2).get("year"));
-  }
-
-  @Test
-  @SuppressWarnings({"unchecked"})
-  public void testMultiValued() throws Exception  {
-    @SuppressWarnings({"rawtypes"})
-    Map entityAttrs = createMap("name", "e", "url", "testdata.xml",
-            XPathEntityProcessor.FOR_EACH, "/root");
-    @SuppressWarnings({"rawtypes"})
-    List fields = new ArrayList();
-    fields.add(createMap("column", "a", "xpath", "/root/a", DataImporter.MULTI_VALUED, "true"));
-    Context c = getContext(null,
-            new VariableResolver(), getDataSource(testXml), Context.FULL_DUMP, fields, entityAttrs);
-    XPathEntityProcessor xPathEntityProcessor = new XPathEntityProcessor();
-    xPathEntityProcessor.init(c);
-    List<Map<String, Object>> result = new ArrayList<>();
-    while (true) {
-      Map<String, Object> row = xPathEntityProcessor.nextRow();
-      if (row == null)
-        break;
-      result.add(row);
-    }
-    @SuppressWarnings({"rawtypes"})
-    List l = (List)result.get(0).get("a");
-    assertEquals(3, l.size());
-    assertEquals("1", l.get(0));
-    assertEquals("2", l.get(1));
-    assertEquals("ü", l.get(2));
-  }
-  
-  @SuppressWarnings({"rawtypes", "unchecked"})
-  @Test
-  public void testMultiValuedWithMultipleDocuments() throws Exception {
-    Map entityAttrs = createMap("name", "e", "url", "testdata.xml", XPathEntityProcessor.FOR_EACH, "/documents/doc");
-    List fields = new ArrayList();
-    fields.add(createMap("column", "id", "xpath", "/documents/doc/id", DataImporter.MULTI_VALUED, "false"));
-    fields.add(createMap("column", "a", "xpath", "/documents/doc/a", DataImporter.MULTI_VALUED, "true"));
-    fields.add(createMap("column", "s1dataA", "xpath", "/documents/doc/sec1/s1dataA", DataImporter.MULTI_VALUED, "true"));
-    fields.add(createMap("column", "s1dataB", "xpath", "/documents/doc/sec1/s1dataB", DataImporter.MULTI_VALUED, "true")); 
-    fields.add(createMap("column", "s1dataC", "xpath", "/documents/doc/sec1/s1dataC", DataImporter.MULTI_VALUED, "true")); 
-    
-    Context c = getContext(null,
-            new VariableResolver(), getDataSource(textMultipleDocuments), Context.FULL_DUMP, fields, entityAttrs);
-    XPathEntityProcessor xPathEntityProcessor = new XPathEntityProcessor();
-    xPathEntityProcessor.init(c);
-    List<Map<String, Object>> result = new ArrayList<>();
-    while (true) {
-      Map<String, Object> row = xPathEntityProcessor.nextRow();
-      if (row == null)
-        break;
-      result.add(row);
-    }
-    {  
-      assertEquals("1", result.get(0).get("id"));
-      List a = (List)result.get(0).get("a");
-      List s1dataA = (List)result.get(0).get("s1dataA");
-      List s1dataB = (List)result.get(0).get("s1dataB");
-      List s1dataC = (List)result.get(0).get("s1dataC");      
-      assertEquals(2, a.size());
-      assertEquals("id1-a1", a.get(0));
-      assertEquals("id1-a2", a.get(1));
-      assertEquals(3, s1dataA.size());
-      assertEquals("id1-s1dataA-1", s1dataA.get(0));
-      assertNull(s1dataA.get(1));
-      assertEquals("id1-s1dataA-3", s1dataA.get(2));
-      assertEquals(3, s1dataB.size());
-      assertEquals("id1-s1dataB-1", s1dataB.get(0));
-      assertEquals("id1-s1dataB-2", s1dataB.get(1));
-      assertEquals("id1-s1dataB-3", s1dataB.get(2));
-      assertEquals(3, s1dataC.size());
-      assertNull(s1dataC.get(0));
-      assertNull(s1dataC.get(1));
-      assertNull(s1dataC.get(2));
-    }
-    { 
-      assertEquals("2", result.get(1).get("id"));
-      List a = (List)result.get(1).get("a");
-      List s1dataA = (List)result.get(1).get("s1dataA");
-      List s1dataB = (List)result.get(1).get("s1dataB");
-      List s1dataC = (List)result.get(1).get("s1dataC");  
-      assertTrue(a==null || a.size()==0);
-      assertEquals(1, s1dataA.size()); 
-      assertNull(s1dataA.get(0));
-      assertEquals(1, s1dataB.size());
-      assertEquals("id2-s1dataB-1", s1dataB.get(0));
-      assertEquals(1, s1dataC.size());
-      assertNull(s1dataC.get(0));
-    }  
-    {
-      assertEquals("3", result.get(2).get("id"));
-      List a = (List)result.get(2).get("a");
-      List s1dataA = (List)result.get(2).get("s1dataA");
-      List s1dataB = (List)result.get(2).get("s1dataB");
-      List s1dataC = (List)result.get(2).get("s1dataC");  
-      assertTrue(a==null || a.size()==0);
-      assertEquals(1, s1dataA.size());
-      assertEquals("id3-s1dataA-1", s1dataA.get(0));
-      assertEquals(1, s1dataB.size());
-      assertNull(s1dataB.get(0));
-      assertEquals(1, s1dataC.size());
-      assertNull(s1dataC.get(0)); 
-    }
-    {  
-      assertEquals("4", result.get(3).get("id"));
-      List a = (List)result.get(3).get("a");
-      List s1dataA = (List)result.get(3).get("s1dataA");
-      List s1dataB = (List)result.get(3).get("s1dataB");
-      List s1dataC = (List)result.get(3).get("s1dataC");  
-      assertTrue(a==null || a.size()==0);
-      assertEquals(1, s1dataA.size());
-      assertEquals("id4-s1dataA-1", s1dataA.get(0));
-      assertEquals(1, s1dataB.size());
-      assertEquals("id4-s1dataB-1", s1dataB.get(0));
-      assertEquals(1, s1dataC.size());
-      assertEquals("id4-s1dataC-1", s1dataC.get(0));
-    }
-    {
-      assertEquals("5", result.get(4).get("id"));
-      List a = (List)result.get(4).get("a");
-      List s1dataA = (List)result.get(4).get("s1dataA");
-      List s1dataB = (List)result.get(4).get("s1dataB");
-      List s1dataC = (List)result.get(4).get("s1dataC");  
-      assertTrue(a==null || a.size()==0);      
-      assertEquals(1, s1dataA.size());
-      assertNull(s1dataA.get(0)); 
-      assertEquals(1, s1dataB.size());
-      assertNull(s1dataB.get(0)); 
-      assertEquals(1, s1dataC.size());
-      assertEquals("id5-s1dataC-1", s1dataC.get(0));
-    }
-    {  
-      assertEquals("6", result.get(5).get("id"));
-      List a = (List)result.get(5).get("a");
-      List s1dataA = (List)result.get(5).get("s1dataA");
-      List s1dataB = (List)result.get(5).get("s1dataB");
-      List s1dataC = (List)result.get(5).get("s1dataC");     
-      assertTrue(a==null || a.size()==0); 
-      assertEquals(3, s1dataA.size());
-      assertEquals("id6-s1dataA-1", s1dataA.get(0));
-      assertEquals("id6-s1dataA-2", s1dataA.get(1));
-      assertNull(s1dataA.get(2));
-      assertEquals(3, s1dataB.size());
-      assertEquals("id6-s1dataB-1", s1dataB.get(0));
-      assertEquals("id6-s1dataB-2", s1dataB.get(1));
-      assertEquals("id6-s1dataB-3", s1dataB.get(2));
-      assertEquals(3, s1dataC.size());
-      assertEquals("id6-s1dataC-1", s1dataC.get(0));
-      assertNull(s1dataC.get(1));
-      assertEquals("id6-s1dataC-3", s1dataC.get(2));
-    }
-  }
-
-  @Test
-  @SuppressWarnings({"unchecked"})
-  public void testMultiValuedFlatten() throws Exception  {
-    @SuppressWarnings({"rawtypes"})
-    Map entityAttrs = createMap("name", "e", "url", "testdata.xml",
-            XPathEntityProcessor.FOR_EACH, "/root");
-    @SuppressWarnings({"rawtypes"})
-    List fields = new ArrayList();
-    fields.add(createMap("column", "a", "xpath", "/root/a" ,"flatten","true"));
-    Context c = getContext(null,
-            new VariableResolver(), getDataSource(testXmlFlatten), Context.FULL_DUMP, fields, entityAttrs);
-    XPathEntityProcessor xPathEntityProcessor = new XPathEntityProcessor();
-    xPathEntityProcessor.init(c);
-    Map<String, Object> result = null;
-    while (true) {
-      Map<String, Object> row = xPathEntityProcessor.nextRow();
-      if (row == null)
-        break;
-      result = row;
-    }
-    assertEquals("1B2", result.get("a"));
-  }
-
-  @Test
-  @SuppressWarnings({"unchecked"})
-  public void withFieldsAndXpathStream() throws Exception {
-    final Object monitor = new Object();
-    final boolean[] done = new boolean[1];
-    
-    @SuppressWarnings({"rawtypes"})
-    Map entityAttrs = createMap("name", "e", "url", "cd.xml",
-        XPathEntityProcessor.FOR_EACH, "/catalog/cd", "stream", "true", "batchSize","1");
-    @SuppressWarnings({"rawtypes"})
-    List fields = new ArrayList();
-    fields.add(createMap("column", "title", "xpath", "/catalog/cd/title"));
-    fields.add(createMap("column", "artist", "xpath", "/catalog/cd/artist"));
-    fields.add(createMap("column", "year", "xpath", "/catalog/cd/year"));
-    Context c = getContext(null,
-        new VariableResolver(), getDataSource(cdData), Context.FULL_DUMP, fields, entityAttrs);
-    XPathEntityProcessor xPathEntityProcessor = new XPathEntityProcessor() {
-      private int count;
-      
-      @Override
-      protected Map<String, Object> readRow(Map<String, Object> record,
-          String xpath) {
-        synchronized (monitor) {
-          if (simulateSlowReader && !done[0]) {
-            try {
-              monitor.wait(100);
-            } catch (InterruptedException e) {
-              throw new RuntimeException(e);
-            }
-          }
-        }
-        
-        return super.readRow(record, xpath);
-      }
-    };
-    
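-    // A one-element queue (in the slow-consumer case) and a 1-microsecond timeout make the
-    // publisher thread hit its blocked/timeout paths quickly instead of stalling the test.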
-    if (simulateSlowResultProcessor) {
-      xPathEntityProcessor.blockingQueueSize = 1;
-    }
-    xPathEntityProcessor.blockingQueueTimeOut = 1;
-    xPathEntityProcessor.blockingQueueTimeOutUnits = TimeUnit.MICROSECONDS;
-    
-    xPathEntityProcessor.init(c);
-    List<Map<String, Object>> result = new ArrayList<>();
-    while (true) {
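-      // Once rowsToRead rows have been collected, interrupt ourselves so that nextRow()
-      // observes the interruption and the stream shuts down early.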
-      if (rowsToRead >= 0 && result.size() >= rowsToRead) {
-        Thread.currentThread().interrupt();
-      }
-      Map<String, Object> row = xPathEntityProcessor.nextRow();
-      if (row == null)
-        break;
-      result.add(row);
-      if (simulateSlowResultProcessor) {
-        synchronized (xPathEntityProcessor.publisherThread) {
-          if (xPathEntityProcessor.publisherThread.isAlive()) {
-            xPathEntityProcessor.publisherThread.wait(1000);
-          }
-        }
-      }
-    }
-    
-    synchronized (monitor) {
-      done[0] = true;
-      monitor.notify();
-    }
-    
-    // confirm that publisher thread stops.
-    xPathEntityProcessor.publisherThread.join(1000);
-    assertFalse("Expected thread to stop", xPathEntityProcessor.publisherThread.isAlive());
-    
-    assertEquals(rowsToRead < 0 ? 3 : rowsToRead, result.size());
-    
-    if (rowsToRead < 0) {
-      assertEquals("Empire Burlesque", result.get(0).get("title"));
-      assertEquals("Bonnie Tyler", result.get(1).get("artist"));
-      assertEquals("1982", result.get(2).get("year"));
-    }
-  }
-
-  @Test
-  public void withFieldsAndXpathStreamContinuesOnTimeout() throws Exception {
-    simulateSlowReader = true;
-    withFieldsAndXpathStream();
-  }
-  
-  @Test
-  public void streamWritesMessageAfterBlockedAttempt() throws Exception {
-    simulateSlowResultProcessor = true;
-    withFieldsAndXpathStream();
-  }
-  
-  @Test
-  public void streamStopsAfterInterrupt() throws Exception {
-    simulateSlowResultProcessor = true;
-    rowsToRead = 1;
-    withFieldsAndXpathStream();
-  }
-  
-  @Test
-  @SuppressWarnings({"unchecked"})
-  public void withDefaultSolrAndXsl() throws Exception {
-    File tmpdir = createTempDir().toFile();
-    AbstractDataImportHandlerTestCase.createFile(tmpdir, "x.xsl", xsl.getBytes(StandardCharsets.UTF_8),
-            false);
-
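-    // With USE_SOLR_ADD_SCHEMA the processor expects the standard Solr <add><doc> format,
-    // which the x.xsl stylesheet produces from the raw catalog XML.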
-    @SuppressWarnings({"rawtypes"})
-    Map entityAttrs = createMap("name", "e",
-            XPathEntityProcessor.USE_SOLR_ADD_SCHEMA, "true", "xsl", ""
-            + new File(tmpdir, "x.xsl").toURI(), "url", "cd.xml");
-    Context c = getContext(null,
-            new VariableResolver(), getDataSource(cdData), Context.FULL_DUMP, null, entityAttrs);
-    XPathEntityProcessor xPathEntityProcessor = new XPathEntityProcessor();
-    xPathEntityProcessor.init(c);
-    List<Map<String, Object>> result = new ArrayList<>();
-    while (true) {
-      Map<String, Object> row = xPathEntityProcessor.nextRow();
-      if (row == null)
-        break;
-      result.add(row);
-    }
-    assertEquals(3, result.size());
-    assertEquals("Empire Burlesque", result.get(0).get("title"));
-    assertEquals("Bonnie Tyler", result.get(1).get("artist"));
-    assertEquals("1982", result.get(2).get("year"));
-  }
-
-  private DataSource<Reader> getDataSource(final String xml) {
-    return new DataSource<Reader>() {
-
-      @Override
-      public void init(Context context, Properties initProps) {
-      }
-
-      @Override
-      public void close() {
-      }
-
-      @Override
-      public Reader getData(String query) {
-        return new StringReader(xml);
-      }
-    };
-  }
-
-  private static final String xsl = "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n"
-          + "<xsl:stylesheet version=\"1.0\"\n"
-          + "xmlns:xsl=\"http://www.w3.org/1999/XSL/Transform\">\n"
-          + "<xsl:output version='1.0' method='xml' encoding='UTF-8' indent='yes'/>\n"
-          + "\n"
-          + "<xsl:template match=\"/\">\n"
-          + "  <add> \n"
-          + "      <xsl:for-each select=\"catalog/cd\">\n"
-          + "      <doc>\n"
-          + "      <field name=\"title\"><xsl:value-of select=\"title\"/></field>\n"
-          + "      <field name=\"artist\"><xsl:value-of select=\"artist\"/></field>\n"
-          + "      <field name=\"country\"><xsl:value-of select=\"country\"/></field>\n"
-          + "      <field name=\"company\"><xsl:value-of select=\"company\"/></field>      \n"
-          + "      <field name=\"price\"><xsl:value-of select=\"price\"/></field>\n"
-          + "      <field name=\"year\"><xsl:value-of select=\"year\"/></field>      \n"
-          + "      </doc>\n"
-          + "      </xsl:for-each>\n"
-          + "    </add>  \n"
-          + "</xsl:template>\n" + "</xsl:stylesheet>";
-
-  private static final String cdData = "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n"
-          + "<?xml-stylesheet type=\"text/xsl\" href=\"solr.xsl\"?>\n"
-          + "<catalog>\n"
-          + "\t<cd>\n"
-          + "\t\t<title>Empire Burlesque</title>\n"
-          + "\t\t<artist>Bob Dylan</artist>\n"
-          + "\t\t<country>USA</country>\n"
-          + "\t\t<company>Columbia</company>\n"
-          + "\t\t<price>10.90</price>\n"
-          + "\t\t<year>1985</year>\n"
-          + "\t</cd>\n"
-          + "\t<cd>\n"
-          + "\t\t<title>Hide your heart</title>\n"
-          + "\t\t<artist>Bonnie Tyler</artist>\n"
-          + "\t\t<country>UK</country>\n"
-          + "\t\t<company>CBS Records</company>\n"
-          + "\t\t<price>9.90</price>\n"
-          + "\t\t<year>1988</year>\n"
-          + "\t</cd>\n"
-          + "\t<cd>\n"
-          + "\t\t<title>Greatest Hits</title>\n"
-          + "\t\t<artist>Dolly Parton</artist>\n"
-          + "\t\t<country>USA</country>\n"
-          + "\t\t<company>RCA</company>\n"
-          + "\t\t<price>9.90</price>\n"
-          + "\t\t<year>1982</year>\n" + "\t</cd>\n" + "</catalog>\t";
-
-  private static final String testXml = "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<!DOCTYPE root [\n<!ENTITY uuml \"&#252;\" >\n]>\n<root><a>1</a><a>2</a><a>&uuml;</a></root>";
-
-  private static final String testXmlFlatten = "<?xml version=\"1.0\" encoding=\"UTF-8\"?><root><a>1<b>B</b>2</a></root>";
-  
-  private static final String textMultipleDocuments = 
-      "<?xml version=\"1.0\" ?>" +
-          "<documents>" +          
-          " <doc>" +
-          "  <id>1</id>" +
-          "  <a>id1-a1</a>" +
-          "  <a>id1-a2</a>" +
-          "  <sec1>" +
-          "   <s1dataA>id1-s1dataA-1</s1dataA>" +
-          "   <s1dataB>id1-s1dataB-1</s1dataB>" +
-          "  </sec1>" +
-          "  <sec1>" +
-          "   <s1dataB>id1-s1dataB-2</s1dataB>" +
-          "  </sec1>" +
-          "  <sec1>" +
-          "   <s1dataA>id1-s1dataA-3</s1dataA>" +
-          "   <s1dataB>id1-s1dataB-3</s1dataB>" +
-          "  </sec1>" +
-          " </doc>" +
-          " <doc>" +
-          "  <id>2</id>" +          
-          "  <sec1>" +
-          "   <s1dataB>id2-s1dataB-1</s1dataB>" +
-          "  </sec1>" + 
-          " </doc>" +
-          " <doc>" +
-          "  <id>3</id>" +          
-          "  <sec1>" +
-          "   <s1dataA>id3-s1dataA-1</s1dataA>" +
-          "  </sec1>" + 
-          " </doc>" +
-          " <doc>" +
-          "  <id>4</id>" +          
-          "  <sec1>" +
-          "   <s1dataA>id4-s1dataA-1</s1dataA>" +
-          "   <s1dataB>id4-s1dataB-1</s1dataB>" +
-          "   <s1dataC>id4-s1dataC-1</s1dataC>" +
-          "  </sec1>" + 
-          " </doc>" +
-          " <doc>" +
-          "  <id>5</id>" +          
-          "  <sec1>" +
-          "   <s1dataC>id5-s1dataC-1</s1dataC>" +
-          "  </sec1>" + 
-          " </doc>" +
-          " <doc>" +
-          "  <id>6</id>" +
-          "  <sec1>" +
-          "   <s1dataA>id6-s1dataA-1</s1dataA>" +
-          "   <s1dataB>id6-s1dataB-1</s1dataB>" +
-          "   <s1dataC>id6-s1dataC-1</s1dataC>" +
-          "  </sec1>" +
-          "  <sec1>" +
-          "   <s1dataA>id6-s1dataA-2</s1dataA>" +
-          "   <s1dataB>id6-s1dataB-2</s1dataB>" +
-          "  </sec1>" +
-          "  <sec1>" +
-          "   <s1dataB>id6-s1dataB-3</s1dataB>" +
-          "   <s1dataC>id6-s1dataC-3</s1dataC>" +
-          "  </sec1>" +
-          " </doc>" +
-          "</documents>";
-}
diff --git a/solr/contrib/dataimporthandler/src/test/org/apache/solr/handler/dataimport/TestXPathRecordReader.java b/solr/contrib/dataimporthandler/src/test/org/apache/solr/handler/dataimport/TestXPathRecordReader.java
deleted file mode 100644
index fe8c657..0000000
--- a/solr/contrib/dataimporthandler/src/test/org/apache/solr/handler/dataimport/TestXPathRecordReader.java
+++ /dev/null
@@ -1,591 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.solr.handler.dataimport;
-
-import java.io.StringReader;
-import java.util.ArrayList;
-import java.util.List;
-import java.util.Map;
-
-import org.junit.Test;
-
-/**
- * <p> Test for XPathRecordReader </p>
- *
- * @since solr 1.3
- */
-public class TestXPathRecordReader extends AbstractDataImportHandlerTestCase {
-  @Test
-  public void testBasic() {
-    String xml="<root>\n"
-             + "   <b><c>Hello C1</c>\n"
-             + "      <c>Hello C1</c>\n"
-             + "      </b>\n"
-             + "   <b><c>Hello C2</c>\n"
-             + "     </b>\n"
-             + "</root>";
-    XPathRecordReader rr = new XPathRecordReader("/root/b");
-    rr.addField("c", "/root/b/c", true);
-    List<Map<String, Object>> l = rr.getAllRecords(new StringReader(xml));
-    assertEquals(2, l.size());
-    assertEquals(2, ((List) l.get(0).get("c")).size());
-    assertEquals(1, ((List) l.get(1).get("c")).size());
-  }
-
-  @Test
-  public void testAttributes() {
-    String xml="<root>\n"
-             + "   <b a=\"x0\" b=\"y0\" />\n"
-             + "   <b a=\"x1\" b=\"y1\" />\n"
-             + "   <b a=\"x2\" b=\"y2\" />\n"
-            + "</root>";
-    XPathRecordReader rr = new XPathRecordReader("/root/b");
-    rr.addField("a", "/root/b/@a", false);
-    rr.addField("b", "/root/b/@b", false);
-    List<Map<String, Object>> l = rr.getAllRecords(new StringReader(xml));
-    assertEquals(3, l.size());
-    assertEquals("x0", l.get(0).get("a"));
-    assertEquals("x1", l.get(1).get("a"));
-    assertEquals("x2", l.get(2).get("a"));
-    assertEquals("y0", l.get(0).get("b"));
-    assertEquals("y1", l.get(1).get("b"));
-    assertEquals("y2", l.get(2).get("b"));
-  }
-  
-  @Test
-  public void testAttrInRoot(){
-    String xml="<r>\n" +
-            "<merchantProduct id=\"814636051\" mid=\"189973\">\n" +
-            "                   <in_stock type=\"stock-4\" />\n" +
-            "                   <condition type=\"cond-0\" />\n" +
-            "                   <price>301.46</price>\n" +
-               "   </merchantProduct>\n" +
-            "<merchantProduct id=\"814636052\" mid=\"189974\">\n" +
-            "                   <in_stock type=\"stock-5\" />\n" +
-            "                   <condition type=\"cond-1\" />\n" +
-            "                   <price>302.46</price>\n" +
-               "   </merchantProduct>\n" +
-            "\n" +
-            "</r>";
-     XPathRecordReader rr = new XPathRecordReader("/r/merchantProduct");
-    rr.addField("id", "/r/merchantProduct/@id", false);
-    rr.addField("mid", "/r/merchantProduct/@mid", false);
-    rr.addField("price", "/r/merchantProduct/price", false);
-    rr.addField("conditionType", "/r/merchantProduct/condition/@type", false);
-    List<Map<String, Object>> l = rr.getAllRecords(new StringReader(xml));
-    Map<String, Object> m = l.get(0);
-    assertEquals("814636051", m.get("id"));
-    assertEquals("189973", m.get("mid"));
-    assertEquals("301.46", m.get("price"));
-    assertEquals("cond-0", m.get("conditionType"));
-
-    m = l.get(1);
-    assertEquals("814636052", m.get("id"));
-    assertEquals("189974", m.get("mid"));
-    assertEquals("302.46", m.get("price"));
-    assertEquals("cond-1", m.get("conditionType"));
-  }
-
-  @Test
-  public void testAttributes2Level() {
-    String xml="<root>\n"
-             + "<a>\n  <b a=\"x0\" b=\"y0\" />\n"
-             + "       <b a=\"x1\" b=\"y1\" />\n"
-             + "       <b a=\"x2\" b=\"y2\" />\n"
-             + "       </a>"
-             + "</root>";
-    XPathRecordReader rr = new XPathRecordReader("/root/a/b");
-    rr.addField("a", "/root/a/b/@a", false);
-    rr.addField("b", "/root/a/b/@b", false);
-    List<Map<String, Object>> l = rr.getAllRecords(new StringReader(xml));
-    assertEquals(3, l.size());
-    assertEquals("x0", l.get(0).get("a"));
-    assertEquals("y1", l.get(1).get("b"));
-  }
-
-  @Test
-  public void testAttributes2LevelHetero() {
-    String xml="<root>\n"
-             + "<a>\n   <b a=\"x0\" b=\"y0\" />\n"
-             + "        <b a=\"x1\" b=\"y1\" />\n"
-             + "        <b a=\"x2\" b=\"y2\" />\n"
-             + "        </a>"
-             + "<x>\n   <b a=\"x4\" b=\"y4\" />\n"
-             + "        <b a=\"x5\" b=\"y5\" />\n"
-             + "        <b a=\"x6\" b=\"y6\" />\n"
-             + "        </x>"
-             + "</root>";
-    XPathRecordReader rr = new XPathRecordReader("/root/a | /root/x");
-    rr.addField("a", "/root/a/b/@a", false);
-    rr.addField("b", "/root/a/b/@b", false);
-    rr.addField("a", "/root/x/b/@a", false);
-    rr.addField("b", "/root/x/b/@b", false);
-
-    final List<Map<String, Object>> a = new ArrayList<>();
-    final List<Map<String, Object>> x = new ArrayList<>();
-    rr.streamRecords(new StringReader(xml), (record, xpath) -> {
-      if (record == null) return;
-      if (xpath.equals("/root/a")) a.add(record);
-      if (xpath.equals("/root/x")) x.add(record);
-    });
-
-    assertEquals(1, a.size());
-    assertEquals(1, x.size());
-  }
-
-  @Test
-  public void testAttributes2LevelMissingAttrVal() {
-    String xml="<root>\n"
-             + "<a>\n  <b a=\"x0\" b=\"y0\" />\n"
-             + "       <b a=\"x1\" b=\"y1\" />\n"
-             + "       </a>"
-             + "<a>\n  <b a=\"x3\"  />\n"
-             + "       <b b=\"y4\" />\n"
-             + "       </a>"
-             + "</root>";
-    XPathRecordReader rr = new XPathRecordReader("/root/a");
-    rr.addField("a", "/root/a/b/@a", true);
-    rr.addField("b", "/root/a/b/@b", true);
-    List<Map<String, Object>> l = rr.getAllRecords(new StringReader(xml));
-    assertEquals(2, l.size());
-    assertNull(((List) l.get(1).get("a")).get(1));
-    assertNull(((List) l.get(1).get("b")).get(0));
-  }
-
-  @Test
-  public void testElems2LevelMissing() {
-    String xml="<root>\n"
-             + "\t<a>\n"
-             + "\t   <b>\n\t  <x>x0</x>\n"
-             + "\t            <y>y0</y>\n"
-             + "\t            </b>\n"
-             + "\t   <b>\n\t  <x>x1</x>\n"
-             + "\t            <y>y1</y>\n"
-             + "\t            </b>\n"
-             + "\t   </a>\n"
-             + "\t<a>\n"
-             + "\t   <b>\n\t  <x>x3</x>\n\t   </b>\n"
-             + "\t   <b>\n\t  <y>y4</y>\n\t   </b>\n"
-             + "\t   </a>\n"
-             + "</root>";
-    XPathRecordReader rr = new XPathRecordReader("/root/a");
-    rr.addField("a", "/root/a/b/x", true);
-    rr.addField("b", "/root/a/b/y", true);
-    List<Map<String, Object>> l = rr.getAllRecords(new StringReader(xml));
-    assertEquals(2, l.size());
-    assertNull(((List) l.get(1).get("a")).get(1));
-    assertNull(((List) l.get(1).get("b")).get(0));
-  }
-
-  @Test
-  public void testElems2LevelEmpty() {
-    String xml="<root>\n"
-             + "\t<a>\n"
-             + "\t   <b>\n\t  <x>x0</x>\n"
-             + "\t            <y>y0</y>\n"
-             + "\t   </b>\n"
-             + "\t   <b>\n\t  <x></x>\n"    // empty
-             + "\t            <y>y1</y>\n"
-             + "\t   </b>\n"
-             + "\t</a>\n"
-             + "</root>";
-    XPathRecordReader rr = new XPathRecordReader("/root/a");
-    rr.addField("a", "/root/a/b/x", true);
-    rr.addField("b", "/root/a/b/y", true);
-    List<Map<String, Object>> l = rr.getAllRecords(new StringReader(xml));
-    assertEquals(1, l.size());
-    assertEquals("x0",((List) l.get(0).get("a")).get(0));
-    assertEquals("y0",((List) l.get(0).get("b")).get(0));
-    assertEquals("",((List) l.get(0).get("a")).get(1));
-    assertEquals("y1",((List) l.get(0).get("b")).get(1));
-  }
-
-  @Test
-  public void testMixedContent() {
-    String xml = "<xhtml:p xmlns:xhtml=\"http://xhtml.com/\" >This text is \n" +
-            "  <xhtml:b>bold</xhtml:b> and this text is \n" +
-            "  <xhtml:u>underlined</xhtml:u>!\n" +
-            "</xhtml:p>";
-    XPathRecordReader rr = new XPathRecordReader("/p");
-    rr.addField("p", "/p", true);
-    rr.addField("b", "/p/b", true);
-    rr.addField("u", "/p/u", true);
-    List<Map<String, Object>> l = rr.getAllRecords(new StringReader(xml));
-    Map<String, Object> row = l.get(0);
-
-    assertEquals("bold", ((List) row.get("b")).get(0));
-    assertEquals("underlined", ((List) row.get("u")).get(0));
-    String p = (String) ((List) row.get("p")).get(0);
-    assertTrue(p.contains("This text is"));
-    assertTrue(p.contains("and this text is"));
-    assertTrue(p.contains("!"));
-    // Should not contain content from child elements
-    assertFalse(p.contains("bold"));
-  }
-
-  @Test
-  public void testMixedContentFlattened() {
-    String xml = "<xhtml:p xmlns:xhtml=\"http://xhtml.com/\" >This text is \n" +
-            "  <xhtml:b>bold</xhtml:b> and this text is \n" +
-            "  <xhtml:u>underlined</xhtml:u>!\n" +
-            "</xhtml:p>";
-    XPathRecordReader rr = new XPathRecordReader("/p");
-    rr.addField("p", "/p", false, XPathRecordReader.FLATTEN);
-    List<Map<String, Object>> l = rr.getAllRecords(new StringReader(xml));
-    Map<String, Object> row = l.get(0);
-    assertEquals("This text is \n" +
-            "  bold and this text is \n" +
-            "  underlined!", ((String)row.get("p")).trim() );
-  }
-
-  @Test
-  public void testElems2LevelWithAttrib() {
-    String xml = "<root>\n\t<a>\n\t   <b k=\"x\">\n"
-            + "\t                        <x>x0</x>\n"
-            + "\t                        <y></y>\n"  // empty
-            + "\t                        </b>\n"
-            + "\t                     <b k=\"y\">\n"
-            + "\t                        <x></x>\n"  // empty
-            + "\t                        <y>y1</y>\n"
-            + "\t                        </b>\n"
-            + "\t                     <b k=\"z\">\n"
-            + "\t                        <x>x2</x>\n"
-            + "\t                        <y>y2</y>\n"
-            + "\t                        </b>\n"
-            + "\t                </a>\n"
-            + "\t           <a>\n\t   <b>\n"
-            + "\t                        <x>x3</x>\n"
-            + "\t                        </b>\n"
-            + "\t                     <b>\n"
-            + "\t                     <y>y4</y>\n"
-            + "\t                        </b>\n"
-            + "\t               </a>\n"
-            + "</root>";
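-    // Only <b> elements carrying a "k" attribute match the xpaths below; the second <a>
-    // has no such <b>, so its record comes back empty.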
-    XPathRecordReader rr = new XPathRecordReader("/root/a");
-    rr.addField("x", "/root/a/b[@k]/x", true);
-    rr.addField("y", "/root/a/b[@k]/y", true);
-    List<Map<String, Object>> l = rr.getAllRecords(new StringReader(xml));
-    assertEquals(2, l.size());
-    assertEquals(3, ((List) l.get(0).get("x")).size());
-    assertEquals(3, ((List) l.get(0).get("y")).size());
-    assertEquals("x0", ((List) l.get(0).get("x")).get(0));
-    assertEquals("", ((List) l.get(0).get("y")).get(0));
-    assertEquals("", ((List) l.get(0).get("x")).get(1));
-    assertEquals("y1", ((List) l.get(0).get("y")).get(1));
-    assertEquals("x2", ((List) l.get(0).get("x")).get(2));
-    assertEquals("y2", ((List) l.get(0).get("y")).get(2));
-    assertEquals(0, l.get(1).size());
-  }
-
-  @Test
-  public void testElems2LevelWithAttribMultiple() {
-    String xml="<root>\n"
-             + "\t<a>\n\t   <b k=\"x\" m=\"n\" >\n"
-             + "\t             <x>x0</x>\n"
-             + "\t             <y>y0</y>\n"
-             + "\t             </b>\n"
-             + "\t          <b k=\"y\" m=\"p\">\n"
-             + "\t             <x>x1</x>\n"
-             + "\t             <y>y1</y>\n"
-             + "\t             </b>\n"
-             + "\t   </a>\n"
-             + "\t<a>\n\t   <b k=\"x\">\n"
-             + "\t             <x>x3</x>\n"
-             + "\t             </b>\n"
-             + "\t          <b m=\"n\">\n"
-             + "\t             <y>y4</y>\n"
-             + "\t             </b>\n"
-             + "\t   </a>\n"
-             + "</root>";
-    XPathRecordReader rr = new XPathRecordReader("/root/a");
-    rr.addField("x", "/root/a/b[@k][@m='n']/x", true);
-    rr.addField("y", "/root/a/b[@k][@m='n']/y", true);
-    List<Map<String, Object>> l = rr.getAllRecords(new StringReader(xml));
-    assertEquals(2, l.size());
-    assertEquals(1, ((List) l.get(0).get("x")).size());
-    assertEquals(1, ((List) l.get(0).get("y")).size());
-    assertEquals(0, l.get(1).size());
-  }
-
-  @Test
-  public void testElems2LevelWithAttribVal() {
-    String xml="<root>\n\t<a>\n   <b k=\"x\">\n"
-             + "\t                  <x>x0</x>\n"
-             + "\t                  <y>y0</y>\n"
-             + "\t                  </b>\n"
-             + "\t                <b k=\"y\">\n"
-             + "\t                  <x>x1</x>\n"
-             + "\t                  <y>y1</y>\n"
-             + "\t                  </b>\n"
-             + "\t                </a>\n"
-             + "\t        <a>\n   <b><x>x3</x></b>\n"
-             + "\t                <b><y>y4</y></b>\n"
-             + "\t</a>\n" + "</root>";
-    XPathRecordReader rr = new XPathRecordReader("/root/a");
-    rr.addField("x", "/root/a/b[@k='x']/x", true);
-    rr.addField("y", "/root/a/b[@k='x']/y", true);
-    List<Map<String, Object>> l = rr.getAllRecords(new StringReader(xml));
-    assertEquals(2, l.size());
-    assertEquals(1, ((List) l.get(0).get("x")).size());
-    assertEquals(1, ((List) l.get(0).get("y")).size());
-    assertEquals(0, l.get(1).size());
-  }
-
-  @Test
-  public void testAttribValWithSlash() {
-    String xml = "<root><b>\n" +
-            "  <a x=\"a/b\" h=\"hello-A\"/>  \n" +
-            "</b></root>";
-    XPathRecordReader rr = new XPathRecordReader("/root/b");
-    rr.addField("x", "/root/b/a[@x='a/b']/@h", false);
-    List<Map<String, Object>> l = rr.getAllRecords(new StringReader(xml));
-    assertEquals(1, l.size());
-    Map<String, Object> m = l.get(0);
-    assertEquals("hello-A", m.get("x"));    
-  }
-
-  @Test
-  public void testUnsupportedXPaths() {
-    RuntimeException ex = expectThrows(RuntimeException.class, () -> new XPathRecordReader("//b"));
-    assertEquals("forEach cannot start with '//': //b", ex.getMessage());
-
-    XPathRecordReader rr = new XPathRecordReader("/anyd/contenido");
-    ex = expectThrows(RuntimeException.class, () -> rr.addField("bold", "b", false));
-    assertEquals("xpath must start with '/' : b", ex.getMessage());
-  }
-
-  @Test
-  public void testAny_decendent_from_root() {
-    XPathRecordReader rr = new XPathRecordReader("/anyd/contenido");
-    rr.addField("descdend", "//boo",                   true);
-    rr.addField("inr_descd","//boo/i",                false);
-    rr.addField("cont",     "/anyd/contenido",        false);
-    rr.addField("id",       "/anyd/contenido/@id",    false);
-    rr.addField("status",   "/anyd/status",           false);
-    rr.addField("title",    "/anyd/contenido/titulo", false,XPathRecordReader.FLATTEN);
-    rr.addField("resume",   "/anyd/contenido/resumen",false);
-    rr.addField("text",     "/anyd/contenido/texto",  false);
-
-    String xml="<anyd>\n"
-             + "  this <boo>top level</boo> is ignored because it is external to the forEach\n"
-             + "  <status>as is <boo>this element</boo></status>\n"
-             + "  <contenido id=\"10097\" idioma=\"cat\">\n"
-             + "    This one is <boo>not ignored as it's</boo> inside a forEach\n"
-             + "    <antetitulo><i> big <boo>antler</boo></i></antetitulo>\n"
-             + "    <titulo>  My <i>flattened <boo>title</boo></i> </titulo>\n"
-             + "    <resumen> My summary <i>skip this!</i>  </resumen>\n"
-             + "    <texto>   <boo>Within the body of</boo>My text</texto>\n"
-             + "    <p>Access <boo>inner <i>sub clauses</i> as well</boo></p>\n"
-             + "    </contenido>\n"
-             + "</anyd>";
-
-    List<Map<String, Object>> l = rr.getAllRecords(new StringReader(xml));
-    assertEquals(1, l.size());
-    Map<String, Object> m = l.get(0);
-    assertEquals("This one is  inside a forEach", m.get("cont").toString().trim());
-    assertEquals("10097"              ,m.get("id"));
-    assertEquals("My flattened title" ,m.get("title").toString().trim());
-    assertEquals("My summary"         ,m.get("resume").toString().trim());
-    assertEquals("My text"            ,m.get("text").toString().trim());
-    assertEquals("not ignored as it's",(String) ((List) m.get("descdend")).get(0) );
-    assertEquals("antler"             ,(String) ((List) m.get("descdend")).get(1) );
-    assertEquals("Within the body of" ,(String) ((List) m.get("descdend")).get(2) );
-    assertEquals("inner  as well"     ,(String) ((List) m.get("descdend")).get(3) );
-    assertEquals("sub clauses"        ,m.get("inr_descd").toString().trim());
-  }
-
-  @Test
-  public void testAny_decendent_of_a_child1() {
-    XPathRecordReader rr = new XPathRecordReader("/anycd");
-    rr.addField("descdend", "/anycd//boo",         true);
-
-    // same test string as above but checking to see if *all* //boo's are collected
-    String xml="<anycd>\n"
-             + "  this <boo>top level</boo> is ignored because it is external to the forEach\n"
-             + "  <status>as is <boo>this element</boo></status>\n"
-             + "  <contenido id=\"10097\" idioma=\"cat\">\n"
-             + "    This one is <boo>not ignored as it's</boo> inside a forEach\n"
-             + "    <antetitulo><i> big <boo>antler</boo></i></antetitulo>\n"
-             + "    <titulo>  My <i>flattened <boo>title</boo></i> </titulo>\n"
-             + "    <resumen> My summary <i>skip this!</i>  </resumen>\n"
-             + "    <texto>   <boo>Within the body of</boo>My text</texto>\n"
-             + "    <p>Access <boo>inner <i>sub clauses</i> as well</boo></p>\n"
-             + "    </contenido>\n"
-             + "</anycd>";
-
-    List<Map<String, Object>> l = rr.getAllRecords(new StringReader(xml));
-    assertEquals(1, l.size());
-    Map<String, Object> m = l.get(0);
-    assertEquals("top level"          ,(String) ((List) m.get("descdend")).get(0) );
-    assertEquals("this element"       ,(String) ((List) m.get("descdend")).get(1) );
-    assertEquals("not ignored as it's",(String) ((List) m.get("descdend")).get(2) );
-    assertEquals("antler"             ,(String) ((List) m.get("descdend")).get(3) );
-    assertEquals("title"              ,(String) ((List) m.get("descdend")).get(4) );
-    assertEquals("Within the body of" ,(String) ((List) m.get("descdend")).get(5) );
-    assertEquals("inner  as well"     ,(String) ((List) m.get("descdend")).get(6) );
-  }
-
-  @Test
-  public void testAny_decendent_of_a_child2() {
-    XPathRecordReader rr = new XPathRecordReader("/anycd");
-    rr.addField("descdend", "/anycd/contenido//boo",         true);
-
-    // same test string as above but checking to see if *some* //boo's are collected
-    String xml="<anycd>\n"
-             + "  this <boo>top level</boo> is ignored because it is external to the forEach\n"
-             + "  <status>as is <boo>this element</boo></status>\n"
-             + "  <contenido id=\"10097\" idioma=\"cat\">\n"
-             + "    This one is <boo>not ignored as it's</boo> inside a forEach\n"
-             + "    <antetitulo><i> big <boo>antler</boo></i></antetitulo>\n"
-             + "    <titulo>  My <i>flattened <boo>title</boo></i> </titulo>\n"
-             + "    <resumen> My summary <i>skip this!</i>  </resumen>\n"
-             + "    <texto>   <boo>Within the body of</boo>My text</texto>\n"
-             + "    <p>Access <boo>inner <i>sub clauses</i> as well</boo></p>\n"
-             + "    </contenido>\n"
-             + "</anycd>";
-
-    List<Map<String, Object>> l = rr.getAllRecords(new StringReader(xml));
-    assertEquals(1, l.size());
-    Map<String, Object> m = l.get(0);
-    assertEquals("not ignored as it's",((List) m.get("descdend")).get(0) );
-    assertEquals("antler"             ,((List) m.get("descdend")).get(1) );
-    assertEquals("title"              ,((List) m.get("descdend")).get(2) );
-    assertEquals("Within the body of" ,((List) m.get("descdend")).get(3) );
-    assertEquals("inner  as well"     ,((List) m.get("descdend")).get(4) );
-  }
-  
-  @Test
-  public void testAnother() {
-    String xml="<root>\n"
-            + "       <contenido id=\"10097\" idioma=\"cat\">\n"
-             + "    <antetitulo></antetitulo>\n"
-             + "    <titulo>    This is my title             </titulo>\n"
-             + "    <resumen>   This is my summary           </resumen>\n"
-             + "    <texto>     This is the body of my text  </texto>\n"
-             + "    </contenido>\n"
-             + "</root>";
-    XPathRecordReader rr = new XPathRecordReader("/root/contenido");
-    rr.addField("id", "/root/contenido/@id", false);
-    rr.addField("title", "/root/contenido/titulo", false);
-    rr.addField("resume","/root/contenido/resumen",false);
-    rr.addField("text", "/root/contenido/texto", false);
-
-    List<Map<String, Object>> l = rr.getAllRecords(new StringReader(xml));
-    assertEquals(1, l.size());
-    Map<String, Object> m = l.get(0);
-    assertEquals("10097", m.get("id"));
-    assertEquals("This is my title", m.get("title").toString().trim());
-    assertEquals("This is my summary", m.get("resume").toString().trim());
-    assertEquals("This is the body of my text", m.get("text").toString()
-            .trim());
-  }
-
-  @Test
-  public void testSameForEachAndXpath(){
-    String xml="<root>\n" +
-            "   <cat>\n" +
-            "     <name>hello</name>\n" +
-            "   </cat>\n" +
-            "   <item name=\"item name\"/>\n" +
-            "</root>";
-    XPathRecordReader rr = new XPathRecordReader("/root/cat/name");
-    rr.addField("catName", "/root/cat/name",false);
-    List<Map<String, Object>> l = rr.getAllRecords(new StringReader(xml));
-    assertEquals("hello",l.get(0).get("catName"));
-  }
-
-  @Test
-  @SuppressWarnings({"unchecked"})
-  public void testPutNullTest(){
-    String xml = "<root>\n" +
-            "  <i>\n" +
-            "    <x>\n" +
-            "      <a>A.1.1</a>\n" +
-            "      <b>B.1.1</b>\n" +
-            "    </x>\n" +
-            "    <x>\n" +
-            "      <b>B.1.2</b>\n" +
-            "      <c>C.1.2</c>\n" +
-            "    </x>\n" +
-            "  </i>\n" +
-            "  <i>\n" +
-            "    <x>\n" +
-            "      <a>A.2.1</a>\n" +
-            "      <c>C.2.1</c>\n" +
-            "    </x>\n" +
-            "    <x>\n" +
-            "      <b>B.2.2</b>\n" +
-            "      <c>C.2.2</c>\n" +
-            "    </x>\n" +
-            "  </i>\n" +
-            "</root>";
-    XPathRecordReader rr = new XPathRecordReader("/root/i");
-    rr.addField("a", "/root/i/x/a", true);
-    rr.addField("b", "/root/i/x/b", true);
-    rr.addField("c", "/root/i/x/c", true);
-    List<Map<String, Object>> l = rr.getAllRecords(new StringReader(xml));
-    Map<String, Object> map = l.get(0);
-    List<String> a = (List<String>) map.get("a");
-    List<String> b = (List<String>) map.get("b");
-    List<String> c = (List<String>) map.get("c");
-
-    assertEquals("A.1.1",a.get(0));
-    assertEquals("B.1.1",b.get(0));
-    assertNull(c.get(0));
-
-    assertNull(a.get(1));
-    assertEquals("B.1.2",b.get(1));
-    assertEquals("C.1.2",c.get(1));
-
-    map = l.get(1);
-    a = (List<String>) map.get("a");
-    b = (List<String>) map.get("b");
-    c = (List<String>) map.get("c");
-    assertEquals("A.2.1",a.get(0));
-    assertNull(b.get(0));
-    assertEquals("C.2.1",c.get(0));
-
-    assertNull(a.get(1));
-    assertEquals("B.2.2",b.get(1));
-    assertEquals("C.2.2",c.get(1));
-  }
-
-
-  @Test
-  public void testError(){
-    String malformedXml = "<root>\n" +
-          "    <node>\n" +
-          "        <id>1</id>\n" +
-          "        <desc>test1</desc>\n" +
-          "    </node>\n" +
-          "    <node>\n" +
-          "        <id>2</id>\n" +
-          "        <desc>test2</desc>\n" +
-          "    </node>\n" +
-          "    <node>\n" +
-          "        <id/>3</id>\n" +   // invalid XML
-          "        <desc>test3</desc>\n" +
-          "    </node>\n" +
-          "</root>";
-    XPathRecordReader rr = new XPathRecordReader("/root/node");
-    rr.addField("id", "/root/node/id", true);
-    rr.addField("desc", "/root/node/desc", true);
-    RuntimeException e = expectThrows(RuntimeException.class, () -> rr.getAllRecords(new StringReader(malformedXml)));
-    assertTrue(e.getMessage().contains("Unexpected close tag </id>"));
- }
-}
diff --git a/solr/contrib/dataimporthandler/src/test/org/apache/solr/handler/dataimport/TestZKPropertiesWriter.java b/solr/contrib/dataimporthandler/src/test/org/apache/solr/handler/dataimport/TestZKPropertiesWriter.java
deleted file mode 100644
index 54a5e12..0000000
--- a/solr/contrib/dataimporthandler/src/test/org/apache/solr/handler/dataimport/TestZKPropertiesWriter.java
+++ /dev/null
@@ -1,279 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.solr.handler.dataimport;
-
-import javax.xml.xpath.XPathExpressionException;
-import java.io.ByteArrayOutputStream;
-import java.io.StringWriter;
-import java.lang.invoke.MethodHandles;
-
-import java.nio.charset.StandardCharsets;
-import java.text.SimpleDateFormat;
-import java.util.ArrayList;
-import java.util.Date;
-import java.util.HashMap;
-import java.util.List;
-import java.util.Locale;
-import java.util.Map;
-
-import org.apache.solr.client.solrj.embedded.JettySolrRunner;
-import org.apache.solr.client.solrj.request.CollectionAdminRequest;
-import org.apache.solr.cloud.MiniSolrCloudCluster;
-import org.apache.solr.cloud.SolrCloudTestCase;
-import org.apache.solr.cloud.ZkTestServer;
-import org.apache.solr.common.SolrException;
-import org.apache.solr.common.cloud.DocCollection;
-import org.apache.solr.common.cloud.Replica;
-import org.apache.solr.common.params.CommonParams;
-import org.apache.solr.common.params.ModifiableSolrParams;
-import org.apache.solr.common.util.NamedList;
-import org.apache.solr.common.util.SuppressForbidden;
-import org.apache.solr.core.SolrCore;
-import org.apache.solr.request.LocalSolrQueryRequest;
-import org.apache.solr.request.SolrQueryRequest;
-import org.apache.solr.request.SolrRequestInfo;
-import org.apache.solr.response.BinaryQueryResponseWriter;
-import org.apache.solr.response.QueryResponseWriter;
-import org.apache.solr.response.SolrQueryResponse;
-import org.apache.solr.util.BaseTestHarness;
-import org.junit.After;
-import org.junit.AfterClass;
-import org.junit.Assert;
-import org.junit.BeforeClass;
-import org.junit.Test;
-import org.slf4j.Logger;
-import org.slf4j.LoggerFactory;
-
-/**
- * Tests that the DIH properties writer works when using Zookeeper (which is present by virtue of starting a SolrCloud cluster).<p>
- *
- * Note this test is an inelegant bridge between code that assumes a non-SolrCloud environment and would normally use
- * test infra that is not meant to work in a SolrCloud environment ({@link org.apache.solr.util.TestHarness} and some methods in
- * {@link org.apache.solr.SolrTestCaseJ4}), and a test running SolrCloud (extending {@link SolrCloudTestCase} and
- * using {@link MiniSolrCloudCluster}).<p>
- *
- * These changes were introduced when https://issues.apache.org/jira/browse/SOLR-12823 was fixed and the legacy
- * behaviour of SolrCloud that allowed it (with Zookeeper active) to function like standalone Solr (in which the
- * cluster would adopt cores contributed by the nodes even if they were unknown to Zookeeper) was removed.
- */
-public class TestZKPropertiesWriter extends SolrCloudTestCase {
-  private static final Logger log = LoggerFactory.getLogger(MethodHandles.lookup().lookupClass());
-  protected static ZkTestServer zkServer;
-
-  private static MiniSolrCloudCluster minicluster;
-
-  private String dateFormat = "yyyy-MM-dd HH:mm:ss.SSSSSS";
-
-  @BeforeClass
-  public static void dihZk_beforeClass() throws Exception {
-    System.setProperty(DataImportHandler.ENABLE_DIH_DATA_CONFIG_PARAM, "true");
-
-    minicluster = configureCluster(1)
-        .addConfig("conf", configset("dihconfigset"))
-        .configure();
-
-    zkServer = minicluster.getZkServer();
-  }
-
-  @After
-  public void afterDihZkTest() throws Exception {
-    MockDataSource.clearCache();
-  }
-
-  @AfterClass
-  public static void dihZk_afterClass() throws Exception {
-    shutdownCluster();
-  }
-
-  @SuppressForbidden(reason = "Needs currentTimeMillis to construct date stamps")
-  @Test
-  @SuppressWarnings({"unchecked"})
-  public void testZKPropertiesWriter() throws Exception {
-    CollectionAdminRequest.createCollectionWithImplicitRouter("collection1", "conf", "1", 1)
-        .process(cluster.getSolrClient());
-
-    // DIH talks core, SolrCloud talks collection.
-    DocCollection coll = getCollectionState("collection1");
-    Replica replica = coll.getReplicas().iterator().next();
-    JettySolrRunner jetty = minicluster.getReplicaJetty(replica);
-    SolrCore core = jetty.getCoreContainer().getCore(replica.getCoreName());
-
-    localAssertQ("test query on empty index", request(core, "qlkciyopsbgzyvkylsjhchghjrdf"), "//result[@numFound='0']");
-
-    SimpleDateFormat errMsgFormat = new SimpleDateFormat(dateFormat, Locale.ROOT);
-
-    // These two calls are from SolrTestCaseJ4 and end up in TestHarness... That's OK: they are static and do not reference
-    // the uninitialized instance variables (so, unlike some of the methods at the bottom, they need not be copied into this test class).
-    delQ("*:*");
-    commit();
-    SimpleDateFormat df = new SimpleDateFormat(dateFormat, Locale.ROOT);
-    Date oneSecondAgo = new Date(System.currentTimeMillis() - 1000);
-
-    Map<String, String> init = new HashMap<>();
-    init.put("dateFormat", dateFormat);
-    ZKPropertiesWriter spw = new ZKPropertiesWriter();
-    spw.init(new DataImporter(core, "dataimport"), init);
-    Map<String, Object> props = new HashMap<>();
-    props.put("SomeDates.last_index_time", oneSecondAgo);
-    props.put("last_index_time", oneSecondAgo);
-    spw.persist(props);
-
-    @SuppressWarnings({"rawtypes"})
-    List rows = new ArrayList();
-    rows.add(AbstractDataImportHandlerTestCase.createMap("id", "1", "year_s", "2013"));
-    MockDataSource.setIterator("select " + df.format(oneSecondAgo) + " from dummy", rows.iterator());
-
-    localQuery("/dataimport", localMakeRequest(core, "command", "full-import", "dataConfig",
-        generateConfig(), "clean", "true", "commit", "true", "synchronous",
-        "true", "indent", "true"));
-    props = spw.readIndexerProperties();
-    Date entityDate = df.parse((String) props.get("SomeDates.last_index_time"));
-    Date docDate = df.parse((String) props.get("last_index_time"));
-
-    Assert.assertTrue("This date: " + errMsgFormat.format(oneSecondAgo) + " should be prior to the document date: " + errMsgFormat.format(docDate), docDate.getTime() - oneSecondAgo.getTime() > 0);
-    Assert.assertTrue("This date: " + errMsgFormat.format(oneSecondAgo) + " should be prior to the entity date: " + errMsgFormat.format(entityDate), entityDate.getTime() - oneSecondAgo.getTime() > 0);
-    localAssertQ("Should have found 1 doc, year 2013", request(core, "*:*"), "//*[@numFound='1']", "//doc/str[@name=\"year_s\"]=\"2013\"");
-
-    core.close();
-  }
-
-  private static SolrQueryRequest request(SolrCore core, String... q) {
-    LocalSolrQueryRequest req = localMakeRequest(core, q);
-    ModifiableSolrParams params = new ModifiableSolrParams();
-    params.add(req.getParams());
-    params.set("distrib", true);
-    req.setParams(params);
-    return req;
-  }
-
-  private String generateConfig() {
-    StringBuilder sb = new StringBuilder();
-    sb.append("<dataConfig> \n");
-    sb.append("<propertyWriter dateFormat=\"").append(dateFormat).append("\" type=\"ZKPropertiesWriter\" />\n");
-    sb.append("<dataSource name=\"mock\" type=\"MockDataSource\"/>\n");
-    sb.append("<document name=\"TestSimplePropertiesWriter\"> \n");
-    sb.append("<entity name=\"SomeDates\" processor=\"SqlEntityProcessor\" dataSource=\"mock\" ");
-    sb.append("query=\"select ${dih.last_index_time} from dummy\" >\n");
-    sb.append("<field column=\"AYEAR_S\" name=\"year_s\" /> \n");
-    sb.append("</entity>\n");
-    sb.append("</document> \n");
-    sb.append("</dataConfig> \n");
-    String config = sb.toString();
-    log.debug(config);
-    return config;
-  }
-
-  /**
-   * Code copied with some adaptations from {@link org.apache.solr.util.TestHarness.LocalRequestFactory#makeRequest(String...)}.
-   */
-  @SuppressWarnings({"unchecked"})
-  private static LocalSolrQueryRequest localMakeRequest(SolrCore core, String ... q) {
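-    // A single argument is treated as the raw query string; otherwise the arguments are key/value pairs.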
-    if (q.length==1) {
-      Map<String, String> args = new HashMap<>();
-      args.put(CommonParams.VERSION,"2.2");
-
-      return new LocalSolrQueryRequest(core, q[0], "", 0, 20, args);
-    }
-    if (q.length%2 != 0) {
-      throw new RuntimeException("The length of the string array (query arguments) needs to be even");
-    }
-    @SuppressWarnings({"rawtypes"})
-    Map.Entry<String, String> [] entries = new NamedList.NamedListEntry[q.length / 2];
-    for (int i = 0; i < q.length; i += 2) {
-      entries[i/2] = new NamedList.NamedListEntry<>(q[i], q[i+1]);
-    }
-    @SuppressWarnings({"rawtypes"})
-    NamedList nl = new NamedList(entries);
-    if (nl.get("wt") == null) nl.add("wt", "xml");
-    return new LocalSolrQueryRequest(core, nl);
-  }
-
-  /**
-   * Code copied from {@link org.apache.solr.util.TestHarness#query(String, SolrQueryRequest)} because it is not
-   * <code>static</code> there (it could have been) and we do not have an instance of {@link org.apache.solr.util.TestHarness}.
-   */
-  private static String localQuery(String handler, SolrQueryRequest req) throws Exception {
-    try {
-      SolrCore core = req.getCore();
-      SolrQueryResponse rsp = new SolrQueryResponse();
-      SolrRequestInfo.setRequestInfo(new SolrRequestInfo(req, rsp));
-      core.execute(core.getRequestHandler(handler),req,rsp); // TODO the core doesn't have the request handler
-      if (rsp.getException() != null) {
-        throw rsp.getException();
-      }
-      QueryResponseWriter responseWriter = core.getQueryResponseWriter(req);
-      if (responseWriter instanceof BinaryQueryResponseWriter) {
-        ByteArrayOutputStream byteArrayOutputStream = new ByteArrayOutputStream(32000);
-        BinaryQueryResponseWriter writer = (BinaryQueryResponseWriter) responseWriter;
-        writer.write(byteArrayOutputStream, req, rsp);
-        return new String(byteArrayOutputStream.toByteArray(), StandardCharsets.UTF_8);
-      } else {
-        StringWriter sw = new StringWriter(32000);
-        responseWriter.write(sw,req,rsp);
-        return sw.toString();
-      }
-
-    } finally {
-      req.close();
-      SolrRequestInfo.clearRequestInfo();
-    }
-  }
-
-  /**
- * Code copied from {@link org.apache.solr.SolrTestCaseJ4#assertQ(String, SolrQueryRequest, String...)} in order not to
- * use the {@link org.apache.solr.util.TestHarness} instance.
-   */
-  private static void localAssertQ(String message, SolrQueryRequest req, String... tests) {
-    try {
-      String m = (null == message) ? "" : message + " "; // TODO log 'm' !!!
-      // since the default (standard) response format is now JSON,
-      // we need to explicitly request XML since this class uses XPath
-      ModifiableSolrParams xmlWriterTypeParams = new ModifiableSolrParams(req.getParams());
-      xmlWriterTypeParams.set(CommonParams.WT,"xml");
-      // for tests, let's turn indentation off so we don't have to handle extraneous spaces
-      xmlWriterTypeParams.set("indent", xmlWriterTypeParams.get("indent", "off"));
-      req.setParams(xmlWriterTypeParams);
-      String response = localQuery(req.getParams().get(CommonParams.QT), req);
-
-      if (req.getParams().getBool("facet", false)) {
-        // add a test to ensure that faceting did not throw an exception
-        // internally, where it would be added to facet_counts/exception
-        String[] allTests = new String[tests.length+1];
-        System.arraycopy(tests,0,allTests,1,tests.length);
-        allTests[0] = "*[count(//lst[@name='facet_counts']/*[@name='exception'])=0]";
-        tests = allTests;
-      }
-
-      String results = BaseTestHarness.validateXPath(response, tests);
-
-      if (null != results) {
-        String msg = "REQUEST FAILED: xpath=" + results
-            + "\n\txml response was: " + response
-            + "\n\trequest was:" + req.getParamString();
-
-        log.error(msg);
-        throw new RuntimeException(msg);
-      }
-
-    } catch (XPathExpressionException e1) {
-      throw new RuntimeException("XPath is invalid", e1);
-    } catch (Exception e2) {
-      SolrException.log(log,"REQUEST FAILED: " + req.getParamString(), e2);
-      throw new RuntimeException("Exception during query", e2);
-    }
-  }
-}
diff --git a/solr/contrib/dataimporthandler/src/test/org/apache/solr/handler/dataimport/TripleThreatTransformer.java b/solr/contrib/dataimporthandler/src/test/org/apache/solr/handler/dataimport/TripleThreatTransformer.java
deleted file mode 100644
index 2d0aadb..0000000
--- a/solr/contrib/dataimporthandler/src/test/org/apache/solr/handler/dataimport/TripleThreatTransformer.java
+++ /dev/null
@@ -1,75 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.solr.handler.dataimport;
-
-import java.util.ArrayList;
-import java.util.Arrays;
-import java.util.LinkedHashMap;
-import java.util.List;
-import java.util.Map;
-
-/**
- * This transformer does 3 things
- * <ul>
- * <li>It turns every row into 3 rows,
- *     modifying any "id" column to ensure duplicate entries in the index
- * <li>The 2nd row has two values for every column,
- *     with the added value being the original reversed
- * <li>The 3rd row has an added static value
- * </ul>
- * 
- * Also, this does not extend Transformer.
- */
-public class TripleThreatTransformer {
-  public Object transformRow(Map<String, Object> row) {
-    List<Map<String, Object>> rows = new ArrayList<>(3);
-    rows.add(row);
-    rows.add(addDuplicateBackwardsValues(row));
-    rows.add(new LinkedHashMap<>(row));
-    rows.get(2).put("AddAColumn_s", "Added");
-    modifyIdColumn(rows.get(1), 1);
-    modifyIdColumn(rows.get(2), 2);
-    return rows;
-  }
-  private LinkedHashMap<String,Object> addDuplicateBackwardsValues(Map<String, Object> row) {
-    LinkedHashMap<String,Object> n = new LinkedHashMap<>();
-    for(Map.Entry<String,Object> entry : row.entrySet()) {
-      String key = entry.getKey();
-      if(!"id".equalsIgnoreCase(key)) {
-        String[] vals = new String[2];
-        vals[0] = entry.getValue()==null ? "null" : entry.getValue().toString();
-        vals[1] = new StringBuilder(vals[0]).reverse().toString();
-        n.put(key, Arrays.asList(vals));
-      } else {
-        n.put(key, entry.getValue());
-      }
-    }
-    return n;
-  }
-  
-  private void modifyIdColumn(Map<String, Object> row, int num) {
-    Object o = row.remove("ID");
-    if(o==null) {
-      o = row.remove("id");
-    }
-    if(o!=null) {
-      String id = o.toString();
-      id = "TripleThreat-" + num + "-" + id;
-      row.put("id", id);
-    }
-  }
-}
diff --git a/solr/contrib/extraction/build.xml b/solr/contrib/extraction/build.xml
deleted file mode 100644
index ab56899..0000000
--- a/solr/contrib/extraction/build.xml
+++ /dev/null
@@ -1,28 +0,0 @@
-<?xml version="1.0"?>
-
-<!--
-    Licensed to the Apache Software Foundation (ASF) under one or more
-    contributor license agreements.  See the NOTICE file distributed with
-    this work for additional information regarding copyright ownership.
-    The ASF licenses this file to You under the Apache License, Version 2.0
-    (the "License"); you may not use this file except in compliance with
-    the License.  You may obtain a copy of the License at
-
-        http://www.apache.org/licenses/LICENSE-2.0
-
-    Unless required by applicable law or agreed to in writing, software
-    distributed under the License is distributed on an "AS IS" BASIS,
-    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-    See the License for the specific language governing permissions and
-    limitations under the License.
- -->
-
-<project name="solr-cell" default="default">
-
-  <description>
-    Solr Integration with Tika for extracting content from binary file formats such as Microsoft Word and Adobe PDF.
-  </description>
-
-  <import file="../contrib-build.xml"/>
-
-</project>
diff --git a/solr/contrib/extraction/ivy.xml b/solr/contrib/extraction/ivy.xml
deleted file mode 100644
index fb95664..0000000
--- a/solr/contrib/extraction/ivy.xml
+++ /dev/null
@@ -1,80 +0,0 @@
-<!--
-   Licensed to the Apache Software Foundation (ASF) under one
-   or more contributor license agreements.  See the NOTICE file
-   distributed with this work for additional information
-   regarding copyright ownership.  The ASF licenses this file
-   to you under the Apache License, Version 2.0 (the
-   "License"); you may not use this file except in compliance
-   with the License.  You may obtain a copy of the License at
-
-     http://www.apache.org/licenses/LICENSE-2.0
-
-   Unless required by applicable law or agreed to in writing,
-   software distributed under the License is distributed on an
-   "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-   KIND, either express or implied.  See the License for the
-   specific language governing permissions and limitations
-   under the License.    
--->
-<ivy-module version="2.0">
-  <info organisation="org.apache.solr" module="cell"/>
-  <configurations defaultconfmapping="compile->master;test->master">
-    <conf name="compile" transitive="false"/>
-    <conf name="test" transitive="false"/>
-  </configurations>
-  <dependencies>
-    <!-- Tika JARs -->
-    <dependency org="org.apache.tika" name="tika-core" rev="${/org.apache.tika/tika-core}" conf="compile"/>
-    <dependency org="org.apache.tika" name="tika-parsers" rev="${/org.apache.tika/tika-parsers}" conf="compile"/>
-    <dependency org="org.apache.tika" name="tika-xmp" rev="${/org.apache.tika/tika-xmp}" conf="compile"/>
-    <dependency org="org.apache.tika" name="tika-java7" rev="${/org.apache.tika/tika-java7}" conf="compile"/>
-    <!-- Tika dependencies - see http://tika.apache.org/1.3/gettingstarted.html#Using_Tika_as_a_Maven_dependency -->
-    <!-- When upgrading Tika, upgrade dependencies versions and add any new ones
-         (except slf4j-api, commons-codec, commons-logging, commons-httpclient, geronimo-stax-api_1.0_spec, jcip-annotations, xml-apis, asm)
-         WARNING: Don't add netcdf / unidataCommon (partially LGPL code) -->
-    <dependency org="com.healthmarketscience.jackcess" name="jackcess" rev="${/com.healthmarketscience.jackcess/jackcess}" conf="compile"/>
-    <dependency org="com.healthmarketscience.jackcess" name="jackcess-encrypt" rev="${/com.healthmarketscience.jackcess/jackcess-encrypt}" conf="compile"/>
-    <dependency org="org.gagravarr" name="vorbis-java-tika" rev="${/org.gagravarr/vorbis-java-tika}" conf="compile"/>
-    <dependency org="org.gagravarr" name="vorbis-java-core" rev="${/org.gagravarr/vorbis-java-core}" conf="compile"/>
-    <dependency org="org.apache.james" name="apache-mime4j-core" rev="${/org.apache.james/apache-mime4j-core}" conf="compile"/>
-    <dependency org="org.apache.james" name="apache-mime4j-dom" rev="${/org.apache.james/apache-mime4j-dom}" conf="compile"/>
-    <dependency org="org.apache.commons" name="commons-compress" rev="${/org.apache.commons/commons-compress}" conf="compile"/>
-    <dependency org="org.apache.pdfbox" name="pdfbox" rev="${/org.apache.pdfbox/pdfbox}" conf="compile"/>
-    <dependency org="org.apache.pdfbox" name="pdfbox-tools" rev="${/org.apache.pdfbox/pdfbox-tools}" conf="compile"/>
-    <dependency org="org.apache.pdfbox" name="fontbox" rev="${/org.apache.pdfbox/fontbox}" conf="compile"/>
-
-    <dependency org="org.apache.pdfbox" name="jempbox" rev="${/org.apache.pdfbox/jempbox}" conf="compile"/>
-    <dependency org="org.bouncycastle" name="bcmail-jdk15on" rev="${/org.bouncycastle/bcmail-jdk15on}" conf="compile"/>
-    <dependency org="org.bouncycastle" name="bcpkix-jdk15on" rev="${/org.bouncycastle/bcpkix-jdk15on}" conf="compile"/>
-    <dependency org="org.bouncycastle" name="bcprov-jdk15on" rev="${/org.bouncycastle/bcprov-jdk15on}" conf="compile"/>
-    <dependency org="org.apache.poi" name="poi" rev="${/org.apache.poi/poi}" conf="compile"/>
-    <dependency org="org.apache.poi" name="poi-scratchpad" rev="${/org.apache.poi/poi-scratchpad}" conf="compile"/>
-    <dependency org="org.apache.poi" name="poi-ooxml" rev="${/org.apache.poi/poi-ooxml}" conf="compile"/>
-    <dependency org="org.apache.poi" name="poi-ooxml-schemas" rev="${/org.apache.poi/poi-ooxml-schemas}" conf="compile"/>
-    <dependency org="org.apache.xmlbeans" name="xmlbeans" rev="${/org.apache.xmlbeans/xmlbeans}" conf="compile"/>
-    <dependency org="org.apache.commons" name="commons-collections4" rev="${/org.apache.commons/commons-collections4}" conf="compile"/>
-    <dependency org="org.apache.commons" name="commons-csv" rev="${/org.apache.commons/commons-csv}" conf="compile"/>
-    <dependency org="com.github.virtuald" name="curvesapi" rev="${/com.github.virtuald/curvesapi}" conf="compile"/>
-    <dependency org="org.ccil.cowan.tagsoup" name="tagsoup" rev="${/org.ccil.cowan.tagsoup/tagsoup}" conf="compile"/>
-    <dependency org="com.googlecode.mp4parser" name="isoparser" rev="${/com.googlecode.mp4parser/isoparser}" conf="compile"/>
-    <dependency org="org.aspectj" name="aspectjrt" rev="${/org.aspectj/aspectjrt}" conf="compile"/>
-    <dependency org="com.drewnoakes" name="metadata-extractor" rev="${/com.drewnoakes/metadata-extractor}" conf="compile"/>
-    <dependency org="de.l3s.boilerpipe" name="boilerpipe" rev="${/de.l3s.boilerpipe/boilerpipe}" conf="compile"/>
-    <dependency org="com.rometools" name="rome" rev="${/com.rometools/rome}" conf="compile"/>
-    <dependency org="com.rometools" name="rome-utils" rev="${/com.rometools/rome-utils}" conf="compile"/>
-    <dependency org="org.jdom" name="jdom2" rev="${/org.jdom/jdom2}" conf="compile"/>
-    <dependency org="com.googlecode.juniversalchardet" name="juniversalchardet" rev="${/com.googlecode.juniversalchardet/juniversalchardet}" conf="compile"/>
-    <dependency org="org.tukaani" name="xz" rev="${/org.tukaani/xz}" conf="compile"/>
-    <dependency org="com.adobe.xmp" name="xmpcore" rev="${/com.adobe.xmp/xmpcore}" conf="compile"/>
-    <dependency org="com.pff" name="java-libpst" rev="${/com.pff/java-libpst}" conf="compile"/>
-    <dependency org="org.tallison" name="jmatio" rev="${/org.tallison/jmatio}" conf="compile"/>
-    <dependency org="com.epam" name="parso" rev="${/com.epam/parso}" conf="compile"/>
-    <dependency org="org.brotli" name="dec" rev="${/org.brotli/dec}" conf="compile"/>
-
-    <!-- Other ExtractingRequestHandler dependencies -->
-    <dependency org="com.ibm.icu" name="icu4j" rev="${/com.ibm.icu/icu4j}" conf="compile"/>
-    <dependency org="xerces" name="xercesImpl" rev="${/xerces/xercesImpl}" conf="compile"/>
-
-    <exclude org="*" ext="*" matcher="regexp" type="${ivy.exclude.types}"/>
-  </dependencies>
-</ivy-module>
diff --git a/solr/contrib/jaegertracer-configurator/build.xml b/solr/contrib/jaegertracer-configurator/build.xml
deleted file mode 100644
index 379eea3..0000000
--- a/solr/contrib/jaegertracer-configurator/build.xml
+++ /dev/null
@@ -1,34 +0,0 @@
-<?xml version="1.0"?>
-
-<!--
-    Licensed to the Apache Software Foundation (ASF) under one or more
-    contributor license agreements.  See the NOTICE file distributed with
-    this work for additional information regarding copyright ownership.
-    The ASF licenses this file to You under the Apache License, Version 2.0
-    (the "License"); you may not use this file except in compliance with
-    the License.  You may obtain a copy of the License at
-
-        http://www.apache.org/licenses/LICENSE-2.0
-
-    Unless required by applicable law or agreed to in writing, software
-    distributed under the License is distributed on an "AS IS" BASIS,
-    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-    See the License for the specific language governing permissions and
-    limitations under the License.
- -->
-
-<project name="solr-jaegertracer-configurator" default="default">
-
-  <description>
-    Jaeger tracer configurator for tracing Solr using OpenTracing with a Jaeger backend.
-  </description>
-
-  <import file="../contrib-build.xml"/>
-
-  <path id="classpath">
-    <path refid="solr.base.classpath"/>
-  </path>
-
-  <target name="compile-core" depends="solr-contrib-build.compile-core"/>
-
-</project>
diff --git a/solr/contrib/jaegertracer-configurator/ivy.xml b/solr/contrib/jaegertracer-configurator/ivy.xml
deleted file mode 100644
index 5243ca3..0000000
--- a/solr/contrib/jaegertracer-configurator/ivy.xml
+++ /dev/null
@@ -1,32 +0,0 @@
-<!--
-   Licensed to the Apache Software Foundation (ASF) under one
-   or more contributor license agreements.  See the NOTICE file
-   distributed with this work for additional information
-   regarding copyright ownership.  The ASF licenses this file
-   to you under the Apache License, Version 2.0 (the
-   "License"); you may not use this file except in compliance
-   with the License.  You may obtain a copy of the License at
-
-     http://www.apache.org/licenses/LICENSE-2.0
-
-   Unless required by applicable law or agreed to in writing,
-   software distributed under the License is distributed on an
-   "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-   KIND, either express or implied.  See the License for the
-   specific language governing permissions and limitations
-   under the License.
--->
-<ivy-module version="2.0">
-  <info organisation="org.apache.solr" module="jaegertracer-configurator"/>
-  <configurations defaultconfmapping="compile->master;test->master">
-    <conf name="compile" transitive="false"/>
-    <conf name="test" transitive="false"/>
-  </configurations>
-  <dependencies>
-    <dependency org="io.jaegertracing" name="jaeger-core" rev="${/io.jaegertracing/jaeger-core}" conf="compile"/>
-    <dependency org="io.jaegertracing" name="jaeger-thrift" rev="${/io.jaegertracing/jaeger-thrift}" conf="compile"/>
-    <dependency org="org.apache.thrift" name="libthrift" rev="${/org.apache.thrift/libthrift}" conf="compile"/>
-
-    <exclude org="*" ext="*" matcher="regexp" type="${ivy.exclude.types}"/>
-  </dependencies>
-</ivy-module>
diff --git a/solr/contrib/langid/build.xml b/solr/contrib/langid/build.xml
deleted file mode 100644
index 8c9a8ad..0000000
--- a/solr/contrib/langid/build.xml
+++ /dev/null
@@ -1,102 +0,0 @@
-<?xml version="1.0"?>
-
-<!--
-    Licensed to the Apache Software Foundation (ASF) under one or more
-    contributor license agreements.  See the NOTICE file distributed with
-    this work for additional information regarding copyright ownership.
-    The ASF licenses this file to You under the Apache License, Version 2.0
-    (the "License"); you may not use this file except in compliance with
-    the License.  You may obtain a copy of the License at
-
-        http://www.apache.org/licenses/LICENSE-2.0
-
-    Unless required by applicable law or agreed to in writing, software
-    distributed under the License is distributed on an "AS IS" BASIS,
-    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-    See the License for the specific language governing permissions and
-    limitations under the License.
- -->
-
-<project name="solr-langid" default="default">
-
-  <description>
-    Language Identifier contrib for extracting language from a document being indexed
-  </description>
-
-  <import file="../contrib-build.xml"/>
-
-  <property name="test.model.dir" location="${tests.userdir}/langid/solr/collection1/conf"/>
-  <property name="test.leipzig.folder.link" value="http://pcai056.informatik.uni-leipzig.de/downloads/corpora"/><!-- URL broken? -->
-  <property name="test.build.models.dir" location="${build.dir}/build-test-models"/>
-  <property name="test.build.models.data.dir" location="${test.build.models.dir}/data"/>
-  <property name="test.build.models.sentences.dir" location="${test.build.models.dir}/train"/>
-  <property name="test.opennlp.model" value="opennlp-langdetect.eng-swe-spa-rus-deu.bin"/>
-
-  <path id="opennlp.jars">
-    <fileset dir="lib" includes="opennlp*.jar"/>
-  </path>
-  
-  <path id="classpath">
-    <fileset dir="../extraction/lib" excludes="${common.classpath.excludes}"/>
-    <fileset dir="lib" excludes="${common.classpath.excludes}"/>
-    <path refid="solr.base.classpath"/>   
-  </path>
-
-  <!-- we don't actually need to compile this module; we just want its libs -->
-  <target name="resolve-extraction-libs">
-    <ant dir="${common-solr.dir}/contrib/extraction" target="resolve" inheritAll="false">
-      <propertyset refid="uptodate.and.compiled.properties"/>
-    </ant>
-  </target>
-
-  <target name="compile-core" depends="resolve-extraction-libs,solr-contrib-build.compile-core"/>
-
-  <!--
-  Create test models using data for five languages from the Leipzig corpora.
-  See https://opennlp.apache.org/docs/1.8.3/manual/opennlp.html#tools.langdetect.training.leipzig
-  -->
-  <target name="train-test-models" description="Train small test models for unit tests" depends="resolve">
-    <download-leipzig language.code="eng"/> 
-    <download-leipzig language.code="swe"/>
-    <download-leipzig language.code="spa"/>
-    <download-leipzig language.code="rus"/>
-    <download-leipzig language.code="deu"/>
-
-    <echo message="Train OpenNLP test model over data from the Leipzig corpora"/>
-    <java classname="opennlp.tools.cmdline.CLI" classpathref="opennlp.jars" fork="true" failonerror="true">
-      <arg value="LanguageDetectorTrainer.leipzig"/>
-
-      <arg value="-model"/>
-      <arg value="${test.model.dir}/${test.opennlp.model}"/>
-
-      <arg value="-params"/>
-      <arg value="${tests.userdir}/opennlp.langdetect.trainer.params.txt"/>
-
-      <arg value="-sentencesDir"/> 
-      <arg value="${test.build.models.sentences.dir}"/>
-      
-      <arg value="-sentencesPerSample"/>
-      <arg value="3"/>  
-      
-      <arg value="-samplesPerLanguage"/>
-      <arg value="10000"/>
-    </java>
-  </target>
-
-  <macrodef name="download-leipzig">
-    <attribute name="language.code"/>
-    <attribute name="leipzig.tarball" default="@{language.code}_news_2007_30K.tar.gz"/>
-    <sequential>
-      <mkdir dir="${test.build.models.data.dir}"/>
-      <get src="${test.leipzig.folder.link}/@{leipzig.tarball}" dest="${test.build.models.data.dir}"/>
-      <untar compression="gzip" src="${test.build.models.data.dir}/@{leipzig.tarball}"
-             dest="${test.build.models.sentences.dir}">
-        <patternset>
-          <include name="*-sentences.txt"/>
-        </patternset>
-      </untar>
-    </sequential>
-  </macrodef>
-
-  <target name="regenerate" depends="train-test-models"/>
-</project>
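Although the Ant target that trained these OpenNLP test models is gone, the resulting model is still consumed through the ordinary OpenNLP API. A hedged sketch of loading and querying such a model (the file path is illustrative, and only the opennlp-tools classes shown are assumed):

    import java.io.FileInputStream;
    import java.io.InputStream;

    import opennlp.tools.langdetect.Language;
    import opennlp.tools.langdetect.LanguageDetector;
    import opennlp.tools.langdetect.LanguageDetectorME;
    import opennlp.tools.langdetect.LanguageDetectorModel;

    public class LangDetectSketch {
      public static void main(String[] args) throws Exception {
        // Model produced by the removed train-test-models target; path is illustrative.
        try (InputStream in = new FileInputStream("opennlp-langdetect.eng-swe-spa-rus-deu.bin")) {
          LanguageDetector detector = new LanguageDetectorME(new LanguageDetectorModel(in));
          Language best = detector.predictLanguage("Detta är en svensk mening.");
          System.out.println(best.getLang() + " @ " + best.getConfidence()); // e.g. swe @ 0.98
        }
      }
    }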
diff --git a/solr/contrib/langid/ivy.xml b/solr/contrib/langid/ivy.xml
deleted file mode 100644
index 04c6b25..0000000
--- a/solr/contrib/langid/ivy.xml
+++ /dev/null
@@ -1,32 +0,0 @@
-<!--
-   Licensed to the Apache Software Foundation (ASF) under one
-   or more contributor license agreements.  See the NOTICE file
-   distributed with this work for additional information
-   regarding copyright ownership.  The ASF licenses this file
-   to you under the Apache License, Version 2.0 (the
-   "License"); you may not use this file except in compliance
-   with the License.  You may obtain a copy of the License at
-
-     http://www.apache.org/licenses/LICENSE-2.0
-
-   Unless required by applicable law or agreed to in writing,
-   software distributed under the License is distributed on an
-   "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-   KIND, either express or implied.  See the License for the
-   specific language governing permissions and limitations
-   under the License.    
--->
-<ivy-module version="2.0">
-  <info organisation="org.apache.solr" module="langid"/>
-  <configurations defaultconfmapping="compile->master;test->master">
-    <conf name="compile" transitive="false"/>
-    <conf name="test" transitive="false"/>
-  </configurations>
-  <dependencies>
-    <dependency org="com.cybozu.labs" name="langdetect" rev="${/com.cybozu.labs/langdetect}" conf="compile"/>
-    <dependency org="net.arnx" name="jsonic" rev="${/net.arnx/jsonic}" conf="compile"/>
-    <dependency org="org.apache.opennlp" name="opennlp-tools" rev="${/org.apache.opennlp/opennlp-tools}" conf="compile"/>
-
-    <exclude org="*" ext="*" matcher="regexp" type="${ivy.exclude.types}"/>
-  </dependencies>
-</ivy-module>
diff --git a/solr/contrib/langid/src/java/org/apache/solr/update/processor/LanguageIdentifierUpdateProcessor.java b/solr/contrib/langid/src/java/org/apache/solr/update/processor/LanguageIdentifierUpdateProcessor.java
index d4fbe60..90aef72 100644
--- a/solr/contrib/langid/src/java/org/apache/solr/update/processor/LanguageIdentifierUpdateProcessor.java
+++ b/solr/contrib/langid/src/java/org/apache/solr/update/processor/LanguageIdentifierUpdateProcessor.java
@@ -45,7 +45,7 @@
  *   Identifies the language of a set of input fields.
  *   Also supports mapping of field names based on detected language.
  * </p>
- * See <a href="https://lucene.apache.org/solr/guide/7_4/detecting-languages-during-indexing.html">Detecting Languages During Indexing</a> in reference guide
+ * See <a href="https://lucene.apache.org/solr/guide/detecting-languages-during-indexing.html">Detecting Languages During Indexing</a> in reference guide
  * @since 3.5
  * @lucene.experimental
  */
@@ -428,7 +428,7 @@
   protected SolrInputDocumentReader solrDocReader(SolrInputDocument doc, String[] fields) {
     return new SolrInputDocumentReader(doc, fields, maxTotalChars, maxFieldValueChars, " ");
   }
-  
+
   /**
    * Concatenates content from input fields defined in langid.fl.
    * For test purposes only
diff --git a/solr/contrib/ltr/build.xml b/solr/contrib/ltr/build.xml
deleted file mode 100644
index a5778c4..0000000
--- a/solr/contrib/ltr/build.xml
+++ /dev/null
@@ -1,35 +0,0 @@
-<?xml version="1.0"?>
-
-<!--
-    Licensed to the Apache Software Foundation (ASF) under one or more
-    contributor license agreements.  See the NOTICE file distributed with
-    this work for additional information regarding copyright ownership.
-    The ASF licenses this file to You under the Apache License, Version 2.0
-    (the "License"); you may not use this file except in compliance with
-    the License.  You may obtain a copy of the License at
-
-        http://www.apache.org/licenses/LICENSE-2.0
-
-    Unless required by applicable law or agreed to in writing, software
-    distributed under the License is distributed on an "AS IS" BASIS,
-    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-    See the License for the specific language governing permissions and
-    limitations under the License.
- -->
-
-<project name="solr-ltr" default="default">
-
-  <description>
-    Learning to Rank Package
-  </description>
-
-  <import file="../contrib-build.xml"/>
-
-  <path id="test.classpath">
-    <path refid="solr.test.base.classpath"/>
-    <fileset dir="${test.lib.dir}" includes="*.jar"/>
-  </path>
-
-  <target name="compile-core" depends="solr-contrib-build.compile-core"/>
-
-</project>
diff --git a/solr/contrib/ltr/ivy.xml b/solr/contrib/ltr/ivy.xml
deleted file mode 100644
index 3b7e1c7..0000000
--- a/solr/contrib/ltr/ivy.xml
+++ /dev/null
@@ -1,33 +0,0 @@
-<!--
-   Licensed to the Apache Software Foundation (ASF) under one
-   or more contributor license agreements.  See the NOTICE file
-   distributed with this work for additional information
-   regarding copyright ownership.  The ASF licenses this file
-   to you under the Apache License, Version 2.0 (the
-   "License"); you may not use this file except in compliance
-   with the License.  You may obtain a copy of the License at
-
-     http://www.apache.org/licenses/LICENSE-2.0
-
-   Unless required by applicable law or agreed to in writing,
-   software distributed under the License is distributed on an
-   "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-   KIND, either express or implied.  See the License for the
-   specific language governing permissions and limitations
-   under the License.
--->
-<ivy-module version="2.0">
-  <info organisation="org.apache.solr" module="ltr"/>
-    <configurations defaultconfmapping="compile->master;test->master">
-      <conf name="compile" transitive="false"/> <!-- keep unused 'compile' configuration to allow build to succeed -->
-      <conf name="test" transitive="false"/>
-    </configurations>
-
-   <dependencies>
-     <dependency org="org.slf4j" name="jcl-over-slf4j" rev="${/org.slf4j/jcl-over-slf4j}" conf="test"/>
-     <dependency org="org.mockito" name="mockito-core" rev="${/org.mockito/mockito-core}" conf="test"/>
-     <dependency org="net.bytebuddy" name="byte-buddy" rev="${/net.bytebuddy/byte-buddy}" conf="test"/>
-     <dependency org="org.objenesis" name="objenesis" rev="${/org.objenesis/objenesis}" conf="test"/>
-     <exclude org="*" ext="*" matcher="regexp" type="${ivy.exclude.types}"/>
-   </dependencies>
-</ivy-module>
diff --git a/solr/contrib/prometheus-exporter/build.xml b/solr/contrib/prometheus-exporter/build.xml
deleted file mode 100644
index 3c6ce7e..0000000
--- a/solr/contrib/prometheus-exporter/build.xml
+++ /dev/null
@@ -1,64 +0,0 @@
-<?xml version="1.0"?>
-
-<!--
-    Licensed to the Apache Software Foundation (ASF) under one or more
-    contributor license agreements.  See the NOTICE file distributed with
-    this work for additional information regarding copyright ownership.
-    The ASF licenses this file to You under the Apache License, Version 2.0
-    (the "License"); you may not use this file except in compliance with
-    the License.  You may obtain a copy of the License at
-
-        http://www.apache.org/licenses/LICENSE-2.0
-
-    Unless required by applicable law or agreed to in writing, software
-    distributed under the License is distributed on an "AS IS" BASIS,
-    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-    See the License for the specific language governing permissions and
-    limitations under the License.
- -->
-
-<project name="solr-prometheus-exporter" default="default">
-
-  <description>
-    Prometheus exporter for exposing metrics from Solr using the Metrics API and Search API.
-  </description>
-
-  <import file="../contrib-build.xml"/>
-
-  <path id="common.analysis.lucene.libs">
-    <pathelement path="${analyzers-common.jar}"/>
-  </path>
-
-  <path id="classpath">
-    <path refid="common.analysis.lucene.libs"/>
-    <path refid="solr.base.classpath"/>
-  </path>
-
-  <target name="module-jars-to-solr" depends="-module-jars-to-solr-not-for-package,-module-jars-to-solr-package"/>
-
-  <target name="-module-jars-to-solr-not-for-package" unless="called.from.create-package">
-    <antcall target="jar-analyzers-common" inheritall="true"/>
-    <property name="analyzers-common.uptodate" value="true"/>
-    <mkdir dir="${build.dir}/lucene-libs"/>
-    <copy todir="${build.dir}/lucene-libs" preservelastmodified="true" flatten="true" failonerror="true" overwrite="true">
-      <fileset file="${analyzers-common.jar}"/>
-    </copy>
-  </target>
-
-  <target name="-module-jars-to-solr-package" if="called.from.create-package">
-    <antcall target="-unpack-lucene-tgz" inheritall="true"/>
-    <pathconvert property="relative.common.analysis.lucene.libs" pathsep=",">
-      <path refid="common.analysis.lucene.libs"/>
-      <globmapper from="${common.build.dir}/*" to="*" handledirsep="true"/>
-    </pathconvert>
-    <mkdir dir="${build.dir}/lucene-libs"/>
-    <copy todir="${build.dir}/lucene-libs" preservelastmodified="true" flatten="true" failonerror="true" overwrite="true">
-      <fileset dir="${lucene.tgz.unpack.dir}/lucene-${version}" includes="${relative.common.analysis.lucene.libs}"/>
-    </copy>
-  </target>
-
-  <target name="compile-core" depends="jar-analyzers-common, solr-contrib-build.compile-core"/>
-
-  <target name="dist" depends="module-jars-to-solr, common-solr.dist"/>
-
-</project>
diff --git a/solr/contrib/prometheus-exporter/conf/solr-exporter-config.xml b/solr/contrib/prometheus-exporter/conf/solr-exporter-config.xml
index b043835..e20680c 100644
--- a/solr/contrib/prometheus-exporter/conf/solr-exporter-config.xml
+++ b/solr/contrib/prometheus-exporter/conf/solr-exporter-config.xml
@@ -985,7 +985,7 @@
             (if $parent_key_item_len == 5 then $parent_key_items[3] else "" end) as $shard |
             (if $parent_key_item_len == 5 then $parent_key_items[4] else "" end) as $replica |
             (if $parent_key_item_len == 5 then ($collection + "_" + $shard + "_" + $replica) else $core end) as $core |
-            $parent.value | to_entries | .[] | select(.key == "REPLICATION./replication.isMaster") as $object |
+            $parent.value | to_entries | .[] | select(.key == "REPLICATION./replication.isLeader") as $object |
             $object.key | split(".")[0] as $category |
             $object.key | split(".")[1] as $handler |
             (if $object.value == true then 1.0 else 0.0 end) as $value |
@@ -1018,13 +1018,13 @@
             (if $parent_key_item_len == 5 then $parent_key_items[3] else "" end) as $shard |
             (if $parent_key_item_len == 5 then $parent_key_items[4] else "" end) as $replica |
             (if $parent_key_item_len == 5 then ($collection + "_" + $shard + "_" + $replica) else $core end) as $core |
-            $parent.value | to_entries | .[] | select(.key == "REPLICATION./replication.isSlave") as $object |
+            $parent.value | to_entries | .[] | select(.key == "REPLICATION./replication.isFollower") as $object |
             $object.key | split(".")[0] as $category |
             $object.key | split(".")[1] as $handler |
             (if $object.value == true then 1.0 else 0.0 end) as $value |
             if $parent_key_item_len == 3 then
             {
-              name: "solr_metrics_core_replication_slave",
+              name: "solr_metrics_core_replication_follower",
               type: "GAUGE",
               help: "See following URL: https://lucene.apache.org/solr/guide/metrics-reporting.html",
               label_names: ["category", "handler", "core"],
@@ -1033,7 +1033,7 @@
             }
             else
             {
-              name: "solr_metrics_core_replication_slave",
+              name: "solr_metrics_core_replication_follower",
               type: "GAUGE",
               help: "See following URL: https://lucene.apache.org/solr/guide/metrics-reporting.html",
               label_names: ["category", "handler", "core", "collection", "shard", "replica"],
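For orientation, the exporter evaluates these templates with jackson-jq (declared in the prometheus-exporter ivy.xml below), so the isLeader/isFollower rename simply changes which metrics-API key the select(...) expression matches. A rough sketch of that evaluation, assuming the pre-1.0 jackson-jq API (JsonQuery.compile/apply) and an invented one-key metrics document:

    import java.util.List;

    import com.fasterxml.jackson.databind.JsonNode;
    import com.fasterxml.jackson.databind.ObjectMapper;
    import net.thisptr.jackson.jq.JsonQuery;

    public class JqSelectSketch {
      public static void main(String[] args) throws Exception {
        JsonNode metrics = new ObjectMapper()
            .readTree("{\"REPLICATION./replication.isLeader\": true}");
        // Same select(...) idiom the exporter config uses above.
        JsonQuery query = JsonQuery.compile(
            "to_entries | .[] | select(.key == \"REPLICATION./replication.isLeader\") | .value");
        List<JsonNode> out = query.apply(metrics);
        System.out.println(out); // [true]
      }
    }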
diff --git a/solr/contrib/prometheus-exporter/ivy.xml b/solr/contrib/prometheus-exporter/ivy.xml
deleted file mode 100644
index 7d62a3e..0000000
--- a/solr/contrib/prometheus-exporter/ivy.xml
+++ /dev/null
@@ -1,41 +0,0 @@
-<!--
-   Licensed to the Apache Software Foundation (ASF) under one
-   or more contributor license agreements.  See the NOTICE file
-   distributed with this work for additional information
-   regarding copyright ownership.  The ASF licenses this file
-   to you under the Apache License, Version 2.0 (the
-   "License"); you may not use this file except in compliance
-   with the License.  You may obtain a copy of the License at
-
-     http://www.apache.org/licenses/LICENSE-2.0
-
-   Unless required by applicable law or agreed to in writing,
-   software distributed under the License is distributed on an
-   "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-   KIND, either express or implied.  See the License for the
-   specific language governing permissions and limitations
-   under the License.    
--->
-<ivy-module version="2.0">
-  <info organisation="org.apache.solr" module="prometheus"/>
-  <configurations defaultconfmapping="compile->master;test->master">
-    <conf name="compile" transitive="false"/>
-    <conf name="test" transitive="false"/>
-  </configurations>
-  <dependencies>
-    <dependency org="io.prometheus" name="simpleclient" rev="${/io.prometheus/simpleclient}" conf="compile"/>
-    <dependency org="io.prometheus" name="simpleclient_common" rev="${/io.prometheus/simpleclient_common}" conf="compile"/>
-    <dependency org="io.prometheus" name="simpleclient_httpserver" rev="${/io.prometheus/simpleclient_httpserver}" conf="compile"/>
-    <dependency org="com.fasterxml.jackson.core" name="jackson-core" rev="${/com.fasterxml.jackson.core/jackson-core}" conf="compile"/>
-    <dependency org="com.fasterxml.jackson.core" name="jackson-databind" rev="${/com.fasterxml.jackson.core/jackson-databind}" conf="compile"/>
-    <dependency org="com.fasterxml.jackson.core" name="jackson-annotations" rev="${/com.fasterxml.jackson.core/jackson-annotations}" conf="compile"/>
-    <dependency org="net.thisptr" name="jackson-jq" rev="${/net.thisptr/jackson-jq}" conf="compile"/>
-    <dependency org="net.sourceforge.argparse4j" name="argparse4j" rev="${/net.sourceforge.argparse4j/argparse4j}" conf="compile"/>
-    <dependency org="org.slf4j" name="slf4j-api" rev="${/org.slf4j/slf4j-api}" conf="compile"/>
-    <dependency org="org.apache.logging.log4j" name="log4j-slf4j-impl" rev="${/org.apache.logging.log4j/log4j-slf4j-impl}" conf="compile"/>
-    <dependency org="org.apache.logging.log4j" name="log4j-api" rev="${/org.apache.logging.log4j/log4j-api}" conf="compile"/>
-    <dependency org="org.apache.logging.log4j" name="log4j-core" rev="${/org.apache.logging.log4j/log4j-core}" conf="compile"/>
-
-    <exclude org="*" ext="*" matcher="regexp" type="${ivy.exclude.types}"/>
-  </dependencies>
-</ivy-module>
diff --git a/solr/contrib/prometheus-exporter/src/java/org/apache/solr/prometheus/collector/MetricsCollectorFactory.java b/solr/contrib/prometheus-exporter/src/java/org/apache/solr/prometheus/collector/MetricsCollectorFactory.java
index 1ad98d1..fdf8c8e 100644
--- a/solr/contrib/prometheus-exporter/src/java/org/apache/solr/prometheus/collector/MetricsCollectorFactory.java
+++ b/solr/contrib/prometheus-exporter/src/java/org/apache/solr/prometheus/collector/MetricsCollectorFactory.java
@@ -18,7 +18,7 @@
 package org.apache.solr.prometheus.collector;
 
 import java.util.List;
-import java.util.concurrent.Executor;
+import java.util.concurrent.ExecutorService;
 import java.util.concurrent.TimeUnit;
 import java.util.stream.Collectors;
 import java.util.stream.Stream;
@@ -29,12 +29,12 @@
 public class MetricsCollectorFactory {
 
   private final MetricsConfiguration metricsConfiguration;
-  private final Executor executor;
+  private final ExecutorService executor;
   private final int refreshInSeconds;
   private final SolrScraper solrScraper;
 
   public MetricsCollectorFactory(
-      Executor executor,
+      ExecutorService executor,
       int refreshInSeconds,
       SolrScraper solrScraper,
       MetricsConfiguration metricsConfiguration) {
diff --git a/solr/contrib/prometheus-exporter/src/java/org/apache/solr/prometheus/collector/SchedulerMetricsCollector.java b/solr/contrib/prometheus-exporter/src/java/org/apache/solr/prometheus/collector/SchedulerMetricsCollector.java
index 53b0aa1..62763df 100644
--- a/solr/contrib/prometheus-exporter/src/java/org/apache/solr/prometheus/collector/SchedulerMetricsCollector.java
+++ b/solr/contrib/prometheus-exporter/src/java/org/apache/solr/prometheus/collector/SchedulerMetricsCollector.java
@@ -19,20 +19,20 @@
 
 import java.io.Closeable;
 import java.lang.invoke.MethodHandles;
-import java.util.ArrayList;
 import java.util.List;
-import java.util.concurrent.CompletableFuture;
+import java.util.concurrent.Callable;
 import java.util.concurrent.CopyOnWriteArrayList;
 import java.util.concurrent.ExecutionException;
-import java.util.concurrent.Executor;
+import java.util.concurrent.ExecutorService;
 import java.util.concurrent.Executors;
+import java.util.concurrent.Future;
 import java.util.concurrent.ScheduledExecutorService;
 import java.util.concurrent.TimeUnit;
+import java.util.stream.Collectors;
 
 import io.prometheus.client.Collector;
 import io.prometheus.client.Histogram;
 import org.apache.solr.prometheus.exporter.SolrExporter;
-import org.apache.solr.prometheus.scraper.Async;
 import org.apache.solr.common.util.SolrNamedThreadFactory;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
@@ -53,7 +53,7 @@
       1,
       new SolrNamedThreadFactory("scheduled-metrics-collector"));
 
-  private final Executor executor;
+  private final ExecutorService executor;
 
   private final List<Observer> observers = new CopyOnWriteArrayList<>();
 
@@ -63,7 +63,7 @@
       .register(SolrExporter.defaultRegistry);
 
   public SchedulerMetricsCollector(
-      Executor executor,
+      ExecutorService executor,
       int duration,
       TimeUnit timeUnit,
       List<MetricCollector> metricCollectors) {
@@ -83,31 +83,27 @@
     try (Histogram.Timer timer = metricsCollectionTime.startTimer()) {
       log.info("Beginning metrics collection");
 
-      List<CompletableFuture<MetricSamples>> futures = new ArrayList<>();
-
-      for (MetricCollector metricsCollector : metricCollectors) {
-        futures.add(CompletableFuture.supplyAsync(() -> {
-          try {
-            return metricsCollector.collect();
-          } catch (Exception e) {
-            throw new RuntimeException(e);
-          }
-        }, executor));
+      final List<Future<MetricSamples>> futures = executor.invokeAll(
+          metricCollectors.stream()
+              .map(metricCollector -> (Callable<MetricSamples>) metricCollector::collect)
+              .collect(Collectors.toList())
+      );
+      MetricSamples metricSamples = new MetricSamples();
+      for (Future<MetricSamples> future : futures) {
+        try {
+          metricSamples.addAll(future.get());
+        } catch (ExecutionException e) {
+          log.error("Error occurred during metrics collection", e.getCause());//logok
+          // continue anyway; do not fail
+        }
       }
 
-      try {
-        CompletableFuture<List<MetricSamples>> sampleFuture = Async.waitForAllSuccessfulResponses(futures);
-        List<MetricSamples> samples = sampleFuture.get();
+      notifyObservers(metricSamples.asList());
 
-        MetricSamples metricSamples = new MetricSamples();
-        samples.forEach(metricSamples::addAll);
-
-        notifyObservers(metricSamples.asList());
-
-        log.info("Completed metrics collection");
-      } catch (InterruptedException | ExecutionException e) {
-        log.error("Error while waiting for metric collection to complete", e);
-      }
+      log.info("Completed metrics collection");
+    } catch (InterruptedException e) {
+      log.warn("Interrupted waiting for metric collection to complete", e);
+      Thread.currentThread().interrupt();
     }
 
   }
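The hunk above (like the Executor-to-ExecutorService change in MetricsCollectorFactory) hinges on the fact that invokeAll is declared on ExecutorService, not Executor: it blocks until every Callable has finished and surfaces each task's failure individually through Future.get. A self-contained sketch of the pattern the patch adopts (task contents are invented for illustration):

    import java.util.List;
    import java.util.concurrent.Callable;
    import java.util.concurrent.ExecutionException;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.Future;
    import java.util.stream.Collectors;
    import java.util.stream.Stream;

    public class InvokeAllSketch {
      public static void main(String[] args) throws InterruptedException {
        ExecutorService executor = Executors.newFixedThreadPool(2);
        List<Callable<String>> tasks = Stream.of("a", "b", "c")
            .map(s -> (Callable<String>) s::toUpperCase)
            .collect(Collectors.toList());

        // Blocks until all tasks are done, failed or not.
        for (Future<String> future : executor.invokeAll(tasks)) {
          try {
            System.out.println(future.get()); // rethrows a task failure as ExecutionException
          } catch (ExecutionException e) {
            // Mirror the patch: log and continue rather than failing the whole pass.
            System.err.println("task failed: " + e.getCause());
          }
        }
        executor.shutdown();
      }
    }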
diff --git a/solr/contrib/prometheus-exporter/src/java/org/apache/solr/prometheus/scraper/Async.java b/solr/contrib/prometheus-exporter/src/java/org/apache/solr/prometheus/scraper/Async.java
deleted file mode 100644
index 2b8c763..0000000
--- a/solr/contrib/prometheus-exporter/src/java/org/apache/solr/prometheus/scraper/Async.java
+++ /dev/null
@@ -1,61 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package org.apache.solr.prometheus.scraper;
-
-import java.lang.invoke.MethodHandles;
-import java.util.List;
-import java.util.concurrent.CompletableFuture;
-import java.util.stream.Collectors;
-
-import org.slf4j.Logger;
-import org.slf4j.LoggerFactory;
-
-public class Async {
-
-  private static final Logger log = LoggerFactory.getLogger(MethodHandles.lookup().lookupClass());
-
-  @SuppressWarnings({"rawtypes"})
-  public static <T> CompletableFuture<List<T>> waitForAllSuccessfulResponses(List<CompletableFuture<T>> futures) {
-    CompletableFuture<Void> completed = CompletableFuture.allOf(futures.toArray(new CompletableFuture[0]));
-
-    return completed.thenApply(values -> {
-        return futures.stream()
-          .map(CompletableFuture::join)
-          .collect(Collectors.toList());
-      }
-    ).exceptionally(error -> {
-      futures.stream()
-          .filter(CompletableFuture::isCompletedExceptionally)
-          .forEach(future -> {
-            try {
-              future.get();
-            } catch (Exception exception) {
-              log.warn("Error occurred during metrics collection", exception);
-            }
-          });
-
-      return futures.stream()
-          .filter(future -> !(future.isCompletedExceptionally() || future.isCancelled()))
-          .map(CompletableFuture::join)
-          .collect(Collectors.toList());
-      }
-    );
-  }
-
-
-}
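The helper deleted here existed to paper over a CompletableFuture.allOf subtlety: the future returned by allOf completes exceptionally if any input failed, so collecting only the successful results required the explicit exceptionally branch above. A compact illustration of that behaviour (values are invented):

    import java.util.Arrays;
    import java.util.List;
    import java.util.concurrent.CompletableFuture;
    import java.util.stream.Collectors;

    public class AllOfSketch {
      public static void main(String[] args) {
        CompletableFuture<Integer> ok = CompletableFuture.completedFuture(1);
        CompletableFuture<Integer> bad = new CompletableFuture<>();
        bad.completeExceptionally(new RuntimeException("boom"));

        // allOf fails if any input failed; handle() runs either way and
        // lets us keep just the successfully completed futures.
        List<Integer> successes = CompletableFuture.allOf(ok, bad)
            .handle((v, t) -> Arrays.asList(ok, bad).stream()
                .filter(f -> !f.isCompletedExceptionally())
                .map(CompletableFuture::join)
                .collect(Collectors.toList()))
            .join();
        System.out.println(successes); // [1]
      }
    }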
diff --git a/solr/contrib/prometheus-exporter/src/java/org/apache/solr/prometheus/scraper/SolrCloudScraper.java b/solr/contrib/prometheus-exporter/src/java/org/apache/solr/prometheus/scraper/SolrCloudScraper.java
index 896ea27..e4b98e7 100644
--- a/solr/contrib/prometheus-exporter/src/java/org/apache/solr/prometheus/scraper/SolrCloudScraper.java
+++ b/solr/contrib/prometheus-exporter/src/java/org/apache/solr/prometheus/scraper/SolrCloudScraper.java
@@ -21,7 +21,7 @@
 import java.util.Map;
 import java.util.Set;
 import java.util.concurrent.ExecutionException;
-import java.util.concurrent.Executor;
+import java.util.concurrent.ExecutorService;
 import java.util.function.Function;
 import java.util.stream.Collectors;
 
@@ -44,7 +44,7 @@
 
   private Cache<String, HttpSolrClient> hostClientCache = CacheBuilder.newBuilder().build();
 
-  public SolrCloudScraper(CloudSolrClient solrClient, Executor executor, SolrClientFactory solrClientFactory) {
+  public SolrCloudScraper(CloudSolrClient solrClient, ExecutorService executor, SolrClientFactory solrClientFactory) {
     super(executor);
     this.solrClient = solrClient;
     this.solrClientFactory = solrClientFactory;
diff --git a/solr/contrib/prometheus-exporter/src/java/org/apache/solr/prometheus/scraper/SolrScraper.java b/solr/contrib/prometheus-exporter/src/java/org/apache/solr/prometheus/scraper/SolrScraper.java
index bbbfc20..c1ee6aa 100644
--- a/solr/contrib/prometheus-exporter/src/java/org/apache/solr/prometheus/scraper/SolrScraper.java
+++ b/solr/contrib/prometheus-exporter/src/java/org/apache/solr/prometheus/scraper/SolrScraper.java
@@ -21,12 +21,11 @@
 import java.lang.invoke.MethodHandles;
 import java.util.ArrayList;
 import java.util.Collection;
+import java.util.HashMap;
 import java.util.List;
 import java.util.Map;
-import java.util.concurrent.CompletableFuture;
-import java.util.concurrent.ExecutionException;
-import java.util.concurrent.Executor;
-import java.util.concurrent.Future;
+import java.util.concurrent.Callable;
+import java.util.concurrent.ExecutorService;
 import java.util.function.Function;
 import java.util.stream.Collectors;
 
@@ -42,7 +41,6 @@
 import org.apache.solr.client.solrj.impl.HttpSolrClient;
 import org.apache.solr.client.solrj.request.QueryRequest;
 import org.apache.solr.common.util.NamedList;
-import org.apache.solr.common.util.Pair;
 import org.apache.solr.prometheus.collector.MetricSamples;
 import org.apache.solr.prometheus.exporter.MetricsQuery;
 import org.apache.solr.prometheus.exporter.SolrExporter;
@@ -59,7 +57,7 @@
   protected static final ObjectMapper OBJECT_MAPPER = new ObjectMapper();
   private static final Logger log = LoggerFactory.getLogger(MethodHandles.lookup().lookupClass());
 
-  protected final Executor executor;
+  protected final ExecutorService executor;
 
   public abstract Map<String, MetricSamples> metricsForAllHosts(MetricsQuery query) throws IOException;
 
@@ -69,7 +67,7 @@
   public abstract MetricSamples search(MetricsQuery query) throws IOException;
   public abstract MetricSamples collections(MetricsQuery metricsQuery) throws IOException;
 
-  public SolrScraper(Executor executor) {
+  public SolrScraper(ExecutorService executor) {
     this.executor = executor;
   }
 
@@ -77,17 +75,32 @@
       Collection<String> items,
       Function<String, MetricSamples> samplesCallable) throws IOException {
 
-    List<CompletableFuture<Pair<String, MetricSamples>>> futures = items.stream()
-        .map(item -> CompletableFuture.supplyAsync(() -> new Pair<>(item, samplesCallable.apply(item)), executor))
-        .collect(Collectors.toList());
-
-    Future<List<Pair<String, MetricSamples>>> allComplete = Async.waitForAllSuccessfulResponses(futures);
+    Map<String, MetricSamples> result = new HashMap<>(); // sync on this when adding to it below
 
     try {
-      return allComplete.get().stream().collect(Collectors.toMap(Pair::first, Pair::second));
-    } catch (InterruptedException | ExecutionException e) {
-      throw new IOException(e);
+      // invoke samplesCallable for each item and put the results in the "result" map above.
+      executor.invokeAll(
+          items.stream()
+              .map(item -> (Callable<MetricSamples>) () -> {
+                try {
+                  final MetricSamples samples = samplesCallable.apply(item);
+                  synchronized (result) {
+                    result.put(item, samples);
+                  }
+                } catch (Exception e) {
+                  // do NOT totally fail; just log and move on
+                  log.warn("Error occurred during metrics collection", e);
+                }
+                return null; // not used
+              })
+              .collect(Collectors.toList())
+      );
+    } catch (InterruptedException e) {
+      Thread.currentThread().interrupt();
+      throw new RuntimeException(e);
     }
+
+    return result;
   }
 
   protected MetricSamples request(SolrClient client, MetricsQuery query) throws IOException {
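One design note on the hunk above: the worker Callables write into a plain HashMap and serialize their puts with synchronized (result). Since each task performs a single put under its own key, a ConcurrentHashMap would give the same guarantee without the explicit lock; a sketch of that alternative (names and workload are hypothetical):

    import java.util.List;
    import java.util.Map;
    import java.util.concurrent.Callable;
    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.stream.Collectors;

    public class ParallelMapSketch {
      static Map<String, Integer> lengths(ExecutorService executor, List<String> items)
          throws InterruptedException {
        Map<String, Integer> result = new ConcurrentHashMap<>(); // no synchronized block needed
        executor.invokeAll(items.stream()
            .map(item -> (Callable<Void>) () -> {
              result.put(item, item.length()); // thread-safe put
              return null; // not used
            })
            .collect(Collectors.toList()));
        return result;
      }

      public static void main(String[] args) throws InterruptedException {
        ExecutorService executor = Executors.newFixedThreadPool(4);
        System.out.println(lengths(executor, List.of("solr", "lucene")));
        executor.shutdown();
      }
    }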
diff --git a/solr/contrib/prometheus-exporter/src/java/org/apache/solr/prometheus/scraper/SolrStandaloneScraper.java b/solr/contrib/prometheus-exporter/src/java/org/apache/solr/prometheus/scraper/SolrStandaloneScraper.java
index 8c1ee78..4bd8370 100644
--- a/solr/contrib/prometheus-exporter/src/java/org/apache/solr/prometheus/scraper/SolrStandaloneScraper.java
+++ b/solr/contrib/prometheus-exporter/src/java/org/apache/solr/prometheus/scraper/SolrStandaloneScraper.java
@@ -22,7 +22,7 @@
 import java.util.HashSet;
 import java.util.Map;
 import java.util.Set;
-import java.util.concurrent.Executor;
+import java.util.concurrent.ExecutorService;
 
 import com.fasterxml.jackson.databind.JsonNode;
 import org.apache.solr.client.solrj.SolrServerException;
@@ -38,7 +38,7 @@
 
   private final HttpSolrClient solrClient;
 
-  public SolrStandaloneScraper(HttpSolrClient solrClient, Executor executor) {
+  public SolrStandaloneScraper(HttpSolrClient solrClient, ExecutorService executor) {
     super(executor);
     this.solrClient = solrClient;
   }
diff --git a/solr/contrib/prometheus-exporter/src/test-files/conf/prometheus-solr-exporter-integration-test-config.xml b/solr/contrib/prometheus-exporter/src/test-files/conf/prometheus-solr-exporter-integration-test-config.xml
index 6b306c9..f2f96a0 100644
--- a/solr/contrib/prometheus-exporter/src/test-files/conf/prometheus-solr-exporter-integration-test-config.xml
+++ b/solr/contrib/prometheus-exporter/src/test-files/conf/prometheus-solr-exporter-integration-test-config.xml
@@ -989,7 +989,7 @@
             (if $parent_key_item_len == 5 then $parent_key_items[3] else "" end) as $shard |
             (if $parent_key_item_len == 5 then $parent_key_items[4] else "" end) as $replica |
             (if $parent_key_item_len == 5 then ($collection + "_" + $shard + "_" + $replica) else $core end) as $core |
-            $parent.value | to_entries | .[] | select(.key == "REPLICATION./replication.isMaster") as $object |
+            $parent.value | to_entries | .[] | select(.key == "REPLICATION./replication.isLeader") as $object |
             $object.key | split(".")[0] as $category |
             $object.key | split(".")[1] as $handler |
             (if $object.value == true then 1.0 else 0.0 end) as $value |
@@ -1022,13 +1022,13 @@
             (if $parent_key_item_len == 5 then $parent_key_items[3] else "" end) as $shard |
             (if $parent_key_item_len == 5 then $parent_key_items[4] else "" end) as $replica |
             (if $parent_key_item_len == 5 then ($collection + "_" + $shard + "_" + $replica) else $core end) as $core |
-            $parent.value | to_entries | .[] | select(.key == "REPLICATION./replication.isSlave") as $object |
+            $parent.value | to_entries | .[] | select(.key == "REPLICATION./replication.isFollower") as $object |
             $object.key | split(".")[0] as $category |
             $object.key | split(".")[1] as $handler |
             (if $object.value == true then 1.0 else 0.0 end) as $value |
             if $parent_key_item_len == 3 then
             {
-              name: "solr_metrics_core_replication_slave",
+              name: "solr_metrics_core_replication_follower",
               type: "GAUGE",
               help: "See following URL: https://lucene.apache.org/solr/guide/metrics-reporting.html",
               label_names: ["category", "handler", "core"],
@@ -1037,7 +1037,7 @@
             }
             else
             {
-              name: "solr_metrics_core_replication_slave",
+              name: "solr_metrics_core_replication_follower",
               type: "GAUGE",
               help: "See following URL: https://lucene.apache.org/solr/guide/metrics-reporting.html",
               label_names: ["category", "handler", "core", "collection", "shard", "replica"],
diff --git a/solr/contrib/prometheus-exporter/src/test/org/apache/solr/prometheus/scraper/AsyncTest.java b/solr/contrib/prometheus-exporter/src/test/org/apache/solr/prometheus/scraper/AsyncTest.java
deleted file mode 100644
index 0959bd4..0000000
--- a/solr/contrib/prometheus-exporter/src/test/org/apache/solr/prometheus/scraper/AsyncTest.java
+++ /dev/null
@@ -1,78 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package org.apache.solr.prometheus.scraper;
-
-import java.util.Arrays;
-import java.util.Collections;
-import java.util.List;
-import java.util.concurrent.CompletableFuture;
-import java.util.stream.Collectors;
-
-import org.junit.Test;
-
-import static org.junit.Assert.assertEquals;
-import static org.junit.Assert.assertTrue;
-
-public class AsyncTest {
-
-  private CompletableFuture<Integer> failedFuture() {
-    CompletableFuture<Integer> result = new CompletableFuture<>();
-    result.completeExceptionally(new RuntimeException("Some error"));
-    return result;
-  }
-
-  @Test
-  public void getAllResults() throws Exception {
-    List<Integer> expectedValues = Arrays.asList(1, 2, 3, 4, 5, 6, 7);
-
-    CompletableFuture<List<Integer>> results = Async.waitForAllSuccessfulResponses(
-        expectedValues.stream()
-            .map(CompletableFuture::completedFuture)
-            .collect(Collectors.toList()));
-
-    List<Integer> actualValues = results.get();
-
-    Collections.sort(expectedValues);
-    Collections.sort(actualValues);
-
-    assertEquals(expectedValues, actualValues);
-  }
-
-  @Test
-  public void ignoresFailures() throws Exception {
-    CompletableFuture<List<Integer>> results = Async.waitForAllSuccessfulResponses(Arrays.asList(
-        CompletableFuture.completedFuture(1),
-        failedFuture()
-    ));
-
-    List<Integer> values = results.get();
-
-    assertEquals(Collections.singletonList(1), values);
-  }
-
-  @Test
-  public void allFuturesFail() throws Exception {
-    CompletableFuture<List<Integer>> results = Async.waitForAllSuccessfulResponses(Collections.singletonList(
-        failedFuture()
-    ));
-
-    List<Integer> values = results.get();
-
-    assertTrue(values.isEmpty());
-  }
-}
\ No newline at end of file
diff --git a/solr/contrib/velocity/build.xml b/solr/contrib/velocity/build.xml
deleted file mode 100644
index a6712af..0000000
--- a/solr/contrib/velocity/build.xml
+++ /dev/null
@@ -1,28 +0,0 @@
-<?xml version="1.0"?>
-
-<!--
-    Licensed to the Apache Software Foundation (ASF) under one or more
-    contributor license agreements.  See the NOTICE file distributed with
-    this work for additional information regarding copyright ownership.
-    The ASF licenses this file to You under the Apache License, Version 2.0
-    the "License"); you may not use this file except in compliance with
-    the License.  You may obtain a copy of the License at
-
-        http://www.apache.org/licenses/LICENSE-2.0
-
-    Unless required by applicable law or agreed to in writing, software
-    distributed under the License is distributed on an "AS IS" BASIS,
-    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-    See the License for the specific language governing permissions and
-    limitations under the License.
- -->
-
-<project name="solr-velocity" default="default">
-
-  <description>
-    Solr Velocity Response Writer
-  </description>
-
-  <import file="../contrib-build.xml"/>
-
-</project>
diff --git a/solr/contrib/velocity/ivy.xml b/solr/contrib/velocity/ivy.xml
deleted file mode 100644
index ab8758b..0000000
--- a/solr/contrib/velocity/ivy.xml
+++ /dev/null
@@ -1,36 +0,0 @@
-<!--
-   Licensed to the Apache Software Foundation (ASF) under one
-   or more contributor license agreements.  See the NOTICE file
-   distributed with this work for additional information
-   regarding copyright ownership.  The ASF licenses this file
-   to you under the Apache License, Version 2.0 (the
-   "License"); you may not use this file except in compliance
-   with the License.  You may obtain a copy of the License at
-
-     http://www.apache.org/licenses/LICENSE-2.0
-
-   Unless required by applicable law or agreed to in writing,
-   software distributed under the License is distributed on an
-   "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-   KIND, either express or implied.  See the License for the
-   specific language governing permissions and limitations
-   under the License.    
--->
-<ivy-module version="2.0">
-  <info organisation="org.apache.solr" module="velocity"/>
-  <configurations defaultconfmapping="compile->master;test->master">
-    <conf name="compile" transitive="false"/>
-    <conf name="test" transitive="false"/>
-  </configurations>
-  <dependencies>
-    <dependency org="org.apache.commons" name="commons-lang3" rev="${/org.apache.commons/commons-lang3}" conf="compile"/>
-
-    <dependency org="org.apache.velocity" name="velocity-engine-core" rev="${/org.apache.velocity/velocity-engine-core}" conf="compile"/>
-
-    <dependency org="org.apache.velocity.tools" name="velocity-tools-generic" rev="${/org.apache.velocity.tools/velocity-tools-generic}" conf="compile"/>
-    <dependency org="org.apache.velocity.tools" name="velocity-tools-view" rev="${/org.apache.velocity.tools/velocity-tools-view}" conf="compile"/>
-    <dependency org="org.apache.velocity.tools" name="velocity-tools-view-jsp" rev="${/org.apache.velocity.tools/velocity-tools-view-jsp}" conf="compile"/>
-
-    <exclude org="*" ext="*" matcher="regexp" type="${ivy.exclude.types}"/>
-  </dependencies>
-</ivy-module>
diff --git a/solr/core/build.xml b/solr/core/build.xml
deleted file mode 100644
index 3c3d28a..0000000
--- a/solr/core/build.xml
+++ /dev/null
@@ -1,131 +0,0 @@
-<?xml version="1.0"?>
-<!--
-    Licensed to the Apache Software Foundation (ASF) under one or more
-    contributor license agreements.  See the NOTICE file distributed with
-    this work for additional information regarding copyright ownership.
-    The ASF licenses this file to You under the Apache License, Version 2.0
-    (the "License"); you may not use this file except in compliance with
-    the License.  You may obtain a copy of the License at
-
-        http://www.apache.org/licenses/LICENSE-2.0
-
-    Unless required by applicable law or agreed to in writing, software
-    distributed under the License is distributed on an "AS IS" BASIS,
-    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-    See the License for the specific language governing permissions and
-    limitations under the License.
- -->
-<project name="solr-core" default="default" xmlns:ivy="antlib:org.apache.ivy.ant">
-  <description>Solr Core</description>
-
-  <!-- html file for testing -->
-  <property name="rat.excludes" value="**/htmlStripReaderTest.html,**/*.iml"/>
-  
-  <property name="test.lib.dir" location="test-lib"/>
-
-  <property name="forbidden-tests-excludes" value="
-    org/apache/solr/internal/**
-    org/apache/hadoop/**
-  "/>
-
-  <import file="../common-build.xml"/>
-
-  <target name="compile-core" depends="compile-solrj,common-solr.compile-core"/>
-
-  <target name="compile-test" depends="jar-analyzers-icu,-compile-test-lucene-queryparser,-compile-test-lucene-backward-codecs,-compile-analysis-extras,common-solr.compile-test"/>
-
-  <path id="test.classpath">
-    <path refid="solr.test.base.classpath"/>
-    <fileset dir="${test.lib.dir}" includes="*.jar"/>
-    <pathelement location="${analyzers-icu.jar}"/>
-    <pathelement location="${common-solr.dir}/build/contrib/solr-analysis-extras/classes/java"/>
-    <pathelement location="${common.dir}/build/queryparser/classes/test"/>
-    <pathelement location="${common.dir}/build/backward-codecs/classes/test"/>
-    <fileset dir="${common-solr.dir}/contrib/analysis-extras/lib" includes="icu4j*.jar"/>
-  </path>
-
-  <!-- specialized to ONLY depend on solrj -->
-  <target name="javadocs" depends="compile-core,define-lucene-javadoc-url,lucene-javadocs,javadocs-solrj,check-javadocs-uptodate" unless="javadocs-uptodate-${name}">
-    <sequential>
-      <mkdir dir="${javadoc.dir}/${name}"/>
-      <solr-invoke-javadoc>
-        <solrsources>
-          <packageset dir="${src.dir}"/>
-        </solrsources>
-        <links>
-          <link href="../solr-solrj"/>
-        </links>
-      </solr-invoke-javadoc>
-      <solr-jarify basedir="${javadoc.dir}/${name}" destfile="${build.dir}/${final.name}-javadoc.jar"/>
-     </sequential>
-  </target>
-
-  <target name="-dist-maven" depends="-dist-maven-src-java"/>
-
-  <target name="-install-to-maven-local-repo" depends="-install-src-java-to-maven-local-repo"/>
-
-  <target name="resolve" depends="ivy-availability-check,ivy-fail,ivy-configure">
-    <sequential>
-      <ivy:retrieve conf="compile,compile.hadoop" type="jar,bundle" sync="${ivy.sync}" log="download-only" symlink="${ivy.symlink}"/>
-      <ivy:retrieve conf="test,test.DfsMiniCluster,test.MiniKdc" type="jar,bundle,test" sync="${ivy.sync}" log="download-only" symlink="${ivy.symlink}"
-                    pattern="${test.lib.dir}/[artifact]-[revision](-[classifier]).[ext]"/>
-    </sequential>
-  </target>
-
-  <target name="javacc" depends="javacc-QueryParser"/>
-  <target name="javacc-QueryParser" depends="resolve-javacc">
-    <sequential>
-      <invoke-javacc target="src/java/org/apache/solr/parser/QueryParser.jj"
-                     outputDir="src/java/org/apache/solr/parser"/>
-
-      <!-- Change the incorrect public ctors for QueryParser to be protected instead -->
-      <replaceregexp file="src/java/org/apache/solr/parser/QueryParser.java"
-                     byline="true"
-                     match="public QueryParser\(CharStream "
-                     replace="protected QueryParser(CharStream "/>
-      <replaceregexp file="src/java/org/apache/solr/parser/QueryParser.java"
-                     byline="true"
-                     match="public QueryParser\(QueryParserTokenManager "
-                     replace="protected QueryParser(QueryParserTokenManager "/>
-      <!-- change an exception used for signaling to be static -->
-      <replaceregexp file="src/java/org/apache/solr/parser/QueryParser.java"
-                     byline="true"
-                     match="final private LookaheadSuccess jj_ls ="
-                     replace="static final private LookaheadSuccess jj_ls =" />
-      <replace token="StringBuffer" value="StringBuilder" encoding="UTF-8">
-         <fileset dir="src/java/org/apache/solr/parser" includes="ParseException.java TokenMgrError.java"/>
-      </replace>
-
-    </sequential>
-  </target>
-  <target name="resolve-javacc" xmlns:ivy="antlib:org.apache.ivy.ant">
-    <!-- setup a "fake" JavaCC distribution folder in ${build.dir} to make JavaCC ANT task happy: -->
-    <ivy:retrieve organisation="net.java.dev.javacc" module="javacc" revision="5.0"
-      inline="true" transitive="false" type="jar" sync="true" symlink="${ivy.symlink}"
-      pattern="${build.dir}/javacc/bin/lib/[artifact].[ext]"/>
-  </target>
-
-  <macrodef name="invoke-javacc">
-    <attribute name="target"/>
-    <attribute name="outputDir"/>
-    <sequential>
-      <mkdir dir="@{outputDir}"/>
-      <delete>
-        <fileset dir="@{outputDir}" includes="*.java">
-          <containsregexp expression="Generated.*By.*JavaCC"/>
-        </fileset>
-      </delete>
-      <javacc
-          target="@{target}"
-          outputDirectory="@{outputDir}"
-          javacchome="${build.dir}/javacc"
-          jdkversion="1.${javac.release}"
-      />
-      <fixcrlf srcdir="@{outputDir}" includes="*.java" encoding="UTF-8">
-        <containsregexp expression="Generated.*By.*JavaCC"/>
-      </fixcrlf>
-    </sequential>
-  </macrodef>
-
-
-</project>
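
The deleted `javacc` target did more than run the parser generator: it patched the generated `QueryParser.java` with regex replacements before committing the sources. A minimal Java sketch of that post-processing, assuming the generated file stays at the path the Ant build used (under Gradle this logic belongs in the equivalent regenerate task):

```java
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;

public class PostProcessJavacc {
  public static void main(String[] args) throws IOException {
    // Hypothetical invocation path; the Ant target wrote into src/java/org/apache/solr/parser.
    Path parser = Path.of("src/java/org/apache/solr/parser/QueryParser.java");
    String src = Files.readString(parser, StandardCharsets.UTF_8);
    // Narrow the generated public constructors to protected, as <replaceregexp> did.
    src = src.replaceAll("public QueryParser\\(CharStream ", "protected QueryParser(CharStream ");
    src = src.replaceAll("public QueryParser\\(QueryParserTokenManager ", "protected QueryParser(QueryParserTokenManager ");
    // Make the lookahead-signal exception a static field.
    src = src.replace("final private LookaheadSuccess jj_ls =", "static final private LookaheadSuccess jj_ls =");
    Files.writeString(parser, src, StandardCharsets.UTF_8);
  }
}
```

The same target also swapped StringBuffer for StringBuilder in ParseException.java and TokenMgrError.java; the same read-patch-write round trip applies there.
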
diff --git a/solr/core/ivy.xml b/solr/core/ivy.xml
deleted file mode 100644
index 3bb6682..0000000
--- a/solr/core/ivy.xml
+++ /dev/null
@@ -1,148 +0,0 @@
-<!--
-   Licensed to the Apache Software Foundation (ASF) under one
-   or more contributor license agreements.  See the NOTICE file
-   distributed with this work for additional information
-   regarding copyright ownership.  The ASF licenses this file
-   to you under the Apache License, Version 2.0 (the
-   "License"); you may not use this file except in compliance
-   with the License.  You may obtain a copy of the License at
-
-     http://www.apache.org/licenses/LICENSE-2.0
-
-   Unless required by applicable law or agreed to in writing,
-   software distributed under the License is distributed on an
-   "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-   KIND, either express or implied.  See the License for the
-   specific language governing permissions and limitations
-   under the License.    
--->
-<ivy-module version="2.0"  xmlns:maven="http://ant.apache.org/ivy/maven">
-  <info organisation="org.apache.solr" module="core"/>
-  
-  <configurations defaultconfmapping="compile->master;compile.hadoop->master;test->master;test.DfsMiniCluster->master;test.MiniKdc->master">
-    <!-- artifacts in the "compile" and "compile.hadoop" configurations will go into solr/core/lib/ -->
-    <conf name="compile" transitive="false"/>
-    <conf name="compile.hadoop" transitive="false"/>
-    <!-- artifacts in the "test", "test.DfsMiniCluster", and "test.MiniKdc" configuration will go into solr/core/test-lib/ -->
-    <conf name="test" transitive="false"/>
-    <conf name="test.DfsMiniCluster" transitive="false"/>
-    <conf name="test.MiniKdc" transitive="false"/>
-  </configurations>
-
-  <dependencies>
-    <dependency org="io.opentracing" name="opentracing-api" rev="${/io.opentracing/opentracing-api}" conf="compile"/>
-    <dependency org="io.opentracing" name="opentracing-noop" rev="${/io.opentracing/opentracing-noop}" conf="compile"/>
-    <dependency org="io.opentracing" name="opentracing-util" rev="${/io.opentracing/opentracing-util}" conf="compile"/>
-    <dependency org="io.opentracing" name="opentracing-mock" rev="${/io.opentracing/opentracing-mock}" conf="test"/>
-
-    <dependency org="commons-codec" name="commons-codec" rev="${/commons-codec/commons-codec}" conf="compile"/>
-    <dependency org="commons-io" name="commons-io" rev="${/commons-io/commons-io}" conf="compile"/>
-    <dependency org="org.apache.commons" name="commons-exec" rev="${/org.apache.commons/commons-exec}" conf="compile"/>
-    <dependency org="commons-cli" name="commons-cli" rev="${/commons-cli/commons-cli}" conf="compile"/>
-    <dependency org="org.apache.commons" name="commons-collections4" rev="${org.apache.commons.commons-collections4-rev}" conf="compile"/>
-    <dependency org="org.apache.commons" name="commons-text" rev="${/org.apache.commons/commons-text}" conf="compile"/>
-    <dependency org="com.google.guava" name="guava" rev="${/com.google.guava/guava}" conf="compile"/>
-    <dependency org="org.locationtech.spatial4j" name="spatial4j" rev="${/org.locationtech.spatial4j/spatial4j}" conf="compile"/>
-    <dependency org="org.antlr" name="antlr4-runtime" rev="${/org.antlr/antlr4-runtime}"/>
-    <dependency org="org.apache.commons" name="commons-math3" rev="${/org.apache.commons/commons-math3}" conf="compile"/>
-    <dependency org="org.ow2.asm" name="asm" rev="${/org.ow2.asm/asm}" conf="compile"/>
-    <dependency org="org.ow2.asm" name="asm-commons" rev="${/org.ow2.asm/asm-commons}" conf="compile"/>
-    <dependency org="org.restlet.jee" name="org.restlet" rev="${/org.restlet.jee/org.restlet}" conf="compile"/>
-    <dependency org="org.restlet.jee" name="org.restlet.ext.servlet" rev="${/org.restlet.jee/org.restlet.ext.servlet}" conf="compile"/>
-    <dependency org="com.carrotsearch" name="hppc" rev="${/com.carrotsearch/hppc}" conf="compile"/>
-    <dependency org="io.sgr" name="s2-geometry-library-java" rev="${/io.sgr/s2-geometry-library-java}" conf="compile"/>
-
-    <dependency org="org.apache.logging.log4j" name="log4j-api" rev="${/org.apache.logging.log4j/log4j-api}" conf="compile"/>
-    <dependency org="org.apache.logging.log4j" name="log4j-core" rev="${/org.apache.logging.log4j/log4j-core}" conf="compile"/>
-    <dependency org="org.apache.logging.log4j" name="log4j-slf4j-impl" rev="${/org.apache.logging.log4j/log4j-slf4j-impl}" conf="compile"/>
-    <dependency org="org.apache.logging.log4j" name="log4j-1.2-api" rev="${/org.apache.logging.log4j/log4j-1.2-api}" conf="compile"/>
-    <dependency org="com.lmax" name="disruptor" rev="${/com.lmax/disruptor}" conf="compile"/>
-    <dependency org="org.slf4j" name="jcl-over-slf4j" rev="${/org.slf4j/jcl-over-slf4j}" conf="compile"/>
-
-    <dependency org="org.mockito" name="mockito-core" rev="${/org.mockito/mockito-core}" conf="test"/>
-    <dependency org="net.bytebuddy" name="byte-buddy" rev="${/net.bytebuddy/byte-buddy}" conf="test"/>
-    <dependency org="org.objenesis" name="objenesis" rev="${/org.objenesis/objenesis}" conf="test"/>
-
-    <dependency org="com.fasterxml.jackson.core" name="jackson-core" rev="${/com.fasterxml.jackson.core/jackson-core}" conf="compile"/>
-    <dependency org="com.fasterxml.jackson.core" name="jackson-databind" rev="${/com.fasterxml.jackson.core/jackson-databind}" conf="compile"/>
-    <dependency org="com.fasterxml.jackson.core" name="jackson-annotations" rev="${/com.fasterxml.jackson.core/jackson-annotations}" conf="compile"/>
-    <dependency org="com.fasterxml.jackson.dataformat" name="jackson-dataformat-smile" rev="${/com.fasterxml.jackson.dataformat/jackson-dataformat-smile}" conf="compile"/>
-
-    <dependency org="org.apache.hadoop" name="hadoop-auth" rev="${/org.apache.hadoop/hadoop-auth}" conf="compile.hadoop"/>
-    <dependency org="org.apache.hadoop" name="hadoop-common" rev="${/org.apache.hadoop/hadoop-common}" conf="compile.hadoop"/>
-    <dependency org="org.apache.hadoop" name="hadoop-hdfs-client" rev="${/org.apache.hadoop/hadoop-hdfs-client}" conf="compile.hadoop"/>
-    <!--
-      hadoop-annotations is a runtime dependency,
-      so even though it is not a compile-time dependency, it is included
-      here as such so that it is included in the runtime distribution.
-     -->
-    <dependency org="org.apache.hadoop" name="hadoop-annotations" rev="${/org.apache.hadoop/hadoop-annotations}" conf="compile.hadoop"/>
-
-    <dependency org="org.apache.commons" name="commons-configuration2" rev="${/org.apache.commons/commons-configuration2}" conf="compile.hadoop"/>
-    <dependency org="commons-collections" name="commons-collections" rev="${/commons-collections/commons-collections}" conf="compile.hadoop"/>
-    <dependency org="com.github.ben-manes.caffeine" name="caffeine" rev="${/com.github.ben-manes.caffeine/caffeine}" conf="compile.hadoop"/>
-    <dependency org="com.google.re2j" name="re2j" rev="${/com.google.re2j/re2j}" conf="compile"/>
-    <dependency org="org.apache.commons" name="commons-lang3" rev="${/org.apache.commons/commons-lang3}" conf="compile.hadoop"/>
-    <dependency org="org.apache.htrace" name="htrace-core4" rev="${/org.apache.htrace/htrace-core4}" conf="compile.hadoop"/>
-
-    <dependency org="org.apache.curator" name="curator-framework" rev="${/org.apache.curator/curator-framework}" conf="compile.hadoop"/>
-    <dependency org="org.apache.curator" name="curator-client" rev="${/org.apache.curator/curator-client}" conf="compile.hadoop"/>
-    <dependency org="org.apache.curator" name="curator-recipes" rev="${/org.apache.curator/curator-recipes}" conf="compile.hadoop"/>
-
-    <!-- Hadoop auth framework needs kerby jars on server classpath -->
-    <dependency org="org.apache.kerby" name="kerb-core"  rev="${/org.apache.kerby/kerb-core}"  conf="compile.hadoop"/>
-    <dependency org="org.apache.kerby" name="kerb-util"  rev="${/org.apache.kerby/kerb-util}"  conf="compile.hadoop"/>
-    <dependency org="org.apache.kerby" name="kerby-asn1" rev="${/org.apache.kerby/kerby-asn1}" conf="compile.hadoop"/>
-    <dependency org="org.apache.kerby" name="kerby-pkix" rev="${/org.apache.kerby/kerby-pkix}" conf="compile.hadoop"/>
-
-    <!-- Hadoop DfsMiniCluster Dependencies-->
-    <dependency org="org.apache.hadoop" name="hadoop-common" rev="${/org.apache.hadoop/hadoop-common}" conf="test.DfsMiniCluster">
-      <artifact name="hadoop-common" type="test" ext="jar" maven:classifier="tests" />
-    </dependency>
-    <dependency org="org.apache.hadoop" name="hadoop-hdfs" rev="${/org.apache.hadoop/hadoop-hdfs}" conf="test.DfsMiniCluster">
-      <artifact name="hadoop-hdfs" ext="jar" />
-      <artifact name="hadoop-hdfs" type="test" ext="jar" maven:classifier="tests" />
-    </dependency>
-    <dependency org="com.sun.jersey" name="jersey-servlet" rev="${/com.sun.jersey/jersey-servlet}" conf="test.DfsMiniCluster"/>
-    <dependency org="commons-logging" name="commons-logging" rev="${/commons-logging/commons-logging}" conf="test.DfsMiniCluster"/>
-    <dependency org="org.apache.commons" name="commons-compress" rev="${/org.apache.commons/commons-compress}" conf="test.DfsMiniCluster"/>
-    <dependency org="org.apache.commons" name="commons-text" rev="${/org.apache.commons/commons-text}" conf="test.DfsMiniCluster"/>
-
-    <!-- Hadoop MiniKdc Dependencies-->
-    <dependency org="org.apache.hadoop" name="hadoop-minikdc" rev="${/org.apache.hadoop/hadoop-minikdc}" conf="test.MiniKdc"/>
-    <dependency org="org.apache.kerby" name="kerby-config" rev="${/org.apache.kerby/kerby-config}" conf="test.MiniKdc"/>
-    <dependency org="org.apache.kerby" name="kerb-client" rev="${/org.apache.kerby/kerb-client}" conf="test.MiniKdc"/>
-    <dependency org="org.apache.kerby" name="kerb-common" rev="${/org.apache.kerby/kerb-common}" conf="test.MiniKdc"/>
-    <dependency org="org.apache.kerby" name="kerb-crypto" rev="${/org.apache.kerby/kerb-crypto}" conf="test.MiniKdc"/>
-    <dependency org="org.apache.kerby" name="kerb-identity" rev="${/org.apache.kerby/kerb-identity}" conf="test.MiniKdc"/>
-    <dependency org="org.apache.kerby" name="kerb-server" rev="${/org.apache.kerby/kerb-server}" conf="test.MiniKdc"/>
-    <dependency org="org.apache.kerby" name="kerb-simplekdc" rev="${/org.apache.kerby/kerb-simplekdc}" conf="test.MiniKdc"/>
-    <dependency org="org.apache.kerby" name="kerby-util" rev="${/org.apache.kerby/kerby-util}" conf="test.MiniKdc"/>
-    <dependency org="org.apache.kerby" name="kerb-admin" rev="${/org.apache.kerby/kerb-admin}" conf="test.MiniKdc"/>
-    <dependency org="org.apache.kerby" name="kerby-kdc" rev="${/org.apache.kerby/kerby-kdc}" conf="test.MiniKdc"/>
-
-    <!-- StatsComponents percentiles Dependencies-->
-    <dependency org="com.tdunning" name="t-digest" rev="${/com.tdunning/t-digest}" conf="compile->*"/>
-
-    <!-- SQL Parser -->
-    <dependency org="org.apache.calcite" name="calcite-core" rev="${/org.apache.calcite/calcite-core}" conf="compile"/>
-    <dependency org="org.apache.calcite" name="calcite-linq4j" rev="${/org.apache.calcite/calcite-linq4j}" conf="compile"/>
-    <dependency org="org.apache.calcite.avatica" name="avatica-core" rev="${/org.apache.calcite.avatica/avatica-core}" conf="compile"/>
-    <dependency org="org.apache.commons" name="commons-lang3" rev="${/org.apache.commons/commons-lang3}" conf="compile"/>
-    <dependency org="net.hydromatic" name="eigenbase-properties" rev="${/net.hydromatic/eigenbase-properties}" conf="compile"/>
-    <dependency org="org.codehaus.janino" name="janino" rev="${/org.codehaus.janino/janino}" conf="compile"/>
-    <dependency org="org.codehaus.janino" name="commons-compiler" rev="${/org.codehaus.janino/commons-compiler}" conf="compile"/>
-    <dependency org="com.google.protobuf" name="protobuf-java" rev="${/com.google.protobuf/protobuf-java}" conf="compile"/>
-    <dependency org="com.jayway.jsonpath" name="json-path" rev="${/com.jayway.jsonpath/json-path}" conf="compile"/>
-
-    <!-- Package manager -->
-    <dependency org="com.github.zafarkhaja" name="java-semver" rev="${/com.github.zafarkhaja/java-semver}" conf="compile"/>
-
-    <dependency org="org.rrd4j" name="rrd4j" rev="${/org.rrd4j/rrd4j}" conf="compile"/>
-
-    <!-- JWT Auth plugin -->
-    <dependency org="org.bitbucket.b_c" name="jose4j" rev="${/org.bitbucket.b_c/jose4j}" conf="compile"/>
-
-    <exclude org="*" ext="*" matcher="regexp" type="${ivy.exclude.types}"/> 
-  </dependencies>
-</ivy-module>
diff --git a/solr/core/src/java/org/apache/solr/cloud/OverseerTaskProcessor.java b/solr/core/src/java/org/apache/solr/cloud/OverseerTaskProcessor.java
index 720222c..bcdec9a 100644
--- a/solr/core/src/java/org/apache/solr/cloud/OverseerTaskProcessor.java
+++ b/solr/core/src/java/org/apache/solr/cloud/OverseerTaskProcessor.java
@@ -341,8 +341,7 @@
             if (log.isDebugEnabled()) {
               log.debug("{}: Get the message id: {} message: {}", messageHandler.getName(), head.getId(), message);
             }
-            Runner runner = new Runner(messageHandler, message,
-                operation, head, lock);
+            Runner runner = new Runner(messageHandler, message, operation, head, lock);
             tpe.execute(runner);
           }
 
diff --git a/solr/core/src/java/org/apache/solr/cloud/RecoveryStrategy.java b/solr/core/src/java/org/apache/solr/cloud/RecoveryStrategy.java
index 2be35fb..a3e2f7e 100644
--- a/solr/core/src/java/org/apache/solr/cloud/RecoveryStrategy.java
+++ b/solr/core/src/java/org/apache/solr/cloud/RecoveryStrategy.java
@@ -58,7 +58,6 @@
 import org.apache.solr.request.SolrQueryRequest;
 import org.apache.solr.request.SolrRequestHandler;
 import org.apache.solr.search.SolrIndexSearcher;
-import org.apache.solr.update.CdcrUpdateLog;
 import org.apache.solr.update.CommitUpdateCommand;
 import org.apache.solr.update.PeerSyncWithLeader;
 import org.apache.solr.update.UpdateLog;
@@ -239,14 +238,8 @@
     }
 
     ModifiableSolrParams solrParams = new ModifiableSolrParams();
-    solrParams.set(ReplicationHandler.MASTER_URL, leaderUrl);
-    solrParams.set(ReplicationHandler.SKIP_COMMIT_ON_MASTER_VERSION_ZERO, replicaType == Replica.Type.TLOG);
-    // always download the tlogs from the leader when running with cdcr enabled. We need to have all the tlogs
-    // to ensure leader failover doesn't cause missing docs on the target
-    if (core.getUpdateHandler().getUpdateLog() != null
-        && core.getUpdateHandler().getUpdateLog() instanceof CdcrUpdateLog) {
-      solrParams.set(ReplicationHandler.TLOG_FILES, true);
-    }
+    solrParams.set(ReplicationHandler.LEADER_URL, leaderUrl);
+    solrParams.set(ReplicationHandler.SKIP_COMMIT_ON_LEADER_VERSION_ZERO, replicaType == Replica.Type.TLOG);
 
     if (isClosed()) return; // we check closed on return
     boolean success = replicationHandler.doFetch(solrParams, false).getSuccessful();
diff --git a/solr/core/src/java/org/apache/solr/cloud/ReplicateFromLeader.java b/solr/core/src/java/org/apache/solr/cloud/ReplicateFromLeader.java
index 17a6ec3..7e2b872 100644
--- a/solr/core/src/java/org/apache/solr/cloud/ReplicateFromLeader.java
+++ b/solr/core/src/java/org/apache/solr/cloud/ReplicateFromLeader.java
@@ -76,12 +76,12 @@
       }
       log.info("Will start replication from leader with poll interval: {}", pollIntervalStr );
 
-      NamedList<Object> slaveConfig = new NamedList<>();
-      slaveConfig.add("fetchFromLeader", Boolean.TRUE);
-      slaveConfig.add(ReplicationHandler.SKIP_COMMIT_ON_MASTER_VERSION_ZERO, switchTransactionLog);
-      slaveConfig.add("pollInterval", pollIntervalStr);
+      NamedList<Object> followerConfig = new NamedList<>();
+      followerConfig.add("fetchFromLeader", Boolean.TRUE);
+      followerConfig.add(ReplicationHandler.SKIP_COMMIT_ON_LEADER_VERSION_ZERO, switchTransactionLog);
+      followerConfig.add("pollInterval", pollIntervalStr);
       NamedList<Object> replicationConfig = new NamedList<>();
-      replicationConfig.add("slave", slaveConfig);
+      replicationConfig.add("follower", followerConfig);
 
       String lastCommitVersion = getCommitVersion(core);
       if (lastCommitVersion != null) {
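
The rename above is the follower half of the leader/follower terminology switch: the NamedList mirrors what a `<lst name="follower">` section of the `/replication` handler would carry in solrconfig.xml. A self-contained sketch of the structure, with an illustrative poll interval:

```java
import org.apache.solr.common.util.NamedList;

public class FollowerConfigSketch {
  public static void main(String[] args) {
    // Mirrors the NamedList assembled above; "00:00:03" is an illustrative
    // poll interval, not a value taken from the code.
    NamedList<Object> follower = new NamedList<>();
    follower.add("fetchFromLeader", Boolean.TRUE);
    follower.add("pollInterval", "00:00:03");

    NamedList<Object> replicationConfig = new NamedList<>();
    replicationConfig.add("follower", follower);
    System.out.println(replicationConfig);
  }
}
```
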
diff --git a/solr/core/src/java/org/apache/solr/cloud/api/collections/DeleteCollectionCmd.java b/solr/core/src/java/org/apache/solr/cloud/api/collections/DeleteCollectionCmd.java
index 70d8d2b..352d9e9 100644
--- a/solr/core/src/java/org/apache/solr/cloud/api/collections/DeleteCollectionCmd.java
+++ b/solr/core/src/java/org/apache/solr/cloud/api/collections/DeleteCollectionCmd.java
@@ -45,6 +45,7 @@
 import org.apache.solr.common.util.Utils;
 import org.apache.solr.core.SolrInfoBean;
 import org.apache.solr.core.snapshots.SolrSnapshotManager;
+import org.apache.solr.handler.admin.ConfigSetsHandlerApi;
 import org.apache.solr.handler.admin.MetricsHistoryHandler;
 import org.apache.solr.metrics.SolrMetricManager;
 import org.apache.zookeeper.KeeperException;
@@ -160,6 +161,33 @@
         });
       }
 
+      // delete the related config set iff it is auto-generated AND not used by any other collection
+      String configSetName = zkStateReader.readConfigName(collection);
+
+      if (ConfigSetsHandlerApi.isAutoGeneratedConfigSet(configSetName)) {
+        boolean configSetIsUsedByOtherCollection = false;
+
+        // make sure the configSet is not shared with other collections
+        // Similar to what happens in: OverseerConfigSetMessageHandler::deleteConfigSet
+        for (Map.Entry<String, DocCollection> entry : zkStateReader.getClusterState().getCollectionsMap().entrySet()) {
+          String otherConfigSetName = null;
+          try {
+            otherConfigSetName = zkStateReader.readConfigName(entry.getKey());
+          } catch (KeeperException ex) {
+            // ignore 'no config found' errors
+          }
+          if (configSetName.equals(otherConfigSetName)) {
+            configSetIsUsedByOtherCollection = true;
+            break;
+          }
+        }
+
+        if (!configSetIsUsedByOtherCollection) {
+          // delete the config set
+          zkStateReader.getConfigManager().deleteConfigDir(configSetName);
+        }
+      }
+
 //      TimeOut timeout = new TimeOut(60, TimeUnit.SECONDS, timeSource);
 //      boolean removed = false;
 //      while (! timeout.hasTimedOut()) {
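
The added block deletes an auto-generated config set together with its collection, but only after scanning cluster state for any other collection that still references it. A standalone sketch of that sharing check; the map stands in for the ZkStateReader lookups, and unlike the code above it excludes the collection being deleted explicitly instead of relying on it already being absent from cluster state:

```java
import java.util.Map;

public final class ConfigSetUsageSketch {
  private ConfigSetUsageSketch() {}

  /** True if any collection other than the one being deleted still uses configSet. */
  static boolean isUsedByAnotherCollection(Map<String, String> collectionToConfigSet,
                                           String configSet, String collectionBeingDeleted) {
    for (Map.Entry<String, String> e : collectionToConfigSet.entrySet()) {
      if (!e.getKey().equals(collectionBeingDeleted) && configSet.equals(e.getValue())) {
        return true; // shared; the config set must survive the delete
      }
    }
    return false;
  }
}
```
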
diff --git a/solr/core/src/java/org/apache/solr/cloud/api/collections/DeleteReplicaCmd.java b/solr/core/src/java/org/apache/solr/cloud/api/collections/DeleteReplicaCmd.java
index ff168c4..4d7975d 100644
--- a/solr/core/src/java/org/apache/solr/cloud/api/collections/DeleteReplicaCmd.java
+++ b/solr/core/src/java/org/apache/solr/cloud/api/collections/DeleteReplicaCmd.java
@@ -62,7 +62,6 @@
 
   @Override
   @SuppressWarnings("unchecked")
-
   public void call(ClusterState clusterState, ZkNodeProps message, @SuppressWarnings({"rawtypes"})NamedList results) throws Exception {
     deleteReplica(clusterState, message, results,null);
   }
diff --git a/solr/core/src/java/org/apache/solr/core/ConfigSetService.java b/solr/core/src/java/org/apache/solr/core/ConfigSetService.java
index 22dacdf..c77d0b4 100644
--- a/solr/core/src/java/org/apache/solr/core/ConfigSetService.java
+++ b/solr/core/src/java/org/apache/solr/core/ConfigSetService.java
@@ -81,8 +81,8 @@
               ) ? false: true;
 
       SolrConfig solrConfig = createSolrConfig(dcore, coreLoader, trusted);
-      ConfigSet.SchemaSupplier schema = force -> createIndexSchema(dcore, solrConfig, force);
-      return new ConfigSet(configSetName(dcore), solrConfig, schema, properties, trusted);
+      IndexSchema indexSchema = createIndexSchema(dcore, solrConfig, false);
+      return new ConfigSet(configSetName(dcore), solrConfig, force -> indexSchema, properties, trusted);
     } catch (Exception e) {
       throw new SolrException(SolrException.ErrorCode.SERVER_ERROR,
           "Could not load conf for core " + dcore.getName() +
@@ -135,7 +135,6 @@
       if (modVersion != null) {
         // note: luceneMatchVersion influences the schema
         String cacheKey = configSet + "/" + guessSchemaName + "/" + modVersion + "/" + solrConfig.luceneMatchVersion;
-        if(forceFetch) schemaCache.invalidate(cacheKey);
         return schemaCache.get(cacheKey,
             (key) -> indexSchemaFactory.create(cdSchemaName, solrConfig));
       } else {
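
With the `forceFetch` invalidation removed, correctness rests entirely on the cache key, which folds in every input that can change the parsed schema: config set, schema name, znode modification version, and `luceneMatchVersion`. A minimal sketch of that version-keyed cache using Caffeine (already a Solr dependency); the value type is a plain `Object` stand-in for `IndexSchema`:

```java
import com.github.benmanes.caffeine.cache.Cache;
import com.github.benmanes.caffeine.cache.Caffeine;

public class SchemaCacheSketch {
  private final Cache<String, Object> schemaCache =
      Caffeine.newBuilder().maximumSize(128).build();

  /** The key bakes in everything that can change the parsed schema. */
  Object getSchema(String configSet, String schemaName, long modVersion, String luceneMatchVersion) {
    String key = configSet + "/" + schemaName + "/" + modVersion + "/" + luceneMatchVersion;
    // A new znode version yields a new key, so a changed schema is re-parsed
    // without any explicit invalidation.
    return schemaCache.get(key, k -> parseSchema(configSet, schemaName));
  }

  private Object parseSchema(String configSet, String schemaName) {
    return new Object(); // placeholder for IndexSchemaFactory.create(...)
  }
}
```
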
diff --git a/solr/core/src/java/org/apache/solr/core/CoreContainer.java b/solr/core/src/java/org/apache/solr/core/CoreContainer.java
index da95aab..dd1b963 100644
--- a/solr/core/src/java/org/apache/solr/core/CoreContainer.java
+++ b/solr/core/src/java/org/apache/solr/core/CoreContainer.java
@@ -719,6 +719,11 @@
     createHandler(ZK_PATH, ZookeeperInfoHandler.class.getName(), ZookeeperInfoHandler.class);
     createHandler(ZK_STATUS_PATH, ZookeeperStatusHandler.class.getName(), ZookeeperStatusHandler.class);
     collectionsHandler = createHandler(COLLECTIONS_HANDLER_PATH, cfg.getCollectionsHandlerClass(), CollectionsHandler.class);
+    /*
+     * HealthCheckHandler needs to be initialized before InfoHandler, since the latter calls CoreContainer.getHealthCheckHandler().
+     * We don't register the handler here because it will be registered inside InfoHandler.
+     */
+    healthCheckHandler = loader.newInstance(cfg.getHealthCheckHandlerClass(), HealthCheckHandler.class, null, new Class<?>[]{CoreContainer.class}, new Object[]{this});
     infoHandler = createHandler(INFO_HANDLER_PATH, cfg.getInfoHandlerClass(), InfoHandler.class);
     coreAdminHandler = createHandler(CORES_HANDLER_PATH, cfg.getCoreAdminHandlerClass(), CoreAdminHandler.class);
     configSetsHandler = createHandler(CONFIGSETS_HANDLER_PATH, cfg.getConfigSetsHandlerClass(), ConfigSetsHandler.class);
@@ -953,7 +958,7 @@
             (authorizationPlugin != null) ? "enabled" : "disabled");
     }
 
-    if (authenticationPlugin !=null && StringUtils.isNotEmpty(System.getProperty("solr.jetty.https.port"))) {
+    if (authenticationPlugin != null && StringUtils.isEmpty(System.getProperty("solr.jetty.https.port"))) {
       log.warn("Solr authentication is enabled, but SSL is off.  Consider enabling SSL to protect user credentials and data with encryption.");
     }
   }
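
The second hunk is a logic fix rather than a rename: the old `isNotEmpty(...)` test fired the "SSL is off" warning exactly when the HTTPS port *was* configured. A reduced sketch of the corrected predicate, with the null/empty check inlined to stay import-neutral:

```java
public class SslWarningSketch {
  /** Warn only when authentication is enabled and no HTTPS port is set, i.e. SSL is actually off. */
  static boolean shouldWarnAboutPlaintextAuth(Object authenticationPlugin) {
    String httpsPort = System.getProperty("solr.jetty.https.port");
    return authenticationPlugin != null && (httpsPort == null || httpsPort.isEmpty());
  }
}
```
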
diff --git a/solr/core/src/java/org/apache/solr/core/MemClassLoader.java b/solr/core/src/java/org/apache/solr/core/MemClassLoader.java
deleted file mode 100644
index 03e4de2..0000000
--- a/solr/core/src/java/org/apache/solr/core/MemClassLoader.java
+++ /dev/null
@@ -1,203 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.solr.core;
-
-import java.io.IOException;
-import java.io.InputStream;
-import java.lang.invoke.MethodHandles;
-import java.net.MalformedURLException;
-import java.net.URL;
-import java.nio.ByteBuffer;
-import java.security.CodeSource;
-import java.security.ProtectionDomain;
-import java.security.cert.Certificate;
-import java.util.ArrayList;
-import java.util.HashMap;
-import java.util.List;
-import java.util.Map;
-import java.util.concurrent.atomic.AtomicReference;
-
-import org.apache.lucene.analysis.util.ResourceLoader;
-import org.apache.solr.common.SolrException;
-import org.apache.solr.common.params.CollectionAdminParams;
-import org.apache.solr.common.util.StrUtils;
-import org.slf4j.Logger;
-import org.slf4j.LoggerFactory;
-
-
-public class MemClassLoader extends ClassLoader implements AutoCloseable, ResourceLoader {
-  private static final Logger log = LoggerFactory.getLogger(MethodHandles.lookup().lookupClass());
-  private boolean allJarsLoaded = false;
-  private final SolrResourceLoader parentLoader;
-  private List<PluginBag.RuntimeLib> libs = new ArrayList<>();
-  @SuppressWarnings("rawtypes")
-  private Map<String, Class> classCache = new HashMap<>();
-  private List<String> errors = new ArrayList<>();
-
-
-  public MemClassLoader(List<PluginBag.RuntimeLib> libs, SolrResourceLoader resourceLoader) {
-    this.parentLoader = resourceLoader;
-    this.libs = libs;
-  }
-
-  synchronized void loadRemoteJars() {
-    if (allJarsLoaded) return;
-    int count = 0;
-    for (PluginBag.RuntimeLib lib : libs) {
-      if (lib.getUrl() != null) {
-        try {
-          lib.loadJar();
-          lib.verify();
-        } catch (Exception e) {
-          log.error("Error loading runtime library", e);
-        }
-        count++;
-      }
-    }
-    if (count == libs.size()) allJarsLoaded = true;
-  }
-
-  public synchronized void loadJars() {
-    if (allJarsLoaded) return;
-
-    for (PluginBag.RuntimeLib lib : libs) {
-      try {
-        lib.loadJar();
-        lib.verify();
-      } catch (Exception exception) {
-        errors.add(exception.getMessage());
-        if (exception instanceof SolrException) throw (SolrException) exception;
-        throw new SolrException(SolrException.ErrorCode.SERVER_ERROR, "Atleast one runtimeLib could not be loaded", exception);
-      }
-    }
-    allJarsLoaded = true;
-  }
-
-
-  @Override
-  protected Class<?> findClass(String name) throws ClassNotFoundException {
-    if(!allJarsLoaded ) loadJars();
-    try {
-      return parentLoader.findClass(name, Object.class);
-    } catch (Exception e) {
-      return loadFromRuntimeLibs(name);
-    }
-  }
-
-  @SuppressWarnings({"rawtypes"})
-  private synchronized  Class<?> loadFromRuntimeLibs(String name) throws ClassNotFoundException {
-    Class result = classCache.get(name);
-    if(result != null)
-      return result;
-    AtomicReference<String> jarName = new AtomicReference<>();
-    ByteBuffer buf = null;
-    try {
-      buf = getByteBuffer(name, jarName);
-    } catch (Exception e) {
-      throw new ClassNotFoundException("class could not be loaded " + name + (errors.isEmpty()? "": "Some dynamic libraries could not be loaded: "+ StrUtils.join(errors, '|')), e);
-    }
-    if (buf == null) throw new ClassNotFoundException("Class not found :" + name);
-    ProtectionDomain defaultDomain = null;
-    //using the default protection domain, with no permissions
-    try {
-      defaultDomain = new ProtectionDomain(new CodeSource(new URL("http://localhost/" + CollectionAdminParams.SYSTEM_COLL + "/blob/" + jarName.get()), (Certificate[]) null),
-          null);
-    } catch (MalformedURLException mue) {
-      throw new ClassNotFoundException("Unexpected exception ", mue);
-      //should not happen
-    }
-    log.info("Defining_class {} from runtime jar {} ", name, jarName);
-
-    result = defineClass(name, buf.array(), buf.arrayOffset(), buf.limit(), defaultDomain);
-    classCache.put(name, result);
-    return result;
-  }
-
-  private ByteBuffer getByteBuffer(String name, AtomicReference<String> jarName) throws Exception {
-    if (!allJarsLoaded) {
-      loadJars();
-
-    }
-
-    String path = name.replace('.', '/').concat(".class");
-    ByteBuffer buf = null;
-    for (PluginBag.RuntimeLib lib : libs) {
-      try {
-        buf = lib.getFileContent(path);
-        if (buf != null) {
-          jarName.set(lib.getName());
-          break;
-        }
-      } catch (Exception exp) {
-        throw new ClassNotFoundException("Unable to load class :" + name, exp);
-      }
-    }
-
-    return buf;
-  }
-
-  @Override
-  public void close() {
-    for (PluginBag.RuntimeLib lib : libs) {
-      try {
-        lib.close();
-      } catch (Exception e) {
-        log.error("Error closing lib {}", lib.getName(), e);
-      }
-    }
-  }
-
-  @Override
-  public InputStream openResource(String resource) throws IOException {
-    AtomicReference<String> jarName = new AtomicReference<>();
-    try {
-      ByteBuffer buf = getByteBuffer(resource, jarName);
-      if (buf == null) throw new IOException("Resource could not be found " + resource);
-    } catch (Exception e) {
-      throw new IOException("Resource could not be found " + resource, e);
-    }
-    return null;
-  }
-
-  @Override
-  public <T> Class<? extends T> findClass(String cname, Class<T> expectedType) {
-    if(!allJarsLoaded ) loadJars();
-    try {
-      return findClass(cname).asSubclass(expectedType);
-    } catch (Exception e) {
-      if (e instanceof SolrException) {
-        throw (SolrException) e;
-      } else {
-        throw new SolrException(SolrException.ErrorCode.SERVER_ERROR, "error loading class " + cname, e);
-      }
-    }
-
-  }
-
-  @Override
-  public <T> T newInstance(String cname, Class<T> expectedType) {
-    try {
-      return findClass(cname, expectedType).getConstructor().newInstance();
-    } catch (SolrException e) {
-      throw e;
-    } catch (Exception e) {
-      throw new SolrException(SolrException.ErrorCode.SERVER_ERROR, "error instantiating class :" + cname, e);
-    }
-  }
-
-
-}
diff --git a/solr/core/src/java/org/apache/solr/core/PluginBag.java b/solr/core/src/java/org/apache/solr/core/PluginBag.java
index d2442a1..e868ada 100644
--- a/solr/core/src/java/org/apache/solr/core/PluginBag.java
+++ b/solr/core/src/java/org/apache/solr/core/PluginBag.java
@@ -16,11 +16,8 @@
  */
 package org.apache.solr.core;
 
-import java.io.ByteArrayInputStream;
 import java.io.IOException;
 import java.lang.invoke.MethodHandles;
-import java.nio.ByteBuffer;
-import java.util.ArrayList;
 import java.util.Collection;
 import java.util.Collections;
 import java.util.HashMap;
@@ -31,25 +28,18 @@
 import java.util.concurrent.ConcurrentHashMap;
 import java.util.function.Supplier;
 import java.util.stream.Collectors;
-import java.util.zip.ZipEntry;
-import java.util.zip.ZipInputStream;
 
 import org.apache.lucene.analysis.util.ResourceLoader;
 import org.apache.lucene.analysis.util.ResourceLoaderAware;
 import org.apache.solr.api.Api;
 import org.apache.solr.api.ApiBag;
 import org.apache.solr.api.ApiSupport;
-import org.apache.solr.cloud.CloudUtil;
 import org.apache.solr.common.SolrException;
 import org.apache.solr.common.util.StrUtils;
 import org.apache.solr.handler.RequestHandlerBase;
 import org.apache.solr.handler.component.SearchComponent;
 import org.apache.solr.pkg.PackagePluginHolder;
 import org.apache.solr.request.SolrRequestHandler;
-import org.apache.solr.update.processor.UpdateRequestProcessorChain;
-import org.apache.solr.update.processor.UpdateRequestProcessorFactory;
-import org.apache.solr.util.CryptoKeys;
-import org.apache.solr.util.SimplePostTool;
 import org.apache.solr.util.plugin.NamedListInitializedPlugin;
 import org.apache.solr.util.plugin.PluginInfoInitialized;
 import org.apache.solr.util.plugin.SolrCoreAware;
@@ -58,7 +48,6 @@
 
 import static java.util.Collections.singletonMap;
 import static org.apache.solr.api.ApiBag.HANDLER_NAME;
-import static org.apache.solr.common.params.CommonParams.NAME;
 
 /**
  * This manages the lifecycle of a set of plugins of the same type.
@@ -130,22 +119,11 @@
 
   @SuppressWarnings({"unchecked"})
   public PluginHolder<T> createPlugin(PluginInfo info) {
-    if ("true".equals(String.valueOf(info.attributes.get("runtimeLib")))) {
-      if (log.isDebugEnabled()) {
-        log.debug(" {} : '{}'  created with runtimeLib=true ", meta.getCleanTag(), info.name);
-      }
-      LazyPluginHolder<T> holder = new LazyPluginHolder<>(meta, info, core, RuntimeLib.isEnabled() ?
-          core.getMemClassLoader() :
-          core.getResourceLoader(), true);
-
-      return meta.clazz == UpdateRequestProcessorFactory.class ?
-          (PluginHolder<T>) new UpdateRequestProcessorChain.LazyUpdateProcessorFactoryHolder(holder) :
-          holder;
-    } else if ("lazy".equals(info.attributes.get("startup")) && meta.options.contains(SolrConfig.PluginOpts.LAZY)) {
+   if ("lazy".equals(info.attributes.get("startup")) && meta.options.contains(SolrConfig.PluginOpts.LAZY)) {
       if (log.isDebugEnabled()) {
         log.debug("{} : '{}' created with startup=lazy ", meta.getCleanTag(), info.name);
       }
-      return new LazyPluginHolder<T>(meta, info, core, core.getResourceLoader(), false);
+      return new LazyPluginHolder<T>(meta, info, core, core.getResourceLoader());
     } else {
       if (info.pkgName != null) {
         PackagePluginHolder<T> holder = new PackagePluginHolder<>(info, core, meta);
@@ -433,22 +411,13 @@
     protected SolrException solrException;
     private final SolrCore core;
     protected ResourceLoader resourceLoader;
-    private final boolean isRuntimeLib;
 
 
-    LazyPluginHolder(SolrConfig.SolrPluginInfo pluginMeta, PluginInfo pluginInfo, SolrCore core, ResourceLoader loader, boolean isRuntimeLib) {
+    LazyPluginHolder(SolrConfig.SolrPluginInfo pluginMeta, PluginInfo pluginInfo, SolrCore core, ResourceLoader loader) {
       super(pluginInfo);
       this.pluginMeta = pluginMeta;
-      this.isRuntimeLib = isRuntimeLib;
       this.core = core;
       this.resourceLoader = loader;
-      if (loader instanceof MemClassLoader) {
-        if (!RuntimeLib.isEnabled()) {
-          String s = "runtime library loading is not enabled, start Solr with -Denable.runtime.lib=true";
-          log.warn(s);
-          solrException = new SolrException(SolrException.ErrorCode.SERVER_ERROR, s);
-        }
-      }
     }
 
     @Override
@@ -474,24 +443,14 @@
       if (log.isInfoEnabled()) {
         log.info("Going to create a new {} with {} ", pluginMeta.getCleanTag(), pluginInfo);
       }
-      if (resourceLoader instanceof MemClassLoader) {
-        MemClassLoader loader = (MemClassLoader) resourceLoader;
-        loader.loadJars();
-      }
+
       @SuppressWarnings({"unchecked"})
       Class<T> clazz = (Class<T>) pluginMeta.clazz;
       T localInst = null;
       try {
         localInst = SolrCore.createInstance(pluginInfo.className, clazz, pluginMeta.getCleanTag(), null, resourceLoader);
       } catch (SolrException e) {
-        if (isRuntimeLib && !(resourceLoader instanceof MemClassLoader)) {
-          throw new SolrException(SolrException.ErrorCode.getErrorCode(e.code()),
-              e.getMessage() + ". runtime library loading is not enabled, start Solr with -Denable.runtime.lib=true",
-              e.getCause());
-        }
         throw e;
-
-
       }
       initInstance(localInst, pluginInfo);
       if (localInst instanceof SolrCoreAware) {
@@ -511,165 +470,6 @@
     }
   }
 
-  /**
-   * This represents a Runtime Jar. A jar requires two details: name and version.
-   */
-  public static class RuntimeLib implements PluginInfoInitialized, AutoCloseable {
-    private String name, version, sig, sha512, url;
-    private BlobRepository.BlobContentRef<ByteBuffer> jarContent;
-    private final CoreContainer coreContainer;
-    private boolean verified = false;
-
-    @Override
-    public void init(PluginInfo info) {
-      name = info.attributes.get(NAME);
-      url = info.attributes.get("url");
-      sig = info.attributes.get("sig");
-      if(url == null) {
-        Object v = info.attributes.get("version");
-        if (name == null || v == null) {
-          throw new SolrException(SolrException.ErrorCode.SERVER_ERROR, "runtimeLib must have name and version");
-        }
-        version = String.valueOf(v);
-      } else {
-        sha512 = info.attributes.get("sha512");
-        if(sha512 == null){
-          throw new SolrException(SolrException.ErrorCode.BAD_REQUEST, "runtimeLib with url must have a 'sha512' attribute");
-        }
-        ByteBuffer buf = null;
-        buf = coreContainer.getBlobRepository().fetchFromUrl(name, url);
-
-        String digest = BlobRepository.sha512Digest(buf);
-        if(!sha512.equals(digest))  {
-          throw new SolrException(SolrException.ErrorCode.SERVER_ERROR, StrUtils.formatString(BlobRepository.INVALID_JAR_MSG, url, sha512, digest)  );
-        }
-        log.info("dynamic library verified {}, sha512: {}", url, sha512);
-      }
-    }
-
-    public RuntimeLib(SolrCore core) {
-      coreContainer = core.getCoreContainer();
-    }
-
-    public String getUrl(){
-      return url;
-    }
-
-    @SuppressWarnings({"unchecked"})
-    void loadJar() {
-      if (jarContent != null) return;
-      synchronized (this) {
-        if (jarContent != null) return;
-
-        jarContent = url == null?
-            coreContainer.getBlobRepository().getBlobIncRef(name + "/" + version):
-            coreContainer.getBlobRepository().getBlobIncRef(name, null,url,sha512);
-
-      }
-    }
-
-    public static boolean isEnabled() {
-      return Boolean.getBoolean("enable.runtime.lib");
-    }
-
-    public String getName() {
-      return name;
-    }
-
-    public String getVersion() {
-      return version;
-    }
-
-    public String getSig() {
-      return sig;
-
-    }
-
-    public ByteBuffer getFileContent(String entryName) throws IOException {
-      if (jarContent == null)
-        throw new SolrException(SolrException.ErrorCode.SERVER_ERROR, "jar not available: " + name  );
-      return getFileContent(jarContent.blob, entryName);
-
-    }
-
-    public ByteBuffer getFileContent(BlobRepository.BlobContent<ByteBuffer> blobContent,  String entryName) throws IOException {
-      ByteBuffer buff = blobContent.get();
-      ByteArrayInputStream zipContents = new ByteArrayInputStream(buff.array(), buff.arrayOffset(), buff.limit());
-      ZipInputStream zis = new ZipInputStream(zipContents);
-      try {
-        ZipEntry entry;
-        while ((entry = zis.getNextEntry()) != null) {
-          if (entryName == null || entryName.equals(entry.getName())) {
-            SimplePostTool.BAOS out = new SimplePostTool.BAOS();
-            byte[] buffer = new byte[2048];
-            int size;
-            while ((size = zis.read(buffer, 0, buffer.length)) != -1) {
-              out.write(buffer, 0, size);
-            }
-            out.close();
-            return out.getByteBuffer();
-          }
-        }
-      } finally {
-        zis.closeEntry();
-      }
-      return null;
-    }
-
-
-    @Override
-    public void close() {
-      if (jarContent != null) coreContainer.getBlobRepository().decrementBlobRefCount(jarContent);
-    }
-
-    public static List<RuntimeLib> getLibObjects(SolrCore core, List<PluginInfo> libs) {
-      List<RuntimeLib> l = new ArrayList<>(libs.size());
-      for (PluginInfo lib : libs) {
-        RuntimeLib rtl = new RuntimeLib(core);
-        try {
-          rtl.init(lib);
-        } catch (Exception e) {
-          log.error("error loading runtime library", e);
-        }
-        l.add(rtl);
-      }
-      return l;
-    }
-
-    public void verify() throws Exception {
-      if (verified) return;
-      if (jarContent == null) {
-        log.error("Calling verify before loading the jar");
-        return;
-      }
-
-      if (!coreContainer.isZooKeeperAware())
-        throw new SolrException(SolrException.ErrorCode.SERVER_ERROR, "Signing jar is possible only in cloud");
-      Map<String, byte[]> keys = CloudUtil.getTrustedKeys(coreContainer.getZkController().getZkClient(), "exe");
-      if (keys.isEmpty()) {
-        if (sig == null) {
-          verified = true;
-          return;
-        } else {
-          throw new SolrException(SolrException.ErrorCode.SERVER_ERROR, "No public keys are available in ZK to verify signature for runtime lib  " + name);
-        }
-      } else if (sig == null) {
-        throw new SolrException(SolrException.ErrorCode.SERVER_ERROR, StrUtils.formatString("runtimelib {0} should be signed with one of the keys in ZK /keys/exe ", name));
-      }
-
-      try {
-        String matchedKey = new CryptoKeys(keys).verify(sig, jarContent.blob.get());
-        if (matchedKey == null)
-          throw new SolrException(SolrException.ErrorCode.SERVER_ERROR, "No key matched signature for jar : " + name + " version: " + version);
-        log.info("Jar {} signed with {} successfully verified", name, matchedKey);
-      } catch (Exception e) {
-        if (e instanceof SolrException) throw e;
-        throw new SolrException(SolrException.ErrorCode.SERVER_ERROR, "Error verifying key ", e);
-      }
-    }
-  }
-
-
   public Api v2lookup(String path, String method, Map<String, String> parts) {
     if (apiBag == null) {
       throw new SolrException(SolrException.ErrorCode.SERVER_ERROR, "this should not happen, looking up for v2 API at the wrong place");
diff --git a/solr/core/src/java/org/apache/solr/core/SchemaCodecFactory.java b/solr/core/src/java/org/apache/solr/core/SchemaCodecFactory.java
index 6fc3629..edad01e 100644
--- a/solr/core/src/java/org/apache/solr/core/SchemaCodecFactory.java
+++ b/solr/core/src/java/org/apache/solr/core/SchemaCodecFactory.java
@@ -23,8 +23,8 @@
 import org.apache.lucene.codecs.Codec;
 import org.apache.lucene.codecs.DocValuesFormat;
 import org.apache.lucene.codecs.PostingsFormat;
-import org.apache.lucene.codecs.lucene50.Lucene50StoredFieldsFormat.Mode;
-import org.apache.lucene.codecs.lucene86.Lucene86Codec;
+import org.apache.lucene.codecs.lucene87.Lucene87StoredFieldsFormat.Mode;
+import org.apache.lucene.codecs.lucene87.Lucene87Codec;
 import org.apache.solr.common.SolrException;
 import org.apache.solr.common.SolrException.ErrorCode;
 import org.apache.solr.common.util.NamedList;
@@ -92,7 +92,7 @@
       compressionMode = SOLR_DEFAULT_COMPRESSION_MODE;
       log.debug("Using default compressionMode: {}", compressionMode);
     }
-    codec = new Lucene86Codec(compressionMode) {
+    codec = new Lucene87Codec(compressionMode) {
       @Override
       public PostingsFormat getPostingsFormatForField(String field) {
         final SchemaField schemaField = core.getLatestSchema().getFieldOrNull(field);
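
The codec bump changes the base class but not the factory's job: route each field to the postings format its schema type requests. The same dispatch, expressed directly against Lucene's `PerFieldPostingsFormat`; the format names are illustrative, and "Direct" requires the lucene-codecs module on the classpath:

```java
import org.apache.lucene.codecs.PostingsFormat;
import org.apache.lucene.codecs.perfield.PerFieldPostingsFormat;

public class PerFieldSketch extends PerFieldPostingsFormat {
  @Override
  public PostingsFormat getPostingsFormatForField(String field) {
    // A field type could request e.g. postingsFormat="Direct" in the schema;
    // forName resolves the implementation via SPI.
    if ("id".equals(field)) {
      return PostingsFormat.forName("Direct"); // illustrative per-field override
    }
    return PostingsFormat.forName("Lucene84"); // the 8.x default postings format
  }
}
```
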
diff --git a/solr/core/src/java/org/apache/solr/core/SolrClassLoader.java b/solr/core/src/java/org/apache/solr/core/SolrClassLoader.java
deleted file mode 100644
index 7973b63..0000000
--- a/solr/core/src/java/org/apache/solr/core/SolrClassLoader.java
+++ /dev/null
@@ -1,29 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.solr.core;
-
-
-/** A generic interface to load plugin classes */
-public interface SolrClassLoader {
-
-    <T> T newInstance(String cname, Class<T> expectedType, String... subpackages);
-
-    @SuppressWarnings({"rawtypes"})
-    <T> T newInstance(String cName, Class<T> expectedType, String[] subPackages, Class[] params, Object[] args);
-
-    <T> Class<? extends T> findClass(String cname, Class<T> expectedType);
-}
\ No newline at end of file
diff --git a/solr/core/src/java/org/apache/solr/core/SolrConfig.java b/solr/core/src/java/org/apache/solr/core/SolrConfig.java
index 8861799..bc1f055 100644
--- a/solr/core/src/java/org/apache/solr/core/SolrConfig.java
+++ b/solr/core/src/java/org/apache/solr/core/SolrConfig.java
@@ -78,6 +78,7 @@
 import org.apache.solr.update.processor.UpdateRequestProcessorChain;
 import org.apache.solr.update.processor.UpdateRequestProcessorFactory;
 import org.apache.solr.util.DOMUtil;
+import org.apache.solr.util.circuitbreaker.CircuitBreakerManager;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 import org.w3c.dom.Node;
@@ -227,14 +228,7 @@
     queryResultWindowSize = Math.max(1, getInt("query/queryResultWindowSize", 1));
     queryResultMaxDocsCached = getInt("query/queryResultMaxDocsCached", Integer.MAX_VALUE);
     enableLazyFieldLoading = getBool("query/enableLazyFieldLoading", false);
-
-    useCircuitBreakers = getBool("circuitBreaker/useCircuitBreakers", false);
-    memoryCircuitBreakerThresholdPct = getInt("circuitBreaker/memoryCircuitBreakerThresholdPct", 95);
-
-    validateMemoryBreakerThreshold();
     
-    useRangeVersionsForPeerSync = getBool("peerSync/useRangeVersions", true);
-
     filterCacheConfig = CacheConfig.getConfig(this, "query/filterCache");
     queryResultCacheConfig = CacheConfig.getConfig(this, "query/queryResultCache");
     documentCacheConfig = CacheConfig.getConfig(this, "query/documentCache");
@@ -347,7 +341,6 @@
           // and even then -- only if there is a single SpellCheckComponent
           // because of queryConverter.setIndexAnalyzer
       .add(new SolrPluginInfo(QueryConverter.class, "queryConverter", REQUIRE_NAME, REQUIRE_CLASS))
-      .add(new SolrPluginInfo(PluginBag.RuntimeLib.class, "runtimeLib", REQUIRE_NAME, MULTI_OK))
           // this is hackish, since it picks up all SolrEventListeners,
           // regardless of when/how/why they are used (or even if they are
           // declared outside of the appropriate context) but there's no nice
@@ -365,6 +358,7 @@
       .add(new SolrPluginInfo(IndexSchemaFactory.class, "schemaFactory", REQUIRE_CLASS))
       .add(new SolrPluginInfo(RestManager.class, "restManager"))
       .add(new SolrPluginInfo(StatsCache.class, "statsCache", REQUIRE_CLASS))
+      .add(new SolrPluginInfo(CircuitBreakerManager.class, "circuitBreaker"))
       .build();
   public static final Map<String, SolrPluginInfo> classVsSolrPluginInfo;
 
@@ -531,12 +525,6 @@
   public final int queryResultMaxDocsCached;
   public final boolean enableLazyFieldLoading;
 
-  // Circuit Breaker Configuration
-  public final boolean useCircuitBreakers;
-  public final int memoryCircuitBreakerThresholdPct;
-  
-  public final boolean useRangeVersionsForPeerSync;
-  
   // IndexConfig settings
   public final SolrIndexConfig indexConfig;
 
@@ -816,14 +804,6 @@
     loader.reloadLuceneSPI();
   }
 
-  private void validateMemoryBreakerThreshold() {
-    if (useCircuitBreakers) {
-      if (memoryCircuitBreakerThresholdPct > 95 || memoryCircuitBreakerThresholdPct < 50) {
-        throw new IllegalArgumentException("Valid value range of memoryCircuitBreakerThresholdPct is 50 -  95");
-      }
-    }
-  }
-
   public int getMultipartUploadLimitKB() {
     return multipartUploadLimitKB;
   }
@@ -884,7 +864,7 @@
   @SuppressWarnings({"unchecked", "rawtypes"})
   public Map<String, Object> toMap(Map<String, Object> result) {
     if (getZnodeVersion() > -1) result.put(ZNODEVER, getZnodeVersion());
-    result.put(IndexSchema.LUCENE_MATCH_VERSION_PARAM, luceneMatchVersion);
+    if (luceneMatchVersion != null) result.put(IndexSchema.LUCENE_MATCH_VERSION_PARAM, luceneMatchVersion.toString());
     result.put("updateHandler", getUpdateHandlerInfo());
     Map m = new LinkedHashMap();
     result.put("query", m);
@@ -893,8 +873,7 @@
     m.put("queryResultMaxDocsCached", queryResultMaxDocsCached);
     m.put("enableLazyFieldLoading", enableLazyFieldLoading);
     m.put("maxBooleanClauses", booleanQueryMaxClauseCount);
-    m.put("useCircuitBreakers", useCircuitBreakers);
-    m.put("memoryCircuitBreakerThresholdPct", memoryCircuitBreakerThresholdPct);
+
     for (SolrPluginInfo plugin : plugins) {
       List<PluginInfo> infos = getPluginInfos(plugin.clazz.getName());
       if (infos == null || infos.isEmpty()) continue;
@@ -933,10 +912,6 @@
         "addHttpRequestToContext", addHttpRequestToContext));
     if (indexConfig != null) result.put("indexConfig", indexConfig);
 
-    m = new LinkedHashMap();
-    result.put("peerSync", m);
-    m.put("useRangeVersions", useRangeVersionsForPeerSync);
-
     //TODO there is more to add
 
     return result;
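
Dropping the dedicated `useCircuitBreakers`/threshold fields means circuit-breaker settings now travel through the generic plugin registry under the new `circuitBreaker` tag. A sketch of how a consumer resolves them, mirroring the `SolrCore` hunk further down (this assumes `build` tolerates a null `info` when solrconfig.xml has no such section, which that call site implies):

```java
import org.apache.solr.core.PluginInfo;
import org.apache.solr.core.SolrConfig;
import org.apache.solr.util.circuitbreaker.CircuitBreakerManager;

public class CircuitBreakerWiringSketch {
  static CircuitBreakerManager forConfig(SolrConfig solrConfig) {
    // Resolve the <circuitBreaker .../> plugin section registered above;
    // getPluginInfo returns null when the config has no such section.
    PluginInfo info = solrConfig.getPluginInfo(CircuitBreakerManager.class.getName());
    return CircuitBreakerManager.build(info);
  }
}
```
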
diff --git a/solr/core/src/java/org/apache/solr/core/SolrCore.java b/solr/core/src/java/org/apache/solr/core/SolrCore.java
index dc6ef6e..b44866e 100644
--- a/solr/core/src/java/org/apache/solr/core/SolrCore.java
+++ b/solr/core/src/java/org/apache/solr/core/SolrCore.java
@@ -225,7 +225,6 @@
   private final RecoveryStrategy.Builder recoveryStrategyBuilder;
   private IndexReaderFactory indexReaderFactory;
   private final Codec codec;
-  private final MemClassLoader memClassLoader;
   //singleton listener for all packages used in schema
   private final PackageListeningClassLoader schemaPluginsLoader;
 
@@ -242,7 +241,6 @@
   private Counter newSearcherMaxReachedCounter;
   private Counter newSearcherOtherErrorsCounter;
 
-  private Set<String> metricNames = ConcurrentHashMap.newKeySet();
   private final String metricTag = SolrMetricProducer.getUniqueMetricTag(this, null);
   private final SolrMetricsContext solrMetricsContext;
 
@@ -252,10 +250,6 @@
 
   private PackageListeners packageListeners = new PackageListeners(this);
 
-  public Set<String> getMetricNames() {
-    return metricNames;
-  }
-
   public Date getStartTimeStamp() {
     return startTime;
   }
@@ -950,10 +944,9 @@
 
       this.solrConfig = configSet.getSolrConfig();
       this.resourceLoader = configSet.getSolrConfig().getResourceLoader();
-      this.resourceLoader.core = this;
       schemaPluginsLoader = new PackageListeningClassLoader(coreContainer, resourceLoader,
               solrConfig::maxPackageVersion,
-              () -> setLatestSchema(configSet.getIndexSchema(true)));
+              () -> setLatestSchema(configSet.getIndexSchema()));
       this.packageListeners.addListener(schemaPluginsLoader);
       IndexSchema schema = configSet.getIndexSchema();
 
@@ -1003,10 +996,6 @@
       this.solrDelPolicy = initDeletionPolicy(delPolicy);
 
       this.codec = initCodec(solrConfig, this.schema);
-
-      memClassLoader = new MemClassLoader(
-          PluginBag.RuntimeLib.getLibObjects(this, solrConfig.getPluginInfos(PluginBag.RuntimeLib.class.getName())),
-          getResourceLoader());
       initIndex(prev != null, reload);
 
       initWriters();
@@ -1036,9 +1025,6 @@
       // Initialize the RestManager
       restManager = initRestManager();
 
-      // at this point we can load jars loaded from remote urls.
-      memClassLoader.loadRemoteJars();
-
       // Finally tell anyone who wants to know
       resourceLoader.inform(resourceLoader);
       resourceLoader.inform(this); // last call before the latch is released.
@@ -1188,7 +1174,8 @@
   }
 
   private CircuitBreakerManager initCircuitBreakerManager() {
-    CircuitBreakerManager circuitBreakerManager = CircuitBreakerManager.build(solrConfig);
+    final PluginInfo info = solrConfig.getPluginInfo(CircuitBreakerManager.class.getName());
+    CircuitBreakerManager circuitBreakerManager = CircuitBreakerManager.build(info);
 
     return circuitBreakerManager;
   }
@@ -1611,14 +1598,6 @@
     valueSourceParsers.close();
     transformerFactories.close();
 
-    if (memClassLoader != null) {
-      try {
-        memClassLoader.close();
-      } catch (Exception e) {
-      }
-    }
-
-
     try {
       if (null != updateHandler) {
         updateHandler.close();
@@ -2789,9 +2768,6 @@
     };
   }
 
-  public MemClassLoader getMemClassLoader() {
-    return memClassLoader;
-  }
 
   public interface RawWriter {
     default String getContentType() {
@@ -3085,6 +3061,7 @@
 
   public static Runnable getConfListener(SolrCore core, ZkSolrResourceLoader zkSolrResourceLoader) {
     final String coreName = core.getName();
+    final UUID coreId = core.uniqueId;
     final CoreContainer cc = core.getCoreContainer();
     final String overlayPath = zkSolrResourceLoader.getConfigSetZkPath() + "/" + ConfigOverlay.RESOURCE_NAME;
     final String solrConfigPath = zkSolrResourceLoader.getConfigSetZkPath() + "/" + core.getSolrConfig().getName();
@@ -3120,7 +3097,7 @@
         if (configHandler.getReloadLock().tryLock()) {
 
           try {
-            cc.reload(coreName);
+            cc.reload(coreName, coreId);
           } catch (SolrCoreState.CoreIsClosedException e) {
             /*no problem this core is already closed*/
           } finally {
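
Passing `core.uniqueId` alongside the name guards the config listener against a race: by the time the watcher fires, the name may have been reused by a freshly created core. A toy sketch of an id-checked reload, using stand-in types:

```java
import java.util.Map;
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;

final class ReloadGuardSketch {
  static final class CoreStub {
    final UUID id = UUID.randomUUID();
  }

  final Map<String, CoreStub> coresByName = new ConcurrentHashMap<>();

  /** Reload only if the name still resolves to the core the listener was created for. */
  boolean reload(String name, UUID expectedId) {
    CoreStub current = coresByName.get(name);
    if (current == null || !current.id.equals(expectedId)) {
      return false; // the name was removed or now belongs to a different core
    }
    // ... perform the actual reload of `current` here ...
    return true;
  }
}
```
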
diff --git a/solr/core/src/java/org/apache/solr/core/SolrResourceLoader.java b/solr/core/src/java/org/apache/solr/core/SolrResourceLoader.java
index cfd3690..c5ad5fc 100644
--- a/solr/core/src/java/org/apache/solr/core/SolrResourceLoader.java
+++ b/solr/core/src/java/org/apache/solr/core/SolrResourceLoader.java
@@ -43,6 +43,7 @@
 import org.apache.lucene.codecs.PostingsFormat;
 import org.apache.lucene.util.IOUtils;
 import org.apache.solr.common.SolrException;
+import org.apache.solr.common.cloud.SolrClassLoader;
 import org.apache.solr.handler.component.SearchComponent;
 import org.apache.solr.handler.component.ShardHandlerFactory;
 import org.apache.solr.request.SolrRequestHandler;
@@ -66,7 +67,7 @@
   private static final String base = "org.apache.solr";
   private static final String[] packages = {
       "", "analysis.", "schema.", "handler.", "handler.tagger.", "search.", "update.", "core.", "response.", "request.",
-      "update.processor.", "util.", "spelling.", "handler.component.", "handler.dataimport.",
+      "update.processor.", "util.", "spelling.", "handler.component.",
       "spelling.suggest.", "spelling.suggest.fst.", "rest.schema.analysis.", "security.", "handler.admin."
   };
   private static final Charset UTF_8 = StandardCharsets.UTF_8;
@@ -76,11 +77,7 @@
   protected URLClassLoader classLoader;
   private final Path instanceDir;
 
-  /**
-   * this is set  by the {@link SolrCore}
-   * This could be null if the core is not yet initialized
-   */
-  SolrCore core;
+
 
   private final List<SolrCoreAware> waitingForCore = Collections.synchronizedList(new ArrayList<SolrCoreAware>());
   private final List<SolrInfoBean> infoMBeans = Collections.synchronizedList(new ArrayList<SolrInfoBean>());
@@ -204,11 +201,6 @@
     TokenizerFactory.reloadTokenizers(this.classLoader);
   }
 
-  public SolrCore getCore(){
-    return core;
-  }
-
-
   private static URLClassLoader addURLsToClassLoader(final URLClassLoader oldLoader, List<URL> urls) {
     if (urls.size() == 0) {
       return oldLoader;
diff --git a/solr/core/src/java/org/apache/solr/filestore/DistribPackageStore.java b/solr/core/src/java/org/apache/solr/filestore/DistribPackageStore.java
index 021729f..37aaeb2 100644
--- a/solr/core/src/java/org/apache/solr/filestore/DistribPackageStore.java
+++ b/solr/core/src/java/org/apache/solr/filestore/DistribPackageStore.java
@@ -47,6 +47,7 @@
 import org.apache.solr.common.cloud.ZkStateReader;
 import org.apache.solr.common.params.CommonParams;
 import org.apache.solr.common.util.Utils;
+import org.apache.solr.common.annotation.SolrThreadUnsafe;
 import org.apache.solr.core.CoreContainer;
 import org.apache.solr.filestore.PackageStoreAPI.MetaData;
 import org.apache.solr.util.SimplePostTool;
@@ -60,7 +61,7 @@
 import static org.apache.solr.common.SolrException.ErrorCode.BAD_REQUEST;
 import static org.apache.solr.common.SolrException.ErrorCode.SERVER_ERROR;
 
-
+@SolrThreadUnsafe
 public class DistribPackageStore implements PackageStore {
   static final long MAX_PKG_SIZE = Long.parseLong(System.getProperty("max.file.store.size", String.valueOf(100 * 1024 * 1024)));
   /**
@@ -636,4 +637,4 @@
       log.error("", e);
     }
   }
-}
\ No newline at end of file
+}
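
The @SolrThreadUnsafe marker added above documents that DistribPackageStore instances require external synchronization when shared across threads. For readers unfamiliar with such markers, a sketch of what a documentation-only annotation of this kind typically looks like (hypothetical name; the retention policy is an assumption, and the real annotation lives in org.apache.solr.common.annotation):

```java
import java.lang.annotation.Documented;
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

@Documented
@Retention(RetentionPolicy.SOURCE) // assumption: marker carries no runtime behavior
@Target(ElementType.TYPE)
@interface ThreadUnsafeSketch {
  // No members: the annotation's presence on a type is the whole contract.
}
```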
diff --git a/solr/core/src/java/org/apache/solr/handler/CdcrBufferManager.java b/solr/core/src/java/org/apache/solr/handler/CdcrBufferManager.java
deleted file mode 100644
index 8696379..0000000
--- a/solr/core/src/java/org/apache/solr/handler/CdcrBufferManager.java
+++ /dev/null
@@ -1,71 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.solr.handler;
-
-import org.apache.solr.core.SolrCore;
-import org.apache.solr.update.CdcrUpdateLog;
-
-/**
- * This manager is responsible for enabling or disabling the buffering of the update logs. Currently, the buffer
- * is always activated for non-leader nodes. For leader nodes, it is enabled only if the user has explicitly
- * enabled it with the action {@link org.apache.solr.handler.CdcrParams.CdcrAction#ENABLEBUFFER}.
- */
-class CdcrBufferManager implements CdcrStateManager.CdcrStateObserver {
-
-  private CdcrLeaderStateManager leaderStateManager;
-  private CdcrBufferStateManager bufferStateManager;
-
-  private final SolrCore core;
-
-  CdcrBufferManager(SolrCore core) {
-    this.core = core;
-  }
-
-  void setLeaderStateManager(final CdcrLeaderStateManager leaderStateManager) {
-    this.leaderStateManager = leaderStateManager;
-    this.leaderStateManager.register(this);
-  }
-
-  void setBufferStateManager(final CdcrBufferStateManager bufferStateManager) {
-    this.bufferStateManager = bufferStateManager;
-    this.bufferStateManager.register(this);
-  }
-
-  /**
- * This method is synchronised as it can be called by both the leaderStateManager and the bufferStateManager.
-   */
-  @Override
-  public synchronized void stateUpdate() {
-    CdcrUpdateLog ulog = (CdcrUpdateLog) core.getUpdateHandler().getUpdateLog();
-
-    // If I am not the leader, I should always buffer my updates
-    if (!leaderStateManager.amILeader()) {
-      ulog.enableBuffer();
-      return;
-    }
-    // If I am the leader, I should buffer my updates only if buffer is enabled
-    else if (bufferStateManager.getState().equals(CdcrParams.BufferState.ENABLED)) {
-      ulog.enableBuffer();
-      return;
-    }
-
-    // otherwise, disable the buffer
-    ulog.disableBuffer();
-  }
-
-}
-
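
The stateUpdate() method of the deleted class above boils down to a single predicate. An illustrative condensation (hypothetical helper, not part of the original code):

```java
// Non-leaders always buffer their updates; a leader buffers only when the
// buffer state has been explicitly set to ENABLED.
final class BufferRuleSketch {
  static boolean shouldBuffer(boolean amLeader, boolean bufferEnabled) {
    return !amLeader || bufferEnabled;
  }
}
```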
diff --git a/solr/core/src/java/org/apache/solr/handler/CdcrBufferStateManager.java b/solr/core/src/java/org/apache/solr/handler/CdcrBufferStateManager.java
deleted file mode 100644
index 49d19f1..0000000
--- a/solr/core/src/java/org/apache/solr/handler/CdcrBufferStateManager.java
+++ /dev/null
@@ -1,178 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.solr.handler;
-
-import org.apache.solr.common.cloud.SolrZkClient;
-import org.apache.solr.common.params.SolrParams;
-import org.apache.solr.core.SolrCore;
-import org.apache.zookeeper.CreateMode;
-import org.apache.zookeeper.KeeperException;
-import org.apache.zookeeper.WatchedEvent;
-import org.apache.zookeeper.Watcher;
-import org.slf4j.Logger;
-import org.slf4j.LoggerFactory;
-
-import java.lang.invoke.MethodHandles;
-import java.nio.charset.Charset;
-
-/**
- * Manage the state of the update log buffer. It is responsible for synchronising the state
- * through Zookeeper. The state of the buffer is stored in the zk node defined by {@link #getZnodePath()}.
- * @deprecated since 8.6
- */
-@Deprecated(since = "8.6")
-class CdcrBufferStateManager extends CdcrStateManager {
-
-  private CdcrParams.BufferState state = DEFAULT_STATE;
-
-  private BufferStateWatcher wrappedWatcher;
-  private Watcher watcher;
-
-  private SolrCore core;
-
-  static CdcrParams.BufferState DEFAULT_STATE = CdcrParams.BufferState.ENABLED;
-
-  private static final Logger log = LoggerFactory.getLogger(MethodHandles.lookup().lookupClass());
-
-  CdcrBufferStateManager(final SolrCore core, SolrParams bufferConfiguration) {
-    this.core = core;
-
-    // Ensure that the state znode exists
-    this.createStateNode();
-
-    // set default state
-    if (bufferConfiguration != null) {
-      byte[] defaultState = bufferConfiguration.get(
-          CdcrParams.DEFAULT_STATE_PARAM, DEFAULT_STATE.toLower()).getBytes(Charset.forName("UTF-8"));
-      state = CdcrParams.BufferState.get(defaultState);
-    }
-    this.setState(state); // notify observers
-
-    // Startup and register the watcher at startup
-    try {
-      SolrZkClient zkClient = core.getCoreContainer().getZkController().getZkClient();
-      watcher = this.initWatcher(zkClient);
-      this.setState(CdcrParams.BufferState.get(zkClient.getData(this.getZnodePath(), watcher, null, true)));
-    } catch (KeeperException | InterruptedException e) {
-      log.warn("Failed fetching initial state", e);
-    }
-  }
-
-  /**
-   * SolrZkClient does not guarantee that a watch object will only be triggered once for a given notification
- * if we do not wrap the watcher - see SOLR-6621.
-   */
-  private Watcher initWatcher(SolrZkClient zkClient) {
-    wrappedWatcher = new BufferStateWatcher();
-    return zkClient.wrapWatcher(wrappedWatcher);
-  }
-
-  private String getZnodeBase() {
-    return "/collections/" + core.getCoreDescriptor().getCloudDescriptor().getCollectionName() + "/cdcr/state";
-  }
-
-  private String getZnodePath() {
-    return getZnodeBase() + "/buffer";
-  }
-
-  void setState(CdcrParams.BufferState state) {
-    if (this.state != state) {
-      this.state = state;
-      this.callback(); // notify the observers of a state change
-    }
-  }
-
-  CdcrParams.BufferState getState() {
-    return state;
-  }
-
-  /**
-   * Synchronise the state to Zookeeper. This method must be called only by the handler receiving the
-   * action.
-   */
-  void synchronize() {
-    SolrZkClient zkClient = core.getCoreContainer().getZkController().getZkClient();
-    try {
-      zkClient.setData(this.getZnodePath(), this.getState().getBytes(), true);
-      // check if nobody changed it in the meantime, and set a new watcher
-      this.setState(CdcrParams.BufferState.get(zkClient.getData(this.getZnodePath(), watcher, null, true)));
-    } catch (KeeperException | InterruptedException e) {
-      log.warn("Failed synchronising new state", e);
-    }
-  }
-
-  private void createStateNode() {
-    SolrZkClient zkClient = core.getCoreContainer().getZkController().getZkClient();
-    try {
-      if (!zkClient.exists(this.getZnodePath(), true)) {
-        if (!zkClient.exists(this.getZnodeBase(), true)) {
-          zkClient.makePath(this.getZnodeBase(), null, CreateMode.PERSISTENT, null, false, true); // Should be a no-op if node exists
-        }
-        zkClient.create(this.getZnodePath(), DEFAULT_STATE.getBytes(), CreateMode.PERSISTENT, true);
-        if (log.isInfoEnabled()) {
-          log.info("Created znode {}", this.getZnodePath());
-        }
-      }
-    } catch (KeeperException.NodeExistsException ne) {
-      // Someone got in first and created the node.
-    }  catch (KeeperException | InterruptedException e) {
-      log.warn("Failed to create CDCR buffer state node", e);
-    }
-  }
-
-  void shutdown() {
-    if (wrappedWatcher != null) {
-      wrappedWatcher.cancel(); // cancel the watcher to avoid spurious warn messages during shutdown
-    }
-  }
-
-  private class BufferStateWatcher implements Watcher {
-
-    private boolean isCancelled = false;
-
-    /**
-     * Cancel the watcher to avoid spurious warn messages during shutdown.
-     */
-    void cancel() {
-      isCancelled = true;
-    }
-
-    @Override
-    public void process(WatchedEvent event) {
-      if (isCancelled) return; // if the watcher is cancelled, do nothing.
-      String collectionName = core.getCoreDescriptor().getCloudDescriptor().getCollectionName();
-      String shard = core.getCoreDescriptor().getCloudDescriptor().getShardId();
-
-      log.info("The CDCR buffer state has changed: {} @ {}:{}", event, collectionName, shard);
-      // session events are not change events, and do not remove the watcher
-      if (Event.EventType.None.equals(event.getType())) {
-        return;
-      }
-      SolrZkClient zkClient = core.getCoreContainer().getZkController().getZkClient();
-      try {
-        CdcrParams.BufferState state = CdcrParams.BufferState.get(zkClient.getData(CdcrBufferStateManager.this.getZnodePath(), watcher, null, true));
-        log.info("Received new CDCR buffer state from watcher: {} @ {}:{}", state, collectionName, shard);
-        CdcrBufferStateManager.this.setState(state);
-      } catch (KeeperException | InterruptedException e) {
-        log.warn("Failed synchronising new state @ {}:{}", collectionName, shard, e);
-      }
-    }
-
-  }
-
-}
-
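
The synchronize() method of the deleted manager above relies on a classic ZooKeeper idiom: write the new state, then immediately re-read it with a watcher, which both detects a concurrent writer and re-arms the one-shot watch. A minimal sketch against the raw ZooKeeper client (the original goes through Solr's SolrZkClient wrapper):

```java
import org.apache.zookeeper.KeeperException;
import org.apache.zookeeper.Watcher;
import org.apache.zookeeper.ZooKeeper;

final class ZkStateSyncSketch {
  /** Publish a new state, then re-read it while re-registering the watch. */
  static byte[] publishAndWatch(ZooKeeper zk, String path, byte[] newState, Watcher watcher)
      throws KeeperException, InterruptedException {
    zk.setData(path, newState, -1); // -1 = don't check the znode version
    // Re-reading with the watcher returns whatever is current (someone may
    // have written after us) and arms the watch for the next change.
    return zk.getData(path, watcher, null);
  }
}
```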
diff --git a/solr/core/src/java/org/apache/solr/handler/CdcrLeaderStateManager.java b/solr/core/src/java/org/apache/solr/handler/CdcrLeaderStateManager.java
deleted file mode 100644
index c9bc5fd..0000000
--- a/solr/core/src/java/org/apache/solr/handler/CdcrLeaderStateManager.java
+++ /dev/null
@@ -1,162 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.solr.handler;
-
-import java.lang.invoke.MethodHandles;
-
-import org.apache.solr.common.cloud.ClusterState;
-import org.apache.solr.common.cloud.SolrZkClient;
-import org.apache.solr.common.cloud.ZkNodeProps;
-import org.apache.solr.core.SolrCore;
-import org.apache.zookeeper.KeeperException;
-import org.apache.zookeeper.WatchedEvent;
-import org.apache.zookeeper.Watcher;
-import org.slf4j.Logger;
-import org.slf4j.LoggerFactory;
-
-/**
- * <p>
- * Manage the leader state of the CDCR nodes.
- * </p>
- * <p>
- * It takes care of notifying the {@link CdcrReplicatorManager} in case
- * of a leader state change.
- * </p>
- * @deprecated since 8.6
- */
-@Deprecated(since = "8.6")
-class CdcrLeaderStateManager extends CdcrStateManager {
-
-  private boolean amILeader = false;
-
-  private LeaderStateWatcher wrappedWatcher;
-  private Watcher watcher;
-
-  private SolrCore core;
-
-  private static final Logger log = LoggerFactory.getLogger(MethodHandles.lookup().lookupClass());
-
-  CdcrLeaderStateManager(final SolrCore core) {
-    this.core = core;
-
-    // Fetch leader state and register the watcher at startup
-    try {
-      SolrZkClient zkClient = core.getCoreContainer().getZkController().getZkClient();
-      ClusterState clusterState = core.getCoreContainer().getZkController().getClusterState();
-
-      watcher = this.initWatcher(zkClient);
-      // if the node does not exist, it means that the leader was not yet registered. This can happen
-      // when the cluster is starting up. The core is not yet fully loaded, and the leader election process
-      // is waiting for it.
-      if (this.isLeaderRegistered(zkClient, clusterState)) {
-        this.checkIfIAmLeader();
-      }
-    } catch (KeeperException | InterruptedException e) {
-      log.warn("Failed fetching initial leader state and setting watch", e);
-    }
-  }
-
-  /**
-   * Checks if the leader is registered. If it is not registered, we are probably at the
-   * initialisation phase of the cluster. In this case, we must attach a watcher to
-   * be notified when the leader is registered.
-   */
-  private boolean isLeaderRegistered(SolrZkClient zkClient, ClusterState clusterState)
-      throws KeeperException, InterruptedException {
-    // First check if the znode exists, and register the watcher at the same time
-    return zkClient.exists(this.getZnodePath(), watcher, true) != null;
-  }
-
-  /**
-   * SolrZkClient does not guarantee that a watch object will only be triggered once for a given notification
- * if we do not wrap the watcher - see SOLR-6621.
-   */
-  private Watcher initWatcher(SolrZkClient zkClient) {
-    wrappedWatcher = new LeaderStateWatcher();
-    return zkClient.wrapWatcher(wrappedWatcher);
-  }
-
-  private void checkIfIAmLeader() throws KeeperException, InterruptedException {
-    SolrZkClient zkClient = core.getCoreContainer().getZkController().getZkClient();
-    ZkNodeProps props = ZkNodeProps.load(zkClient.getData(CdcrLeaderStateManager.this.getZnodePath(), null, null, true));
-    if (props != null) {
-      CdcrLeaderStateManager.this.setAmILeader(props.get("core").equals(core.getName()));
-    }
-  }
-
-  private String getZnodePath() {
-    String myShardId = core.getCoreDescriptor().getCloudDescriptor().getShardId();
-    String myCollection = core.getCoreDescriptor().getCloudDescriptor().getCollectionName();
-    return "/collections/" + myCollection + "/leaders/" + myShardId + "/leader";
-  }
-
-  void setAmILeader(boolean amILeader) {
-    if (this.amILeader != amILeader) {
-      this.amILeader = amILeader;
-      this.callback(); // notify the observers of a state change
-    }
-  }
-
-  boolean amILeader() {
-    return amILeader;
-  }
-
-  void shutdown() {
-    if (wrappedWatcher != null) {
-      wrappedWatcher.cancel(); // cancel the watcher to avoid spurious warn messages during shutdown
-    }
-  }
-
-  private class LeaderStateWatcher implements Watcher {
-
-    private boolean isCancelled = false;
-
-    /**
-     * Cancel the watcher to avoid spurious warn messages during shutdown.
-     */
-    void cancel() {
-      isCancelled = true;
-    }
-
-    @Override
-    public void process(WatchedEvent event) {
-      if (isCancelled) return; // if the watcher is cancelled, do nothing.
-      String collectionName = core.getCoreDescriptor().getCloudDescriptor().getCollectionName();
-      String shard = core.getCoreDescriptor().getCloudDescriptor().getShardId();
-
-      log.debug("The leader state has changed: {} @ {}:{}", event, collectionName, shard);
-      // session events are not change events, and do not remove the watcher
-      if (Event.EventType.None.equals(event.getType())) {
-        return;
-      }
-
-      try {
-        log.info("Received new leader state @ {}:{}", collectionName, shard);
-        SolrZkClient zkClient = core.getCoreContainer().getZkController().getZkClient();
-        ClusterState clusterState = core.getCoreContainer().getZkController().getClusterState();
-        if (CdcrLeaderStateManager.this.isLeaderRegistered(zkClient, clusterState)) {
-          CdcrLeaderStateManager.this.checkIfIAmLeader();
-        }
-      } catch (KeeperException | InterruptedException e) {
-        log.warn("Failed updating leader state and setting watch @ {}: {}", collectionName, shard, e);
-      }
-    }
-
-  }
-
-}
-
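
isLeaderRegistered() above uses exists() both as a query and as a subscription: a null Stat means no leader yet, and the watcher fires when the leader znode is created. A sketch of the same idiom against the raw ZooKeeper client, using the znode layout from the deleted getZnodePath():

```java
import org.apache.zookeeper.KeeperException;
import org.apache.zookeeper.Watcher;
import org.apache.zookeeper.ZooKeeper;

final class LeaderProbeSketch {
  static boolean leaderRegistered(ZooKeeper zk, String collection, String shard, Watcher watcher)
      throws KeeperException, InterruptedException {
    String path = "/collections/" + collection + "/leaders/" + shard + "/leader";
    return zk.exists(path, watcher) != null; // null Stat = leader not registered yet
  }
}
```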
diff --git a/solr/core/src/java/org/apache/solr/handler/CdcrParams.java b/solr/core/src/java/org/apache/solr/handler/CdcrParams.java
deleted file mode 100644
index 3f65b90..0000000
--- a/solr/core/src/java/org/apache/solr/handler/CdcrParams.java
+++ /dev/null
@@ -1,256 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.solr.handler;
-
-import java.nio.charset.Charset;
-import java.util.Locale;
-
-public class CdcrParams {
-
-  /**
-   * The definition of a replica configuration *
-   */
-  public static final String REPLICA_PARAM = "replica";
-
-  /**
-   * The source collection of a replica *
-   */
-  public static final String SOURCE_COLLECTION_PARAM = "source";
-
-  /**
-   * The target collection of a replica *
-   */
-  public static final String TARGET_COLLECTION_PARAM = "target";
-
-  /**
-   * The Zookeeper host of the target cluster hosting the replica *
-   */
-  public static final String ZK_HOST_PARAM = "zkHost";
-
-  /**
-   * The definition of the {@link org.apache.solr.handler.CdcrReplicatorScheduler} configuration *
-   */
-  public static final String REPLICATOR_PARAM = "replicator";
-
-  /**
-   * The thread pool size of the replicator *
-   */
-  public static final String THREAD_POOL_SIZE_PARAM = "threadPoolSize";
-
-  /**
-   * The time schedule (in ms) of the replicator *
-   */
-  public static final String SCHEDULE_PARAM = "schedule";
-
-  /**
-   * The batch size of the replicator *
-   */
-  public static final String BATCH_SIZE_PARAM = "batchSize";
-
-  /**
-   * The definition of the {@link org.apache.solr.handler.CdcrUpdateLogSynchronizer} configuration *
-   */
-  public static final String UPDATE_LOG_SYNCHRONIZER_PARAM = "updateLogSynchronizer";
-
-  /**
-   * The definition of the {@link org.apache.solr.handler.CdcrBufferManager} configuration *
-   */
-  public static final String BUFFER_PARAM = "buffer";
-
-  /**
-   * The default state at startup of the buffer *
-   */
-  public static final String DEFAULT_STATE_PARAM = "defaultState";
-
-  /**
-   * The latest update checkpoint on a target cluster *
-   */
-  public final static String CHECKPOINT = "checkpoint";
-
-  /**
-   * The last processed version on a source cluster *
-   */
-  public final static String LAST_PROCESSED_VERSION = "lastProcessedVersion";
-
-  /**
-   * A list of replica queues on a source cluster *
-   */
-  public final static String QUEUES = "queues";
-
-  /**
-   * The size of a replica queue on a source cluster *
-   */
-  public final static String QUEUE_SIZE = "queueSize";
-
-  /**
-   * The timestamp of the last processed operation in a replica queue *
-   */
-  public final static String LAST_TIMESTAMP = "lastTimestamp";
-
-  /**
-   * A list of qps statistics per collection *
-   */
-  public final static String OPERATIONS_PER_SECOND = "operationsPerSecond";
-
-  /**
-   * Overall counter *
-   */
-  public final static String COUNTER_ALL = "all";
-
-  /**
-   * Counter for Adds *
-   */
-  public final static String COUNTER_ADDS = "adds";
-
-  /**
-   * Counter for Deletes *
-   */
-  public final static String COUNTER_DELETES = "deletes";
-
-  /**
-   * Counter for Bootstrap operations *
-   */
-  public final static String COUNTER_BOOTSTRAP = "bootstraps";
-
-  /**
-   * A list of errors per target collection *
-   */
-  public final static String ERRORS = "errors";
-
-  /**
-   * Counter for consecutive errors encountered by a replicator thread *
-   */
-  public final static String CONSECUTIVE_ERRORS = "consecutiveErrors";
-
-  /**
-   * A list of the last errors encountered by a replicator thread *
-   */
-  public final static String LAST = "last";
-
-  /**
-   * Total size of transaction logs *
-   */
-  public final static String TLOG_TOTAL_SIZE = "tlogTotalSize";
-
-  /**
-   * Total count of transaction logs *
-   */
-  public final static String TLOG_TOTAL_COUNT = "tlogTotalCount";
-
-  /**
-   * The state of the update log synchronizer *
-   */
-  public final static String UPDATE_LOG_SYNCHRONIZER = "updateLogSynchronizer";
-
-  /**
-   * The actions supported by the CDCR API
-   */
-  public enum CdcrAction {
-    START,
-    STOP,
-    STATUS,
-    COLLECTIONCHECKPOINT,
-    SHARDCHECKPOINT,
-    ENABLEBUFFER,
-    DISABLEBUFFER,
-    LASTPROCESSEDVERSION,
-    QUEUES,
-    OPS,
-    ERRORS,
-    BOOTSTRAP,
-    BOOTSTRAP_STATUS,
-    CANCEL_BOOTSTRAP;
-
-    public static CdcrAction get(String p) {
-      if (p != null) {
-        try {
-          return CdcrAction.valueOf(p.toUpperCase(Locale.ROOT));
-        } catch (Exception e) {
-        }
-      }
-      return null;
-    }
-
-    public String toLower() {
-      return toString().toLowerCase(Locale.ROOT);
-    }
-
-  }
-
-  /**
-   * The possible states of the CDCR process
-   */
-  public enum ProcessState {
-    STARTED,
-    STOPPED;
-
-    public static ProcessState get(byte[] state) {
-      if (state != null) {
-        try {
-          return ProcessState.valueOf(new String(state, Charset.forName("UTF-8")).toUpperCase(Locale.ROOT));
-        } catch (Exception e) {
-        }
-      }
-      return null;
-    }
-
-    public String toLower() {
-      return toString().toLowerCase(Locale.ROOT);
-    }
-
-    public byte[] getBytes() {
-      return toLower().getBytes(Charset.forName("UTF-8"));
-    }
-
-    public static String getParam() {
-      return "process";
-    }
-
-  }
-
-  /**
-   * The possible states of the CDCR buffer
-   */
-  public enum BufferState {
-    ENABLED,
-    DISABLED;
-
-    public static BufferState get(byte[] state) {
-      if (state != null) {
-        try {
-          return BufferState.valueOf(new String(state, Charset.forName("UTF-8")).toUpperCase(Locale.ROOT));
-        } catch (Exception e) {
-        }
-      }
-      return null;
-    }
-
-    public String toLower() {
-      return toString().toLowerCase(Locale.ROOT);
-    }
-
-    public byte[] getBytes() {
-      return toLower().getBytes(Charset.forName("UTF-8"));
-    }
-
-    public static String getParam() {
-      return "buffer";
-    }
-
-  }
-}
-
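
All three enums in the deleted CdcrParams share the same lenient-parse shape: upper-case with Locale.ROOT (so behavior does not depend on the default locale, e.g. the Turkish dotless-i) and return null instead of throwing on unknown input. A generic sketch of that pattern:

```java
import java.util.Locale;

final class EnumParseSketch {
  /** Returns the matching constant, or null for unknown or missing input. */
  static <E extends Enum<E>> E parseOrNull(Class<E> type, String raw) {
    if (raw == null) return null;
    try {
      return Enum.valueOf(type, raw.toUpperCase(Locale.ROOT));
    } catch (IllegalArgumentException e) {
      return null; // callers treat null as "no such action/state"
    }
  }
}
```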
diff --git a/solr/core/src/java/org/apache/solr/handler/CdcrProcessStateManager.java b/solr/core/src/java/org/apache/solr/handler/CdcrProcessStateManager.java
deleted file mode 100644
index 6506030..0000000
--- a/solr/core/src/java/org/apache/solr/handler/CdcrProcessStateManager.java
+++ /dev/null
@@ -1,178 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.solr.handler;
-
-import java.lang.invoke.MethodHandles;
-
-import org.apache.solr.common.cloud.SolrZkClient;
-import org.apache.solr.core.SolrCore;
-import org.apache.zookeeper.CreateMode;
-import org.apache.zookeeper.KeeperException;
-import org.apache.zookeeper.WatchedEvent;
-import org.apache.zookeeper.Watcher;
-import org.slf4j.Logger;
-import org.slf4j.LoggerFactory;
-
-/**
- * <p>
- * Manage the life-cycle state of the CDCR process. It is responsible for synchronising the state
- * through Zookeeper. The state of the CDCR process is stored in the zk node defined by {@link #getZnodePath()}.
- * </p>
- * <p>
- * It takes care of notifying the {@link CdcrReplicatorManager} in case
- * of a process state change.
- * </p>
- * @deprecated since 8.6
- */
-@Deprecated(since = "8.6")
-class CdcrProcessStateManager extends CdcrStateManager {
-
-  private CdcrParams.ProcessState state = DEFAULT_STATE;
-
-  private ProcessStateWatcher wrappedWatcher;
-  private Watcher watcher;
-
-  private SolrCore core;
-
-  /**
-   * The default state must be STOPPED. See comments in
-   * {@link #setState(org.apache.solr.handler.CdcrParams.ProcessState)}.
-   */
-  static CdcrParams.ProcessState DEFAULT_STATE = CdcrParams.ProcessState.STOPPED;
-
-  private static final Logger log = LoggerFactory.getLogger(MethodHandles.lookup().lookupClass());
-
-  CdcrProcessStateManager(final SolrCore core) {
-    this.core = core;
-
-    // Ensure that the status znode exists
-    this.createStateNode();
-
-    // Register the watcher at startup
-    try {
-      SolrZkClient zkClient = core.getCoreContainer().getZkController().getZkClient();
-      watcher = this.initWatcher(zkClient);
-      this.setState(CdcrParams.ProcessState.get(zkClient.getData(this.getZnodePath(), watcher, null, true)));
-    } catch (KeeperException | InterruptedException e) {
-      log.warn("Failed fetching initial state", e);
-    }
-  }
-
-  /**
-   * SolrZkClient does not guarantee that a watch object will only be triggered once for a given notification
- * if we do not wrap the watcher - see SOLR-6621.
-   */
-  private Watcher initWatcher(SolrZkClient zkClient) {
-    wrappedWatcher = new ProcessStateWatcher();
-    return zkClient.wrapWatcher(wrappedWatcher);
-  }
-
-  private String getZnodeBase() {
-    return "/collections/" + core.getCoreDescriptor().getCloudDescriptor().getCollectionName() + "/cdcr/state";
-  }
-
-  private String getZnodePath() {
-    return getZnodeBase() + "/process";
-  }
-
-  void setState(CdcrParams.ProcessState state) {
-    if (this.state != state) {
-      this.state = state;
-      this.callback(); // notify the observers of a state change
-    }
-  }
-
-  CdcrParams.ProcessState getState() {
-    return state;
-  }
-
-  /**
-   * Synchronise the state to Zookeeper. This method must be called only by the handler receiving the
-   * action.
-   */
-  void synchronize() {
-    SolrZkClient zkClient = core.getCoreContainer().getZkController().getZkClient();
-    try {
-      zkClient.setData(this.getZnodePath(), this.getState().getBytes(), true);
-      // check if nobody changed it in the meantime, and set a new watcher
-      this.setState(CdcrParams.ProcessState.get(zkClient.getData(this.getZnodePath(), watcher, null, true)));
-    } catch (KeeperException | InterruptedException e) {
-      log.warn("Failed synchronising new state", e);
-    }
-  }
-
-  private void createStateNode() {
-    SolrZkClient zkClient = core.getCoreContainer().getZkController().getZkClient();
-    try {
-      if (!zkClient.exists(this.getZnodePath(), true)) {
-        if (!zkClient.exists(this.getZnodeBase(), true)) { // Should be a no-op if the node exists
-          zkClient.makePath(this.getZnodeBase(), null, CreateMode.PERSISTENT, null, false, true);
-        }
-        zkClient.create(this.getZnodePath(), DEFAULT_STATE.getBytes(), CreateMode.PERSISTENT, true);
-        if (log.isInfoEnabled()) {
-          log.info("Created znode {}", this.getZnodePath());
-        }
-      }
-    } catch (KeeperException.NodeExistsException ne) {
-      // Someone got in first and created the node.
-    } catch (KeeperException | InterruptedException e) {
-      log.warn("Failed to create CDCR process state node", e);
-    }
-  }
-
-  void shutdown() {
-    if (wrappedWatcher != null) {
-      wrappedWatcher.cancel(); // cancel the watcher to avoid spurious warn messages during shutdown
-    }
-  }
-
-  private class ProcessStateWatcher implements Watcher {
-
-    private boolean isCancelled = false;
-
-    /**
-     * Cancel the watcher to avoid spurious warn messages during shutdown.
-     */
-    void cancel() {
-      isCancelled = true;
-    }
-
-    @Override
-    public void process(WatchedEvent event) {
-      if (isCancelled) return; // if the watcher is cancelled, do nothing.
-      String collectionName = core.getCoreDescriptor().getCloudDescriptor().getCollectionName();
-      String shard = core.getCoreDescriptor().getCloudDescriptor().getShardId();
-
-      log.info("The CDCR process state has changed: {} @ {}:{}", event, collectionName, shard);
-      // session events are not change events, and do not remove the watcher
-      if (Event.EventType.None.equals(event.getType())) {
-        return;
-      }
-      SolrZkClient zkClient = core.getCoreContainer().getZkController().getZkClient();
-      try {
-        CdcrParams.ProcessState state = CdcrParams.ProcessState.get(zkClient.getData(CdcrProcessStateManager.this.getZnodePath(), watcher, null, true));
-        log.info("Received new CDCR process state from watcher: {} @ {}:{}", state, collectionName, shard);
-        CdcrProcessStateManager.this.setState(state);
-      } catch (KeeperException | InterruptedException e) {
-        log.warn("Failed synchronising new state @ {}: {}", collectionName, shard, e);
-      }
-    }
-
-  }
-
-}
-
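
createStateNode() in the managers above follows an idempotent-bootstrap pattern: check for the znode, create it with a default value if absent, and treat losing the creation race as success. A minimal sketch with the raw ZooKeeper client (the original delegates to SolrZkClient, which also creates missing parents):

```java
import org.apache.zookeeper.CreateMode;
import org.apache.zookeeper.KeeperException;
import org.apache.zookeeper.ZooDefs;
import org.apache.zookeeper.ZooKeeper;

final class ZkBootstrapSketch {
  static void ensureStateNode(ZooKeeper zk, String path, byte[] defaultValue)
      throws KeeperException, InterruptedException {
    try {
      if (zk.exists(path, false) == null) {
        zk.create(path, defaultValue, ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT);
      }
    } catch (KeeperException.NodeExistsException ignored) {
      // Another node created it first; the state node exists, which is all we need.
    }
  }
}
```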
diff --git a/solr/core/src/java/org/apache/solr/handler/CdcrReplicator.java b/solr/core/src/java/org/apache/solr/handler/CdcrReplicator.java
deleted file mode 100644
index 936750e..0000000
--- a/solr/core/src/java/org/apache/solr/handler/CdcrReplicator.java
+++ /dev/null
@@ -1,258 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.solr.handler;
-
-import java.io.IOException;
-import java.lang.invoke.MethodHandles;
-import java.nio.charset.Charset;
-import java.util.List;
-
-import org.apache.solr.client.solrj.SolrServerException;
-import org.apache.solr.client.solrj.impl.CloudSolrClient;
-import org.apache.solr.client.solrj.request.UpdateRequest;
-import org.apache.solr.client.solrj.response.UpdateResponse;
-import org.apache.solr.common.SolrException;
-import org.apache.solr.common.SolrInputDocument;
-import org.apache.solr.update.CdcrUpdateLog;
-import org.apache.solr.update.UpdateLog;
-import org.apache.solr.update.processor.CdcrUpdateProcessor;
-import org.slf4j.Logger;
-import org.slf4j.LoggerFactory;
-
-import static org.apache.solr.common.params.CommonParams.VERSION_FIELD;
-
-/**
- * The replication logic. Given a {@link org.apache.solr.handler.CdcrReplicatorState}, it reads all the new entries
- * in the update log and forwards them to the target cluster. If an error occurs, replication is stopped and
- * will be retried later.
- * @deprecated since 8.6
- */
-@Deprecated(since = "8.6")
-public class CdcrReplicator implements Runnable {
-
-  private final CdcrReplicatorState state;
-  private final int batchSize;
-
-  private static final Logger log = LoggerFactory.getLogger(MethodHandles.lookup().lookupClass());
-
-  public CdcrReplicator(CdcrReplicatorState state, int batchSize) {
-    this.state = state;
-    this.batchSize = batchSize;
-  }
-
-  @Override
-  public void run() {
-    CdcrUpdateLog.CdcrLogReader logReader = state.getLogReader();
-    CdcrUpdateLog.CdcrLogReader subReader = null;
-    if (logReader == null) {
-      log.warn("Log reader for target {} is not initialised, it will be ignored.", state.getTargetCollection());
-      return;
-    }
-
-    try {
-      // create update request
-      UpdateRequest req = new UpdateRequest();
-      // Add the param to indicate the {@link CdcrUpdateProcessor} to keep the provided version number
-      req.setParam(CdcrUpdateProcessor.CDCR_UPDATE, "");
-
-      // Start the benchmark timer
-      state.getBenchmarkTimer().start();
-
-      long counter = 0;
-      subReader = logReader.getSubReader();
-
-      for (int i = 0; i < batchSize; i++) {
-        Object o = subReader.next();
-        if (o == null) break; // we have reached the end of the update logs, we should close the batch
-
-        if (isTargetCluster(o)) {
-          continue;
-        }
-
-        if (isDelete(o)) {
-
-          /*
-          * Deletes are sent one at a time.
-          */
-
-          // First send out current batch of SolrInputDocument, the non-deletes.
-          List<SolrInputDocument> docs = req.getDocuments();
-
-          if (docs != null && docs.size() > 0) {
-            subReader.resetToLastPosition(); // Push back the delete for now.
-            this.sendRequest(req); // Send the batch update request
-            logReader.forwardSeek(subReader); // Advance the main reader to just before the delete.
-            o = subReader.next(); // Read the delete again
-            counter += docs.size();
-            req.clear();
-          }
-
-          // Process Delete
-          this.processUpdate(o, req);
-          this.sendRequest(req);
-          logReader.forwardSeek(subReader);
-          counter++;
-          req.clear();
-
-        } else {
-
-          this.processUpdate(o, req);
-
-        }
-      }
-
-      //Send the final batch out.
-      List<SolrInputDocument> docs = req.getDocuments();
-
-      if ((docs != null && docs.size() > 0)) {
-        this.sendRequest(req);
-        counter += docs.size();
-      }
-
-      // we might have read a single commit operation and reached the end of the update logs
-      logReader.forwardSeek(subReader);
-
-      if (log.isInfoEnabled()) {
-        log.info("Forwarded {} updates to target {}", counter, state.getTargetCollection());
-      }
-    } catch (Exception e) {
-      // report error and update error stats
-      this.handleException(e);
-    } finally {
-      // stop the benchmark timer
-      state.getBenchmarkTimer().stop();
-      // ensure that the subreader is closed and the associated pointer is removed
-      if (subReader != null) subReader.close();
-    }
-  }
-
-  private void sendRequest(UpdateRequest req) throws IOException, SolrServerException, CdcrReplicatorException {
-    UpdateResponse rsp = req.process(state.getClient());
-    if (rsp.getStatus() != 0) {
-      throw new CdcrReplicatorException(req, rsp);
-    }
-    state.resetConsecutiveErrors();
-  }
-
-  /** Check whether the update read from the TLog was received from the source cluster
-   *  or via a Solr client.
-   */
-  private boolean isTargetCluster(Object o) {
-    @SuppressWarnings({"rawtypes"})
-    List entry = (List) o;
-    int operationAndFlags = (Integer) entry.get(0);
-    int oper = operationAndFlags & UpdateLog.OPERATION_MASK;
-    Boolean isTarget = false;
-    if (oper == UpdateLog.DELETE_BY_QUERY ||  oper == UpdateLog.DELETE) {
-      if (entry.size() == 4) { //back-compat - skip for previous versions
-        isTarget = (Boolean) entry.get(entry.size() - 1);
-      }
-    } else if (oper == UpdateLog.UPDATE_INPLACE) {
-      if (entry.size() == 6) { //back-compat - skip for previous versions
-        isTarget = (Boolean) entry.get(entry.size() - 2);
-      }
-    } else if (oper == UpdateLog.ADD) {
-      if (entry.size() == 4) { //back-compat - skip for previous versions
-        isTarget = (Boolean) entry.get(entry.size() - 2);
-      }
-    }
-    return isTarget;
-  }
-
-  private boolean isDelete(Object o) {
-    @SuppressWarnings({"rawtypes"})
-    List entry = (List) o;
-    int operationAndFlags = (Integer) entry.get(0);
-    int oper = operationAndFlags & UpdateLog.OPERATION_MASK;
-    return oper == UpdateLog.DELETE_BY_QUERY || oper == UpdateLog.DELETE;
-  }
-
-  private void handleException(Exception e) {
-    if (e instanceof CdcrReplicatorException) {
-      UpdateRequest req = ((CdcrReplicatorException) e).req;
-      UpdateResponse rsp = ((CdcrReplicatorException) e).rsp;
-      log.warn("Failed to forward update request {} to target: {}. Got response {}", req, state.getTargetCollection(), rsp);
-      state.reportError(CdcrReplicatorState.ErrorType.BAD_REQUEST);
-    } else if (e instanceof CloudSolrClient.RouteException) {
-      log.warn("Failed to forward update request to target: {}", state.getTargetCollection(), e);
-      state.reportError(CdcrReplicatorState.ErrorType.BAD_REQUEST);
-    } else {
-      log.warn("Failed to forward update request to target: {}", state.getTargetCollection(), e);
-      state.reportError(CdcrReplicatorState.ErrorType.INTERNAL);
-    }
-  }
-
-  private UpdateRequest processUpdate(Object o, UpdateRequest req) {
-
-    // should currently be a List<Oper,Ver,Doc/Id>
-    @SuppressWarnings({"rawtypes"})
-    List entry = (List) o;
-
-    int operationAndFlags = (Integer) entry.get(0);
-    int oper = operationAndFlags & UpdateLog.OPERATION_MASK;
-    long version = (Long) entry.get(1);
-
-    // record the operation in the benchmark timer
-    state.getBenchmarkTimer().incrementCounter(oper);
-
-    switch (oper) {
-      case UpdateLog.ADD: {
-        // the version is already attached to the document
-        SolrInputDocument sdoc = (SolrInputDocument) entry.get(entry.size() - 1);
-        req.add(sdoc);
-        return req;
-      }
-      case UpdateLog.DELETE: {
-        byte[] idBytes = (byte[]) entry.get(2);
-        req.deleteById(new String(idBytes, Charset.forName("UTF-8")));
-        req.setParam(VERSION_FIELD, Long.toString(version));
-        return req;
-      }
-
-      case UpdateLog.DELETE_BY_QUERY: {
-        String query = (String) entry.get(2);
-        req.deleteByQuery(query);
-        req.setParam(VERSION_FIELD, Long.toString(version));
-        return req;
-      }
-
-      case UpdateLog.COMMIT: {
-        return null;
-      }
-
-      default:
-        throw new SolrException(SolrException.ErrorCode.SERVER_ERROR, "Unknown Operation! " + oper);
-    }
-  }
-
-  /**
-   * Exception to catch update request issues with the target cluster.
-   */
-  public static class CdcrReplicatorException extends Exception {
-
-    private final UpdateRequest req;
-    private final UpdateResponse rsp;
-
-    public CdcrReplicatorException(UpdateRequest req, UpdateResponse rsp) {
-      this.req = req;
-      this.rsp = rsp;
-    }
-
-  }
-
-}
-
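
The batching logic in the deleted run() above follows one rule: adds accumulate into a single request, but a delete first flushes any pending adds and is then sent on its own, so the add/delete order of the update log is preserved on the target. A condensed sketch with hypothetical Op/Sender types (a Java 16+ record is used for brevity):

```java
import java.util.ArrayList;
import java.util.List;

final class BatchForwarderSketch {
  interface Sender { void send(List<Op> batch); }
  record Op(boolean isDelete, Object payload) {}

  static void forward(List<Op> ops, Sender sender) {
    List<Op> pendingAdds = new ArrayList<>();
    for (Op op : ops) {
      if (op.isDelete()) {
        if (!pendingAdds.isEmpty()) {       // flush adds seen so far first
          sender.send(List.copyOf(pendingAdds));
          pendingAdds.clear();
        }
        sender.send(List.of(op));           // deletes are sent one at a time
      } else {
        pendingAdds.add(op);
      }
    }
    if (!pendingAdds.isEmpty()) sender.send(pendingAdds); // final batch
  }
}
```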
diff --git a/solr/core/src/java/org/apache/solr/handler/CdcrReplicatorManager.java b/solr/core/src/java/org/apache/solr/handler/CdcrReplicatorManager.java
deleted file mode 100644
index 1f9d1f9..0000000
--- a/solr/core/src/java/org/apache/solr/handler/CdcrReplicatorManager.java
+++ /dev/null
@@ -1,441 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.solr.handler;
-
-import java.io.Closeable;
-import java.io.IOException;
-import java.lang.invoke.MethodHandles;
-import java.util.ArrayList;
-import java.util.Collections;
-import java.util.List;
-import java.util.Locale;
-import java.util.Map;
-import java.util.Optional;
-import java.util.concurrent.Callable;
-import java.util.concurrent.ExecutorService;
-import java.util.concurrent.TimeUnit;
-
-import org.apache.http.client.HttpClient;
-import org.apache.solr.client.solrj.SolrClient;
-import org.apache.solr.client.solrj.SolrRequest;
-import org.apache.solr.client.solrj.SolrServerException;
-import org.apache.solr.client.solrj.impl.CloudSolrClient;
-import org.apache.solr.client.solrj.impl.CloudSolrClient.Builder;
-import org.apache.solr.client.solrj.impl.HttpSolrClient;
-import org.apache.solr.client.solrj.request.QueryRequest;
-import org.apache.solr.common.SolrException;
-import org.apache.solr.common.cloud.Replica;
-import org.apache.solr.common.cloud.ZkCoreNodeProps;
-import org.apache.solr.common.params.CommonParams;
-import org.apache.solr.common.params.ModifiableSolrParams;
-import org.apache.solr.common.params.SolrParams;
-import org.apache.solr.common.util.ExecutorUtil;
-import org.apache.solr.common.util.IOUtils;
-import org.apache.solr.common.util.NamedList;
-import org.apache.solr.common.util.SolrNamedThreadFactory;
-import org.apache.solr.common.util.TimeSource;
-import org.apache.solr.core.SolrCore;
-import org.apache.solr.update.CdcrUpdateLog;
-import org.apache.solr.util.TimeOut;
-import org.slf4j.Logger;
-import org.slf4j.LoggerFactory;
-
-import static org.apache.solr.handler.admin.CoreAdminHandler.RESPONSE_STATUS;
-
-@Deprecated(since = "8.6")
-class CdcrReplicatorManager implements CdcrStateManager.CdcrStateObserver {
-
-  private static final int MAX_BOOTSTRAP_ATTEMPTS = 5;
-  private static final int BOOTSTRAP_RETRY_DELAY_MS = 2000;
-  // 6 hours is hopefully long enough for most indexes
-  private static final long BOOTSTRAP_TIMEOUT_SECONDS = 6L * 3600L;
-
-  private List<CdcrReplicatorState> replicatorStates;
-
-  private final CdcrReplicatorScheduler scheduler;
-  private CdcrProcessStateManager processStateManager;
-  private CdcrLeaderStateManager leaderStateManager;
-
-  private SolrCore core;
-  private String path;
-
-  private ExecutorService bootstrapExecutor;
-  private volatile BootstrapStatusRunnable bootstrapStatusRunnable;
-
-  private static final Logger log = LoggerFactory.getLogger(MethodHandles.lookup().lookupClass());
-
-  CdcrReplicatorManager(final SolrCore core, String path,
-                        SolrParams replicatorConfiguration,
-                        Map<String, List<SolrParams>> replicasConfiguration) {
-    this.core = core;
-    this.path = path;
-
-    // create states
-    replicatorStates = new ArrayList<>();
-    String myCollection = core.getCoreDescriptor().getCloudDescriptor().getCollectionName();
-    List<SolrParams> targets = replicasConfiguration.get(myCollection);
-    if (targets != null) {
-      for (SolrParams params : targets) {
-        String zkHost = params.get(CdcrParams.ZK_HOST_PARAM);
-        String targetCollection = params.get(CdcrParams.TARGET_COLLECTION_PARAM);
-
-        CloudSolrClient client = new Builder(Collections.singletonList(zkHost), Optional.empty())
-            .withSocketTimeout(30000).withConnectionTimeout(15000)
-            .sendUpdatesOnlyToShardLeaders()
-            .build();
-        client.setDefaultCollection(targetCollection);
-        replicatorStates.add(new CdcrReplicatorState(targetCollection, zkHost, client));
-      }
-    }
-
-    this.scheduler = new CdcrReplicatorScheduler(this, replicatorConfiguration);
-  }
-
-  void setProcessStateManager(final CdcrProcessStateManager processStateManager) {
-    this.processStateManager = processStateManager;
-    this.processStateManager.register(this);
-  }
-
-  void setLeaderStateManager(final CdcrLeaderStateManager leaderStateManager) {
-    this.leaderStateManager = leaderStateManager;
-    this.leaderStateManager.register(this);
-  }
-
-  /**
-   * <p>
- * Inform the replicator manager of a change of state, and tell it to update its own state.
-   * </p>
-   * <p>
-   * If we are the leader and the process state is STARTED, we need to initialise the log readers and start the
- * scheduled thread pool.
-   * Otherwise, if the process state is STOPPED or if we are not the leader, we need to close the log readers and stop
-   * the thread pool.
-   * </p>
-   * <p>
-   * This method is synchronised as it can both be called by the leaderStateManager and the processStateManager.
-   * </p>
-   */
-  @Override
-  public synchronized void stateUpdate() {
-    if (leaderStateManager.amILeader() && processStateManager.getState().equals(CdcrParams.ProcessState.STARTED)) {
-      if (replicatorStates.size() > 0)  {
-        this.bootstrapExecutor = ExecutorUtil.newMDCAwareFixedThreadPool(replicatorStates.size(),
-            new SolrNamedThreadFactory("cdcr-bootstrap-status"));
-      }
-      this.initLogReaders();
-      this.scheduler.start();
-      return;
-    }
-
-    this.scheduler.shutdown();
-    if (bootstrapExecutor != null)  {
-      IOUtils.closeQuietly(bootstrapStatusRunnable);
-      ExecutorUtil.shutdownAndAwaitTermination(bootstrapExecutor);
-    }
-    this.closeLogReaders();
-    @SuppressWarnings({"rawtypes"})
-    Callable callable = core.getSolrCoreState().getCdcrBootstrapCallable();
-    if (callable != null)  {
-      CdcrRequestHandler.BootstrapCallable bootstrapCallable = (CdcrRequestHandler.BootstrapCallable) callable;
-      IOUtils.closeQuietly(bootstrapCallable);
-    }
-  }
-
-  List<CdcrReplicatorState> getReplicatorStates() {
-    return replicatorStates;
-  }
-
-  private void initLogReaders() {
-    String collectionName = core.getCoreDescriptor().getCloudDescriptor().getCollectionName();
-    String shard = core.getCoreDescriptor().getCloudDescriptor().getShardId();
-    CdcrUpdateLog ulog = (CdcrUpdateLog) core.getUpdateHandler().getUpdateLog();
-
-    for (CdcrReplicatorState state : replicatorStates) {
-      state.closeLogReader();
-      try {
-        long checkpoint = this.getCheckpoint(state);
-        if (log.isInfoEnabled()) {
-          log.info("Create new update log reader for target {} with checkpoint {} @ {}:{}", state.getTargetCollection(),
-              checkpoint, collectionName, shard);
-        }
-        CdcrUpdateLog.CdcrLogReader reader = ulog.newLogReader();
-        boolean seek = reader.seek(checkpoint);
-        state.init(reader);
-        if (!seek) {
-          // targetVersion is lower than the oldest known entry.
-          // In this scenario, it probably means that there is a gap in the updates log.
-          // the best we can do here is to bootstrap the target leader by replicating the full index
-          final String targetCollection = state.getTargetCollection();
-          state.setBootstrapInProgress(true);
-          log.info("Attempting to bootstrap target collection: {}, shard: {}", targetCollection, shard);
-          bootstrapStatusRunnable = new BootstrapStatusRunnable(core, state);
-          log.info("Submitting bootstrap task to executor");
-          try {
-            bootstrapExecutor.submit(bootstrapStatusRunnable);
-          } catch (Exception e) {
-            log.error("Unable to submit bootstrap call to executor", e);
-          }
-        }
-      } catch (IOException | SolrServerException | SolrException e) {
-        log.warn("Unable to instantiate the log reader for target collection {}", state.getTargetCollection(), e);
-      } catch (InterruptedException e) {
-        log.warn("Thread interrupted while instantiate the log reader for target collection {}", state.getTargetCollection(), e);
-        Thread.currentThread().interrupt();
-      }
-    }
-  }
-
-  private long getCheckpoint(CdcrReplicatorState state) throws IOException, SolrServerException {
-    ModifiableSolrParams params = new ModifiableSolrParams();
-    params.set(CommonParams.ACTION, CdcrParams.CdcrAction.COLLECTIONCHECKPOINT.toString());
-
-    @SuppressWarnings({"rawtypes"})
-    SolrRequest request = new QueryRequest(params);
-    request.setPath(path);
-
-    @SuppressWarnings({"rawtypes"})
-    NamedList response = state.getClient().request(request);
-    return (Long) response.get(CdcrParams.CHECKPOINT);
-  }
-
-  void closeLogReaders() {
-    for (CdcrReplicatorState state : replicatorStates) {
-      state.closeLogReader();
-    }
-  }
-
-  /**
-   * Shutdown all the {@link org.apache.solr.handler.CdcrReplicatorState} by closing their
-   * {@link org.apache.solr.client.solrj.impl.CloudSolrClient} and
-   * {@link org.apache.solr.update.CdcrUpdateLog.CdcrLogReader}.
-   */
-  void shutdown() {
-    this.scheduler.shutdown();
-    if (bootstrapExecutor != null)  {
-      IOUtils.closeQuietly(bootstrapStatusRunnable);
-      ExecutorUtil.shutdownAndAwaitTermination(bootstrapExecutor);
-    }
-    for (CdcrReplicatorState state : replicatorStates) {
-      state.shutdown();
-    }
-    replicatorStates.clear();
-  }
-
-  private class BootstrapStatusRunnable implements Runnable, Closeable {
-    private final CdcrReplicatorState state;
-    private final String targetCollection;
-    private final String shard;
-    private final String collectionName;
-    private final CdcrUpdateLog ulog;
-    private final String myCoreUrl;
-
-    private volatile boolean closed = false;
-
-    BootstrapStatusRunnable(SolrCore core, CdcrReplicatorState state) {
-      this.collectionName = core.getCoreDescriptor().getCloudDescriptor().getCollectionName();
-      this.shard = core.getCoreDescriptor().getCloudDescriptor().getShardId();
-      this.ulog = (CdcrUpdateLog) core.getUpdateHandler().getUpdateLog();
-      this.state = state;
-      this.targetCollection = state.getTargetCollection();
-      String baseUrl = core.getCoreContainer().getZkController().getBaseUrl();
-      this.myCoreUrl = ZkCoreNodeProps.getCoreUrl(baseUrl, core.getName());
-    }
-
-    @Override
-    public void close() throws IOException {
-      closed = true;
-      try {
-        Replica leader = state.getClient().getZkStateReader().getLeaderRetry(targetCollection, shard, 30000); // assume same shard exists on target
-        String leaderCoreUrl = leader.getCoreUrl();
-        HttpClient httpClient = state.getClient().getLbClient().getHttpClient();
-        try (HttpSolrClient client = new HttpSolrClient.Builder(leaderCoreUrl).withHttpClient(httpClient).build()) {
-          sendCdcrCommand(client, CdcrParams.CdcrAction.CANCEL_BOOTSTRAP);
-        } catch (SolrServerException e) {
-          log.error("Error sending cancel bootstrap message to target collection: {} shard: {} leader: {}",
-              targetCollection, shard, leaderCoreUrl);
-        }
-      } catch (InterruptedException e) {
-        log.error("Interrupted while closing BootstrapStatusRunnable", e);
-        Thread.currentThread().interrupt();
-      }
-    }
-
-    @Override
-    public void run() {
-      int retries = 1;
-      boolean success = false;
-      try {
-        while (!closed && sendBootstrapCommand() != BootstrapStatus.SUBMITTED)  {
-          Thread.sleep(BOOTSTRAP_RETRY_DELAY_MS);
-        }
-        TimeOut timeOut = new TimeOut(BOOTSTRAP_TIMEOUT_SECONDS, TimeUnit.SECONDS, TimeSource.NANO_TIME);
-        while (!timeOut.hasTimedOut()) {
-          if (closed) {
-            log.warn("Cancelling waiting for bootstrap on target: {} shard: {} to complete", targetCollection, shard);
-            state.setBootstrapInProgress(false);
-            break;
-          }
-          BootstrapStatus status = getBootstrapStatus();
-          if (status == BootstrapStatus.RUNNING) {
-            try {
-              log.info("CDCR bootstrap running for {} seconds, sleeping for {} ms",
-                  BOOTSTRAP_TIMEOUT_SECONDS - timeOut.timeLeft(TimeUnit.SECONDS), BOOTSTRAP_RETRY_DELAY_MS);
-              timeOut.sleep(BOOTSTRAP_RETRY_DELAY_MS);
-            } catch (InterruptedException e) {
-              Thread.currentThread().interrupt();
-            }
-          } else if (status == BootstrapStatus.COMPLETED) {
-            log.info("CDCR bootstrap successful in {} seconds", BOOTSTRAP_TIMEOUT_SECONDS - timeOut.timeLeft(TimeUnit.SECONDS));
-            long checkpoint = CdcrReplicatorManager.this.getCheckpoint(state);
-            if (log.isInfoEnabled()) {
-              log.info("Create new update log reader for target {} with checkpoint {} @ {}:{}", state.getTargetCollection(),
-                  checkpoint, collectionName, shard);
-            }
-            CdcrUpdateLog.CdcrLogReader reader1 = ulog.newLogReader();
-            reader1.seek(checkpoint);
-            success = true;
-            break;
-          } else if (status == BootstrapStatus.FAILED) {
-            log.warn("CDCR bootstrap failed in {} seconds", BOOTSTRAP_TIMEOUT_SECONDS - timeOut.timeLeft(TimeUnit.SECONDS));
-            // let's retry a fixed number of times before giving up
-            if (retries >= MAX_BOOTSTRAP_ATTEMPTS) {
-              log.error("Unable to bootstrap the target collection: {}, shard: {} even after {} retries", targetCollection, shard, retries);
-              break;
-            } else {
-              log.info("Retry: {} - Attempting to bootstrap target collection: {} shard: {}", retries, targetCollection, shard);
-              while (!closed && sendBootstrapCommand() != BootstrapStatus.SUBMITTED)  {
-                Thread.sleep(BOOTSTRAP_RETRY_DELAY_MS);
-              }
-              timeOut = new TimeOut(BOOTSTRAP_TIMEOUT_SECONDS, TimeUnit.SECONDS, TimeSource.NANO_TIME); // reset the timer
-              retries++;
-            }
-          } else if (status == BootstrapStatus.NOTFOUND || status == BootstrapStatus.CANCELLED) {
-            if (log.isInfoEnabled()) {
-              log.info("CDCR bootstrap {} in {} seconds"
-                  , (status == BootstrapStatus.NOTFOUND ? "not found" : "cancelled")
-                  , BOOTSTRAP_TIMEOUT_SECONDS - timeOut.timeLeft(TimeUnit.SECONDS));
-            }
-            // the leader of the target shard may have changed and therefore there is no record of the
-            // bootstrap process so we must retry the operation
-            while (!closed && sendBootstrapCommand() != BootstrapStatus.SUBMITTED)  {
-              Thread.sleep(BOOTSTRAP_RETRY_DELAY_MS);
-            }
-            retries = 1;
-            timeOut = new TimeOut(BOOTSTRAP_TIMEOUT_SECONDS, TimeUnit.SECONDS, TimeSource.NANO_TIME); // reset the timer
-          } else if (status == BootstrapStatus.UNKNOWN || status == BootstrapStatus.SUBMITTED) {
-            if (log.isInfoEnabled()) {
-              log.info("CDCR bootstrap is {} {}", (status == BootstrapStatus.UNKNOWN ? "unknown" : "submitted"),
-                  BOOTSTRAP_TIMEOUT_SECONDS - timeOut.timeLeft(TimeUnit.SECONDS));
-            }
-            // we were not able to query the status on the remote end
-            // so just sleep for a bit and try again
-            timeOut.sleep(BOOTSTRAP_RETRY_DELAY_MS);
-          }
-        }
-      } catch (InterruptedException e) {
-        log.info("Bootstrap thread interrupted");
-        state.reportError(CdcrReplicatorState.ErrorType.INTERNAL);
-        Thread.currentThread().interrupt();
-      } catch (IOException | SolrServerException | SolrException e) {
-        log.error("Unable to bootstrap the target collection {} shard: {}", targetCollection, shard, e);
-        state.reportError(CdcrReplicatorState.ErrorType.BAD_REQUEST);
-      } finally {
-        if (success) {
-          log.info("Bootstrap successful, giving the go-ahead to replicator");
-          state.setBootstrapInProgress(false);
-        }
-      }
-    }
-
-    private BootstrapStatus sendBootstrapCommand() throws InterruptedException {
-      Replica leader = state.getClient().getZkStateReader().getLeaderRetry(targetCollection, shard, 30000); // assume same shard exists on target
-      String leaderCoreUrl = leader.getCoreUrl();
-      HttpClient httpClient = state.getClient().getLbClient().getHttpClient();
-      try (HttpSolrClient client = new HttpSolrClient.Builder(leaderCoreUrl).withHttpClient(httpClient).build()) {
-        log.info("Attempting to bootstrap target collection: {} shard: {} leader: {}", targetCollection, shard, leaderCoreUrl);
-        try {
-          @SuppressWarnings({"rawtypes"})
-          NamedList response = sendCdcrCommand(client, CdcrParams.CdcrAction.BOOTSTRAP, ReplicationHandler.MASTER_URL, myCoreUrl);
-          log.debug("CDCR Bootstrap response: {}", response);
-          String status = response.get(RESPONSE_STATUS).toString();
-          return BootstrapStatus.valueOf(status.toUpperCase(Locale.ROOT));
-        } catch (Exception e) {
-          log.error("Exception submitting bootstrap request", e);
-          return BootstrapStatus.UNKNOWN;
-        }
-      } catch (IOException e) {
-        log.error("There shouldn't be an IOException while closing but there was!", e);
-      }
-      return BootstrapStatus.UNKNOWN;
-    }
-
-    private BootstrapStatus getBootstrapStatus() throws InterruptedException {
-      try {
-        Replica leader = state.getClient().getZkStateReader().getLeaderRetry(targetCollection, shard, 30000); // assume same shard exists on target
-        String leaderCoreUrl = leader.getCoreUrl();
-        HttpClient httpClient = state.getClient().getLbClient().getHttpClient();
-        try (HttpSolrClient client = new HttpSolrClient.Builder(leaderCoreUrl).withHttpClient(httpClient).build()) {
-          @SuppressWarnings({"rawtypes"})
-          NamedList response = sendCdcrCommand(client, CdcrParams.CdcrAction.BOOTSTRAP_STATUS);
-          String status = (String) response.get(RESPONSE_STATUS);
-          BootstrapStatus bootstrapStatus = BootstrapStatus.valueOf(status.toUpperCase(Locale.ROOT));
-          switch (bootstrapStatus) {
-            case RUNNING:
-            case COMPLETED:
-            case FAILED:
-            case CANCELLED:
-              return bootstrapStatus;
-            case NOTFOUND:
-              log.warn("Bootstrap process was not found on target collection: {} shard: {}, leader: {}", targetCollection, shard, leaderCoreUrl);
-              return bootstrapStatus;
-            default: // SUBMITTED or UNKNOWN are never reported by BOOTSTRAP_STATUS
-              throw new SolrException(SolrException.ErrorCode.SERVER_ERROR,
-                  "Unknown status: " + status + " returned by BOOTSTRAP_STATUS command");
-          }
-        }
-      } catch (Exception e) {
-        log.error("Exception during bootstrap status request", e);
-        return BootstrapStatus.UNKNOWN;
-      }
-    }
-  }
-
-  @SuppressWarnings({"rawtypes"})
-  private NamedList sendCdcrCommand(SolrClient client, CdcrParams.CdcrAction action, String... params) throws SolrServerException, IOException {
-    ModifiableSolrParams solrParams = new ModifiableSolrParams();
-    solrParams.set(CommonParams.QT, "/cdcr");
-    solrParams.set(CommonParams.ACTION, action.toString());
-    for (int i = 0; i < params.length - 1; i+=2) {
-      solrParams.set(params[i], params[i + 1]);
-    }
-    SolrRequest request = new QueryRequest(solrParams);
-    return client.request(request);
-  }
-
-  private enum BootstrapStatus  {
-    SUBMITTED,
-    RUNNING,
-    COMPLETED,
-    FAILED,
-    NOTFOUND,
-    CANCELLED,
-    UNKNOWN
-  }
-}
-
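The retry loop deleted above is an instance of a generic poll-with-timeout pattern: submit a remote command, poll its status at a fixed delay, reset the timer on resubmission, and cap the number of attempts. A minimal, self-contained sketch of that pattern follows; the `Status` enum, `StatusSource` interface, and all constants are hypothetical stand-ins, not Solr API.

```java
import java.util.concurrent.TimeUnit;

// statuses mirroring the SUBMITTED/RUNNING/COMPLETED/FAILED values used above
enum Status { SUBMITTED, RUNNING, COMPLETED, FAILED }

// hypothetical remote operation: submit it, then poll its status
interface StatusSource {
  Status submit();
  Status poll();
}

final class BoundedRetryPoller {
  private static final long RETRY_DELAY_MS = 2_000;
  private static final long TIMEOUT_NANOS = TimeUnit.SECONDS.toNanos(60);
  private static final int MAX_ATTEMPTS = 5;

  /** Returns true once the operation completes, false after exhausting all attempts. */
  static boolean runToCompletion(StatusSource source) throws InterruptedException {
    for (int attempt = 1; attempt <= MAX_ATTEMPTS; attempt++) {
      while (source.submit() != Status.SUBMITTED) {
        Thread.sleep(RETRY_DELAY_MS);            // keep resubmitting until accepted
      }
      long deadline = System.nanoTime() + TIMEOUT_NANOS; // reset the timer per attempt
      while (System.nanoTime() < deadline) {
        Status status = source.poll();
        if (status == Status.COMPLETED) {
          return true;
        }
        if (status == Status.FAILED) {
          break;                                 // give this attempt up and resubmit
        }
        Thread.sleep(RETRY_DELAY_MS);            // still running or unknown: wait
      }
    }
    return false;
  }
}
```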
diff --git a/solr/core/src/java/org/apache/solr/handler/CdcrReplicatorScheduler.java b/solr/core/src/java/org/apache/solr/handler/CdcrReplicatorScheduler.java
deleted file mode 100644
index 1418465..0000000
--- a/solr/core/src/java/org/apache/solr/handler/CdcrReplicatorScheduler.java
+++ /dev/null
@@ -1,116 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.solr.handler;
-
-import org.apache.solr.common.params.SolrParams;
-import org.apache.solr.common.util.ExecutorUtil;
-import org.apache.solr.common.util.SolrNamedThreadFactory;
-import org.slf4j.Logger;
-import org.slf4j.LoggerFactory;
-
-import java.lang.invoke.MethodHandles;
-import java.util.concurrent.ConcurrentLinkedQueue;
-import java.util.concurrent.ExecutorService;
-import java.util.concurrent.Executors;
-import java.util.concurrent.ScheduledExecutorService;
-import java.util.concurrent.TimeUnit;
-
-/**
- * Schedules the execution of the {@link org.apache.solr.handler.CdcrReplicator} threads at
- * a regular time interval. It relies on a queue of {@link org.apache.solr.handler.CdcrReplicatorState}
- * to guarantee that a given {@link org.apache.solr.handler.CdcrReplicatorState} is never used by two
- * threads at the same time.
- */
-class CdcrReplicatorScheduler {
-
-  private boolean isStarted = false;
-
-  private ScheduledExecutorService scheduler;
-  private ExecutorService replicatorsPool;
-
-  private final CdcrReplicatorManager replicatorManager;
-  private final ConcurrentLinkedQueue<CdcrReplicatorState> statesQueue;
-
-  private int poolSize = DEFAULT_POOL_SIZE;
-  private int timeSchedule = DEFAULT_TIME_SCHEDULE;
-  private int batchSize = DEFAULT_BATCH_SIZE;
-
-  private static final int DEFAULT_POOL_SIZE = 2;
-  private static final int DEFAULT_TIME_SCHEDULE = 10;
-  private static final int DEFAULT_BATCH_SIZE = 128;
-
-  private static final Logger log = LoggerFactory.getLogger(MethodHandles.lookup().lookupClass());
-
-  CdcrReplicatorScheduler(final CdcrReplicatorManager replicatorStatesManager, final SolrParams replicatorConfiguration) {
-    this.replicatorManager = replicatorStatesManager;
-    this.statesQueue = new ConcurrentLinkedQueue<>(replicatorManager.getReplicatorStates());
-    if (replicatorConfiguration != null) {
-      poolSize = replicatorConfiguration.getInt(CdcrParams.THREAD_POOL_SIZE_PARAM, DEFAULT_POOL_SIZE);
-      timeSchedule = replicatorConfiguration.getInt(CdcrParams.SCHEDULE_PARAM, DEFAULT_TIME_SCHEDULE);
-      batchSize = replicatorConfiguration.getInt(CdcrParams.BATCH_SIZE_PARAM, DEFAULT_BATCH_SIZE);
-    }
-  }
-
-  void start() {
-    if (!isStarted) {
-      scheduler = Executors.newSingleThreadScheduledExecutor(new SolrNamedThreadFactory("cdcr-scheduler"));
-      replicatorsPool = ExecutorUtil.newMDCAwareFixedThreadPool(poolSize, new SolrNamedThreadFactory("cdcr-replicator"));
-
-      // the scheduler thread runs every timeSchedule milliseconds (10 ms by default)
-      // and submits one replication task per state available in the queue
-      scheduler.scheduleWithFixedDelay(() -> {
-        int nCandidates = statesQueue.size();
-        for (int i = 0; i < nCandidates; i++) {
-          // each task polls one state from the queue, executes the replication,
-          // and pushes the state back into the queue once the task is completed
-          replicatorsPool.execute(() -> {
-            CdcrReplicatorState state = statesQueue.poll();
-            assert state != null; // Should never happen
-            try {
-              if (!state.isBootstrapInProgress()) {
-                new CdcrReplicator(state, batchSize).run();
-              } else  {
-                if (log.isDebugEnabled()) {
-                  log.debug("Replicator state is bootstrapping, skipping replication for target collection {}", state.getTargetCollection());
-                }
-              }
-            } finally {
-              statesQueue.offer(state);
-            }
-          });
-
-        }
-      }, 0, timeSchedule, TimeUnit.MILLISECONDS);
-      isStarted = true;
-    }
-  }
-
-  void shutdown() {
-    if (isStarted) {
-      // interrupts are often dangerous in Lucene / Solr code, but without
-      // them the tests for this class will leak threads
-      replicatorsPool.shutdown();
-      try {
-        replicatorsPool.awaitTermination(60, TimeUnit.SECONDS);
-      } catch (InterruptedException e) {
-        log.warn("Thread interrupted while waiting for CDCR replicator threadpool close.");
-        Thread.currentThread().interrupt();
-      } finally {
-        scheduler.shutdownNow();
-        isStarted = false;
-      }
-    }
-  }
-
-}
-
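The scheduler deleted above pairs a single-threaded `ScheduledExecutorService` with a `ConcurrentLinkedQueue` so that a state is owned by at most one worker at a time: workers poll a state, do the work, and offer it back. A stripped-down sketch of the same pattern using only the JDK (the `QueueScheduler` name and generic `Consumer` work function are illustrative, not the Solr classes):

```java
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.function.Consumer;

final class QueueScheduler<T> {
  private final ConcurrentLinkedQueue<T> states = new ConcurrentLinkedQueue<>();
  private final ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
  private final ExecutorService workers = Executors.newFixedThreadPool(2);

  QueueScheduler(Iterable<T> initialStates) {
    initialStates.forEach(states::add);
  }

  /** Periodically submits one worker task per state currently sitting in the queue. */
  void start(long periodMs, Consumer<T> work) {
    scheduler.scheduleWithFixedDelay(() -> {
      int candidates = states.size();
      for (int i = 0; i < candidates; i++) {
        workers.execute(() -> {
          T state = states.poll();   // exclusive ownership while the task runs
          if (state == null) {
            return;                  // another worker drained the queue first
          }
          try {
            work.accept(state);
          } finally {
            states.offer(state);     // hand the state back for the next round
          }
        });
      }
    }, 0, periodMs, TimeUnit.MILLISECONDS);
  }

  void shutdown() {
    scheduler.shutdownNow();
    workers.shutdown();
  }
}
```

Because a state is removed from the queue for the duration of `work.accept`, two workers can never operate on the same state concurrently, which is exactly the invariant the deleted scheduler's javadoc promises.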
diff --git a/solr/core/src/java/org/apache/solr/handler/CdcrReplicatorState.java b/solr/core/src/java/org/apache/solr/handler/CdcrReplicatorState.java
deleted file mode 100644
index af9020a..0000000
--- a/solr/core/src/java/org/apache/solr/handler/CdcrReplicatorState.java
+++ /dev/null
@@ -1,299 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.solr.handler;
-
-import java.io.IOException;
-import java.lang.invoke.MethodHandles;
-import java.time.Instant;
-import java.util.ArrayList;
-import java.util.Date;
-import java.util.HashMap;
-import java.util.Iterator;
-import java.util.LinkedList;
-import java.util.List;
-import java.util.Locale;
-import java.util.Map;
-import java.util.concurrent.atomic.AtomicBoolean;
-import java.util.concurrent.atomic.AtomicInteger;
-
-import org.apache.solr.client.solrj.impl.CloudSolrClient;
-import org.apache.solr.update.CdcrUpdateLog;
-import org.apache.solr.update.UpdateLog;
-import org.slf4j.Logger;
-import org.slf4j.LoggerFactory;
-
-/**
- * The state of the replication with a target cluster.
- */
-class CdcrReplicatorState {
-
-  private final String targetCollection;
-  private final String zkHost;
-  private final CloudSolrClient targetClient;
-
-  private CdcrUpdateLog.CdcrLogReader logReader;
-
-  private long consecutiveErrors = 0;
-  private final Map<ErrorType, Long> errorCounters = new HashMap<>();
-  private final FixedQueue<ErrorQueueEntry> errorsQueue = new FixedQueue<>(100); // keep the last 100 errors
-
-  private BenchmarkTimer benchmarkTimer;
-
-  private static final Logger log = LoggerFactory.getLogger(MethodHandles.lookup().lookupClass());
-
-  private final AtomicBoolean bootstrapInProgress = new AtomicBoolean(false);
-  private final AtomicInteger numBootstraps = new AtomicInteger();
-
-  CdcrReplicatorState(final String targetCollection, final String zkHost, final CloudSolrClient targetClient) {
-    this.targetCollection = targetCollection;
-    this.targetClient = targetClient;
-    this.zkHost = zkHost;
-    this.benchmarkTimer = new BenchmarkTimer();
-  }
-
-  /**
-   * Initialise the replicator state with a {@link org.apache.solr.update.CdcrUpdateLog.CdcrLogReader}
-   * that is positioned at the last target cluster checkpoint.
-   */
-  void init(final CdcrUpdateLog.CdcrLogReader logReader) {
-    this.logReader = logReader;
-  }
-
-  void closeLogReader() {
-    if (logReader != null) {
-      logReader.close();
-      logReader = null;
-    }
-  }
-
-  CdcrUpdateLog.CdcrLogReader getLogReader() {
-    return logReader;
-  }
-
-  String getTargetCollection() {
-    return targetCollection;
-  }
-
-  String getZkHost() {
-    return zkHost;
-  }
-
-  CloudSolrClient getClient() {
-    return targetClient;
-  }
-
-  void shutdown() {
-    try {
-      targetClient.close();
-    } catch (IOException ioe) {
-      log.warn("Caught exception trying to close server: ", ioe);
-    }
-    this.closeLogReader();
-  }
-
-  void reportError(ErrorType error) {
-    errorCounters.merge(error, 1L, Long::sum);
-    errorsQueue.add(new ErrorQueueEntry(error, new Date()));
-    consecutiveErrors++;
-  }
-
-  void resetConsecutiveErrors() {
-    consecutiveErrors = 0;
-  }
-
-  /**
-   * Returns the number of consecutive errors encountered while trying to forward updates to the target.
-   */
-  long getConsecutiveErrors() {
-    return consecutiveErrors;
-  }
-
-  /**
-   * Gets the number of errors of a particular type.
-   */
-  long getErrorCount(ErrorType type) {
-    return errorCounters.getOrDefault(type, 0L);
-  }
-
-  /**
-   * Gets the last errors ordered by timestamp (most recent first)
-   */
-  List<String[]> getLastErrors() {
-    List<String[]> lastErrors = new ArrayList<>();
-    synchronized (errorsQueue) {
-      Iterator<ErrorQueueEntry> it = errorsQueue.iterator();
-      while (it.hasNext()) {
-        ErrorQueueEntry entry = it.next();
-        lastErrors.add(new String[]{entry.timestamp.toInstant().toString(), entry.type.toLower()});
-      }
-    }
-    return lastErrors;
-  }
-
-  /**
-   * Returns the timestamp of the last processed operation.
-   */
-  String getTimestampOfLastProcessedOperation() {
-    if (logReader != null && logReader.getLastVersion() != -1) {
-      // Shift the version number right by 20 bits to recover the epoch timestamp - see VersionInfo#getNewClock
-      return Instant.ofEpochMilli(logReader.getLastVersion() >> 20).toString();
-    }
-    return "";
-  }
-
-  /**
-   * Gets the benchmark timer.
-   */
-  BenchmarkTimer getBenchmarkTimer() {
-    return this.benchmarkTimer;
-  }
-
-  /**
-   * @return true if a bootstrap operation is in progress, false otherwise
-   */
-  boolean isBootstrapInProgress() {
-    return bootstrapInProgress.get();
-  }
-
-  void setBootstrapInProgress(boolean inProgress) {
-    // a true -> false transition means a bootstrap operation just completed
-    if (bootstrapInProgress.compareAndSet(true, false)) {
-      numBootstraps.incrementAndGet();
-    }
-    bootstrapInProgress.set(inProgress);
-  }
-
-  public int getNumBootstraps() {
-    return numBootstraps.get();
-  }
-
-  enum ErrorType {
-    INTERNAL,
-    BAD_REQUEST;
-
-    public String toLower() {
-      return toString().toLowerCase(Locale.ROOT);
-    }
-
-  }
-
-  static class BenchmarkTimer {
-
-    private long startTime;
-    private long runTime = 0;
-    private final Map<Integer, Long> opCounters = new HashMap<>();
-
-    /**
-     * Start recording time.
-     */
-    void start() {
-      startTime = System.nanoTime();
-    }
-
-    /**
-     * Stop recording time.
-     */
-    void stop() {
-      runTime += System.nanoTime() - startTime;
-      startTime = -1;
-    }
-
-    void incrementCounter(final int operationType) {
-      switch (operationType) {
-        case UpdateLog.ADD:
-        case UpdateLog.DELETE:
-        case UpdateLog.DELETE_BY_QUERY: {
-          opCounters.merge(operationType, 1L, Long::sum);
-          return;
-        }
-
-        default:
-      }
-    }
-
-    long getRunTime() {
-      long totalRunTime = runTime;
-      if (startTime != -1) { // we are currently recording the time
-        totalRunTime += System.nanoTime() - startTime;
-      }
-      return totalRunTime;
-    }
-
-    double getOperationsPerSecond() {
-      long total = 0;
-      for (long counter : opCounters.values()) {
-        total += counter;
-      }
-      double elapsedTimeInSeconds = ((double) this.getRunTime() / 1E9);
-      return total / elapsedTimeInSeconds;
-    }
-
-    double getAddsPerSecond() {
-      long total = opCounters.getOrDefault(UpdateLog.ADD, 0L);
-      double elapsedTimeInSeconds = ((double) this.getRunTime() / 1E9);
-      return total / elapsedTimeInSeconds;
-    }
-
-    double getDeletesPerSecond() {
-      long total = opCounters.getOrDefault(UpdateLog.DELETE, 0L);
-      total += opCounters.getOrDefault(UpdateLog.DELETE_BY_QUERY, 0L);
-      double elapsedTimeInSeconds = ((double) this.getRunTime() / 1E9);
-      return total / elapsedTimeInSeconds;
-    }
-
-  }
-
-  private static class ErrorQueueEntry {
-
-    private ErrorType type;
-    private Date timestamp;
-
-    private ErrorQueueEntry(ErrorType type, Date timestamp) {
-      this.type = type;
-      this.timestamp = timestamp;
-    }
-  }
-
-  // A bounded queue that keeps the most recent entries first; iteration must be
-  // synchronized externally on the instance (see getLastErrors).
-  private static class FixedQueue<E> extends LinkedList<E> {
-
-    private final int maxSize;
-
-    public FixedQueue(int maxSize) {
-      this.maxSize = maxSize;
-    }
-
-    @Override
-    public synchronized boolean add(E e) {
-      super.addFirst(e);
-      if (size() > maxSize) {
-        removeLast();
-      }
-      return true;
-    }
-  }
-
-}
-
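`getTimestampOfLastProcessedOperation` above exploits the layout of Solr version numbers: `VersionInfo#getNewClock` builds them by shifting an epoch-millisecond clock left by 20 bits, so shifting right by 20 bits recovers the wall-clock time. A tiny standalone illustration (hypothetical demo class, not Solr code):

```java
import java.time.Instant;

public class VersionClockDemo {
  public static void main(String[] args) {
    long nowMs = System.currentTimeMillis();
    // high bits carry the timestamp; the low 20 bits leave room for a counter
    long version = nowMs << 20;
    // shifting back right by 20 bits recovers the original millisecond
    Instant recovered = Instant.ofEpochMilli(version >> 20);
    System.out.println("original ms : " + nowMs);
    System.out.println("recovered   : " + recovered);
  }
}
```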
diff --git a/solr/core/src/java/org/apache/solr/handler/CdcrRequestHandler.java b/solr/core/src/java/org/apache/solr/handler/CdcrRequestHandler.java
deleted file mode 100644
index 8e77a84..0000000
--- a/solr/core/src/java/org/apache/solr/handler/CdcrRequestHandler.java
+++ /dev/null
@@ -1,880 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.solr.handler;
-
-import java.io.Closeable;
-import java.io.IOException;
-import java.lang.invoke.MethodHandles;
-import java.util.ArrayList;
-import java.util.Collection;
-import java.util.HashMap;
-import java.util.List;
-import java.util.Map;
-import java.util.Set;
-import java.util.concurrent.Callable;
-import java.util.concurrent.CancellationException;
-import java.util.concurrent.CountDownLatch;
-import java.util.concurrent.ExecutionException;
-import java.util.concurrent.ExecutorService;
-import java.util.concurrent.Future;
-import java.util.concurrent.RejectedExecutionException;
-import java.util.concurrent.TimeUnit;
-import java.util.concurrent.locks.Lock;
-import java.util.stream.Collectors;
-
-import org.apache.solr.client.solrj.SolrRequest;
-import org.apache.solr.client.solrj.SolrServerException;
-import org.apache.solr.client.solrj.impl.HttpSolrClient;
-import org.apache.solr.client.solrj.request.AbstractUpdateRequest;
-import org.apache.solr.client.solrj.request.QueryRequest;
-import org.apache.solr.client.solrj.request.UpdateRequest;
-import org.apache.solr.cloud.ZkController;
-import org.apache.solr.cloud.ZkShardTerms;
-import org.apache.solr.common.SolrException;
-import org.apache.solr.common.cloud.ClusterState;
-import org.apache.solr.common.cloud.DocCollection;
-import org.apache.solr.common.cloud.Replica;
-import org.apache.solr.common.cloud.Slice;
-import org.apache.solr.common.cloud.ZkCoreNodeProps;
-import org.apache.solr.common.cloud.ZkNodeProps;
-import org.apache.solr.common.params.CommonParams;
-import org.apache.solr.common.params.ModifiableSolrParams;
-import org.apache.solr.common.params.SolrParams;
-import org.apache.solr.common.params.UpdateParams;
-import org.apache.solr.common.util.ExecutorUtil;
-import org.apache.solr.common.util.IOUtils;
-import org.apache.solr.common.util.NamedList;
-import org.apache.solr.common.util.SolrNamedThreadFactory;
-import org.apache.solr.core.CloseHook;
-import org.apache.solr.core.PluginBag;
-import org.apache.solr.core.SolrCore;
-import org.apache.solr.request.SolrQueryRequest;
-import org.apache.solr.request.SolrRequestHandler;
-import org.apache.solr.response.SolrQueryResponse;
-import org.apache.solr.update.CdcrUpdateLog;
-import org.apache.solr.update.SolrCoreState;
-import org.apache.solr.update.UpdateLog;
-import org.apache.solr.update.VersionInfo;
-import org.apache.solr.update.processor.DistributedUpdateProcessor;
-import org.apache.solr.util.plugin.SolrCoreAware;
-import org.slf4j.Logger;
-import org.slf4j.LoggerFactory;
-
-import static org.apache.solr.handler.admin.CoreAdminHandler.COMPLETED;
-import static org.apache.solr.handler.admin.CoreAdminHandler.FAILED;
-import static org.apache.solr.handler.admin.CoreAdminHandler.RESPONSE;
-import static org.apache.solr.handler.admin.CoreAdminHandler.RESPONSE_MESSAGE;
-import static org.apache.solr.handler.admin.CoreAdminHandler.RESPONSE_STATUS;
-import static org.apache.solr.handler.admin.CoreAdminHandler.RUNNING;
-
-/**
- * <p>
- * This request handler implements the CDCR API and is responsible for the execution of the
- * {@link CdcrReplicator} threads.
- * </p>
- * <p>
- * It relies on three classes, {@link org.apache.solr.handler.CdcrLeaderStateManager},
- * {@link org.apache.solr.handler.CdcrBufferStateManager} and {@link org.apache.solr.handler.CdcrProcessStateManager}
- * to synchronise the state of the CDCR across all the nodes.
- * </p>
- * <p>
- * The CDCR process can be either {@link org.apache.solr.handler.CdcrParams.ProcessState#STOPPED} or {@link org.apache.solr.handler.CdcrParams.ProcessState#STARTED} by using the
- * actions {@link org.apache.solr.handler.CdcrParams.CdcrAction#STOP} and {@link org.apache.solr.handler.CdcrParams.CdcrAction#START} respectively. If a node is leader and the process
- * state is {@link org.apache.solr.handler.CdcrParams.ProcessState#STARTED}, the {@link CdcrReplicatorManager} will
- * start the {@link CdcrReplicator} threads. If a node becomes non-leader or if the process state becomes
- * {@link org.apache.solr.handler.CdcrParams.ProcessState#STOPPED}, the {@link CdcrReplicator} threads are stopped.
- * </p>
- * <p>
- * The CDCR can be switched to a "buffering" mode, in which the update log will never delete old transaction log
- * files. Such a mode can be enabled or disabled using the actions {@link org.apache.solr.handler.CdcrParams.CdcrAction#ENABLEBUFFER} and
- * {@link org.apache.solr.handler.CdcrParams.CdcrAction#DISABLEBUFFER} respectively.
- * </p>
- * <p>
- * Known limitations: The source and target clusters must have the same topology. Replication between clusters
- * with a different number of shards will likely result in an inconsistent index.
- * </p>
- * @deprecated since 8.6
- */
-@Deprecated(since = "8.6")
-public class CdcrRequestHandler extends RequestHandlerBase implements SolrCoreAware {
-
-  private static final Logger log = LoggerFactory.getLogger(MethodHandles.lookup().lookupClass());
-
-  private SolrCore core;
-  private String collection;
-  private String shard;
-  private String path;
-
-  private SolrParams updateLogSynchronizerConfiguration;
-  private SolrParams replicatorConfiguration;
-  private SolrParams bufferConfiguration;
-  private Map<String, List<SolrParams>> replicasConfiguration;
-
-  private CdcrProcessStateManager processStateManager;
-  private CdcrBufferStateManager bufferStateManager;
-  private CdcrReplicatorManager replicatorManager;
-  private CdcrLeaderStateManager leaderStateManager;
-  private CdcrUpdateLogSynchronizer updateLogSynchronizer;
-  private CdcrBufferManager bufferManager;
-
-  @Override
-  public void init(@SuppressWarnings({"rawtypes"})NamedList args) {
-    super.init(args);
-
-    log.warn("CDCR (in its current form) is deprecated as of 8.6 and shall be removed in 9.0. See SOLR-14022 for details.");
-
-    if (args != null) {
-      // Configuration of the Update Log Synchronizer
-      Object updateLogSynchronizerParam = args.get(CdcrParams.UPDATE_LOG_SYNCHRONIZER_PARAM);
-      if (updateLogSynchronizerParam instanceof NamedList) {
-        updateLogSynchronizerConfiguration = ((NamedList) updateLogSynchronizerParam).toSolrParams();
-      }
-
-      // Configuration of the Replicator
-      Object replicatorParam = args.get(CdcrParams.REPLICATOR_PARAM);
-      if (replicatorParam instanceof NamedList) {
-        replicatorConfiguration = ((NamedList) replicatorParam).toSolrParams();
-      }
-
-      // Configuration of the Buffer
-      Object bufferParam = args.get(CdcrParams.BUFFER_PARAM);
-      if (bufferParam instanceof NamedList) {
-        bufferConfiguration = ((NamedList) bufferParam).toSolrParams();
-      }
-
-      // Configuration of the Replicas
-      replicasConfiguration = new HashMap<>();
-      @SuppressWarnings({"rawtypes"})
-      List replicas = args.getAll(CdcrParams.REPLICA_PARAM);
-      for (Object replica : replicas) {
-        if (replica instanceof NamedList) {
-          SolrParams params = ((NamedList) replica).toSolrParams();
-          replicasConfiguration
-              .computeIfAbsent(params.get(CdcrParams.SOURCE_COLLECTION_PARAM), k -> new ArrayList<>())
-              .add(params);
-        }
-      }
-    }
-  }
-
-  @Override
-  public void handleRequestBody(SolrQueryRequest req, SolrQueryResponse rsp) throws Exception {
-    // Pick the action
-    SolrParams params = req.getParams();
-    CdcrParams.CdcrAction action = null;
-    String a = params.get(CommonParams.ACTION);
-    if (a != null) {
-      action = CdcrParams.CdcrAction.get(a);
-    }
-    if (action == null) {
-      throw new SolrException(SolrException.ErrorCode.BAD_REQUEST, "Unknown action: " + a);
-    }
-
-    switch (action) {
-      case START: {
-        this.handleStartAction(req, rsp);
-        break;
-      }
-      case STOP: {
-        this.handleStopAction(req, rsp);
-        break;
-      }
-      case STATUS: {
-        this.handleStatusAction(req, rsp);
-        break;
-      }
-      case COLLECTIONCHECKPOINT: {
-        this.handleCollectionCheckpointAction(req, rsp);
-        break;
-      }
-      case SHARDCHECKPOINT: {
-        this.handleShardCheckpointAction(req, rsp);
-        break;
-      }
-      case ENABLEBUFFER: {
-        this.handleEnableBufferAction(req, rsp);
-        break;
-      }
-      case DISABLEBUFFER: {
-        this.handleDisableBufferAction(req, rsp);
-        break;
-      }
-      case LASTPROCESSEDVERSION: {
-        this.handleLastProcessedVersionAction(req, rsp);
-        break;
-      }
-      case QUEUES: {
-        this.handleQueuesAction(req, rsp);
-        break;
-      }
-      case OPS: {
-        this.handleOpsAction(req, rsp);
-        break;
-      }
-      case ERRORS: {
-        this.handleErrorsAction(req, rsp);
-        break;
-      }
-      case BOOTSTRAP: {
-        this.handleBootstrapAction(req, rsp);
-        break;
-      }
-      case BOOTSTRAP_STATUS:  {
-        this.handleBootstrapStatus(req, rsp);
-        break;
-      }
-      case CANCEL_BOOTSTRAP:  {
-        this.handleCancelBootstrap(req, rsp);
-        break;
-      }
-      default: {
-        throw new RuntimeException("Unknown action: " + action);
-      }
-    }
-
-    rsp.setHttpCaching(false);
-  }
-
-  @Override
-  public void inform(SolrCore core) {
-    this.core = core;
-    collection = core.getCoreDescriptor().getCloudDescriptor().getCollectionName();
-    shard = core.getCoreDescriptor().getCloudDescriptor().getShardId();
-
-    // Make sure that the core is ZKAware
-    if (!core.getCoreContainer().isZooKeeperAware()) {
-      throw new SolrException(SolrException.ErrorCode.SERVER_ERROR,
-          "Solr instance is not running in SolrCloud mode.");
-    }
-
-    // Make sure that the core is using the CdcrUpdateLog implementation
-    if (!(core.getUpdateHandler().getUpdateLog() instanceof CdcrUpdateLog)) {
-      throw new SolrException(SolrException.ErrorCode.SERVER_ERROR,
-          "Solr instance is not configured with the cdcr update log.");
-    }
-
-    // Find the registered path of the handler
-    path = null;
-    for (Map.Entry<String, PluginBag.PluginHolder<SolrRequestHandler>> entry : core.getRequestHandlers().getRegistry().entrySet()) {
-      if (core.getRequestHandlers().isLoaded(entry.getKey()) && entry.getValue().get() == this) {
-        path = entry.getKey();
-        break;
-      }
-    }
-    if (path == null) {
-      throw new SolrException(SolrException.ErrorCode.SERVER_ERROR,
-          "The CdcrRequestHandler is not registered with the current core.");
-    }
-    if (!path.startsWith("/")) {
-      throw new SolrException(SolrException.ErrorCode.SERVER_ERROR,
-          "The CdcrRequestHandler needs to be registered to a path. Typically this is '/cdcr'");
-    }
-
-    // Initialisation phase
-    // If the Solr cloud is being initialised, each CDCR node will start up in its default state, i.e., STOPPED
-    // and non-leader. The leader state will be updated later, when all the Solr cores have been loaded.
-    // If the Solr cloud has already been initialised, and the core is reloaded (i.e., because a node died or a new node
-    // is added to the cluster), the CDCR node will synchronise its state with the global CDCR state that is stored
-    // in zookeeper.
-
-    // Initialise the buffer state manager
-    bufferStateManager = new CdcrBufferStateManager(core, bufferConfiguration);
-    // Initialise the process state manager
-    processStateManager = new CdcrProcessStateManager(core);
-    // Initialise the leader state manager
-    leaderStateManager = new CdcrLeaderStateManager(core);
-
-    // Initialise the replicator states manager
-    replicatorManager = new CdcrReplicatorManager(core, path, replicatorConfiguration, replicasConfiguration);
-    replicatorManager.setProcessStateManager(processStateManager);
-    replicatorManager.setLeaderStateManager(leaderStateManager);
-    // we need to inform it of a state event since the process and leader state
-    // may have been synchronised during the initialisation
-    replicatorManager.stateUpdate();
-
-    // Initialise the update log synchronizer
-    updateLogSynchronizer = new CdcrUpdateLogSynchronizer(core, path, updateLogSynchronizerConfiguration);
-    updateLogSynchronizer.setLeaderStateManager(leaderStateManager);
-    // we need to inform it of a state event since the leader state
-    // may have been synchronised during the initialisation
-    updateLogSynchronizer.stateUpdate();
-
-    // Initialise the buffer manager
-    bufferManager = new CdcrBufferManager(core);
-    bufferManager.setLeaderStateManager(leaderStateManager);
-    bufferManager.setBufferStateManager(bufferStateManager);
-    // we need to inform it of a state event since the leader state
-    // may have been synchronised during the initialisation
-    bufferManager.stateUpdate();
-
-    // register the close hook
-    this.registerCloseHook(core);
-  }
-
-  /**
-   * Registers a close hook to properly shut down the state managers and the scheduler.
-   */
-  private void registerCloseHook(SolrCore core) {
-    core.addCloseHook(new CloseHook() {
-
-      @Override
-      public void preClose(SolrCore core) {
-        log.info("Solr core is being closed - shutting down CDCR handler @ {}:{}", collection, shard);
-
-        updateLogSynchronizer.shutdown();
-        replicatorManager.shutdown();
-        bufferStateManager.shutdown();
-        processStateManager.shutdown();
-        leaderStateManager.shutdown();
-      }
-
-      @Override
-      public void postClose(SolrCore core) {
-      }
-
-    });
-  }
-
-  /**
-   * <p>
-   * Update and synchronize the process state.
-   * </p>
-   * <p>
-   * The process state manager must notify the replicator states manager of the change of state.
-   * </p>
-   */
-  private void handleStartAction(SolrQueryRequest req, SolrQueryResponse rsp) {
-    if (processStateManager.getState() == CdcrParams.ProcessState.STOPPED) {
-      processStateManager.setState(CdcrParams.ProcessState.STARTED);
-      processStateManager.synchronize();
-    }
-
-    rsp.add(CdcrParams.CdcrAction.STATUS.toLower(), this.getStatus());
-  }
-
-  private void handleStopAction(SolrQueryRequest req, SolrQueryResponse rsp) {
-    if (processStateManager.getState() == CdcrParams.ProcessState.STARTED) {
-      processStateManager.setState(CdcrParams.ProcessState.STOPPED);
-      processStateManager.synchronize();
-    }
-
-    rsp.add(CdcrParams.CdcrAction.STATUS.toLower(), this.getStatus());
-  }
-
-  private void handleStatusAction(SolrQueryRequest req, SolrQueryResponse rsp) {
-    rsp.add(CdcrParams.CdcrAction.STATUS.toLower(), this.getStatus());
-  }
-
-  @SuppressWarnings({"unchecked", "rawtypes"})
-  private NamedList getStatus() {
-    NamedList status = new NamedList();
-    status.add(CdcrParams.ProcessState.getParam(), processStateManager.getState().toLower());
-    status.add(CdcrParams.BufferState.getParam(), bufferStateManager.getState().toLower());
-    return status;
-  }
-
-  /**
-   * This action is generally executed on the target cluster in order to retrieve the latest update checkpoint.
-   * This checkpoint is used on the source cluster to setup the
-   * {@link org.apache.solr.update.CdcrUpdateLog.CdcrLogReader} of a shard leader. <br/>
-   * This method will execute in parallel one
-   * {@link org.apache.solr.handler.CdcrParams.CdcrAction#SHARDCHECKPOINT} request per shard leader. It will
-   * then pick the lowest version number as checkpoint. Picking the lowest amongst all shards will ensure that we do not
-   * pick a checkpoint that is ahead of the source cluster. This can occur when other shard leaders are sending new
-   * updates to the target cluster while we are currently instantiating the
-   * {@link org.apache.solr.update.CdcrUpdateLog.CdcrLogReader}.
- * This solution only works in scenarios where the topologies of the source and target clusters are identical.
-   */
-  private void handleCollectionCheckpointAction(SolrQueryRequest req, SolrQueryResponse rsp)
-      throws IOException, SolrServerException {
-    ZkController zkController = core.getCoreContainer().getZkController();
-    try {
-      zkController.getZkStateReader().forceUpdateCollection(collection);
-    } catch (Exception e) {
-      log.warn("Error when updating cluster state", e);
-    }
-    ClusterState cstate = zkController.getClusterState();
-    DocCollection docCollection = cstate.getCollectionOrNull(collection);
-    if (docCollection == null) {
-      throw new SolrException(SolrException.ErrorCode.BAD_REQUEST, "Collection not found: " + collection);
-    }
-    Collection<Slice> shards = docCollection.getActiveSlices(); // guard against an NPE in the loop below
-
-    ExecutorService parallelExecutor = ExecutorUtil.newMDCAwareCachedThreadPool(new SolrNamedThreadFactory("parallelCdcrExecutor"));
-
-    long checkpoint = Long.MAX_VALUE;
-    try {
-      List<Callable<Long>> callables = new ArrayList<>();
-      for (Slice shard : shards) {
-        ZkNodeProps leaderProps = zkController.getZkStateReader().getLeaderRetry(collection, shard.getName());
-        ZkCoreNodeProps nodeProps = new ZkCoreNodeProps(leaderProps);
-        callables.add(new SliceCheckpointCallable(nodeProps.getCoreUrl(), path));
-      }
-
-      for (final Future<Long> future : parallelExecutor.invokeAll(callables)) {
-        long version = future.get();
-        if (version < checkpoint) { // we must take the lowest checkpoint from all the shards
-          checkpoint = version;
-        }
-      }
-    } catch (InterruptedException e) {
-      Thread.currentThread().interrupt();
-      throw new SolrException(SolrException.ErrorCode.SERVER_ERROR,
-          "Error while requesting shard's checkpoints", e);
-    } catch (ExecutionException e) {
-      throw new SolrException(SolrException.ErrorCode.SERVER_ERROR,
-          "Error while requesting shard's checkpoints", e);
-    } finally {
-      parallelExecutor.shutdown();
-    }
-
-    rsp.add(CdcrParams.CHECKPOINT, checkpoint);
-  }
-
-  /**
-   * Retrieve the version number of the latest entry of the {@link org.apache.solr.update.UpdateLog}.
-   */
-  private void handleShardCheckpointAction(SolrQueryRequest req, SolrQueryResponse rsp) {
-    if (!leaderStateManager.amILeader()) {
-      throw new SolrException(SolrException.ErrorCode.BAD_REQUEST, "Action '" + CdcrParams.CdcrAction.SHARDCHECKPOINT +
-          "' sent to non-leader replica");
-    }
-
-    UpdateLog ulog = core.getUpdateHandler().getUpdateLog();
-    VersionInfo versionInfo = ulog.getVersionInfo();
-    try (UpdateLog.RecentUpdates recentUpdates = ulog.getRecentUpdates()) {
-      long maxVersionFromRecent = recentUpdates.getMaxRecentVersion();
-      long maxVersionFromIndex = versionInfo.getMaxVersionFromIndex(req.getSearcher());
-      log.info("Found maxVersionFromRecent {} maxVersionFromIndex {}", maxVersionFromRecent, maxVersionFromIndex);
-      // there is no race with ongoing bootstrap because we don't expect any updates to come from the source
-      long maxVersion = Math.max(maxVersionFromIndex, maxVersionFromRecent);
-      if (maxVersion == 0L) {
-        maxVersion = -1;
-      }
-      rsp.add(CdcrParams.CHECKPOINT, maxVersion);
-    } catch (IOException e) {
-      throw new SolrException(SolrException.ErrorCode.SERVER_ERROR, "Action '" + CdcrParams.CdcrAction.SHARDCHECKPOINT +
-          "' could not read max version");
-    }
-  }
-
-  private void handleEnableBufferAction(SolrQueryRequest req, SolrQueryResponse rsp) {
-    if (bufferStateManager.getState() == CdcrParams.BufferState.DISABLED) {
-      bufferStateManager.setState(CdcrParams.BufferState.ENABLED);
-      bufferStateManager.synchronize();
-    }
-
-    rsp.add(CdcrParams.CdcrAction.STATUS.toLower(), this.getStatus());
-  }
-
-  private void handleDisableBufferAction(SolrQueryRequest req, SolrQueryResponse rsp) {
-    if (bufferStateManager.getState() == CdcrParams.BufferState.ENABLED) {
-      bufferStateManager.setState(CdcrParams.BufferState.DISABLED);
-      bufferStateManager.synchronize();
-    }
-
-    rsp.add(CdcrParams.CdcrAction.STATUS.toLower(), this.getStatus());
-  }
-
-  /**
-   * <p>
-   * We have to take care of four cases:
-   * <ul>
-   * <li>Replication & Buffering</li>
-   * <li>Replication & No Buffering</li>
-   * <li>No Replication & Buffering</li>
-   * <li>No Replication & No Buffering</li>
-   * </ul>
-   * In the first three cases, at least one log reader should have been initialised. We should take the lowest
-   * last processed version across all the initialised readers. In the last case, there isn't a log reader
-   * initialised. We should instantiate one and get the version of the first entries.
-   * </p>
-   */
-  private void handleLastProcessedVersionAction(SolrQueryRequest req, SolrQueryResponse rsp) {
-    String collectionName = core.getCoreDescriptor().getCloudDescriptor().getCollectionName();
-    String shard = core.getCoreDescriptor().getCloudDescriptor().getShardId();
-
-    if (!leaderStateManager.amILeader()) {
-      log.warn("Action {} sent to non-leader replica @ {}:{}", CdcrParams.CdcrAction.LASTPROCESSEDVERSION, collectionName, shard);
-      throw new SolrException(SolrException.ErrorCode.BAD_REQUEST, "Action " + CdcrParams.CdcrAction.LASTPROCESSEDVERSION +
-          " sent to non-leader replica");
-    }
-
-    // take care of the first three cases
-    // first check the log readers from the replicator states
-    long lastProcessedVersion = Long.MAX_VALUE;
-    for (CdcrReplicatorState state : replicatorManager.getReplicatorStates()) {
-      long version = Long.MAX_VALUE;
-      if (state.getLogReader() != null) {
-        version = state.getLogReader().getLastVersion();
-      }
-      lastProcessedVersion = Math.min(lastProcessedVersion, version);
-    }
-
-    // next check the log reader of the buffer
-    CdcrUpdateLog.CdcrLogReader bufferLogReader = ((CdcrUpdateLog) core.getUpdateHandler().getUpdateLog()).getBufferToggle();
-    if (bufferLogReader != null) {
-      lastProcessedVersion = Math.min(lastProcessedVersion, bufferLogReader.getLastVersion());
-    }
-
-    // the fourth case: no cdcr replication and no buffering: all readers were null
-    if (processStateManager.getState().equals(CdcrParams.ProcessState.STOPPED) &&
-        bufferStateManager.getState().equals(CdcrParams.BufferState.DISABLED)) {
-      CdcrUpdateLog.CdcrLogReader logReader = ((CdcrUpdateLog) core.getUpdateHandler().getUpdateLog()).newLogReader();
-      try {
-        // let the reader initialize lastVersion
-        logReader.next();
-        lastProcessedVersion = Math.min(lastProcessedVersion, logReader.getLastVersion());
-      } catch (InterruptedException e) {
-        Thread.currentThread().interrupt();
-        throw new SolrException(SolrException.ErrorCode.SERVER_ERROR,
-            "Error while fetching the last processed version", e);
-      } catch (IOException e) {
-        throw new SolrException(SolrException.ErrorCode.SERVER_ERROR,
-            "Error while fetching the last processed version", e);
-      } finally {
-        logReader.close();
-      }
-    }
-
-    log.debug("Returning the lowest last processed version {}  @ {}:{}", lastProcessedVersion, collectionName, shard);
-    rsp.add(CdcrParams.LAST_PROCESSED_VERSION, lastProcessedVersion);
-  }
-
-  @SuppressWarnings({"unchecked", "rawtypes"})
-  private void handleQueuesAction(SolrQueryRequest req, SolrQueryResponse rsp) {
-    NamedList hosts = new NamedList();
-
-    for (CdcrReplicatorState state : replicatorManager.getReplicatorStates()) {
-      NamedList queueStats = new NamedList();
-
-      CdcrUpdateLog.CdcrLogReader logReader = state.getLogReader();
-      if (logReader == null) {
-        String collectionName = req.getCore().getCoreDescriptor().getCloudDescriptor().getCollectionName();
-        String shard = req.getCore().getCoreDescriptor().getCloudDescriptor().getShardId();
-        log.warn("The log reader for target collection {} is not initialised @ {}:{}",
-            state.getTargetCollection(), collectionName, shard);
-        queueStats.add(CdcrParams.QUEUE_SIZE, -1L);
-      } else {
-        queueStats.add(CdcrParams.QUEUE_SIZE, logReader.getNumberOfRemainingRecords());
-      }
-      queueStats.add(CdcrParams.LAST_TIMESTAMP, state.getTimestampOfLastProcessedOperation());
-
-      if (hosts.get(state.getZkHost()) == null) {
-        hosts.add(state.getZkHost(), new NamedList());
-      }
-      ((NamedList) hosts.get(state.getZkHost())).add(state.getTargetCollection(), queueStats);
-    }
-
-    rsp.add(CdcrParams.QUEUES, hosts);
-    UpdateLog updateLog = core.getUpdateHandler().getUpdateLog();
-    rsp.add(CdcrParams.TLOG_TOTAL_SIZE, updateLog.getTotalLogsSize());
-    rsp.add(CdcrParams.TLOG_TOTAL_COUNT, updateLog.getTotalLogsNumber());
-    rsp.add(CdcrParams.UPDATE_LOG_SYNCHRONIZER,
-        updateLogSynchronizer.isStarted() ? CdcrParams.ProcessState.STARTED.toLower() : CdcrParams.ProcessState.STOPPED.toLower());
-  }
-
-  @SuppressWarnings({"unchecked", "rawtypes"})
-  private void handleOpsAction(SolrQueryRequest req, SolrQueryResponse rsp) {
-    NamedList hosts = new NamedList();
-
-    for (CdcrReplicatorState state : replicatorManager.getReplicatorStates()) {
-      NamedList ops = new NamedList();
-      ops.add(CdcrParams.COUNTER_ALL, state.getBenchmarkTimer().getOperationsPerSecond());
-      ops.add(CdcrParams.COUNTER_ADDS, state.getBenchmarkTimer().getAddsPerSecond());
-      ops.add(CdcrParams.COUNTER_DELETES, state.getBenchmarkTimer().getDeletesPerSecond());
-
-      if (hosts.get(state.getZkHost()) == null) {
-        hosts.add(state.getZkHost(), new NamedList());
-      }
-      ((NamedList) hosts.get(state.getZkHost())).add(state.getTargetCollection(), ops);
-    }
-
-    rsp.add(CdcrParams.OPERATIONS_PER_SECOND, hosts);
-  }
-
-  @SuppressWarnings({"unchecked", "rawtypes"})
-  private void handleErrorsAction(SolrQueryRequest req, SolrQueryResponse rsp) {
-    NamedList hosts = new NamedList();
-
-    for (CdcrReplicatorState state : replicatorManager.getReplicatorStates()) {
-      NamedList errors = new NamedList();
-
-      errors.add(CdcrParams.CONSECUTIVE_ERRORS, state.getConsecutiveErrors());
-      errors.add(CdcrReplicatorState.ErrorType.BAD_REQUEST.toLower(), state.getErrorCount(CdcrReplicatorState.ErrorType.BAD_REQUEST));
-      errors.add(CdcrReplicatorState.ErrorType.INTERNAL.toLower(), state.getErrorCount(CdcrReplicatorState.ErrorType.INTERNAL));
-
-      NamedList lastErrors = new NamedList();
-      for (String[] lastError : state.getLastErrors()) {
-        lastErrors.add(lastError[0], lastError[1]);
-      }
-      errors.add(CdcrParams.LAST, lastErrors);
-
-      if (hosts.get(state.getZkHost()) == null) {
-        hosts.add(state.getZkHost(), new NamedList());
-      }
-      ((NamedList) hosts.get(state.getZkHost())).add(state.getTargetCollection(), errors);
-    }
-
-    rsp.add(CdcrParams.ERRORS, hosts);
-  }
-
-  private void handleBootstrapAction(SolrQueryRequest req, SolrQueryResponse rsp) throws IOException, InterruptedException, SolrServerException {
-    String collectionName = core.getCoreDescriptor().getCloudDescriptor().getCollectionName();
-    String shard = core.getCoreDescriptor().getCloudDescriptor().getShardId();
-    if (!leaderStateManager.amILeader()) {
-      log.warn("Action {} sent to non-leader replica @ {}:{}", CdcrParams.CdcrAction.BOOTSTRAP, collectionName, shard);
-      throw new SolrException(SolrException.ErrorCode.BAD_REQUEST, "Action " + CdcrParams.CdcrAction.BOOTSTRAP +
-          " sent to non-leader replica");
-    }
-    CountDownLatch latch = new CountDownLatch(1); // latch to make sure BOOTSTRAP_STATUS gives correct response
-
-    Runnable runnable = () -> {
-      Lock recoveryLock = req.getCore().getSolrCoreState().getRecoveryLock();
-      boolean locked = recoveryLock.tryLock();
-      SolrCoreState coreState = core.getSolrCoreState();
-      try {
-        if (!locked)  {
-          handleCancelBootstrap(req, rsp);
-        } else if (leaderStateManager.amILeader())  {
-          coreState.setCdcrBootstrapRunning(true);
-          latch.countDown(); // free the latch as current bootstrap is executing
-          String masterUrl = req.getParams().get(ReplicationHandler.MASTER_URL);
-          BootstrapCallable bootstrapCallable = new BootstrapCallable(masterUrl, core);
-          coreState.setCdcrBootstrapCallable(bootstrapCallable);
-          Future<Boolean> bootstrapFuture = core.getCoreContainer().getUpdateShardHandler().getRecoveryExecutor()
-              .submit(bootstrapCallable);
-          coreState.setCdcrBootstrapFuture(bootstrapFuture);
-          try {
-            bootstrapFuture.get();
-          } catch (InterruptedException e) {
-            Thread.currentThread().interrupt();
-            log.warn("Bootstrap was interrupted", e);
-          } catch (ExecutionException e) {
-            log.error("Bootstrap operation failed", e);
-          }
-        } else  {
-          log.error("Action {} sent to non-leader replica @ {}:{}. Aborting bootstrap.", CdcrParams.CdcrAction.BOOTSTRAP, collectionName, shard);
-        }
-      } finally {
-        if (locked) {
-          coreState.setCdcrBootstrapRunning(false);
-          recoveryLock.unlock();
-        } else {
-          latch.countDown(); // free the latch; the recovery lock could not be acquired, so no bootstrap was started
-        }
-      }
-    };
-
-    try {
-      core.getCoreContainer().getUpdateShardHandler().getUpdateExecutor().submit(runnable);
-      rsp.add(RESPONSE_STATUS, "submitted");
-      latch.await(10000, TimeUnit.MILLISECONDS); // wait (up to 10s) for the bootstrap task to actually start before responding
-    } catch (RejectedExecutionException ree)  {
-      // no problem, we're probably shutting down
-      rsp.add(RESPONSE_STATUS, "failed");
-    }
-  }
-
-  private void handleCancelBootstrap(SolrQueryRequest req, SolrQueryResponse rsp) {
-    BootstrapCallable callable = (BootstrapCallable)core.getSolrCoreState().getCdcrBootstrapCallable();
-    IOUtils.closeQuietly(callable);
-    rsp.add(RESPONSE_STATUS, "cancelled");
-  }
-
-  private void handleBootstrapStatus(SolrQueryRequest req, SolrQueryResponse rsp) throws IOException, SolrServerException {
-    SolrCoreState coreState = core.getSolrCoreState();
-    if (coreState.getCdcrBootstrapRunning()) {
-      rsp.add(RESPONSE_STATUS, RUNNING);
-      return;
-    }
-
-    Future<Boolean> future = coreState.getCdcrBootstrapFuture();
-    BootstrapCallable callable = (BootstrapCallable)coreState.getCdcrBootstrapCallable();
-    if (future == null) {
-      rsp.add(RESPONSE_STATUS, "notfound");
-      rsp.add(RESPONSE_MESSAGE, "No bootstrap found in running, completed or failed states");
-    } else if (future.isCancelled() || callable.isClosed()) {
-      rsp.add(RESPONSE_STATUS, "cancelled");
-    } else if (future.isDone()) {
-      // could be a normal termination or an exception
-      try {
-        Boolean result = future.get();
-        if (result) {
-          rsp.add(RESPONSE_STATUS, COMPLETED);
-        } else {
-          rsp.add(RESPONSE_STATUS, FAILED);
-        }
-      } catch (InterruptedException e) {
-        Thread.currentThread().interrupt(); // restore the interrupt status rather than swallowing it
-      } catch (ExecutionException e) {
-        rsp.add(RESPONSE_STATUS, FAILED);
-        rsp.add(RESPONSE, e);
-      } catch (CancellationException ce) {
-        rsp.add(RESPONSE_STATUS, FAILED);
-        rsp.add(RESPONSE_MESSAGE, "Bootstrap was cancelled");
-      }
-    } else {
-      rsp.add(RESPONSE_STATUS, RUNNING);
-    }
-  }
-
-  static class BootstrapCallable implements Callable<Boolean>, Closeable {
-    private final String masterUrl;
-    private final SolrCore core;
-    private volatile boolean closed = false;
-
-    BootstrapCallable(String masterUrl, SolrCore core) {
-      this.masterUrl = masterUrl;
-      this.core = core;
-    }
-
-    @Override
-    public void close() throws IOException {
-      closed = true;
-      SolrRequestHandler handler = core.getRequestHandler(ReplicationHandler.PATH);
-      ReplicationHandler replicationHandler = (ReplicationHandler) handler;
-      replicationHandler.abortFetch();
-    }
-
-    public boolean isClosed() {
-      return closed;
-    }
-
-    @Override
-    public Boolean call() throws Exception {
-      boolean success = false;
-      UpdateLog ulog = core.getUpdateHandler().getUpdateLog();
-      // we start buffering updates as a safeguard, although we do not expect
-      // to receive any updates from the source during bootstrap
-      ulog.bufferUpdates();
-      try {
-        commitOnLeader(masterUrl);
-        // use rep handler directly, so we can do this sync rather than async
-        SolrRequestHandler handler = core.getRequestHandler(ReplicationHandler.PATH);
-        ReplicationHandler replicationHandler = (ReplicationHandler) handler;
-
-        if (replicationHandler == null) {
-          throw new SolrException(SolrException.ErrorCode.SERVICE_UNAVAILABLE,
-              "Skipping recovery, no " + ReplicationHandler.PATH + " handler found");
-        }
-
-        ModifiableSolrParams solrParams = new ModifiableSolrParams();
-        solrParams.set(ReplicationHandler.MASTER_URL, masterUrl);
-        // we do not want the raw tlog files from the source
-        solrParams.set(ReplicationHandler.TLOG_FILES, false);
-
-        success = replicationHandler.doFetch(solrParams, false).getSuccessful();
-
-        Future<UpdateLog.RecoveryInfo> future = ulog.applyBufferedUpdates();
-        if (future == null) {
-          // no replay needed
-          log.info("No replay needed.");
-        } else {
-          log.info("Replaying buffered documents.");
-          // wait for replay
-          UpdateLog.RecoveryInfo report = future.get();
-          if (report.failed) {
-            SolrException.log(log, "Replay failed");
-            throw new SolrException(SolrException.ErrorCode.SERVER_ERROR, "Replay failed");
-          }
-        }
-        if (success)  {
-          ZkController zkController = core.getCoreContainer().getZkController();
-          String collectionName = core.getCoreDescriptor().getCollectionName();
-          ClusterState clusterState = zkController.getZkStateReader().getClusterState();
-          DocCollection collection = clusterState.getCollection(collectionName);
-          Slice slice = collection.getSlice(core.getCoreDescriptor().getCloudDescriptor().getShardId());
-          ZkShardTerms terms = zkController.getShardTerms(collectionName, slice.getName());
-          String coreNodeName = core.getCoreDescriptor().getCloudDescriptor().getCoreNodeName();
-          Set<String> allExceptLeader = slice.getReplicas().stream().filter(replica -> !replica.getName().equals(coreNodeName)).map(Replica::getName).collect(Collectors.toSet());
-          terms.ensureTermsIsHigher(coreNodeName, allExceptLeader);
-        }
-        return success;
-      } finally {
-        if (closed || !success) {
-          // we cannot apply the buffer in this case because it will introduce newer versions in the
-          // update log and then the source cluster will get those versions via collectioncheckpoint
-          // causing the versions in between to be completely missed
-          boolean dropped = ulog.dropBufferedUpdates();
-          assert dropped;
-        }
-      }
-    }
-
-    private void commitOnLeader(String leaderUrl) throws SolrServerException,
-        IOException {
-      try (HttpSolrClient client = new HttpSolrClient.Builder(leaderUrl)
-          .withConnectionTimeout(30000)
-          .build()) {
-        UpdateRequest ureq = new UpdateRequest();
-        ureq.setParams(new ModifiableSolrParams());
-        ureq.getParams().set(DistributedUpdateProcessor.COMMIT_END_POINT, true);
-        ureq.getParams().set(UpdateParams.OPEN_SEARCHER, false);
-        ureq.setAction(AbstractUpdateRequest.ACTION.COMMIT, false, true).process(
-            client);
-      }
-    }
-  }
-
-  @Override
-  public String getDescription() {
-    return "Manage Cross Data Center Replication";
-  }
-
-  @Override
-  public Category getCategory() {
-    return Category.REPLICATION;
-  }
-
-  /**
-   * A callable that executes a single
-   * {@link org.apache.solr.handler.CdcrParams.CdcrAction#SHARDCHECKPOINT} action against a shard leader.
-   */
-  private static final class SliceCheckpointCallable implements Callable<Long> {
-
-    final String baseUrl;
-    final String cdcrPath;
-
-    SliceCheckpointCallable(final String baseUrl, final String cdcrPath) {
-      this.baseUrl = baseUrl;
-      this.cdcrPath = cdcrPath;
-    }
-
-    @Override
-    public Long call() throws Exception {
-      try (HttpSolrClient server = new HttpSolrClient.Builder(baseUrl)
-          .withConnectionTimeout(15000)
-          .withSocketTimeout(60000)
-          .build()) {
-
-        ModifiableSolrParams params = new ModifiableSolrParams();
-        params.set(CommonParams.ACTION, CdcrParams.CdcrAction.SHARDCHECKPOINT.toString());
-
-        @SuppressWarnings({"rawtypes"})
-        SolrRequest request = new QueryRequest(params);
-        request.setPath(cdcrPath);
-
-        @SuppressWarnings({"rawtypes"})
-        NamedList response = server.request(request);
-        return (Long) response.get(CdcrParams.CHECKPOINT);
-      }
-    }
-
-  }
-
-}
-
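For readers untangling this deletion: the removed `SliceCheckpointCallable` is just a SolrJ `QueryRequest` aimed at a handler path. A minimal, self-contained sketch of that request pattern follows; the base URL, the `/cdcr` path, and the parameter names are illustrative, not part of this patch:

```java
import org.apache.solr.client.solrj.SolrRequest;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.request.QueryRequest;
import org.apache.solr.common.params.ModifiableSolrParams;
import org.apache.solr.common.util.NamedList;

public class HandlerCheckpointSketch {
  public static void main(String[] args) throws Exception {
    // Illustrative core URL; any core exposing the handler would do.
    try (HttpSolrClient client = new HttpSolrClient.Builder("http://localhost:8983/solr/core1")
        .withConnectionTimeout(15000)
        .withSocketTimeout(60000)
        .build()) {
      ModifiableSolrParams params = new ModifiableSolrParams();
      params.set("action", "SHARDCHECKPOINT");

      SolrRequest<?> request = new QueryRequest(params);
      request.setPath("/cdcr"); // route to the handler instead of /select

      NamedList<Object> response = client.request(request);
      System.out.println("checkpoint: " + response.get("checkpoint"));
    }
  }
}
```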
diff --git a/solr/core/src/java/org/apache/solr/handler/CdcrStateManager.java b/solr/core/src/java/org/apache/solr/handler/CdcrStateManager.java
deleted file mode 100644
index 151615e..0000000
--- a/solr/core/src/java/org/apache/solr/handler/CdcrStateManager.java
+++ /dev/null
@@ -1,47 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.solr.handler;
-
-import java.util.ArrayList;
-import java.util.List;
-
-/**
- * A state manager which implements an observer pattern to notify observers
- * of a state change.
- */
-abstract class CdcrStateManager {
-
-  private List<CdcrStateObserver> observers = new ArrayList<>();
-
-  void register(CdcrStateObserver observer) {
-    this.observers.add(observer);
-  }
-
-  void callback() {
-    for (CdcrStateObserver observer : observers) {
-      observer.stateUpdate();
-    }
-  }
-
-  interface CdcrStateObserver {
-
-    void stateUpdate();
-
-  }
-
-}
-
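The deleted `CdcrStateManager` is a plain observer pattern with nothing CDCR-specific in it. A stripped-down, self-contained sketch of the same register/callback contract, with illustrative names:

```java
import java.util.ArrayList;
import java.util.List;

abstract class ObservableState {
  private final List<Runnable> observers = new ArrayList<>();

  // Observers register once and are kept for the lifetime of the manager.
  void register(Runnable observer) {
    observers.add(observer);
  }

  // Subclasses call this after any internal state change.
  protected void callback() {
    for (Runnable observer : observers) {
      observer.run();
    }
  }
}
```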
diff --git a/solr/core/src/java/org/apache/solr/handler/CdcrUpdateLogSynchronizer.java b/solr/core/src/java/org/apache/solr/handler/CdcrUpdateLogSynchronizer.java
deleted file mode 100644
index 52590ee..0000000
--- a/solr/core/src/java/org/apache/solr/handler/CdcrUpdateLogSynchronizer.java
+++ /dev/null
@@ -1,192 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.solr.handler;
-
-import java.io.IOException;
-import java.lang.invoke.MethodHandles;
-import java.util.concurrent.Executors;
-import java.util.concurrent.ScheduledExecutorService;
-import java.util.concurrent.TimeUnit;
-
-import org.apache.solr.client.solrj.SolrRequest;
-import org.apache.solr.client.solrj.SolrServerException;
-import org.apache.solr.client.solrj.impl.HttpSolrClient;
-import org.apache.solr.client.solrj.request.QueryRequest;
-import org.apache.solr.cloud.ZkController;
-import org.apache.solr.common.cloud.ClusterState;
-import org.apache.solr.common.cloud.DocCollection;
-import org.apache.solr.common.cloud.ZkCoreNodeProps;
-import org.apache.solr.common.cloud.ZkNodeProps;
-import org.apache.solr.common.params.CommonParams;
-import org.apache.solr.common.params.ModifiableSolrParams;
-import org.apache.solr.common.params.SolrParams;
-import org.apache.solr.common.util.NamedList;
-import org.apache.solr.core.SolrCore;
-import org.apache.solr.update.CdcrUpdateLog;
-import org.apache.solr.common.util.SolrNamedThreadFactory;
-import org.slf4j.Logger;
-import org.slf4j.LoggerFactory;
-
-/**
- * <p>
- * Synchronize periodically the update log of non-leader nodes with their leaders.
- * </p>
- * <p>
- * Non-leader nodes must always buffer updates in case of leader failures. They have to periodically
- * synchronize their update logs with their leader to remove old transaction logs that will never be used anymore.
- * This is performed by a background thread that is scheduled with a fixed delay. The background thread is sending
- * the action {@link org.apache.solr.handler.CdcrParams.CdcrAction#LASTPROCESSEDVERSION} to the leader to retrieve
- * the lowest last version number processed. This version is then used to move forward the buffer log reader.
- * </p>
- */
-class CdcrUpdateLogSynchronizer implements CdcrStateManager.CdcrStateObserver {
-
-  private CdcrLeaderStateManager leaderStateManager;
-  private ScheduledExecutorService scheduler;
-
-  private final SolrCore core;
-  private final String collection;
-  private final String shardId;
-  private final String path;
-
-  private int timeSchedule = DEFAULT_TIME_SCHEDULE;
-
-  private static final int DEFAULT_TIME_SCHEDULE = 60000;  // by default, every minute
-
-  private static final Logger log = LoggerFactory.getLogger(MethodHandles.lookup().lookupClass());
-
-  CdcrUpdateLogSynchronizer(SolrCore core, String path, SolrParams updateLogSynchonizerConfiguration) {
-    this.core = core;
-    this.path = path;
-    this.collection = core.getCoreDescriptor().getCloudDescriptor().getCollectionName();
-    this.shardId = core.getCoreDescriptor().getCloudDescriptor().getShardId();
-    if (updateLogSynchonizerConfiguration != null) {
-      this.timeSchedule = updateLogSynchonizerConfiguration.getInt(CdcrParams.SCHEDULE_PARAM, DEFAULT_TIME_SCHEDULE);
-    }
-  }
-
-  void setLeaderStateManager(final CdcrLeaderStateManager leaderStateManager) {
-    this.leaderStateManager = leaderStateManager;
-    this.leaderStateManager.register(this);
-  }
-
-  @Override
-  public void stateUpdate() {
-    // If I am not the leader, I need to synchronise periodically my update log with my leader.
-    if (!leaderStateManager.amILeader()) {
-      scheduler = Executors.newSingleThreadScheduledExecutor(new SolrNamedThreadFactory("cdcr-update-log-synchronizer"));
-      scheduler.scheduleWithFixedDelay(new UpdateLogSynchronisation(), 0, timeSchedule, TimeUnit.MILLISECONDS);
-      return;
-    }
-
-    this.shutdown();
-  }
-
-  boolean isStarted() {
-    return scheduler != null;
-  }
-
-  void shutdown() {
-    if (scheduler != null) {
-      // interrupts are often dangerous in Lucene / Solr code, but the
-      // test for this will leak threads without
-      scheduler.shutdownNow();
-      scheduler = null;
-    }
-  }
-
-  private class UpdateLogSynchronisation implements Runnable {
-
-    private String getLeaderUrl() {
-      ZkController zkController = core.getCoreContainer().getZkController();
-      ClusterState cstate = zkController.getClusterState();
-      DocCollection docCollection = cstate.getCollection(collection);
-      ZkNodeProps leaderProps = docCollection.getLeader(shardId);
-      if (leaderProps == null) { // we might not have a leader yet, returns null
-        return null;
-      }
-      ZkCoreNodeProps nodeProps = new ZkCoreNodeProps(leaderProps);
-      return nodeProps.getCoreUrl();
-    }
-
-    @Override
-    public void run() {
-      try {
-        String leaderUrl = getLeaderUrl();
-        if (leaderUrl == null) { // we might not have a leader yet, stop and try again later
-          return;
-        }
-
-        HttpSolrClient server = new HttpSolrClient.Builder(leaderUrl)
-            .withConnectionTimeout(15000)
-            .withSocketTimeout(60000)
-            .build();
-
-        ModifiableSolrParams params = new ModifiableSolrParams();
-        params.set(CommonParams.ACTION, CdcrParams.CdcrAction.LASTPROCESSEDVERSION.toString());
-
-        @SuppressWarnings({"rawtypes"})
-        SolrRequest request = new QueryRequest(params);
-        request.setPath(path);
-
-        long lastVersion;
-        try {
-          @SuppressWarnings({"rawtypes"})
-          NamedList response = server.request(request);
-          lastVersion = (Long) response.get(CdcrParams.LAST_PROCESSED_VERSION);
-          if (log.isDebugEnabled()) {
-            log.debug("My leader {} says its last processed _version_ number is: {}. I am {}", leaderUrl, lastVersion,
-                core.getCoreDescriptor().getCloudDescriptor().getCoreNodeName());
-          }
-        } catch (IOException | SolrServerException e) {
-          log.warn("Couldn't get last processed version from leader {}: ", leaderUrl, e);
-          return;
-        } finally {
-          try {
-            server.close();
-          } catch (IOException ioe) {
-            log.warn("Caught exception trying to close client to {}: ", leaderUrl, ioe);
-          }
-        }
-
-        // if we received -1, it means that the log reader on the leader has not yet started to read log entries
-        // do nothing
-        if (lastVersion == -1) {
-          return;
-        }
-
-        try {
-          CdcrUpdateLog ulog = (CdcrUpdateLog) core.getUpdateHandler().getUpdateLog();
-          if (ulog.isBuffering()) {
-            log.debug("Advancing replica buffering tlog reader to {} @ {}:{}", lastVersion, collection, shardId);
-            ulog.getBufferToggle().seek(lastVersion);
-          }
-        } catch (InterruptedException e) {
-          Thread.currentThread().interrupt();
-          log.warn("Couldn't advance replica buffering tlog reader to {} (to remove old tlogs): ", lastVersion, e);
-        } catch (IOException e) {
-          log.warn("Couldn't advance replica buffering tlog reader to {} (to remove old tlogs): ", lastVersion, e);
-        }
-      } catch (Throwable e) {
-        log.warn("Caught unexpected exception", e);
-        throw e;
-      }
-    }
-  }
-
-}
-
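The removed synchronizer's scheduling is worth noting: `scheduleWithFixedDelay` measures the delay from the end of one run to the start of the next, so a slow leader round-trip could never cause overlapping polls. A minimal runnable sketch of that scheduling (the interval matches the class's 60-second default; the task body is a stand-in):

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class FixedDelayPollerSketch {
  public static void main(String[] args) {
    ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
    // Fire immediately, then wait the full delay after each run completes
    // before starting the next; slow runs therefore never overlap.
    scheduler.scheduleWithFixedDelay(
        () -> System.out.println("ask leader for its last processed version"),
        0, 60_000, TimeUnit.MILLISECONDS);
  }
}
```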
diff --git a/solr/core/src/java/org/apache/solr/handler/IndexFetcher.java b/solr/core/src/java/org/apache/solr/handler/IndexFetcher.java
index e78028e..2bd2045 100644
--- a/solr/core/src/java/org/apache/solr/handler/IndexFetcher.java
+++ b/solr/core/src/java/org/apache/solr/handler/IndexFetcher.java
@@ -93,10 +93,7 @@
 import org.apache.solr.request.LocalSolrQueryRequest;
 import org.apache.solr.request.SolrQueryRequest;
 import org.apache.solr.search.SolrIndexSearcher;
-import org.apache.solr.update.CdcrUpdateLog;
 import org.apache.solr.update.CommitUpdateCommand;
-import org.apache.solr.update.UpdateLog;
-import org.apache.solr.update.VersionInfo;
 import org.apache.solr.common.util.SolrNamedThreadFactory;
 import org.apache.solr.util.FileUtils;
 import org.apache.solr.util.PropertiesOutputStream;
@@ -112,7 +109,7 @@
 
 /**
  * <p> Provides functionality of downloading changed index files as well as config files and a timer for scheduling fetches from the
- * master. </p>
+ * leader. </p>
  *
  *
  * @since solr 1.4
@@ -124,7 +121,7 @@
 
   private static final Logger log = LoggerFactory.getLogger(MethodHandles.lookup().lookupClass());
 
-  private String masterUrl;
+  private String leaderUrl;
 
   final ReplicationHandler replicationHandler;
 
@@ -137,14 +134,10 @@
 
   private volatile List<Map<String, Object>> confFilesToDownload;
 
-  private volatile List<Map<String, Object>> tlogFilesToDownload;
-
   private volatile List<Map<String, Object>> filesDownloaded;
 
   private volatile List<Map<String, Object>> confFilesDownloaded;
 
-  private volatile List<Map<String, Object>> tlogFilesDownloaded;
-
   private volatile Map<String, Object> currentFile;
 
   private volatile DirectoryFileFetcher dirFileFetcher;
@@ -169,7 +162,7 @@
 
   private boolean downloadTlogFiles = false;
 
-  private boolean skipCommitOnMasterVersionZero = true;
+  private boolean skipCommitOnLeaderVersionZero = true;
 
   private boolean clearLocalIndexFirst = false;
 
@@ -189,7 +182,7 @@
     public static final IndexFetchResult INDEX_FETCH_SUCCESS = new IndexFetchResult("Fetching latest index is successful", true, null);
     public static final IndexFetchResult LOCK_OBTAIN_FAILED = new IndexFetchResult("Obtaining SnapPuller lock failed", false, null);
     public static final IndexFetchResult CONTAINER_IS_SHUTTING_DOWN = new IndexFetchResult("I was asked to replicate but CoreContainer is shutting down", false, null);
-    public static final IndexFetchResult MASTER_VERSION_ZERO = new IndexFetchResult("Index in peer is empty and never committed yet", true, null);
+    public static final IndexFetchResult LEADER_VERSION_ZERO = new IndexFetchResult("Index in peer is empty and never committed yet", true, null);
     public static final IndexFetchResult NO_INDEX_COMMIT_EXIST = new IndexFetchResult("No IndexCommit in local index", false, null);
     public static final IndexFetchResult PEER_INDEX_COMMIT_DELETED = new IndexFetchResult("No files to download because IndexCommit in peer was deleted", false, null);
     public static final IndexFetchResult LOCAL_ACTIVITY_DURING_REPLICATION = new IndexFetchResult("Local index modification during replication", false, null);
@@ -236,19 +229,19 @@
     if (fetchFromLeader != null && fetchFromLeader instanceof Boolean) {
       this.fetchFromLeader = (boolean) fetchFromLeader;
     }
-    Object skipCommitOnMasterVersionZero = initArgs.get(SKIP_COMMIT_ON_MASTER_VERSION_ZERO);
-    if (skipCommitOnMasterVersionZero != null && skipCommitOnMasterVersionZero instanceof Boolean) {
-      this.skipCommitOnMasterVersionZero = (boolean) skipCommitOnMasterVersionZero;
+    Object skipCommitOnLeaderVersionZero = ReplicationHandler.getObjectWithBackwardCompatibility(initArgs, SKIP_COMMIT_ON_LEADER_VERSION_ZERO, LEGACY_SKIP_COMMIT_ON_LEADER_VERSION_ZERO);
+    if (skipCommitOnLeaderVersionZero != null && skipCommitOnLeaderVersionZero instanceof Boolean) {
+      this.skipCommitOnLeaderVersionZero = (boolean) skipCommitOnLeaderVersionZero;
     }
-    String masterUrl = (String) initArgs.get(MASTER_URL);
-    if (masterUrl == null && !this.fetchFromLeader)
+    String leaderUrl = ReplicationHandler.getObjectWithBackwardCompatibility(initArgs, LEADER_URL, LEGACY_LEADER_URL);
+    if (leaderUrl == null && !this.fetchFromLeader)
       throw new SolrException(SolrException.ErrorCode.SERVER_ERROR,
-              "'masterUrl' is required for a slave");
-    if (masterUrl != null && masterUrl.endsWith(ReplicationHandler.PATH)) {
-      masterUrl = masterUrl.substring(0, masterUrl.length()-12);
-      log.warn("'masterUrl' must be specified without the {} suffix", ReplicationHandler.PATH);
+              "'leaderUrl' is required for a follower");
+    if (leaderUrl != null && leaderUrl.endsWith(ReplicationHandler.PATH)) {
+      leaderUrl = leaderUrl.substring(0, leaderUrl.length()-12);
+      log.warn("'leaderUrl' must be specified without the {} suffix", ReplicationHandler.PATH);
     }
-    this.masterUrl = masterUrl;
+    this.leaderUrl = leaderUrl;
 
     this.replicationHandler = handler;
     String compress = (String) initArgs.get(COMPRESSION);
@@ -256,17 +249,13 @@
     useExternalCompression = EXTERNAL.equals(compress);
     connTimeout = getParameter(initArgs, HttpClientUtil.PROP_CONNECTION_TIMEOUT, 30000, null);
     
-    // allow a master override for tests - you specify this in /replication slave section of solrconfig and some 
+    // allow a leader override for tests - you specify this in the /replication follower section of solrconfig, and some
     // tests don't want to define this
     soTimeout = Integer.getInteger("solr.indexfetcher.sotimeout", -1);
     if (soTimeout == -1) {
       soTimeout = getParameter(initArgs, HttpClientUtil.PROP_SO_TIMEOUT, 120000, null);
     }
 
-    if (initArgs.getBooleanArg(TLOG_FILES) != null) {
-      downloadTlogFiles = initArgs.getBooleanArg(TLOG_FILES);
-    }
-
     String httpBasicAuthUser = (String) initArgs.get(HttpClientUtil.PROP_BASIC_AUTH_USER);
     String httpBasicAuthPassword = (String) initArgs.get(HttpClientUtil.PROP_BASIC_AUTH_PASS);
     myHttpClient = createHttpClient(solrCore, httpBasicAuthUser, httpBasicAuthPassword, useExternalCompression);
@@ -284,7 +273,7 @@
   }
 
   /**
-   * Gets the latest commit version and generation from the master
+   * Gets the latest commit version and generation from the leader
    */
   @SuppressWarnings({"unchecked", "rawtypes"})
   NamedList getLatestVersion() throws IOException {
@@ -295,7 +284,7 @@
     QueryRequest req = new QueryRequest(params);
 
     // TODO modify to use shardhandler
-    try (HttpSolrClient client = new Builder(masterUrl)
+    try (HttpSolrClient client = new Builder(leaderUrl)
         .withHttpClient(myHttpClient)
         .withConnectionTimeout(connTimeout)
         .withSocketTimeout(soTimeout)
@@ -314,14 +303,13 @@
   private void fetchFileList(long gen) throws IOException {
     ModifiableSolrParams params = new ModifiableSolrParams();
     params.set(COMMAND,  CMD_GET_FILE_LIST);
-    params.set(TLOG_FILES, downloadTlogFiles);
     params.set(GENERATION, String.valueOf(gen));
     params.set(CommonParams.WT, JAVABIN);
     params.set(CommonParams.QT, ReplicationHandler.PATH);
     QueryRequest req = new QueryRequest(params);
 
     // TODO modify to use shardhandler
-    try (HttpSolrClient client = new HttpSolrClient.Builder(masterUrl)
+    try (HttpSolrClient client = new HttpSolrClient.Builder(leaderUrl)
         .withHttpClient(myHttpClient)
         .withConnectionTimeout(connTimeout)
         .withSocketTimeout(soTimeout)
@@ -340,11 +328,6 @@
       files = (List<Map<String,Object>>) response.get(CONF_FILES);
       if (files != null)
         confFilesToDownload = Collections.synchronizedList(files);
-
-      files = (List<Map<String, Object>>) response.get(TLOG_FILES);
-      if (files != null) {
-        tlogFilesToDownload = Collections.synchronizedList(files);
-      }
     } catch (SolrServerException e) {
       throw new IOException(e);
     }
@@ -355,12 +338,12 @@
   }
 
   /**
-   * This command downloads all the necessary files from master to install a index commit point. Only changed files are
+   * This command downloads all the necessary files from the leader to install an index commit point. Only changed files are
    * downloaded. It also downloads the conf files (if they are modified).
    *
    * @param forceReplication force a replication in all cases
    * @param forceCoreReload force a core reload in all cases
-   * @return true on success, false if slave is already in sync
+   * @return true on success, false if follower is already in sync
    * @throws IOException if an exception occurs
    */
   IndexFetchResult fetchLatestIndex(boolean forceReplication, boolean forceCoreReload) throws IOException, InterruptedException {
@@ -404,15 +387,15 @@
           }
           return IndexFetchResult.LEADER_IS_NOT_ACTIVE;
         }
-        if (!replica.getCoreUrl().equals(masterUrl)) {
-          masterUrl = replica.getCoreUrl();
-          log.info("Updated masterUrl to {}", masterUrl);
+        if (!replica.getCoreUrl().equals(leaderUrl)) {
+          leaderUrl = replica.getCoreUrl();
+          log.info("Updated leaderUrl to {}", leaderUrl);
           // TODO: Do we need to set forceReplication = true?
         } else {
-          log.debug("masterUrl didn't change");
+          log.debug("leaderUrl didn't change");
         }
       }
-      //get the current 'replicateable' index version in the master
+      //get the current 'replicateable' index version in the leader
       @SuppressWarnings({"rawtypes"})
       NamedList response;
       try {
@@ -420,10 +403,10 @@
       } catch (Exception e) {
         final String errorMsg = e.toString();
         if (!Strings.isNullOrEmpty(errorMsg) && errorMsg.contains(INTERRUPT_RESPONSE_MESSAGE)) {
-            log.warn("Master at: {} is not available. Index fetch failed by interrupt. Exception: {}", masterUrl, errorMsg);
+            log.warn("Leader at: {} is not available. Index fetch failed by interrupt. Exception: {}", leaderUrl, errorMsg);
             return new IndexFetchResult(IndexFetchResult.FAILED_BY_INTERRUPT_MESSAGE, false, e);
         } else {
-            log.warn("Master at: {} is not available. Index fetch failed by exception: {}", masterUrl, errorMsg);
+            log.warn("Leader at: {} is not available. Index fetch failed by exception: {}", leaderUrl, errorMsg);
             return new IndexFetchResult(IndexFetchResult.FAILED_BY_EXCEPTION_MESSAGE, false, e);
         }
     }
@@ -431,8 +414,8 @@
       long latestVersion = (Long) response.get(CMD_INDEX_VERSION);
       long latestGeneration = (Long) response.get(GENERATION);
 
-      log.info("Master's generation: {}", latestGeneration);
-      log.info("Master's version: {}", latestVersion);
+      log.info("Leader's generation: {}", latestGeneration);
+      log.info("Leader's version: {}", latestVersion);
 
       // TODO: make sure that getLatestCommit only returns commit points for the main index (i.e. no side-car indexes)
       IndexCommit commit = solrCore.getDeletionPolicy().getLatestCommit();
@@ -453,23 +436,23 @@
       }
 
       if (log.isInfoEnabled()) {
-        log.info("Slave's generation: {}", commit.getGeneration());
-        log.info("Slave's version: {}", IndexDeletionPolicyWrapper.getCommitTimestamp(commit)); // logOK
+        log.info("Follower's generation: {}", commit.getGeneration());
+        log.info("Follower's version: {}", IndexDeletionPolicyWrapper.getCommitTimestamp(commit)); // logOK
       }
 
       if (latestVersion == 0L) {
         if (commit.getGeneration() != 0) {
           // since we won't get the files for an empty index,
           // we just clear ours and commit
-          log.info("New index in Master. Deleting mine...");
+          log.info("New index in Leader. Deleting mine...");
           RefCounted<IndexWriter> iw = solrCore.getUpdateHandler().getSolrCoreState().getIndexWriter(solrCore);
           try {
             iw.get().deleteAll();
           } finally {
             iw.decref();
           }
-          assert TestInjection.injectDelayBeforeSlaveCommitRefresh();
-          if (skipCommitOnMasterVersionZero) {
+          assert TestInjection.injectDelayBeforeFollowerCommitRefresh();
+          if (skipCommitOnLeaderVersionZero) {
             openNewSearcherAndUpdateCommitPoint();
           } else {
             SolrQueryRequest req = new LocalSolrQueryRequest(solrCore, new ModifiableSolrParams());
@@ -479,14 +462,14 @@
 
         //there is nothing to be replicated
         successfulInstall = true;
-        log.debug("Nothing to replicate, master's version is 0");
-        return IndexFetchResult.MASTER_VERSION_ZERO;
+        log.debug("Nothing to replicate, leader's version is 0");
+        return IndexFetchResult.LEADER_VERSION_ZERO;
       }
 
       // TODO: Should we be comparing timestamps (across machines) here?
       if (!forceReplication && IndexDeletionPolicyWrapper.getCommitTimestamp(commit) == latestVersion) {
-        //master and slave are already in sync just return
-        log.info("Slave in sync with master.");
+        // leader and follower are already in sync; just return
+        log.info("Follower in sync with leader.");
         successfulInstall = true;
         return IndexFetchResult.ALREADY_IN_SYNC;
       }
@@ -498,19 +481,14 @@
         return IndexFetchResult.PEER_INDEX_COMMIT_DELETED;
       }
       if (log.isInfoEnabled()) {
-        log.info("Number of files in latest index in master: {}", filesToDownload.size());
-      }
-      if (tlogFilesToDownload != null) {
-        if (log.isInfoEnabled()) {
-          log.info("Number of tlog files in master: {}", tlogFilesToDownload.size());
-        }
+        log.info("Number of files in latest index in leader: {}", filesToDownload.size());
       }
 
       // Create the sync service
       fsyncService = ExecutorUtil.newMDCAwareSingleThreadExecutor(new SolrNamedThreadFactory("fsyncService"));
       // use a synchronized list because the list is read by other threads (to show details)
       filesDownloaded = Collections.synchronizedList(new ArrayList<Map<String, Object>>());
-      // if the generation of master is older than that of the slave , it means they are not compatible to be copied
+      // if the generation of the leader is older than that of the follower, it means they are not compatible to be copied;
       // then a new index directory needs to be created and all the files need to be copied
       boolean isFullCopyNeeded = IndexDeletionPolicyWrapper
           .getCommitTimestamp(commit) >= latestVersion
@@ -522,18 +500,13 @@
 
       tmpIndexDir = solrCore.getDirectoryFactory().get(tmpIndexDirPath, DirContext.DEFAULT, solrCore.getSolrConfig().indexConfig.lockType);
 
-      // tmp dir for tlog files
-      if (tlogFilesToDownload != null) {
-        tmpTlogDir = new File(solrCore.getUpdateHandler().getUpdateLog().getLogDir(), "tlog." + timestamp);
-      }
-
       // cindex dir...
       indexDirPath = solrCore.getIndexDir();
       indexDir = solrCore.getDirectoryFactory().get(indexDirPath, DirContext.DEFAULT, solrCore.getSolrConfig().indexConfig.lockType);
 
       try {
 
-        // We will compare all the index files from the master vs the index files on disk to see if there is a mismatch
+        // We will compare all the index files from the leader vs the index files on disk to see if there is a mismatch
         // in the metadata. If there is a mismatch for the same index file then we download the entire index
         // (except when differential copy is applicable) again.
         if (!isFullCopyNeeded && isIndexStale(indexDir)) {
@@ -588,10 +561,6 @@
 
           long bytesDownloaded = downloadIndexFiles(isFullCopyNeeded, indexDir,
               tmpIndexDir, indexDirPath, tmpIndexDirPath, latestGeneration);
-          if (tlogFilesToDownload != null) {
-            bytesDownloaded += downloadTlogFiles(tmpTlogDir, latestGeneration);
-            reloadCore = true; // reload update log
-          }
           final long timeTakenSeconds = getReplicationTimeElapsed();
           final Long bytesDownloadedPerSecond = (timeTakenSeconds != 0 ? Long.valueOf(bytesDownloaded / timeTakenSeconds) : null);
           log.info("Total time taken for download (fullCopy={},bytesDownloaded={}) : {} secs ({} bytes/sec) to {}",
@@ -607,10 +576,6 @@
             } else {
               successfulInstall = moveIndexFiles(tmpIndexDir, indexDir);
             }
-            if (tlogFilesToDownload != null) {
-              // move tlog files and refresh ulog only if we successfully installed a new index
-              successfulInstall &= moveTlogFiles(tmpTlogDir);
-            }
             if (successfulInstall) {
               if (isFullCopyNeeded) {
                 // let the system know we are changing dir's and the old one
@@ -637,10 +602,6 @@
             } else {
               successfulInstall = moveIndexFiles(tmpIndexDir, indexDir);
             }
-            if (tlogFilesToDownload != null) {
-              // move tlog files and refresh ulog only if we successfully installed a new index
-              successfulInstall &= moveTlogFiles(tmpTlogDir);
-            }
             if (successfulInstall) {
               logReplicationTimeAndConfFiles(modifiedConfFiles,
                   successfulInstall);
@@ -733,7 +694,7 @@
         core.getUpdateHandler().getSolrCoreState().setLastReplicateIndexSuccess(successfulInstall);
       }
 
-      filesToDownload = filesDownloaded = confFilesDownloaded = confFilesToDownload = tlogFilesToDownload = tlogFilesDownloaded = null;
+      filesToDownload = filesDownloaded = confFilesDownloaded = confFilesToDownload = null;
       markReplicationStop();
       dirFileFetcher = null;
       localFileFetcher = null;
@@ -964,7 +925,7 @@
   }
 
   private void downloadConfFiles(List<Map<String, Object>> confFilesToDownload, long latestGeneration) throws Exception {
-    log.info("Starting download of configuration files from master: {}", confFilesToDownload);
+    log.info("Starting download of configuration files from leader: {}", confFilesToDownload);
     confFilesDownloaded = Collections.synchronizedList(new ArrayList<>());
     File tmpconfDir = new File(solrCore.getResourceLoader().getConfigDir(), "conf." + getDateAsStr(new Date()));
     try {
@@ -990,30 +951,6 @@
   }
 
   /**
-   * Download all the tlog files to the temp tlog directory.
-   */
-  private long downloadTlogFiles(File tmpTlogDir, long latestGeneration) throws Exception {
-    log.info("Starting download of tlog files from master: {}", tlogFilesToDownload);
-    tlogFilesDownloaded = Collections.synchronizedList(new ArrayList<>());
-    long bytesDownloaded = 0;
-
-    boolean status = tmpTlogDir.mkdirs();
-    if (!status) {
-      throw new SolrException(SolrException.ErrorCode.SERVER_ERROR,
-          "Failed to create temporary tlog folder: " + tmpTlogDir.getName());
-    }
-    for (Map<String, Object> file : tlogFilesToDownload) {
-      String saveAs = (String) (file.get(ALIAS) == null ? file.get(NAME) : file.get(ALIAS));
-      localFileFetcher = new LocalFsFileFetcher(tmpTlogDir, file, saveAs, TLOG_FILE, latestGeneration);
-      currentFile = file;
-      localFileFetcher.fetchFile();
-      bytesDownloaded += localFileFetcher.getBytesDownloaded();
-      tlogFilesDownloaded.add(new HashMap<>(file));
-    }
-    return bytesDownloaded;
-  }
-
-  /**
    * Download the index files. If a new index is needed, download all the files.
    *
    * @param downloadCompleteIndex is it a fresh index copy
@@ -1146,7 +1083,7 @@
       // after considering the files actually available locally we really don't need to do any delete
       return;
     }
-    log.info("This disk does not have enough space to download the index from leader/master. So cleaning up the local index. "
+    log.info("This disk does not have enough space to download the index from leader. So cleaning up the local index. "
         + " This may lead to loss of data/or node if index replication fails in between");
     //now we should disable searchers and index writers because this core will not have all the required files
     this.clearLocalIndexFirst = true;
@@ -1247,7 +1184,7 @@
   }
 
   /**
-   * All the files which are common between master and slave must have same size and same checksum else we assume
+   * All the files which are common between the leader and the follower must have the same size and checksum; otherwise we assume
    * they are not compatible (stale).
    *
    * @return true if the index stale and we need to download a fresh copy, false otherwise.
@@ -1340,50 +1277,6 @@
   }
 
   /**
-   * <p>
-   *   Copy all the tlog files from the temp tlog dir to the actual tlog dir, and reset
-   *   the {@link UpdateLog}. The copy will try to preserve the original tlog directory
-   *   if the copy fails.
-   * </p>
-   * <p>
-   *   This assumes that the tlog files transferred from the leader are in synch with the
-   *   index files transferred from the leader. The reset of the update log relies on the version
-   *   of the latest operations found in the tlog files. If the tlogs are ahead of the latest commit
-   *   point, it will not copy all the needed buffered updates for the replay and it will miss
-   *   some operations.
-   * </p>
-   */
-  private boolean moveTlogFiles(File tmpTlogDir) {
-    UpdateLog ulog = solrCore.getUpdateHandler().getUpdateLog();
-
-    VersionInfo vinfo = ulog.getVersionInfo();
-    vinfo.blockUpdates(); // block updates until the new update log is initialised
-    try {
-      // reset the update log before copying the new tlog directory
-      CdcrUpdateLog.BufferedUpdates bufferedUpdates = ((CdcrUpdateLog) ulog).resetForRecovery();
-      // try to move the temp tlog files to the tlog directory
-      if (!copyTmpTlogFiles2Tlog(tmpTlogDir)) return false;
-      // reinitialise the update log and copy the buffered updates
-      if (bufferedUpdates.tlog != null) {
-        // map file path to its new backup location
-        File parentDir = FileSystems.getDefault().getPath(solrCore.getUpdateHandler().getUpdateLog().getLogDir()).getParent().toFile();
-        File backupTlogDir = new File(parentDir, tmpTlogDir.getName());
-        bufferedUpdates.tlog = new File(backupTlogDir, bufferedUpdates.tlog.getName());
-      }
-      // init the update log with the new set of tlog files, and copy the buffered updates
-      ((CdcrUpdateLog) ulog).initForRecovery(bufferedUpdates.tlog, bufferedUpdates.offset);
-    }
-    catch (Exception e) {
-      log.error("Unable to copy tlog files", e);
-      return false;
-    }
-    finally {
-      vinfo.unblockUpdates();
-    }
-    return true;
-  }
-
-  /**
    * Make file list
    */
   private List<File> makeTmpConfDirFileList(File dir, List<File> fileList) {
@@ -1480,11 +1373,11 @@
   private final Map<String, FileInfo> confFileInfoCache = new HashMap<>();
 
   /**
+   * The local conf files are compared with the conf files on the leader. If they are the same (by checksum), they are not copied.
+   * The local conf files are compared with the conf files in the leader. If they are same (by checksum) do not copy.
    *
-   * @param confFilesToDownload The list of files obtained from master
+   * @param confFilesToDownload The list of files obtained from the leader
    *
-   * @return a list of configuration files which have changed on the master and need to be downloaded.
+   * @return a list of configuration files which have changed on the leader and need to be downloaded.
    */
   @SuppressWarnings({"unchecked"})
   private Collection<Map<String, Object>> getModifiedConfFiles(List<Map<String, Object>> confFilesToDownload) {
@@ -1496,7 +1389,7 @@
     @SuppressWarnings({"rawtypes"})
     NamedList names = new NamedList();
     for (Map<String, Object> map : confFilesToDownload) {
-      //if alias is present that is the name the file may have in the slave
+      // if an alias is present, that is the name the file may have on the follower
       String name = (String) (map.get(ALIAS) == null ? map.get(NAME) : map.get(ALIAS));
       nameVsFile.put(name, map);
       names.add(name, null);
@@ -1570,20 +1463,7 @@
     return timeElapsed;
   }
 
-  List<Map<String, Object>> getTlogFilesToDownload() {
-    //make a copy first because it can be null later
-    List<Map<String, Object>> tmp = tlogFilesToDownload;
-    //create a new instance. or else iterator may fail
-    return tmp == null ? Collections.emptyList() : new ArrayList<>(tmp);
-  }
-
-  List<Map<String, Object>> getTlogFilesDownloaded() {
-    //make a copy first because it can be null later
-    List<Map<String, Object>> tmp = tlogFilesDownloaded;
-    // NOTE: it's safe to make a copy of a SynchronizedCollection(ArrayList)
-    return tmp == null ? Collections.emptyList() : new ArrayList<>(tmp);
-  }
-
+  @SuppressWarnings({"unchecked"})
   List<Map<String, Object>> getConfFilesToDownload() {
     //make a copy first because it can be null later
     List<Map<String, Object>> tmp = confFilesToDownload;
@@ -1752,7 +1632,7 @@
           }
           //then read the packet of bytes
           fis.readFully(buf, 0, packetSize);
-          //compare the checksum as sent from the master
+          //compare the checksum as sent from the leader
           if (includeChecksum) {
             checksum.reset();
             checksum.update(buf, 0, packetSize);
@@ -1870,7 +1750,7 @@
       InputStream is = null;
 
       // TODO use shardhandler
-      try (HttpSolrClient client = new Builder(masterUrl)
+      try (HttpSolrClient client = new Builder(leaderUrl)
           .withHttpClient(myHttpClient)
           .withResponseParser(null)
           .withConnectionTimeout(connTimeout)
@@ -1979,11 +1859,11 @@
   NamedList getDetails() throws IOException, SolrServerException {
     ModifiableSolrParams params = new ModifiableSolrParams();
     params.set(COMMAND, CMD_DETAILS);
-    params.set("slave", false);
+    params.set("follower", false);
     params.set(CommonParams.QT, ReplicationHandler.PATH);
 
     // TODO use shardhandler
-    try (HttpSolrClient client = new HttpSolrClient.Builder(masterUrl)
+    try (HttpSolrClient client = new HttpSolrClient.Builder(leaderUrl)
         .withHttpClient(myHttpClient)
         .withConnectionTimeout(connTimeout)
         .withSocketTimeout(soTimeout)
@@ -1998,8 +1878,8 @@
     HttpClientUtil.close(myHttpClient);
   }
 
-  String getMasterUrl() {
-    return masterUrl;
+  String getLeaderUrl() {
+    return leaderUrl;
   }
 
   private static final int MAX_RETRIES = 5;
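Several of the renamed settings above (`leaderUrl`, `skipCommitOnLeaderVersionZero`) are resolved through the `getObjectWithBackwardCompatibility` helpers added to `ReplicationHandler` below, so existing configs that still say `masterUrl` keep working. A minimal sketch of that lookup order, with a plain `Map` standing in for `SolrParams`/`NamedList` and example key names:

```java
import java.util.Map;

final class BackCompatLookupSketch {
  // Prefer the new key; fall back to the legacy key; else return the default.
  static <T> T get(Map<String, T> args, String preferredKey, String legacyKey, T defaultValue) {
    T value = args.get(preferredKey);      // e.g. "leaderUrl"
    if (value == null) {
      value = args.get(legacyKey);         // e.g. "masterUrl"
    }
    return value != null ? value : defaultValue;
  }

  public static void main(String[] args) {
    Map<String, String> conf = Map.of("masterUrl", "http://localhost:8983/solr/core1");
    // Legacy configs resolve through the old key when the new one is absent.
    System.out.println(get(conf, "leaderUrl", "masterUrl", null));
  }
}
```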
diff --git a/solr/core/src/java/org/apache/solr/handler/MoreLikeThisHandler.java b/solr/core/src/java/org/apache/solr/handler/MoreLikeThisHandler.java
index 7f55a3f..e972103 100644
--- a/solr/core/src/java/org/apache/solr/handler/MoreLikeThisHandler.java
+++ b/solr/core/src/java/org/apache/solr/handler/MoreLikeThisHandler.java
@@ -100,10 +100,7 @@
   {
     SolrParams params = req.getParams();
 
-    long timeAllowed = (long)params.getInt( CommonParams.TIME_ALLOWED, -1 );
-    if(timeAllowed > 0) {
-      SolrQueryTimeoutImpl.set(timeAllowed);
-    }
+    SolrQueryTimeoutImpl.set(req);
       try {
 
         // Set field flags
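The `MoreLikeThisHandler` hunk replaces hand-rolled `timeAllowed` handling with `SolrQueryTimeoutImpl.set(req)`, which initializes the per-thread query deadline straight from the request. As a rough, self-contained illustration of that mechanism (the class and method names below are illustrative, not the Solr API):

```java
public class DeadlineSketch {
  // One deadline per request-handling thread, like SolrQueryTimeoutImpl keeps.
  private static final ThreadLocal<Long> DEADLINE_NANOS = new ThreadLocal<>();

  static void set(long timeAllowedMs) {
    DEADLINE_NANOS.set(System.nanoTime() + timeAllowedMs * 1_000_000L);
  }

  static boolean shouldExit() {
    Long deadline = DEADLINE_NANOS.get();
    return deadline != null && System.nanoTime() > deadline;
  }

  static void reset() {
    DEADLINE_NANOS.remove();
  }

  public static void main(String[] args) throws InterruptedException {
    set(50); // as a request with timeAllowed=50 would
    try {
      while (!shouldExit()) {
        Thread.sleep(10); // stand-in for query work that checks the deadline
      }
      System.out.println("time allowed exhausted; stop and return partial results");
    } finally {
      reset(); // always clear the per-thread deadline
    }
  }
}
```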
diff --git a/solr/core/src/java/org/apache/solr/handler/ReplicationHandler.java b/solr/core/src/java/org/apache/solr/handler/ReplicationHandler.java
index b1b16b0..1cf89a9 100644
--- a/solr/core/src/java/org/apache/solr/handler/ReplicationHandler.java
+++ b/solr/core/src/java/org/apache/solr/handler/ReplicationHandler.java
@@ -96,7 +96,6 @@
 import org.apache.solr.request.SolrQueryRequest;
 import org.apache.solr.response.SolrQueryResponse;
 import org.apache.solr.search.SolrIndexSearcher;
-import org.apache.solr.update.CdcrUpdateLog;
 import org.apache.solr.update.SolrIndexWriter;
 import org.apache.solr.update.VersionInfo;
 import org.apache.solr.common.util.SolrNamedThreadFactory;
@@ -111,15 +110,15 @@
 import static org.apache.solr.common.params.CommonParams.NAME;
 
 /**
- * <p> A Handler which provides a REST API for replication and serves replication requests from Slaves. </p>
- * <p>When running on the master, it provides the following commands <ol> <li>Get the current replicable index version
+ * <p> A Handler which provides a REST API for replication and serves replication requests from Followers. </p>
+ * <p>When running on the leader, it provides the following commands <ol> <li>Get the current replicable index version
  * (command=indexversion)</li> <li>Get the list of files for a given index version
  * (command=filelist&amp;indexversion=&lt;VERSION&gt;)</li> <li>Get full or a part (chunk) of a given index or a config
  * file (command=filecontent&amp;file=&lt;FILE_NAME&gt;) You can optionally specify an offset and length to get that
  * chunk of the file. You can request a configuration file by using "cf" parameter instead of the "file" parameter.</li>
- * <li>Get status/statistics (command=details)</li> </ol> <p>When running on the slave, it provides the following
+ * <li>Get status/statistics (command=details)</li> </ol> <p>When running on the follower, it provides the following
  * commands <ol> <li>Perform an index fetch now (command=snappull)</li> <li>Get status/statistics (command=details)</li>
- * <li>Abort an index fetch (command=abort)</li> <li>Enable/Disable polling the master for new versions (command=enablepoll
+ * <li>Abort an index fetch (command=abort)</li> <li>Enable/Disable polling the leader for new versions (command=enablepoll
  * or command=disablepoll)</li> </ol>
  *
  *
@@ -185,9 +184,9 @@
 
   private NamedList<String> confFileNameAlias = new NamedList<>();
 
-  private boolean isMaster = false;
+  private boolean isLeader = false;
 
-  private boolean isSlave = false;
+  private boolean isFollower = false;
 
   private boolean replicateOnOptimize = false;
 
@@ -240,7 +239,7 @@
     final SolrParams solrParams = req.getParams();
     String command = solrParams.required().get(COMMAND);
 
-    // This command does not give the current index version of the master
+    // This command does not give the current index version of the leader
     // It gives the current 'replicateable' index version
     if (command.equals(CMD_INDEX_VERSION)) {
       IndexCommit commitPoint = indexCommitPoint;  // make a copy so it won't change
@@ -291,12 +290,12 @@
       if (abortFetch()) {
         rsp.add(STATUS, OK_STATUS);
       } else {
-        reportErrorOnResponse(rsp, "No slave configured", null);
+        reportErrorOnResponse(rsp, "No follower configured", null);
       }
     } else if (command.equals(CMD_SHOW_COMMITS)) {
       populateCommitInfo(rsp);
     } else if (command.equals(CMD_DETAILS)) {
-      getReplicationDetails(rsp, solrParams.getBool("slave", true));
+      getReplicationDetails(rsp, getBoolWithBackwardCompatibility(solrParams, "follower", "slave", true));
     } else if (CMD_ENABLE_REPL.equalsIgnoreCase(command)) {
       replicationEnabled.set(true);
       rsp.add(STATUS, OK_STATUS);
@@ -305,6 +304,36 @@
       rsp.add(STATUS, OK_STATUS);
     }
   }
+  
+  static boolean getBoolWithBackwardCompatibility(SolrParams params, String preferredKey, String alternativeKey, boolean defaultValue) {
+    Boolean value = params.getBool(preferredKey);
+    if (value != null) {
+      return value;
+    }
+    return params.getBool(alternativeKey, defaultValue);
+  }
+  
+  @SuppressWarnings("unchecked")
+  static <T> T getObjectWithBackwardCompatibility(SolrParams params, String preferredKey, String alternativeKey, T defaultValue) {
+    Object value = params.get(preferredKey);
+    if (value != null) {
+      return (T) value;
+    }
+    value = params.get(alternativeKey);
+    if (value != null) {
+      return (T) value;
+    }
+    return defaultValue;
+  }
+  
+  @SuppressWarnings("unchecked")
+  static <T> T getObjectWithBackwardCompatibility(NamedList<?> params, String preferredKey, String alternativeKey) {
+    Object value = params.get(preferredKey);
+    if (value != null) {
+      return (T) value;
+    }
+    return (T) params.get(alternativeKey);
+  }
 
   private void reportErrorOnResponse(SolrQueryResponse response, String message, Exception e) {
     response.add(STATUS, ERR_STATUS);
@@ -337,9 +366,9 @@
   }
 
   private void fetchIndex(SolrParams solrParams, SolrQueryResponse rsp) throws InterruptedException {
-    String masterUrl = solrParams.get(MASTER_URL);
-    if (!isSlave && masterUrl == null) {
-      reportErrorOnResponse(rsp, "No slave configured or no 'masterUrl' specified", null);
+    String leaderUrl = getObjectWithBackwardCompatibility(solrParams, LEADER_URL, LEGACY_LEADER_URL, null);
+    if (!isFollower && leaderUrl == null) {
+      reportErrorOnResponse(rsp, "No follower configured or no 'leaderUrl' specified", null);
       return;
     }
     final SolrParams paramsCopy = new ModifiableSolrParams(solrParams);
@@ -406,7 +435,7 @@
   private volatile IndexFetcher currentIndexFetcher;
 
   public IndexFetchResult doFetch(SolrParams solrParams, boolean forceReplication) {
-    String masterUrl = solrParams == null ? null : solrParams.get(MASTER_URL);
+    String leaderUrl = solrParams == null ? null : ReplicationHandler.getObjectWithBackwardCompatibility(solrParams, LEADER_URL, LEGACY_LEADER_URL, null);
     if (!indexFetchLock.tryLock())
       return IndexFetchResult.LOCK_OBTAIN_FAILED;
     if (core.getCoreContainer().isShutDown()) {
@@ -414,7 +443,7 @@
       return IndexFetchResult.CONTAINER_IS_SHUTTING_DOWN; 
     }
     try {
-      if (masterUrl != null) {
+      if (leaderUrl != null) {
         if (currentIndexFetcher != null && currentIndexFetcher != pollingIndexFetcher) {
           currentIndexFetcher.destroy();
         }
@@ -701,19 +730,6 @@
       }
       rsp.add(CMD_GET_FILE_LIST, result);
       
-      if (solrParams.getBool(TLOG_FILES, false)) {
-        try {
-          List<Map<String, Object>> tlogfiles = getTlogFileList(commit);
-          log.info("Adding tlog files to list: {}", tlogfiles);
-          rsp.add(TLOG_FILES, tlogfiles);
-        }
-        catch (IOException e) {
-          log.error("Unable to get tlog file names for indexCommit generation: {}", commit.getGeneration(), e);
-          reportErrorOnResponse(rsp, "unable to get tlog file names for given index generation", e);
-          return;
-        }
-      }
-      
       if (confFileNameAlias.size() < 1 || core.getCoreContainer().isZooKeeperAware())
         return;
       log.debug("Adding config files to list: {}", includeConfFiles);
@@ -733,29 +749,6 @@
   }
 
   /**
-   * Retrieves the list of tlog files associated to a commit point.
-   * NOTE: The commit <b>MUST</b> be reserved before calling this method
-   */
-  List<Map<String, Object>> getTlogFileList(IndexCommit commit) throws IOException {
-    long maxVersion = this.getMaxVersion(commit);
-    CdcrUpdateLog ulog = (CdcrUpdateLog) core.getUpdateHandler().getUpdateLog();
-    String[] logList = ulog.getLogList(new File(ulog.getLogDir()));
-    List<Map<String, Object>> tlogFiles = new ArrayList<>();
-    for (String fileName : logList) {
-      // filter out tlogs that are older than the current index commit generation, so that the list of tlog files is
-      // in synch with the latest index commit point
-      long startVersion = Math.abs(Long.parseLong(fileName.substring(fileName.lastIndexOf('.') + 1)));
-      if (startVersion < maxVersion) {
-        Map<String, Object> fileMeta = new HashMap<>();
-        fileMeta.put(NAME, fileName);
-        fileMeta.put(SIZE, new File(ulog.getLogDir(), fileName).length());
-        tlogFiles.add(fileMeta);
-      }
-    }
-    return tlogFiles;
-  }
-
-  /**
    * Retrieves the maximum version number from an index commit.
    * NOTE: The commit <b>MUST</b> be reserved before calling this method
    */
@@ -826,7 +819,7 @@
       log.info("inside disable poll, value of pollDisabled = {}", pollDisabled);
       rsp.add(STATUS, OK_STATUS);
     } else {
-      reportErrorOnResponse(rsp, "No slave configured", null);
+      reportErrorOnResponse(rsp, "No follower configured", null);
     }
   }
 
@@ -836,7 +829,7 @@
       log.info("inside enable poll, value of pollDisabled = {}", pollDisabled);
       rsp.add(STATUS, OK_STATUS);
     } else {
-      reportErrorOnResponse(rsp, "No slave configured", null);
+      reportErrorOnResponse(rsp, "No follower configured", null);
     }
   }
 
@@ -871,7 +864,7 @@
 
   @Override
   public String getDescription() {
-    return "ReplicationHandler provides replication of index and configuration files from Master to Slaves";
+    return "ReplicationHandler provides replication of index and configuration files from Leader to Followers";
   }
 
   /**
@@ -886,6 +879,7 @@
     }
   }
 
+  //TODO: Handle compatibility in 8.x
   @Override
   public void initializeMetrics(SolrMetricsContext parentContext, String scope) {
     super.initializeMetrics(parentContext, scope);
@@ -897,14 +891,14 @@
         true, GENERATION, getCategory().toString(), scope);
     solrMetricsContext.gauge(() -> (core != null && !core.isClosed() ? core.getIndexDir() : ""),
         true, "indexPath", getCategory().toString(), scope);
-    solrMetricsContext.gauge(() -> isMaster,
-         true, "isMaster", getCategory().toString(), scope);
-    solrMetricsContext.gauge(() -> isSlave,
-         true, "isSlave", getCategory().toString(), scope);
+    solrMetricsContext.gauge(() -> isLeader,
+         true, "isLeader", getCategory().toString(), scope);
+    solrMetricsContext.gauge(() -> isFollower,
+         true, "isFollower", getCategory().toString(), scope);
     final MetricsMap fetcherMap = new MetricsMap((detailed, map) -> {
       IndexFetcher fetcher = currentIndexFetcher;
       if (fetcher != null) {
-        map.put(MASTER_URL, fetcher.getMasterUrl());
+        map.put(LEADER_URL, fetcher.getLeaderUrl());
         if (getPollInterval() != null) {
           map.put(POLL_INTERVAL, getPollInterval());
         }
@@ -930,11 +924,11 @@
       }
     });
     solrMetricsContext.gauge(fetcherMap, true, "fetcher", getCategory().toString(), scope);
-    solrMetricsContext.gauge(() -> isMaster && includeConfFiles != null ? includeConfFiles : "",
+    solrMetricsContext.gauge(() -> isLeader && includeConfFiles != null ? includeConfFiles : "",
          true, "confFilesToReplicate", getCategory().toString(), scope);
-    solrMetricsContext.gauge(() -> isMaster ? getReplicateAfterStrings() : Collections.<String>emptyList(),
+    solrMetricsContext.gauge(() -> isLeader ? getReplicateAfterStrings() : Collections.<String>emptyList(),
         true, REPLICATE_AFTER, getCategory().toString(), scope);
-    solrMetricsContext.gauge( () -> isMaster && replicationEnabled.get(),
+    solrMetricsContext.gauge( () -> isLeader && replicationEnabled.get(),
         true, "replicationEnabled", getCategory().toString(), scope);
   }
 
@@ -942,76 +936,76 @@
   /**
    * Used for showing statistics and progress information.
    */
-  private NamedList<Object> getReplicationDetails(SolrQueryResponse rsp, boolean showSlaveDetails) {
+  private NamedList<Object> getReplicationDetails(SolrQueryResponse rsp, boolean showFollowerDetails) {
     NamedList<Object> details = new SimpleOrderedMap<>();
-    NamedList<Object> master = new SimpleOrderedMap<>();
-    NamedList<Object> slave = new SimpleOrderedMap<>();
+    NamedList<Object> leader = new SimpleOrderedMap<>();
+    NamedList<Object> follower = new SimpleOrderedMap<>();
 
     details.add("indexSize", NumberUtils.readableSize(core.getIndexSize()));
     details.add("indexPath", core.getIndexDir());
     details.add(CMD_SHOW_COMMITS, getCommits());
-    details.add("isMaster", String.valueOf(isMaster));
-    details.add("isSlave", String.valueOf(isSlave));
+    details.add("isLeader", String.valueOf(isLeader));
+    details.add("isFollower", String.valueOf(isFollower));
     CommitVersionInfo vInfo = getIndexVersion();
     details.add("indexVersion", null == vInfo ? 0 : vInfo.version);
     details.add(GENERATION, null == vInfo ? 0 : vInfo.generation);
 
     IndexCommit commit = indexCommitPoint;  // make a copy so it won't change
 
-    if (isMaster) {
-      if (includeConfFiles != null) master.add(CONF_FILES, includeConfFiles);
-      master.add(REPLICATE_AFTER, getReplicateAfterStrings());
-      master.add("replicationEnabled", String.valueOf(replicationEnabled.get()));
+    if (isLeader) {
+      if (includeConfFiles != null) leader.add(CONF_FILES, includeConfFiles);
+      leader.add(REPLICATE_AFTER, getReplicateAfterStrings());
+      leader.add("replicationEnabled", String.valueOf(replicationEnabled.get()));
     }
 
-    if (isMaster && commit != null) {
+    if (isLeader && commit != null) {
       CommitVersionInfo repCommitInfo = CommitVersionInfo.build(commit);
-      master.add("replicableVersion", repCommitInfo.version);
-      master.add("replicableGeneration", repCommitInfo.generation);
+      leader.add("replicableVersion", repCommitInfo.version);
+      leader.add("replicableGeneration", repCommitInfo.generation);
     }
 
     IndexFetcher fetcher = currentIndexFetcher;
     if (fetcher != null) {
       Properties props = loadReplicationProperties();
-      if (showSlaveDetails) {
+      if (showFollowerDetails) {
         try {
           @SuppressWarnings({"rawtypes"})
           NamedList nl = fetcher.getDetails();
-          slave.add("masterDetails", nl.get(CMD_DETAILS));
+          follower.add("leaderDetails", nl.get(CMD_DETAILS));
         } catch (Exception e) {
           log.warn(
-              "Exception while invoking 'details' method for replication on master ",
+              "Exception while invoking 'details' method for replication on leader ",
               e);
-          slave.add(ERR_STATUS, "invalid_master");
+          follower.add(ERR_STATUS, "invalid_leader");
         }
       }
-      slave.add(MASTER_URL, fetcher.getMasterUrl());
+      follower.add(LEADER_URL, fetcher.getLeaderUrl());
       if (getPollInterval() != null) {
-        slave.add(POLL_INTERVAL, getPollInterval());
+        follower.add(POLL_INTERVAL, getPollInterval());
       }
       Date nextScheduled = getNextScheduledExecTime();
       if (nextScheduled != null && !isPollingDisabled()) {
-        slave.add(NEXT_EXECUTION_AT, nextScheduled.toString());
+        follower.add(NEXT_EXECUTION_AT, nextScheduled.toString());
       } else if (isPollingDisabled()) {
-        slave.add(NEXT_EXECUTION_AT, "Polling disabled");
+        follower.add(NEXT_EXECUTION_AT, "Polling disabled");
       }
-      addVal(slave, IndexFetcher.INDEX_REPLICATED_AT, props, Date.class);
-      addVal(slave, IndexFetcher.INDEX_REPLICATED_AT_LIST, props, List.class);
-      addVal(slave, IndexFetcher.REPLICATION_FAILED_AT_LIST, props, List.class);
-      addVal(slave, IndexFetcher.TIMES_INDEX_REPLICATED, props, Integer.class);
-      addVal(slave, IndexFetcher.CONF_FILES_REPLICATED, props, Integer.class);
-      addVal(slave, IndexFetcher.TIMES_CONFIG_REPLICATED, props, Integer.class);
-      addVal(slave, IndexFetcher.CONF_FILES_REPLICATED_AT, props, Integer.class);
-      addVal(slave, IndexFetcher.LAST_CYCLE_BYTES_DOWNLOADED, props, Long.class);
-      addVal(slave, IndexFetcher.TIMES_FAILED, props, Integer.class);
-      addVal(slave, IndexFetcher.REPLICATION_FAILED_AT, props, Date.class);
-      addVal(slave, IndexFetcher.PREVIOUS_CYCLE_TIME_TAKEN, props, Long.class);
-      addVal(slave, IndexFetcher.CLEARED_LOCAL_IDX, props, Long.class);
+      addVal(follower, IndexFetcher.INDEX_REPLICATED_AT, props, Date.class);
+      addVal(follower, IndexFetcher.INDEX_REPLICATED_AT_LIST, props, List.class);
+      addVal(follower, IndexFetcher.REPLICATION_FAILED_AT_LIST, props, List.class);
+      addVal(follower, IndexFetcher.TIMES_INDEX_REPLICATED, props, Integer.class);
+      addVal(follower, IndexFetcher.CONF_FILES_REPLICATED, props, Integer.class);
+      addVal(follower, IndexFetcher.TIMES_CONFIG_REPLICATED, props, Integer.class);
+      addVal(follower, IndexFetcher.CONF_FILES_REPLICATED_AT, props, Integer.class);
+      addVal(follower, IndexFetcher.LAST_CYCLE_BYTES_DOWNLOADED, props, Long.class);
+      addVal(follower, IndexFetcher.TIMES_FAILED, props, Integer.class);
+      addVal(follower, IndexFetcher.REPLICATION_FAILED_AT, props, Date.class);
+      addVal(follower, IndexFetcher.PREVIOUS_CYCLE_TIME_TAKEN, props, Long.class);
+      addVal(follower, IndexFetcher.CLEARED_LOCAL_IDX, props, Long.class);
 
-      slave.add("currentDate", new Date().toString());
-      slave.add("isPollingDisabled", String.valueOf(isPollingDisabled()));
+      follower.add("currentDate", new Date().toString());
+      follower.add("isPollingDisabled", String.valueOf(isPollingDisabled()));
       boolean isReplicating = isReplicating();
-      slave.add("isReplicating", String.valueOf(isReplicating));
+      follower.add("isReplicating", String.valueOf(isReplicating));
       if (isReplicating) {
         try {
           long bytesToDownload = 0;
@@ -1027,9 +1021,9 @@
             bytesToDownload += (Long) file.get(SIZE);
           }
 
-          slave.add("filesToDownload", filesToDownload);
-          slave.add("numFilesToDownload", String.valueOf(filesToDownload.size()));
-          slave.add("bytesToDownload", NumberUtils.readableSize(bytesToDownload));
+          follower.add("filesToDownload", filesToDownload);
+          follower.add("numFilesToDownload", String.valueOf(filesToDownload.size()));
+          follower.add("bytesToDownload", NumberUtils.readableSize(bytesToDownload));
 
           long bytesDownloaded = 0;
           List<String> filesDownloaded = new ArrayList<>();
@@ -1058,17 +1052,17 @@
                 percentDownloaded = (currFileSizeDownloaded * 100) / currFileSize;
             }
           }
-          slave.add("filesDownloaded", filesDownloaded);
-          slave.add("numFilesDownloaded", String.valueOf(filesDownloaded.size()));
+          follower.add("filesDownloaded", filesDownloaded);
+          follower.add("numFilesDownloaded", String.valueOf(filesDownloaded.size()));
 
           long estimatedTimeRemaining = 0;
 
           Date replicationStartTimeStamp = fetcher.getReplicationStartTimeStamp();
           if (replicationStartTimeStamp != null) {
-            slave.add("replicationStartTime", replicationStartTimeStamp.toString());
+            follower.add("replicationStartTime", replicationStartTimeStamp.toString());
           }
           long elapsed = fetcher.getReplicationTimeElapsed();
-          slave.add("timeElapsed", String.valueOf(elapsed) + "s");
+          follower.add("timeElapsed", String.valueOf(elapsed) + "s");
 
           if (bytesDownloaded > 0)
             estimatedTimeRemaining = ((bytesToDownload - bytesDownloaded) * elapsed) / bytesDownloaded;
@@ -1079,24 +1073,24 @@
           if (elapsed > 0)
             downloadSpeed = (bytesDownloaded / elapsed);
           if (currFile != null)
-            slave.add("currentFile", currFile);
-          slave.add("currentFileSize", NumberUtils.readableSize(currFileSize));
-          slave.add("currentFileSizeDownloaded", NumberUtils.readableSize(currFileSizeDownloaded));
-          slave.add("currentFileSizePercent", String.valueOf(percentDownloaded));
-          slave.add("bytesDownloaded", NumberUtils.readableSize(bytesDownloaded));
-          slave.add("totalPercent", String.valueOf(totalPercent));
-          slave.add("timeRemaining", String.valueOf(estimatedTimeRemaining) + "s");
-          slave.add("downloadSpeed", NumberUtils.readableSize(downloadSpeed));
+            follower.add("currentFile", currFile);
+          follower.add("currentFileSize", NumberUtils.readableSize(currFileSize));
+          follower.add("currentFileSizeDownloaded", NumberUtils.readableSize(currFileSizeDownloaded));
+          follower.add("currentFileSizePercent", String.valueOf(percentDownloaded));
+          follower.add("bytesDownloaded", NumberUtils.readableSize(bytesDownloaded));
+          follower.add("totalPercent", String.valueOf(totalPercent));
+          follower.add("timeRemaining", String.valueOf(estimatedTimeRemaining) + "s");
+          follower.add("downloadSpeed", NumberUtils.readableSize(downloadSpeed));
         } catch (Exception e) {
           log.error("Exception while writing replication details: ", e);
         }
       }
     }
 
-    if (isMaster)
-      details.add("master", master);
-    if (slave.size() > 0)
-      details.add("slave", slave);
+    if (isLeader)
+      details.add("leader", leader);
+    if (follower.size() > 0)
+      details.add("follower", follower);
 
     @SuppressWarnings({"rawtypes"})
     NamedList snapshotStats = snapShootDetails;
@@ -1230,7 +1224,7 @@
   }
 
   @Override
-  @SuppressWarnings({"unchecked", "resource"})
+  @SuppressWarnings({"resource"})
   public void inform(SolrCore core) {
     this.core = core;
     registerCloseHook();
@@ -1241,33 +1235,33 @@
       numberBackupsToKeep = 0;
     }
     @SuppressWarnings({"rawtypes"})
-    NamedList slave = (NamedList) initArgs.get("slave");
-    boolean enableSlave = isEnabled( slave );
-    if (enableSlave) {
-      currentIndexFetcher = pollingIndexFetcher = new IndexFetcher(slave, this, core);
-      setupPolling((String) slave.get(POLL_INTERVAL));
-      isSlave = true;
+    NamedList follower = getObjectWithBackwardCompatibility(initArgs, "follower", "slave");
+    boolean enableFollower = isEnabled( follower );
+    if (enableFollower) {
+      currentIndexFetcher = pollingIndexFetcher = new IndexFetcher(follower, this, core);
+      setupPolling((String) follower.get(POLL_INTERVAL));
+      isFollower = true;
     }
     @SuppressWarnings({"rawtypes"})
-    NamedList master = (NamedList) initArgs.get("master");
-    boolean enableMaster = isEnabled( master );
+    NamedList leader = getObjectWithBackwardCompatibility(initArgs, "leader", "master");
+    boolean enableLeader = isEnabled( leader );
 
-    if (enableMaster || (enableSlave && !currentIndexFetcher.fetchFromLeader)) {
+    if (enableLeader || (enableFollower && !currentIndexFetcher.fetchFromLeader)) {
       if (core.getCoreContainer().getZkController() != null) {
         log.warn("SolrCloud is enabled for core {} but so is old-style replication. "
                 + "Make sure you intend this behavior, it usually indicates a mis-configuration. "
-                + "Master setting is {} and slave setting is {}"
-        , core.getName(), enableMaster, enableSlave);
+                + "Leader setting is {} and follower setting is {}"
+        , core.getName(), enableLeader, enableFollower);
       }
     }
 
-    if (!enableSlave && !enableMaster) {
-      enableMaster = true;
-      master = new NamedList<>();
+    if (!enableFollower && !enableLeader) {
+      enableLeader = true;
+      leader = new NamedList<>();
     }
 
-    if (enableMaster) {
-      includeConfFiles = (String) master.get(CONF_FILES);
+    if (enableLeader) {
+      includeConfFiles = (String) leader.get(CONF_FILES);
       if (includeConfFiles != null && includeConfFiles.trim().length() > 0) {
         List<String> files = Arrays.asList(includeConfFiles.split(","));
         for (String file : files) {
@@ -1279,11 +1273,11 @@
         log.info("Replication enabled for following config files: {}", includeConfFiles);
       }
       @SuppressWarnings({"rawtypes"})
-      List backup = master.getAll("backupAfter");
+      List backup = leader.getAll("backupAfter");
       boolean backupOnCommit = backup.contains("commit");
       boolean backupOnOptimize = !backupOnCommit && backup.contains("optimize");
       @SuppressWarnings({"rawtypes"})
-      List replicateAfter = master.getAll(REPLICATE_AFTER);
+      List replicateAfter = leader.getAll(REPLICATE_AFTER);
       replicateOnCommit = replicateAfter.contains("commit");
       replicateOnOptimize = !replicateOnCommit && replicateAfter.contains("optimize");
 
@@ -1351,7 +1345,7 @@
           if (s!=null) s.decref();
         }
       }
-      isMaster = true;
+      isLeader = true;
     }
 
     {
@@ -1363,7 +1357,7 @@
     log.info("Commits will be reserved for {} ms", reserveCommitDuration);
   }
 
-  // check master or slave is enabled
+  // check leader or follower is enabled
   private boolean isEnabled( @SuppressWarnings({"rawtypes"})NamedList params ){
     if( params == null ) return false;
     Object enable = params.get( "enable" );
@@ -1768,13 +1762,19 @@
 
   private static final String EXCEPTION = "exception";
 
-  public static final String MASTER_URL = "masterUrl";
+  public static final String LEADER_URL = "leaderUrl";
+  /** @deprecated Only used for backwards compatibility. Use {@link #LEADER_URL} */
+  @Deprecated
+  public static final String LEGACY_LEADER_URL = "masterUrl";
 
   public static final String FETCH_FROM_LEADER = "fetchFromLeader";
 
-  // in case of TLOG replica, if masterVersion = zero, don't do commit
+  // in case of TLOG replica, if leaderVersion = zero, don't do commit
  // otherwise updates from current tlog won't be copied over properly to the new tlog, leading to data loss
-  public static final String SKIP_COMMIT_ON_MASTER_VERSION_ZERO = "skipCommitOnMasterVersionZero";
+  public static final String SKIP_COMMIT_ON_LEADER_VERSION_ZERO = "skipCommitOnLeaderVersionZero";
+  /** @deprecated Only used for backwards compatibility. Use {@link #SKIP_COMMIT_ON_LEADER_VERSION_ZERO} */
+  @Deprecated
+  public static final String LEGACY_SKIP_COMMIT_ON_LEADER_VERSION_ZERO = "skipCommitOnMasterVersionZero";
 
   public static final String STATUS = "status";
 
@@ -1836,8 +1836,6 @@
 
   public static final String CONF_FILES = "confFiles";
 
-  public static final String TLOG_FILES = "tlogFiles";
-
   public static final String REPLICATE_AFTER = "replicateAfter";
 
   public static final String FILE_STREAM = "filestream";
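The follower/leader renames above stay readable for configs that still say "slave"/"master" via the getObjectWithBackwardCompatibility lookup. A minimal sketch of that pattern, assuming a NamedList-style args map (illustrative only, not the actual Solr helper):

    import org.apache.solr.common.util.NamedList;

    // Prefer the new key, fall back to the legacy one so old solrconfig.xml
    // sections keep working. Sketch only; mirrors the call
    // getObjectWithBackwardCompatibility(initArgs, "follower", "slave") above.
    class BackCompatLookup {
      static Object get(NamedList<?> args, String newKey, String legacyKey) {
        Object val = args.get(newKey);
        return val != null ? val : args.get(legacyKey);
      }
    }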
diff --git a/solr/core/src/java/org/apache/solr/handler/SolrConfigHandler.java b/solr/core/src/java/org/apache/solr/handler/SolrConfigHandler.java
index b94064a..175c330 100644
--- a/solr/core/src/java/org/apache/solr/handler/SolrConfigHandler.java
+++ b/solr/core/src/java/org/apache/solr/handler/SolrConfigHandler.java
@@ -63,7 +63,6 @@
 import org.apache.solr.common.util.StrUtils;
 import org.apache.solr.common.util.Utils;
 import org.apache.solr.core.ConfigOverlay;
-import org.apache.solr.core.PluginBag;
 import org.apache.solr.core.PluginInfo;
 import org.apache.solr.core.RequestParams;
 import org.apache.solr.core.SolrConfig;
@@ -548,7 +547,7 @@
             latestVersion, 30);
       } else {
         SolrResourceLoader.persistConfLocally(loader, ConfigOverlay.RESOURCE_NAME, overlay.toByteArray());
-        req.getCore().getCoreContainer().reload(req.getCore().getName());
+        req.getCore().getCoreContainer().reload(req.getCore().getName(), req.getCore().uniqueId);
         log.info("Executed config commands successfully and persisted to File System {}", ops);
       }
 
@@ -572,21 +571,6 @@
       op.getMap(PluginInfo.INVARIANTS, null);
       op.getMap(PluginInfo.APPENDS, null);
       if (op.hasError()) return overlay;
-      if (info.clazz == PluginBag.RuntimeLib.class) {
-        if (!PluginBag.RuntimeLib.isEnabled()) {
-          op.addError("Solr not started with -Denable.runtime.lib=true");
-          return overlay;
-        }
-        try {
-          try (PluginBag.RuntimeLib rtl = new PluginBag.RuntimeLib(req.getCore())) {
-            rtl.init(new PluginInfo(info.tag, op.getDataMap()));
-          }
-        } catch (Exception e) {
-          op.addError(e.getMessage());
-          log.error("can't load this plugin ", e);
-          return overlay;
-        }
-      }
       if (!verifyClass(op, clz, info.clazz)) return overlay;
       if (pluginExists(info, overlay, name)) {
         if (isCeate) {
@@ -961,6 +945,11 @@
     protected SolrResponse createResponse(SolrClient client) {
       return null;
     }
+
+    @Override
+    public String getRequestType() {
+      return SolrRequest.SolrRequestType.ADMIN.toString();
+    }
   }
 
   @Override
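Passing req.getCore().uniqueId into CoreContainer.reload lets the container notice that the caller's core instance was already swapped out by a concurrent reload. A sketch of the assumed guard (hypothetical shape, not the actual CoreContainer code):

    import java.util.UUID;

    // Assumed shape of the id-guarded reload: if the live core's id no longer
    // matches the caller's id, the caller held a stale core and the reload is
    // skipped instead of reloading a core the caller never saw.
    class ReloadGuardSketch {
      UUID liveCoreId;
      void reload(String name, UUID callerCoreId) {
        if (callerCoreId != null && !callerCoreId.equals(liveCoreId)) {
          return; // core already replaced by another reload; nothing to do
        }
        // ... perform the actual reload ...
      }
    }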
diff --git a/solr/core/src/java/org/apache/solr/handler/StreamHandler.java b/solr/core/src/java/org/apache/solr/handler/StreamHandler.java
index d604616..0877c54 100644
--- a/solr/core/src/java/org/apache/solr/handler/StreamHandler.java
+++ b/solr/core/src/java/org/apache/solr/handler/StreamHandler.java
@@ -138,7 +138,7 @@
         streamFactory.withFunctionName(pluginInfo.name,
             () -> holder.getClazz());
       } else {
-        Class<? extends Expressible> clazz = core.getMemClassLoader().findClass(pluginInfo.className, Expressible.class);
+        Class<? extends Expressible> clazz = core.getResourceLoader().findClass(pluginInfo.className, Expressible.class);
         streamFactory.withFunctionName(pluginInfo.name, clazz);
       }
     }
diff --git a/solr/core/src/java/org/apache/solr/handler/admin/ConfigSetsHandlerApi.java b/solr/core/src/java/org/apache/solr/handler/admin/ConfigSetsHandlerApi.java
index 1a5f6f3..30450a8 100644
--- a/solr/core/src/java/org/apache/solr/handler/admin/ConfigSetsHandlerApi.java
+++ b/solr/core/src/java/org/apache/solr/handler/admin/ConfigSetsHandlerApi.java
@@ -42,6 +42,10 @@
     return configName + AUTOCREATED_CONFIGSET_SUFFIX;
   }
 
+  public static boolean isAutoGeneratedConfigSet(String configName) {
+    return configName != null && configName.endsWith(AUTOCREATED_CONFIGSET_SUFFIX);
+  }
+
   private static Collection<ApiCommand> createMapping() {
     Map<ConfigSetMeta, ApiCommand> result = new EnumMap<>(ConfigSetMeta.class);
 
diff --git a/solr/core/src/java/org/apache/solr/handler/admin/HealthCheckHandler.java b/solr/core/src/java/org/apache/solr/handler/admin/HealthCheckHandler.java
index ddf22ef..21a8d64 100644
--- a/solr/core/src/java/org/apache/solr/handler/admin/HealthCheckHandler.java
+++ b/solr/core/src/java/org/apache/solr/handler/admin/HealthCheckHandler.java
@@ -70,10 +70,7 @@
 
   CoreContainer coreContainer;
 
-  public HealthCheckHandler() {}
-
   public HealthCheckHandler(final CoreContainer coreContainer) {
-    super();
     this.coreContainer = coreContainer;
   }
 
diff --git a/solr/core/src/java/org/apache/solr/handler/admin/InfoHandler.java b/solr/core/src/java/org/apache/solr/handler/admin/InfoHandler.java
index 98c320e..2d514b1 100644
--- a/solr/core/src/java/org/apache/solr/handler/admin/InfoHandler.java
+++ b/solr/core/src/java/org/apache/solr/handler/admin/InfoHandler.java
@@ -50,7 +50,10 @@
     handlers.put("properties", new PropertiesRequestHandler());
     handlers.put("logging", new LoggingHandler(coreContainer));
     handlers.put("system", new SystemInfoHandler(coreContainer));
-    handlers.put("health", new HealthCheckHandler(coreContainer));
+    if (coreContainer.getHealthCheckHandler() == null) {
+      throw new IllegalStateException("HealthCheckHandler needs to be initialized before creating InfoHandler");
+    }
+    handlers.put("health", coreContainer.getHealthCheckHandler());
 
   }
 
diff --git a/solr/core/src/java/org/apache/solr/handler/admin/ZookeeperStatusHandler.java b/solr/core/src/java/org/apache/solr/handler/admin/ZookeeperStatusHandler.java
index 62171cb..8129e56 100644
--- a/solr/core/src/java/org/apache/solr/handler/admin/ZookeeperStatusHandler.java
+++ b/solr/core/src/java/org/apache/solr/handler/admin/ZookeeperStatusHandler.java
@@ -156,8 +156,8 @@
         } else if ("leader".equals(state)) {
           leaders++;
           reportedFollowers = Math.max(
-              Integer.parseInt((String) stat.getOrDefault("zk_followers", "0")),
-              Integer.parseInt((String) stat.getOrDefault("zk_synced_followers", "0"))
+              (int) Float.parseFloat((String) stat.getOrDefault("zk_followers", "0")),
+              (int) Float.parseFloat((String) stat.getOrDefault("zk_synced_followers", "0"))
           );
         } else if ("standalone".equals(state)) {
           standalone++;
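Newer ZooKeeper versions can report follower counts as decimal strings (e.g. "2.0"), which Integer.parseInt rejects; truncating a parsed float accepts both forms. A quick check:

    // Why the parse change matters: Integer.parseInt("2.0") throws
    // NumberFormatException, while the float route handles both "2" and "2.0".
    public class ZkMetricParseSketch {
      public static void main(String[] args) {
        System.out.println((int) Float.parseFloat("2"));   // 2
        System.out.println((int) Float.parseFloat("2.0")); // 2
      }
    }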
diff --git a/solr/core/src/java/org/apache/solr/handler/component/FacetComponent.java b/solr/core/src/java/org/apache/solr/handler/component/FacetComponent.java
index 698525a..ecce09f 100644
--- a/solr/core/src/java/org/apache/solr/handler/component/FacetComponent.java
+++ b/solr/core/src/java/org/apache/solr/handler/component/FacetComponent.java
@@ -1033,9 +1033,9 @@
       }
       
       for (Entry<String,List<NamedList<Object>>> pivotFacetResponseFromShard : pivotFacetResponsesFromShard) {
-        PivotFacet masterPivotFacet = fi.pivotFacets.get(pivotFacetResponseFromShard.getKey());
-        masterPivotFacet.mergeResponseFromShard(shardNumber, rb, pivotFacetResponseFromShard.getValue());  
-        masterPivotFacet.removeAllRefinementsForShard(shardNumber);
+        PivotFacet aggregatedPivotFacet = fi.pivotFacets.get(pivotFacetResponseFromShard.getKey());
+        aggregatedPivotFacet.mergeResponseFromShard(shardNumber, rb, pivotFacetResponseFromShard.getValue());
+        aggregatedPivotFacet.removeAllRefinementsForShard(shardNumber);
       }
     }
     
diff --git a/solr/core/src/java/org/apache/solr/handler/component/HttpShardHandler.java b/solr/core/src/java/org/apache/solr/handler/component/HttpShardHandler.java
index f33b783..5aa8012 100644
--- a/solr/core/src/java/org/apache/solr/handler/component/HttpShardHandler.java
+++ b/solr/core/src/java/org/apache/solr/handler/component/HttpShardHandler.java
@@ -38,7 +38,7 @@
 import org.apache.solr.cloud.CloudDescriptor;
 import org.apache.solr.cloud.ZkController;
 import org.apache.solr.common.SolrException;
-import org.apache.solr.common.annotation.SolrSingleThreaded;
+import org.apache.solr.common.annotation.SolrThreadUnsafe;
 import org.apache.solr.common.cloud.Replica;
 import org.apache.solr.common.cloud.ZkCoreNodeProps;
 import org.apache.solr.common.params.CommonParams;
@@ -52,7 +52,7 @@
 import org.apache.solr.util.tracing.GlobalTracer;
 import org.apache.solr.util.tracing.SolrRequestCarrier;
 
-@SolrSingleThreaded
+@SolrThreadUnsafe
 public class HttpShardHandler extends ShardHandler {
   /**
    * If the request context map has an entry with this key and Boolean.TRUE as value,
diff --git a/solr/core/src/java/org/apache/solr/handler/component/RealTimeGetComponent.java b/solr/core/src/java/org/apache/solr/handler/component/RealTimeGetComponent.java
index 093c419..7956143 100644
--- a/solr/core/src/java/org/apache/solr/handler/component/RealTimeGetComponent.java
+++ b/solr/core/src/java/org/apache/solr/handler/component/RealTimeGetComponent.java
@@ -82,7 +82,6 @@
 import org.apache.solr.search.SolrIndexSearcher;
 import org.apache.solr.search.SolrReturnFields;
 import org.apache.solr.search.SyntaxError;
-import org.apache.solr.update.CdcrUpdateLog;
 import org.apache.solr.update.DocumentBuilder;
 import org.apache.solr.update.IndexFingerprint;
 import org.apache.solr.update.PeerSync;
@@ -136,15 +135,17 @@
     if (!params.getBool(COMPONENT_NAME, true)) {
       return;
     }
-    
-    // This seems rather kludgey, may there is better way to indicate
-    // that replica can support handling version ranges
+
+    // TODO: remove this in Solr 10.
+    // After SOLR-14641 other nodes won't call RTG with this param. We keep it
+    // for backward compatibility; if we removed it, nodes on older versions
+    // would assume that this node can't handle version ranges.
     String val = params.get("checkCanHandleVersionRanges");
     if(val != null) {
       rb.rsp.add("canHandleVersionRanges", true);
       return;
     }
-    
+
     val = params.get("getFingerprint");
     if(val != null) {
       processGetFingeprint(rb);
@@ -267,11 +268,7 @@
                if (oper == UpdateLog.ADD) {
                  doc = toSolrDoc((SolrInputDocument)entry.get(entry.size()-1), core.getLatestSchema());
                } else if (oper == UpdateLog.UPDATE_INPLACE) {
-                 if (ulog instanceof CdcrUpdateLog) {
-                   assert entry.size() == 6;
-                 } else {
-                   assert entry.size() == 5;
-                 }
+                 assert entry.size() == 5;
                  // For in-place update case, we have obtained the partial document till now. We need to
                  // resolve it to a full document to be returned to the user.
                  doc = resolveFullDocument(core, idBytes.get(), rsp.getReturnFields(), (SolrInputDocument)entry.get(entry.size()-1), entry, null);
@@ -569,12 +566,8 @@
         }
         switch (oper) {
           case UpdateLog.UPDATE_INPLACE:
-            if (ulog instanceof CdcrUpdateLog) {
-              assert entry.size() == 6;
-            } else {
-              assert entry.size() == 5;
-            }
-
+            assert entry.size() == 5;
+
             if (resolveFullDocument) {
               SolrInputDocument doc = (SolrInputDocument)entry.get(entry.size()-1);
               try {
diff --git a/solr/core/src/java/org/apache/solr/handler/component/SearchHandler.java b/solr/core/src/java/org/apache/solr/handler/component/SearchHandler.java
index c2bd1b6..24cc706 100644
--- a/solr/core/src/java/org/apache/solr/handler/component/SearchHandler.java
+++ b/solr/core/src/java/org/apache/solr/handler/component/SearchHandler.java
@@ -303,7 +303,7 @@
 
     final RTimerTree timer = rb.isDebug() ? req.getRequestTimer() : null;
 
-    if (req.getCore().getSolrConfig().useCircuitBreakers) {
+    if (req.getCore().getCircuitBreakerManager().isEnabled()) {
       List<CircuitBreaker> trippedCircuitBreakers;
 
       if (timer != null) {
@@ -350,10 +350,7 @@
     if (!rb.isDistrib) {
       // a normal non-distributed request
 
-      long timeAllowed = req.getParams().getLong(CommonParams.TIME_ALLOWED, -1L);
-      if (timeAllowed >= 0L) {
-        SolrQueryTimeoutImpl.set(timeAllowed);
-      }
+      SolrQueryTimeoutImpl.set(req);
       try {
         // The semantics of debugging vs not debugging are different enough that
         // it makes sense to have two control loops
diff --git a/solr/core/src/java/org/apache/solr/handler/component/StatsValuesFactory.java b/solr/core/src/java/org/apache/solr/handler/component/StatsValuesFactory.java
index 574cf05..67b1ee7 100644
--- a/solr/core/src/java/org/apache/solr/handler/component/StatsValuesFactory.java
+++ b/solr/core/src/java/org/apache/solr/handler/component/StatsValuesFactory.java
@@ -182,7 +182,7 @@
       // "NumericValueSourceStatsValues" which would have diff parent classes
       //
       // part of the complexity here being that the StatsValues API serves two
-      // masters: collecting concrete Values from things like DocValuesStats and
+      // purposes: collecting concrete Values from things like DocValuesStats and
       // the distributed aggregation logic, but also collecting docIds which it
       // then
      // uses to go out and pull concrete values from the ValueSource
diff --git a/solr/core/src/java/org/apache/solr/packagemanager/PackageManager.java b/solr/core/src/java/org/apache/solr/packagemanager/PackageManager.java
index c5f8c58..424f604 100644
--- a/solr/core/src/java/org/apache/solr/packagemanager/PackageManager.java
+++ b/solr/core/src/java/org/apache/solr/packagemanager/PackageManager.java
@@ -28,6 +28,7 @@
 import java.util.HashMap;
 import java.util.HashSet;
 import java.util.List;
+import java.util.Locale;
 import java.util.Map;
 import java.util.Scanner;
 import java.util.Set;
@@ -35,8 +36,14 @@
 
 import org.apache.commons.collections4.MultiValuedMap;
 import org.apache.commons.collections4.multimap.HashSetValuedHashMap;
+import org.apache.http.client.methods.HttpDelete;
+import org.apache.solr.client.solrj.SolrRequest;
+import org.apache.solr.client.solrj.SolrServerException;
 import org.apache.solr.client.solrj.impl.HttpSolrClient;
+import org.apache.solr.client.solrj.request.V2Request;
+import org.apache.solr.client.solrj.request.beans.Package;
 import org.apache.solr.client.solrj.request.beans.PluginMeta;
+import org.apache.solr.client.solrj.response.V2Response;
 import org.apache.solr.common.NavigableObject;
 import org.apache.solr.common.SolrException;
 import org.apache.solr.common.SolrException.ErrorCode;
@@ -44,6 +51,7 @@
 import org.apache.solr.common.cloud.ZkStateReader;
 import org.apache.solr.common.util.Pair;
 import org.apache.solr.common.util.Utils;
+import org.apache.solr.filestore.DistribPackageStore;
 import org.apache.solr.packagemanager.SolrPackage.Command;
 import org.apache.solr.packagemanager.SolrPackage.Manifest;
 import org.apache.solr.packagemanager.SolrPackage.Plugin;
@@ -85,6 +93,67 @@
     }
   }
 
+  public void uninstall(String packageName, String version) {
+    SolrPackageInstance packageInstance = getPackageInstance(packageName, version);
+    if (packageInstance == null) {
+      PackageUtils.printRed("Package " + packageName + ":" + version + " doesn't exist. Use the install command to install this package version first.");
+      System.exit(1);
+    }
+
+    // Make sure that this package instance is not deployed on any collection
+    Map<String, String> collectionsDeployedOn = getDeployedCollections(packageName);
+    for (String collection: collectionsDeployedOn.keySet()) {
+      if (version.equals(collectionsDeployedOn.get(collection))) {
+        PackageUtils.printRed("Package " + packageName + " is currently deployed on collection: " + collection + ". Undeploy the package with undeploy <package-name> -collections <collection1>[,<collection2>,...] before attempting to uninstall the package.");
+        System.exit(1);
+      }
+    }
+
+    // Make sure that no plugin from this package instance has been deployed as a cluster-level plugin
+    Map<String, SolrPackageInstance> clusterPackages = getPackagesDeployedAsClusterLevelPlugins();
+    for (String clusterPackageName: clusterPackages.keySet()) {
+      SolrPackageInstance clusterPackageInstance = clusterPackages.get(clusterPackageName);
+      if (packageName.equals(clusterPackageName) && version.equals(clusterPackageInstance.version)) {
+        PackageUtils.printRed("Package " + packageName + "is currently deployed as a cluster-level plugin (" + clusterPackageInstance.getCustomData() + "). Undeploy the package with undeploy <package-name> -collections <collection1>[,<collection2>,...] before uninstalling the package.");
+        System.exit(1);
+      }
+    }
+
+    // Delete the package by calling the Package API and remove its files
+
+    PackageUtils.printGreen("Executing Package API to remove this package...");
+    Package.DelVersion del = new Package.DelVersion();
+    del.version = version;
+    del.pkg = packageName;
+
+    V2Request req = new V2Request.Builder(PackageUtils.PACKAGE_PATH)
+            .forceV2(true)
+            .withMethod(SolrRequest.METHOD.POST)
+            .withPayload(Collections.singletonMap("delete", del))
+            .build();
+
+    try {
+      V2Response resp = req.process(solrClient);
+      PackageUtils.printGreen("Response: " + resp.jsonStr());
+    } catch (SolrServerException | IOException e) {
+      throw new SolrException(ErrorCode.BAD_REQUEST, e);
+    }
+
+    PackageUtils.printGreen("Executing Package Store API to remove the " + packageName + " package...");
+
+    List<String> filesToDelete = new ArrayList<>(packageInstance.files);
+    filesToDelete.add(String.format(Locale.ROOT, "/package/%s/%s/%s", packageName, version, "manifest.json"));
+    for (String filePath: filesToDelete) {
+      DistribPackageStore.deleteZKFileEntry(zkClient, filePath);
+      String path = solrClient.getBaseURL() + "/api/cluster/files" + filePath;
+      PackageUtils.printGreen("Deleting " + path);
+      HttpDelete httpDel = new HttpDelete(path);
+      Utils.executeHttpMethod(solrClient.getHttpClient(), path, Utils.JSONCONSUMER, httpDel);
+    }
+
+    PackageUtils.printGreen("Package uninstalled: " + packageName + ":" + version + ":-)");
+  }
+
   @SuppressWarnings({"unchecked", "rawtypes"})
   public List<SolrPackageInstance> fetchInstalledPackageInstances() throws SolrException {
     log.info("Getting packages from packages.json...");
@@ -100,10 +169,13 @@
           List pkg = (List)packagesZnodeMap.get(packageName);
           for (Map pkgVersion: (List<Map>)pkg) {
             Manifest manifest = PackageUtils.fetchManifest(solrClient, solrBaseUrl, pkgVersion.get("manifest").toString(), pkgVersion.get("manifestSHA512").toString());
-            List<Plugin> solrplugins = manifest.plugins;
-            SolrPackageInstance pkgInstance = new SolrPackageInstance(packageName.toString(), null, 
-                pkgVersion.get("version").toString(), manifest, solrplugins, manifest.parameterDefaults);
-            List<SolrPackageInstance> list = packages.containsKey(packageName)? packages.get(packageName): new ArrayList<SolrPackageInstance>();
+            List<Plugin> solrPlugins = manifest.plugins;
+            SolrPackageInstance pkgInstance = new SolrPackageInstance(packageName.toString(), null,
+                    pkgVersion.get("version").toString(), manifest, solrPlugins, manifest.parameterDefaults);
+            if (pkgVersion.containsKey("files")) {
+              pkgInstance.files = (List) pkgVersion.get("files");
+            }
+            List<SolrPackageInstance> list = packages.containsKey(packageName) ? packages.get(packageName) : new ArrayList<SolrPackageInstance>();
             list.add(pkgInstance);
             packages.put(packageName.toString(), list);
             ret.add(pkgInstance);
@@ -139,18 +211,28 @@
   }
 
   /**
-   * Get a list of packages that have their plugins deployed as cluster level plugins.
-   * The returned packages also contain the "pluginMeta" from "clusterprops.json" as custom data. 
+   * Get a map of packages (key: package name, value: package instance) that have their plugins deployed as cluster level plugins.
+   * The returned packages also contain the "pluginMeta" from "clusterprops.json" as custom data.
    */
+  @SuppressWarnings({"unchecked"})
   public Map<String, SolrPackageInstance> getPackagesDeployedAsClusterLevelPlugins() {
     Map<String, String> packageVersions = new HashMap<>();
     MultiValuedMap<String, PluginMeta> packagePlugins = new HashSetValuedHashMap<>(); // map of package name to multiple values of pluginMeta (Map<String, String>)
     @SuppressWarnings({"unchecked"})
-    Map<String, Object> result =  (Map<String, Object>)Utils.executeGET(solrClient.getHttpClient(),
-        solrBaseUrl + PackageUtils.CLUSTERPROPS_PATH, Utils.JSONCONSUMER);
+    Map<String, Object> result;
+    try {
+      result = (Map<String, Object>) Utils.executeGET(solrClient.getHttpClient(),
+               solrBaseUrl + PackageUtils.CLUSTERPROPS_PATH, Utils.JSONCONSUMER);
+    } catch (SolrException ex) {
+      if (ex.code() == ErrorCode.NOT_FOUND.code) {
+        result = Collections.emptyMap(); // Cluster props doesn't exist, which means no cluster-level plugins are installed.
+      } else {
+        throw ex;
+      }
+    }
     @SuppressWarnings({"unchecked"})
     Map<String, Object> clusterPlugins = (Map<String, Object>) result.getOrDefault("plugin", Collections.emptyMap());
-    for (String key: clusterPlugins.keySet()) {
+    for (String key : clusterPlugins.keySet()) {
       // Map<String, String> pluginMeta = (Map<String, String>) clusterPlugins.get(key);
       PluginMeta pluginMeta;
       try {
@@ -485,7 +567,7 @@
             }
             if (actualValue != null) {
               String expectedValue = PackageUtils.resolve(cmd.expected, pkg.parameterDefaults, overridesMap, systemParams);
-              PackageUtils.printGreen("Actual: " + actualValue+", expected: " + expectedValue);
+              PackageUtils.printGreen("Actual: " + actualValue + ", expected: " + expectedValue);
               if (!expectedValue.equals(actualValue)) {
                 PackageUtils.printRed("Failed to deploy plugin: " + plugin.name);
                 success = false;
@@ -516,7 +598,7 @@
               }
               if (actualValue != null) {
                 String expectedValue = PackageUtils.resolve(cmd.expected, pkg.parameterDefaults, collectionParameterOverrides, systemParams);
-                PackageUtils.printGreen("Actual: "+actualValue+", expected: "+expectedValue);
+                PackageUtils.printGreen("Actual: " + actualValue + ", expected: "+expectedValue);
                 if (!expectedValue.equals(actualValue)) {
                   PackageUtils.printRed("Failed to deploy plugin: " + plugin.name);
                   success = false;
@@ -542,7 +624,7 @@
     SolrPackageInstance latest = null;
     if (versions != null && !versions.isEmpty()) {
       latest = versions.get(0);
-      for (int i=0; i<versions.size(); i++) {
+      for (int i = 0; i < versions.size(); i++) {
         SolrPackageInstance pkg = versions.get(i);
         if (pkg.version.equals(version)) {
           return pkg;
diff --git a/solr/core/src/java/org/apache/solr/packagemanager/SolrPackageInstance.java b/solr/core/src/java/org/apache/solr/packagemanager/SolrPackageInstance.java
index dcfa670..a387351 100644
--- a/solr/core/src/java/org/apache/solr/packagemanager/SolrPackageInstance.java
+++ b/solr/core/src/java/org/apache/solr/packagemanager/SolrPackageInstance.java
@@ -46,7 +46,9 @@
 
   final public Map<String, String> parameterDefaults;
 
-  @JsonIgnore
+  public List<String> files;
+
+  @JsonIgnore
   private Object customData;
   
   @JsonIgnore
diff --git a/solr/core/src/java/org/apache/solr/pkg/PackageListeningClassLoader.java b/solr/core/src/java/org/apache/solr/pkg/PackageListeningClassLoader.java
index ced4bd0..c10af0c 100644
--- a/solr/core/src/java/org/apache/solr/pkg/PackageListeningClassLoader.java
+++ b/solr/core/src/java/org/apache/solr/pkg/PackageListeningClassLoader.java
@@ -22,7 +22,7 @@
 import org.apache.solr.common.SolrException;
 import org.apache.solr.core.CoreContainer;
 import org.apache.solr.core.PluginInfo;
-import org.apache.solr.core.SolrClassLoader;
+import org.apache.solr.common.cloud.SolrClassLoader;
 import org.apache.solr.core.SolrResourceLoader;
 
 import java.io.IOException;
diff --git a/solr/core/src/java/org/apache/solr/schema/FieldTypePluginLoader.java b/solr/core/src/java/org/apache/solr/schema/FieldTypePluginLoader.java
index 1f18326..a784387 100644
--- a/solr/core/src/java/org/apache/solr/schema/FieldTypePluginLoader.java
+++ b/solr/core/src/java/org/apache/solr/schema/FieldTypePluginLoader.java
@@ -34,7 +34,7 @@
 import org.apache.lucene.util.Version;
 import org.apache.solr.analysis.TokenizerChain;
 import org.apache.solr.common.SolrException;
-import org.apache.solr.core.SolrClassLoader;
+import org.apache.solr.common.cloud.SolrClassLoader;
 import org.apache.solr.core.SolrConfig;
 import org.apache.solr.util.DOMUtil;
 import org.apache.solr.util.plugin.AbstractPluginLoader;
diff --git a/solr/core/src/java/org/apache/solr/schema/IndexSchema.java b/solr/core/src/java/org/apache/solr/schema/IndexSchema.java
index af2a5b1..85b84a7 100644
--- a/solr/core/src/java/org/apache/solr/schema/IndexSchema.java
+++ b/solr/core/src/java/org/apache/solr/schema/IndexSchema.java
@@ -62,7 +62,7 @@
 import org.apache.solr.common.util.NamedList;
 import org.apache.solr.common.util.Pair;
 import org.apache.solr.common.util.SimpleOrderedMap;
-import org.apache.solr.core.SolrClassLoader;
+import org.apache.solr.common.cloud.SolrClassLoader;
 import org.apache.solr.core.SolrCore;
 import org.apache.solr.core.SolrResourceLoader;
 import org.apache.solr.core.XmlConfigFile;
@@ -190,7 +190,7 @@
   protected IndexSchema(Version luceneVersion, SolrResourceLoader loader, Properties substitutableProperties) {
     this.luceneVersion = Objects.requireNonNull(luceneVersion);
     this.loader = loader;
-    this.solrClassLoader = loader.getCore() == null? loader: loader.getCore().getSchemaPluginsLoader();
+    this.solrClassLoader = loader; // loader.getCore() == null ? loader : loader.getCore().getSchemaPluginsLoader();
     this.substitutableProperties = substitutableProperties;
   }
 
diff --git a/solr/core/src/java/org/apache/solr/schema/ManagedIndexSchema.java b/solr/core/src/java/org/apache/solr/schema/ManagedIndexSchema.java
index d44e961..9fd5243 100644
--- a/solr/core/src/java/org/apache/solr/schema/ManagedIndexSchema.java
+++ b/solr/core/src/java/org/apache/solr/schema/ManagedIndexSchema.java
@@ -372,6 +372,11 @@
       return null;
     }
 
+    @Override
+    public String getRequestType() {
+      return SolrRequest.SolrRequestType.ADMIN.toString();
+    }
+
   }
 
 
diff --git a/solr/core/src/java/org/apache/solr/schema/SchemaManager.java b/solr/core/src/java/org/apache/solr/schema/SchemaManager.java
index 2402e15..d2f5b67 100644
--- a/solr/core/src/java/org/apache/solr/schema/SchemaManager.java
+++ b/solr/core/src/java/org/apache/solr/schema/SchemaManager.java
@@ -132,7 +132,7 @@
             latestVersion = ZkController.persistConfigResourceToZooKeeper
                 (zkLoader, managedIndexSchema.getSchemaZkVersion(), managedIndexSchema.getResourceName(),
                  sw.toString().getBytes(StandardCharsets.UTF_8), true);
-            req.getCore().getCoreContainer().reload(req.getCore().getName());
+            req.getCore().getCoreContainer().reload(req.getCore().getName(), req.getCore().uniqueId);
             break;
           } catch (ZkController.ResourceModifiedInZkException e) {
             log.info("Schema was modified by another node. Retrying..");
@@ -142,7 +142,7 @@
             //only for non cloud stuff
             managedIndexSchema.persistManagedSchema(false);
             core.setLatestSchema(managedIndexSchema);
-            core.getCoreContainer().reload(core.getName());
+            core.getCoreContainer().reload(core.getName(), core.uniqueId);
           } catch (SolrException e) {
             log.warn(errorMsg);
             errors = singletonList(errorMsg + e.getMessage());
diff --git a/solr/core/src/java/org/apache/solr/search/ExtendedDismaxQParser.java b/solr/core/src/java/org/apache/solr/search/ExtendedDismaxQParser.java
index fce9416..8f234f0 100644
--- a/solr/core/src/java/org/apache/solr/search/ExtendedDismaxQParser.java
+++ b/solr/core/src/java/org/apache/solr/search/ExtendedDismaxQParser.java
@@ -799,6 +799,12 @@
         }
         
         if (inString == 0) {
+          if (!ignoreQuote && ch == '"') {
+            // end of the token if we aren't in a string, backing
+            // up the position.
+            pos--;
+            break;
+          }
           switch (ch) {
             case '!':
             case '(':
diff --git a/solr/core/src/java/org/apache/solr/search/JoinQParserPlugin.java b/solr/core/src/java/org/apache/solr/search/JoinQParserPlugin.java
index e1a689e..5149b02 100644
--- a/solr/core/src/java/org/apache/solr/search/JoinQParserPlugin.java
+++ b/solr/core/src/java/org/apache/solr/search/JoinQParserPlugin.java
@@ -16,11 +16,13 @@
  */
 package org.apache.solr.search;
 
+import java.util.EnumSet;
 import java.util.HashSet;
 import java.util.List;
 import java.util.Set;
 
 import org.apache.lucene.search.Query;
+import org.apache.lucene.search.join.ScoreMode;
 import org.apache.solr.common.SolrException;
 import org.apache.solr.common.params.SolrParams;
 import org.apache.solr.common.util.NamedList;
@@ -67,12 +69,22 @@
         q.fromCoreOpenTime = jParams.fromCoreOpenTime;
         return q;
       }
+
+      @Override
+      Query makeJoinDirectFromParams(JoinParams jParams) {
+        return new JoinQuery(jParams.fromField, jParams.toField, null, jParams.fromQuery);
+      }
     },
     dvWithScore {
       @Override
       Query makeFilter(QParser qparser, JoinQParserPlugin plugin) throws SyntaxError {
         return new ScoreJoinQParserPlugin().createParser(qparser.qstr, qparser.localParams, qparser.params, qparser.req).parse();
       }
+
+      @Override
+      Query makeJoinDirectFromParams(JoinParams jParams) {
+        return ScoreJoinQParserPlugin.createJoinQuery(jParams.fromQuery, jParams.fromField, jParams.toField, ScoreMode.None);
+      }
     },
     topLevelDV {
       @Override
@@ -82,6 +94,11 @@
         q.fromCoreOpenTime = jParams.fromCoreOpenTime;
         return q;
       }
+
+      @Override
+      Query makeJoinDirectFromParams(JoinParams jParams) {
+        return new TopLevelJoinQuery(jParams.fromField, jParams.toField, null, jParams.fromQuery);
+      }
     },
     crossCollection {
       @Override
@@ -93,6 +110,10 @@
 
     abstract Query makeFilter(QParser qparser, JoinQParserPlugin plugin) throws SyntaxError;
 
+    Query makeJoinDirectFromParams(JoinParams jParams) {
+      throw new IllegalStateException("Join method [" + name() + "] doesn't support qparser-less creation");
+    }
+
     JoinParams parseJoin(QParser qparser) throws SyntaxError {
       final String fromField = qparser.getParam("from");
       final String fromIndex = qparser.getParam("fromIndex");
@@ -176,15 +197,40 @@
     };
   }
 
+  private static final EnumSet<Method> JOIN_METHOD_WHITELIST = EnumSet.of(Method.index, Method.topLevelDV, Method.dvWithScore);
   /**
    * A helper method for other plugins to create (non-scoring) JoinQueries wrapped around arbitrary queries against the same core.
    * 
    * @param subQuery the query to define the starting set of documents on the "left side" of the join
    * @param fromField "left side" field name to use in the join
    * @param toField "right side" field name to use in the join
+   * @param method indicates which implementation should be used to process the join.  Currently only 'index',
+   *               'dvWithScore', and 'topLevelDV' are supported.
    */
-  public static Query createJoinQuery(Query subQuery, String fromField, String toField) {
-    return new JoinQuery(fromField, toField, null, subQuery);
+  public static Query createJoinQuery(Query subQuery, String fromField, String toField, String method) {
+    // a null method defaults to 'index' for backwards compatibility
+    if (method == null) {
+      return new JoinQuery(fromField, toField, null, subQuery);
+    }
+
+    final Method joinMethod = parseMethodString(method);
+    if (! JOIN_METHOD_WHITELIST.contains(joinMethod)) {
+      // TODO Throw something that the callers here (FacetRequest) can catch and produce a more domain-appropriate error message for?
+      throw new SolrException(SolrException.ErrorCode.BAD_REQUEST,
+          "Join method " + method + " not supported for non-scoring, same-core joins");
+    }
+
+    final JoinParams jParams = new JoinParams(fromField, null, subQuery, 0L, toField);
+    return joinMethod.makeJoinDirectFromParams(jParams);
+  }
+
+  private static Method parseMethodString(String method) {
+    try {
+      return Method.valueOf(method);
+    } catch (IllegalArgumentException iae) {
+      throw new SolrException(SolrException.ErrorCode.BAD_REQUEST, "Provided join method '" + method + "' not supported");
+    }
   }
   
 }
\ No newline at end of file
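Callers of the widened factory pick the join implementation by name; null keeps the old 'index' behavior and anything outside the whitelist gets a 400. A usage sketch (TermQuery stands in for an arbitrary subquery; field names are hypothetical):

    import org.apache.lucene.index.Term;
    import org.apache.lucene.search.Query;
    import org.apache.lucene.search.TermQuery;
    import org.apache.solr.search.JoinQParserPlugin;

    public class JoinQuerySketch {
      public static void main(String[] args) {
        Query from = new TermQuery(new Term("category", "books"));
        // null method -> legacy 'index' join; "topLevelDV" is whitelisted
        Query legacy = JoinQParserPlugin.createJoinQuery(from, "id", "parent_id", null);
        Query topLevel = JoinQParserPlugin.createJoinQuery(from, "id", "parent_id", "topLevelDV");
        System.out.println(legacy + " / " + topLevel);
      }
    }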
diff --git a/solr/core/src/java/org/apache/solr/search/SolrIndexSearcher.java b/solr/core/src/java/org/apache/solr/search/SolrIndexSearcher.java
index 4cc9758..3ebd43c 100644
--- a/solr/core/src/java/org/apache/solr/search/SolrIndexSearcher.java
+++ b/solr/core/src/java/org/apache/solr/search/SolrIndexSearcher.java
@@ -35,7 +35,9 @@
 import java.util.concurrent.atomic.AtomicLong;
 import java.util.concurrent.atomic.AtomicReference;
 
+import com.codahale.metrics.Gauge;
 import com.google.common.collect.Iterables;
+
 import org.apache.lucene.document.Document;
 import org.apache.lucene.index.DirectoryReader;
 import org.apache.lucene.index.ExitableDirectoryReader;
@@ -51,6 +53,7 @@
 import org.apache.lucene.index.TermsEnum;
 import org.apache.lucene.search.*;
 import org.apache.lucene.search.TotalHits.Relation;
+import org.apache.lucene.store.AlreadyClosedException;
 import org.apache.lucene.store.Directory;
 import org.apache.lucene.util.Bits;
 import org.apache.lucene.util.BytesRef;
@@ -2274,12 +2277,12 @@
     parentContext.gauge(() -> warmupTime, true, "warmupTime", Category.SEARCHER.toString(), scope);
     parentContext.gauge(() -> registerTime, true, "registeredAt", Category.SEARCHER.toString(), scope);
     // reader stats
-    parentContext.gauge(() -> reader.numDocs(), true, "numDocs", Category.SEARCHER.toString(), scope);
-    parentContext.gauge(() -> reader.maxDoc(), true, "maxDoc", Category.SEARCHER.toString(), scope);
-    parentContext.gauge(() -> reader.maxDoc() - reader.numDocs(), true, "deletedDocs", Category.SEARCHER.toString(), scope);
-    parentContext.gauge(() -> reader.toString(), true, "reader", Category.SEARCHER.toString(), scope);
-    parentContext.gauge(() -> reader.directory().toString(), true, "readerDir", Category.SEARCHER.toString(), scope);
-    parentContext.gauge(() -> reader.getVersion(), true, "indexVersion", Category.SEARCHER.toString(), scope);
+    parentContext.gauge(rgauge(-1, () -> reader.numDocs()), true, "numDocs", Category.SEARCHER.toString(), scope);
+    parentContext.gauge(rgauge(-1, () -> reader.maxDoc()), true, "maxDoc", Category.SEARCHER.toString(), scope);
+    parentContext.gauge(rgauge(-1, () -> reader.maxDoc() - reader.numDocs()), true, "deletedDocs", Category.SEARCHER.toString(), scope);
+    parentContext.gauge(rgauge(-1, () -> reader.toString()), true, "reader", Category.SEARCHER.toString(), scope);
+    parentContext.gauge(rgauge("", () -> reader.directory().toString()), true, "readerDir", Category.SEARCHER.toString(), scope);
+    parentContext.gauge(rgauge(-1, () -> reader.getVersion()), true, "indexVersion", Category.SEARCHER.toString(), scope);
     // size of the currently opened commit
     parentContext.gauge(() -> {
       try {
@@ -2301,6 +2304,20 @@
         }), true, "statsCache", Category.CACHE.toString(), scope);
   }
 
+  /**
+   * Wraps a gauge (related to an IndexReader) and swallows any {@link AlreadyClosedException} that
+   * might be thrown, returning the specified default in its place.
+   */
+  private <T> Gauge<T> rgauge(T closedDefault, Gauge<T> g) {
+    return () -> {
+      try {
+        return g.getValue();
+      } catch (AlreadyClosedException ignore) {
+        return closedDefault;
+      }
+    };
+  }
+
   private static class FilterImpl extends Filter {
     private final Filter topFilter;
     private final List<Weight> weights;
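The rgauge wrapper keeps a metrics scrape from dying when the underlying reader is closed between gauge registration and the read; the same defensive shape works for any gauge over a closeable resource:

    import com.codahale.metrics.Gauge;
    import org.apache.lucene.store.AlreadyClosedException;

    // Same shape as rgauge above, usable outside SolrIndexSearcher: report a
    // sentinel value instead of propagating AlreadyClosedException.
    class SafeGauges {
      static <T> Gauge<T> wrap(T closedDefault, Gauge<T> g) {
        return () -> {
          try {
            return g.getValue();
          } catch (AlreadyClosedException ignore) {
            return closedDefault;
          }
        };
      }
      // e.g.: Gauge<Integer> numDocs = wrap(-1, () -> reader.numDocs());
    }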
diff --git a/solr/core/src/java/org/apache/solr/search/SolrQueryTimeoutImpl.java b/solr/core/src/java/org/apache/solr/search/SolrQueryTimeoutImpl.java
index c41b416..adf3b93 100644
--- a/solr/core/src/java/org/apache/solr/search/SolrQueryTimeoutImpl.java
+++ b/solr/core/src/java/org/apache/solr/search/SolrQueryTimeoutImpl.java
@@ -21,6 +21,8 @@
 import java.util.concurrent.TimeUnit;
 
 import org.apache.lucene.index.QueryTimeout;
+import org.apache.solr.common.params.CommonParams;
+import org.apache.solr.request.SolrQueryRequest;
 
 /**
  * Implementation of {@link QueryTimeout} that is used by Solr. 
@@ -31,10 +33,11 @@
   /**
    * The ThreadLocal variable to store the time beyond which, the processing should exit.
    */
-  public static ThreadLocal<Long> timeoutAt = new ThreadLocal<Long>();
+  private static final ThreadLocal<Long> timeoutAt = new ThreadLocal<>();
+
+  private static final SolrQueryTimeoutImpl instance = new SolrQueryTimeoutImpl();
 
   private SolrQueryTimeoutImpl() { }
-  private static SolrQueryTimeoutImpl instance = new SolrQueryTimeoutImpl();
 
   /** Return singleton instance */
   public static SolrQueryTimeoutImpl getInstance() { 
@@ -42,15 +45,15 @@
   }
 
   /**
-   * Get the current value of timeoutAt.
+   * The time (nanoseconds) at which the request should be considered timed out.
    */
-  public static Long get() {
+  public static Long getTimeoutAtNs() {
     return timeoutAt.get();
   }
 
   @Override
   public boolean isTimeoutEnabled() {
-    return get() != null;
+    return getTimeoutAtNs() != null;
   }
 
   /**
@@ -58,7 +61,7 @@
    */
   @Override
   public boolean shouldExit() {
-    Long timeoutAt = get();
+    Long timeoutAt = getTimeoutAtNs();
     if (timeoutAt == null) {
       // timeout unset
       return false;
@@ -67,10 +70,23 @@
   }
 
   /**
-   * Method to set the time at which the timeOut should happen.
-   * @param timeAllowed set the time at which this thread should timeout.
+   * Sets or clears the time allowed based on how much time remains from the start of the request plus the configured
+   * {@link CommonParams#TIME_ALLOWED}.
    */
-  public static void set(Long timeAllowed) {
+  public static void set(SolrQueryRequest req) {
+    long timeAllowed = req.getParams().getLong(CommonParams.TIME_ALLOWED, -1L);
+    if (timeAllowed >= 0L) {
+      set(timeAllowed - (long)req.getRequestTimer().getTime()); // reduce by time already spent
+    } else {
+      reset();
+    }
+  }
+
+  /**
+   * Sets the time allowed (milliseconds), assuming we start a timer immediately.
+   * You should probably invoke {@link #set(SolrQueryRequest)} instead.
+   */
+  public static void set(long timeAllowed) {
     long time = nanoTime() + TimeUnit.NANOSECONDS.convert(timeAllowed, TimeUnit.MILLISECONDS);
     timeoutAt.set(time);
   }
@@ -84,7 +100,7 @@
 
   @Override
   public String toString() {
-    return "timeoutAt: " + get() + " (System.nanoTime(): " + nanoTime() + ")";
+    return "timeoutAt: " + getTimeoutAtNs() + " (System.nanoTime(): " + nanoTime() + ")";
   }
 }
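The new set(SolrQueryRequest) charges time already spent in the request against timeAllowed, so the search phase only gets the remainder. Worked through with hypothetical numbers:

    import java.util.concurrent.TimeUnit;

    // timeAllowed=1000 ms with 300 ms already elapsed before search leaves a
    // 700 ms budget, converted to an absolute nanosecond deadline as in set(long).
    public class TimeAllowedSketch {
      public static void main(String[] args) {
        long timeAllowedMs = 1000L;   // CommonParams.TIME_ALLOWED
        long alreadySpentMs = 300L;   // (long) req.getRequestTimer().getTime()
        long remainingMs = timeAllowedMs - alreadySpentMs;
        long timeoutAtNs = System.nanoTime() + TimeUnit.MILLISECONDS.toNanos(remainingMs);
        System.out.println("deadline (ns): " + timeoutAtNs);
      }
    }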
 
diff --git a/solr/core/src/java/org/apache/solr/search/facet/FacetRequest.java b/solr/core/src/java/org/apache/solr/search/facet/FacetRequest.java
index db9d9c9..8aeb959 100644
--- a/solr/core/src/java/org/apache/solr/search/facet/FacetRequest.java
+++ b/solr/core/src/java/org/apache/solr/search/facet/FacetRequest.java
@@ -19,9 +19,12 @@
 import java.io.IOException;
 import java.util.LinkedHashMap;
 import java.util.List;
+import java.util.Locale;
 import java.util.Map;
 import java.util.Objects;
+import java.util.Set;
 
+import com.google.common.collect.Sets;
 import org.apache.lucene.search.Query;
 import org.apache.solr.common.SolrException;
 import org.apache.solr.common.params.SolrParams;
@@ -163,15 +166,23 @@
 
     /** Are we doing a query time join across other documents */
     public static class JoinField {
+      private static final String FROM_PARAM = "from";
+      private static final String TO_PARAM = "to";
+      private static final String METHOD_PARAM = "method";
+      private static final Set<String> SUPPORTED_JOIN_PROPERTIES = Sets.newHashSet(FROM_PARAM, TO_PARAM, METHOD_PARAM);
+
       public final String from;
       public final String to;
+      public final String method;
 
-      private JoinField(String from, String to) {
+      private JoinField(String from, String to, String method) {
         assert null != from;
         assert null != to;
+        assert null != method;
 
         this.from = from;
         this.to = to;
+        this.method = method;
       }
 
       /**
@@ -194,17 +205,23 @@
           }
           @SuppressWarnings({"unchecked"})
           final Map<String,String> join = (Map<String,String>) queryJoin;
-          if (! (join.containsKey("from") && join.containsKey("to") &&
-              null != join.get("from") && null != join.get("to")) ) {
+          if (! (join.containsKey(FROM_PARAM) && join.containsKey(TO_PARAM) &&
+              null != join.get(FROM_PARAM) && null != join.get(TO_PARAM)) ) {
             throw new SolrException(SolrException.ErrorCode.BAD_REQUEST,
                 "'join' domain change requires non-null 'from' and 'to' field names");
           }
-          if (2 != join.size()) {
-            throw new SolrException(SolrException.ErrorCode.BAD_REQUEST,
-                "'join' domain change contains unexpected keys, only 'from' and 'to' supported: "
-                    + join.toString());
+
+          for (String providedKey : join.keySet()) {
+            if (! SUPPORTED_JOIN_PROPERTIES.contains(providedKey)) {
+              final String supportedPropsStr = String.join(", ", SUPPORTED_JOIN_PROPERTIES);
+              final String message = String.format(Locale.ROOT,
+                  "'join' domain change contains unexpected key [%s], only %s supported", providedKey, supportedPropsStr);
+              throw new SolrException(SolrException.ErrorCode.BAD_REQUEST, message);
+            }
           }
-          domain.joinField = new JoinField(join.get("from"), join.get("to"));
+
+          final String method = join.containsKey(METHOD_PARAM) ? join.get(METHOD_PARAM) : "index";
+          domain.joinField = new JoinField(join.get(FROM_PARAM), join.get(TO_PARAM), method);
         }
       }
 
@@ -221,7 +238,7 @@
         // this shouldn't matter once we're wrapped in a join query, but just in case it ever does...
         fromQuery.setCache(false);
 
-        return JoinQParserPlugin.createJoinQuery(fromQuery, this.from, this.to);
+        return JoinQParserPlugin.createJoinQuery(fromQuery, this.from, this.to, this.method);
       }
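With the optional method key now accepted in a join domain, a JSON Facet request can steer the same-core join implementation. An illustrative request body (field names hypothetical; method defaults to index when absent):

    // Lax-JSON facet body exercising the new optional "method" key:
    public class JoinDomainExample {
      public static void main(String[] args) {
        String jsonFacet =
            "{ categories: { type: terms, field: cat,"
          + "  domain: { join: { from: id, to: parent_id, method: topLevelDV } } } }";
        System.out.println(jsonFacet);
      }
    }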
 
 
diff --git a/solr/core/src/java/org/apache/solr/search/join/ScoreJoinQParserPlugin.java b/solr/core/src/java/org/apache/solr/search/join/ScoreJoinQParserPlugin.java
index 423fd25..51f2dcc 100644
--- a/solr/core/src/java/org/apache/solr/search/join/ScoreJoinQParserPlugin.java
+++ b/solr/core/src/java/org/apache/solr/search/join/ScoreJoinQParserPlugin.java
@@ -291,6 +291,20 @@
     return fromIndex;
   }
 
+  /**
+   * A helper method for other plugins to create single-core JoinQueries
+   *
+   * @param subQuery the query to define the starting set of documents on the "left side" of the join
+   * @param fromField "left side" field name to use in the join
+   * @param toField "right side" field name to use in the join
+   * @param scoreMode the score statistic to produce while joining
+   *
+   * @see JoinQParserPlugin#createJoinQuery(Query, String, String, String)
+   */
+  public static Query createJoinQuery(Query subQuery, String fromField, String toField, ScoreMode scoreMode) {
+    return new SameCoreJoinQuery(subQuery, fromField, toField, scoreMode);
+  }
+
   private static String resolveAlias(String fromIndex, ZkController zkController) {
     final Aliases aliases = zkController.getZkStateReader().getAliases();
     try {
diff --git a/solr/core/src/java/org/apache/solr/search/similarities/BooleanSimilarityFactory.java b/solr/core/src/java/org/apache/solr/search/similarities/BooleanSimilarityFactory.java
new file mode 100644
index 0000000..0e46e32
--- /dev/null
+++ b/solr/core/src/java/org/apache/solr/search/similarities/BooleanSimilarityFactory.java
@@ -0,0 +1,36 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.solr.search.similarities;
+
+import org.apache.lucene.search.similarities.BooleanSimilarity;
+import org.apache.lucene.search.similarities.Similarity;
+import org.apache.solr.schema.SimilarityFactory;
+
+/**
+ * Factory for {@link BooleanSimilarity}
+ *
+ * Simple similarity that gives terms a score that is equal to their query
+ * boost. This similarity is typically used with disabled norms since neither
+ * document statistics nor index statistics are used for scoring.
+ */
+public class BooleanSimilarityFactory extends SimilarityFactory {
+  
+  @Override
+  public Similarity getSimilarity() {
+    return new BooleanSimilarity();
+  }
+}
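A field type can opt into this with a similarity element in the schema, e.g. <similarity class="solr.BooleanSimilarityFactory"/>; the factory itself is a one-liner, as a quick check shows:

    import org.apache.lucene.search.similarities.BooleanSimilarity;
    import org.apache.lucene.search.similarities.Similarity;
    import org.apache.solr.search.similarities.BooleanSimilarityFactory;

    // The factory needs no init params; it always yields BooleanSimilarity,
    // which scores matches by query boost alone.
    public class BooleanSimSketch {
      public static void main(String[] args) {
        Similarity sim = new BooleanSimilarityFactory().getSimilarity();
        System.out.println(sim instanceof BooleanSimilarity); // true
      }
    }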
diff --git a/solr/core/src/java/org/apache/solr/search/stats/ExactSharedStatsCache.java b/solr/core/src/java/org/apache/solr/search/stats/ExactSharedStatsCache.java
index 01c479b..0b1d265 100644
--- a/solr/core/src/java/org/apache/solr/search/stats/ExactSharedStatsCache.java
+++ b/solr/core/src/java/org/apache/solr/search/stats/ExactSharedStatsCache.java
@@ -40,7 +40,7 @@
   // local stats obtained from shard servers
   private final Map<String,Map<String,TermStats>> perShardTermStats = new ConcurrentHashMap<>();
   private final Map<String,Map<String,CollectionStats>> perShardColStats = new ConcurrentHashMap<>();
-  // global stats synchronized from the master
+  // global stats synchronized from the leader
   private final Map<String,TermStats> currentGlobalTermStats = new ConcurrentHashMap<>();
   private final Map<String,CollectionStats> currentGlobalColStats = new ConcurrentHashMap<>();
 
diff --git a/solr/core/src/java/org/apache/solr/search/stats/LRUStatsCache.java b/solr/core/src/java/org/apache/solr/search/stats/LRUStatsCache.java
index 7e94f56..54780f0 100644
--- a/solr/core/src/java/org/apache/solr/search/stats/LRUStatsCache.java
+++ b/solr/core/src/java/org/apache/solr/search/stats/LRUStatsCache.java
@@ -65,7 +65,7 @@
   // map of <shardName, <field, collStats>>
   private final Map<String,Map<String,CollectionStats>> perShardColStats = new ConcurrentHashMap<>();
   
-  // global stats synchronized from the master
+  // global stats synchronized from the leader
 
   // cache of <term, termStats>
   private final CaffeineCache<String,TermStats> currentGlobalTermStats = new CaffeineCache<>();
diff --git a/solr/core/src/java/org/apache/solr/search/stats/StatsCache.java b/solr/core/src/java/org/apache/solr/search/stats/StatsCache.java
index 238bb12..3049fb6 100644
--- a/solr/core/src/java/org/apache/solr/search/stats/StatsCache.java
+++ b/solr/core/src/java/org/apache/solr/search/stats/StatsCache.java
@@ -176,7 +176,7 @@
   protected abstract void doMergeToGlobalStats(SolrQueryRequest req, List<ShardResponse> responses);
 
   /**
-   * Receive global stats data from the master and update a local cache of global stats
+   * Receive global stats data from the leader and update a local cache of global stats
    * with this global data. This event occurs either as a separate request, or
    * together with the regular query request, in which case this method is
    * called first, before preparing a {@link QueryCommand} to be submitted to
diff --git a/solr/core/src/java/org/apache/solr/security/JWTVerificationkeyResolver.java b/solr/core/src/java/org/apache/solr/security/JWTVerificationkeyResolver.java
index 3aca77c..4a0ada0 100644
--- a/solr/core/src/java/org/apache/solr/security/JWTVerificationkeyResolver.java
+++ b/solr/core/src/java/org/apache/solr/security/JWTVerificationkeyResolver.java
@@ -106,7 +106,7 @@
         }
       }
 
-      // Add all keys into a master list
+      // Add all keys into a single list
       if (issuerConfig.usesHttpsJwk()) {
         keysSource = "[" + String.join(", ", issuerConfig.getJwksUrls()) + "]";
         for (HttpsJwks hjwks : issuerConfig.getHttpsJwks()) {
diff --git a/solr/core/src/java/org/apache/solr/servlet/QueryRateLimiter.java b/solr/core/src/java/org/apache/solr/servlet/QueryRateLimiter.java
new file mode 100644
index 0000000..ca2f696
--- /dev/null
+++ b/solr/core/src/java/org/apache/solr/servlet/QueryRateLimiter.java
@@ -0,0 +1,55 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.solr.servlet;
+
+import javax.servlet.FilterConfig;
+
+import org.apache.solr.client.solrj.SolrRequest;
+
+import static org.apache.solr.servlet.RateLimitManager.DEFAULT_CONCURRENT_REQUESTS;
+import static org.apache.solr.servlet.RateLimitManager.DEFAULT_SLOT_ACQUISITION_TIMEOUT_MS;
+
+/** Implementation of RequestRateLimiter specific to query request types. Most of the actual work is delegated
+ *  to the parent class; query-specific configuration and parsing are handled by this class.
+ */
+public class QueryRateLimiter extends RequestRateLimiter {
+  final static String IS_QUERY_RATE_LIMITER_ENABLED = "isQueryRateLimiterEnabled";
+  final static String MAX_QUERY_REQUESTS = "maxQueryRequests";
+  final static String QUERY_WAIT_FOR_SLOT_ALLOCATION_INMS = "queryWaitForSlotAllocationInMS";
+  final static String QUERY_GUARANTEED_SLOTS = "queryGuaranteedSlots";
+  final static String QUERY_ALLOW_SLOT_BORROWING = "queryAllowSlotBorrowing";
+
+  public QueryRateLimiter(FilterConfig filterConfig) {
+    super(constructQueryRateLimiterConfig(filterConfig));
+  }
+
+  protected static RequestRateLimiter.RateLimiterConfig constructQueryRateLimiterConfig(FilterConfig filterConfig) {
+    RequestRateLimiter.RateLimiterConfig queryRateLimiterConfig = new RequestRateLimiter.RateLimiterConfig();
+
+    queryRateLimiterConfig.requestType = SolrRequest.SolrRequestType.QUERY;
+    queryRateLimiterConfig.isEnabled = getParamAndParseBoolean(filterConfig, IS_QUERY_RATE_LIMITER_ENABLED, false);
+    queryRateLimiterConfig.waitForSlotAcquisition = getParamAndParseLong(filterConfig, QUERY_WAIT_FOR_SLOT_ALLOCATION_INMS,
+        DEFAULT_SLOT_ACQUISITION_TIMEOUT_MS);
+    queryRateLimiterConfig.allowedRequests = getParamAndParseInt(filterConfig, MAX_QUERY_REQUESTS,
+        DEFAULT_CONCURRENT_REQUESTS);
+    queryRateLimiterConfig.isSlotBorrowingEnabled = getParamAndParseBoolean(filterConfig, QUERY_ALLOW_SLOT_BORROWING, false);
+    queryRateLimiterConfig.guaranteedSlotsThreshold = getParamAndParseInt(filterConfig, QUERY_GUARANTEED_SLOTS, queryRateLimiterConfig.allowedRequests / 2);
+
+    return queryRateLimiterConfig;
+  }
+}
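
As a reference for how the init params above map onto a config object, here is a minimal sketch (an assumption-laden illustration, not part of the patch) that builds the equivalent configuration programmatically via the all-args `RateLimiterConfig` constructor; it must live in `org.apache.solr.servlet`, since `RateLimiterConfig` is package-private:

```java
package org.apache.solr.servlet;

import org.apache.solr.client.solrj.SolrRequest;

public class QueryRateLimiterConfigSketch {
  public static void main(String[] args) {
    // Mirrors what constructQueryRateLimiterConfig derives from filter init
    // params; the literal values below are illustrative only.
    RequestRateLimiter.RateLimiterConfig config = new RequestRateLimiter.RateLimiterConfig(
        SolrRequest.SolrRequestType.QUERY,
        true,   // isQueryRateLimiterEnabled
        5,      // queryGuaranteedSlots
        100L,   // queryWaitForSlotAllocationInMS
        10,     // maxQueryRequests
        false); // queryAllowSlotBorrowing
    RequestRateLimiter limiter = new RequestRateLimiter(config);
    System.out.println(limiter.getRateLimiterConfig().allowedRequests); // prints: 10
  }
}
```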
diff --git a/solr/core/src/java/org/apache/solr/servlet/RateLimitManager.java b/solr/core/src/java/org/apache/solr/servlet/RateLimitManager.java
new file mode 100644
index 0000000..0f4618a
--- /dev/null
+++ b/solr/core/src/java/org/apache/solr/servlet/RateLimitManager.java
@@ -0,0 +1,182 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.solr.servlet;
+
+import javax.servlet.FilterConfig;
+import javax.servlet.http.HttpServletRequest;
+import java.lang.invoke.MethodHandles;
+import java.util.HashMap;
+import java.util.Map;
+import java.util.concurrent.ConcurrentHashMap;
+
+import org.apache.solr.client.solrj.SolrRequest;
+import org.apache.solr.common.annotation.SolrThreadSafe;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import static org.apache.solr.common.params.CommonParams.SOLR_REQUEST_CONTEXT_PARAM;
+import static org.apache.solr.common.params.CommonParams.SOLR_REQUEST_TYPE_PARAM;
+
+/**
+ * This class is responsible for managing rate limiting per request type. Rate limiters
+ * can be registered with this class against a corresponding type. There can be only one
+ * rate limiter associated with a request type.
+ *
+ * The actual rate limiting and the limits should be implemented in the corresponding RequestRateLimiter
+ * implementation. RateLimitManager is responsible for the orchestration but not the specifics of how the
+ * rate limiting is being done for a specific request type.
+ */
+@SolrThreadSafe
+public class RateLimitManager {
+  private static final Logger log = LoggerFactory.getLogger(MethodHandles.lookup().lookupClass());
+
+  public final static int DEFAULT_CONCURRENT_REQUESTS = Runtime.getRuntime().availableProcessors() * 3;
+  public final static long DEFAULT_SLOT_ACQUISITION_TIMEOUT_MS = -1;
+  private final Map<String, RequestRateLimiter> requestRateLimiterMap;
+
+  private final Map<HttpServletRequest, RequestRateLimiter.SlotMetadata> activeRequestsMap;
+
+  public RateLimitManager() {
+    this.requestRateLimiterMap = new HashMap<>();
+    this.activeRequestsMap = new ConcurrentHashMap<>();
+  }
+
+  // Handles an incoming request. This is the main orchestration code path: it
+  // identifies which (if any) rate limiter can handle this request. Internal
+  // requests are not rate limited.
+  // Returns true if the request is accepted for processing, false if it should be rejected.
+  public boolean handleRequest(HttpServletRequest request) throws InterruptedException {
+    String requestContext = request.getHeader(SOLR_REQUEST_CONTEXT_PARAM);
+    String typeOfRequest = request.getHeader(SOLR_REQUEST_TYPE_PARAM);
+
+    if (typeOfRequest == null) {
+      // Cannot determine if this request should be throttled
+      return true;
+    }
+
+    // Do not throttle internal requests
+    if (requestContext != null && requestContext.equals(SolrRequest.SolrClientContext.SERVER.toString())) {
+      return true;
+    }
+
+    RequestRateLimiter requestRateLimiter = requestRateLimiterMap.get(typeOfRequest);
+
+    if (requestRateLimiter == null) {
+      // No request rate limiter for this request type
+      return true;
+    }
+
+    RequestRateLimiter.SlotMetadata result = requestRateLimiter.handleRequest();
+
+    if (result != null) {
+      // A non-releasable slot (null pool) is returned when the rate limiter is disabled
+      if (result.isReleasable()) {
+        activeRequestsMap.put(request, result);
+      }
+      return true;
+    }
+
+    RequestRateLimiter.SlotMetadata slotMetadata = trySlotBorrowing(typeOfRequest);
+
+    if (slotMetadata != null) {
+      activeRequestsMap.put(request, slotMetadata);
+      return true;
+    }
+
+    return false;
+  }
+
+  /* For a rejected request type, do the following:
+   * For each request rate limiter whose type differs from that of the rejected request,
+   * check whether slot borrowing is enabled. If enabled, try to acquire a slot.
+   * If one is allotted, return it; otherwise try the next request type.
+   *
+   * @lucene.experimental -- Can cause slots to be blocked if a request borrows a slot and is itself long lived.
+   */
+  private RequestRateLimiter.SlotMetadata trySlotBorrowing(String requestType) {
+    for (Map.Entry<String, RequestRateLimiter> currentEntry : requestRateLimiterMap.entrySet()) {
+      RequestRateLimiter.SlotMetadata result = null;
+      RequestRateLimiter requestRateLimiter = currentEntry.getValue();
+
+      // Can't borrow from ourselves
+      if (requestRateLimiter.getRateLimiterConfig().requestType.toString().equals(requestType)) {
+        continue;
+      }
+
+      if (requestRateLimiter.getRateLimiterConfig().isSlotBorrowingEnabled) {
+        if (log.isWarnEnabled()) {
+          String msg = "Experimental feature: slot borrowing is enabled for request rate limiter type " +
+              requestRateLimiter.getRateLimiterConfig().requestType.toString();
+
+          log.warn(msg);
+        }
+
+        try {
+          result = requestRateLimiter.allowSlotBorrowing();
+        } catch (InterruptedException e) {
+          Thread.currentThread().interrupt();
+          return null; // give up on borrowing if interrupted while waiting for a slot
+        }
+
+        if (result == null) {
+          throw new IllegalStateException("Returned metadata object is null");
+        }
+
+        if (result.isReleasable()) {
+          return result;
+        }
+      }
+    }
+
+    return null;
+  }
+
+  // Decrement the active requests in the rate limiter for the corresponding request type.
+  public void decrementActiveRequests(HttpServletRequest request) {
+    RequestRateLimiter.SlotMetadata slotMetadata = activeRequestsMap.get(request);
+
+    if (slotMetadata != null) {
+      activeRequestsMap.remove(request);
+      slotMetadata.decrementRequest();
+    }
+  }
+
+  public void registerRequestRateLimiter(RequestRateLimiter requestRateLimiter, SolrRequest.SolrRequestType requestType) {
+    requestRateLimiterMap.put(requestType.toString(), requestRateLimiter);
+  }
+
+  public RequestRateLimiter getRequestRateLimiter(SolrRequest.SolrRequestType requestType) {
+    return requestRateLimiterMap.get(requestType.toString());
+  }
+
+  public static class Builder {
+    protected FilterConfig config;
+
+    public Builder(FilterConfig config) {
+      this.config = config;
+    }
+
+    public RateLimitManager build() {
+      RateLimitManager rateLimitManager = new RateLimitManager();
+
+      rateLimitManager.registerRequestRateLimiter(new QueryRateLimiter(config), SolrRequest.SolrRequestType.QUERY);
+
+      return rateLimitManager;
+    }
+  }
+}
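
To make the admit/serve/release contract concrete, a minimal sketch of driving the manager from a servlet filter (the `FilterConfig`, request, and response are assumed to be in scope; compare the SolrDispatchFilter wiring later in this patch):

```java
package org.apache.solr.servlet;

import java.io.IOException;
import javax.servlet.FilterConfig;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

class RateLimitManagerSketch {
  // Sketch of the orchestration contract: admit, serve, then release.
  static void serve(FilterConfig config, HttpServletRequest request,
                    HttpServletResponse response) throws IOException {
    RateLimitManager rateLimitManager = new RateLimitManager.Builder(config).build();
    boolean accepted = false;
    try {
      accepted = rateLimitManager.handleRequest(request);
      if (!accepted) {
        response.sendError(429, "Too many requests for this request type");
        return;
      }
      // ... dispatch the request as usual ...
    } catch (InterruptedException e) {
      Thread.currentThread().interrupt();
    } finally {
      if (accepted) {
        rateLimitManager.decrementActiveRequests(request); // release the held slot, if any
      }
    }
  }
}
```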
diff --git a/solr/core/src/java/org/apache/solr/servlet/RequestRateLimiter.java b/solr/core/src/java/org/apache/solr/servlet/RequestRateLimiter.java
new file mode 100644
index 0000000..f68b312
--- /dev/null
+++ b/solr/core/src/java/org/apache/solr/servlet/RequestRateLimiter.java
@@ -0,0 +1,168 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.solr.servlet;
+
+import javax.servlet.FilterConfig;
+import java.util.concurrent.Semaphore;
+import java.util.concurrent.TimeUnit;
+
+import org.apache.solr.client.solrj.SolrRequest;
+import org.apache.solr.common.annotation.SolrThreadSafe;
+
+/**
+ * Handles rate limiting for a specific request type.
+ *
+ * The control flow is as follows:
+ * Handle the request -- check if a slot is available -- if available, acquire the slot
+ * and proceed -- otherwise reject the request.
+ */
+@SolrThreadSafe
+public class RequestRateLimiter {
+  // Slots that are guaranteed for this request rate limiter.
+  private final Semaphore guaranteedSlotsPool;
+
+  // Competitive pool of slots available to this rate limiter as well as for borrowing by other request rate limiters.
+  // "Competitive" means there is no prioritization in acquiring these slots -- first come, first served,
+  // irrespective of whether the request belongs to this rate limiter's type or another.
+  private final Semaphore borrowableSlotsPool;
+
+  private final RateLimiterConfig rateLimiterConfig;
+  private final SlotMetadata guaranteedSlotMetadata;
+  private final SlotMetadata borrowedSlotMetadata;
+  private static final SlotMetadata nullSlotMetadata = new SlotMetadata(null);
+
+  public RequestRateLimiter(RateLimiterConfig rateLimiterConfig) {
+    this.rateLimiterConfig = rateLimiterConfig;
+    this.guaranteedSlotsPool = new Semaphore(rateLimiterConfig.guaranteedSlotsThreshold);
+    this.borrowableSlotsPool = new Semaphore(rateLimiterConfig.allowedRequests - rateLimiterConfig.guaranteedSlotsThreshold);
+    this.guaranteedSlotMetadata = new SlotMetadata(guaranteedSlotsPool);
+    this.borrowedSlotMetadata = new SlotMetadata(borrowableSlotsPool);
+  }
+
+  /**
+   * Handles an incoming request. Returns a metadata object for the acquired slot, if one was acquired.
+   * If a slot was not acquired, returns null.
+   */
+  public SlotMetadata handleRequest() throws InterruptedException {
+
+    if (!rateLimiterConfig.isEnabled) {
+      return nullSlotMetadata;
+    }
+
+    if (guaranteedSlotsPool.tryAcquire(rateLimiterConfig.waitForSlotAcquisition, TimeUnit.MILLISECONDS)) {
+      return guaranteedSlotMetadata;
+    }
+
+    if (borrowableSlotsPool.tryAcquire(rateLimiterConfig.waitForSlotAcquisition, TimeUnit.MILLISECONDS)) {
+      return borrowedSlotMetadata;
+    }
+
+    return null;
+  }
+
+  /**
+   * Allows another request type to borrow a slot from this request rate limiter. This typically works well
+   * when this rate limiter's request type is under comparatively lighter load than the others (i.e., load skew).
+   * @return a metadata object for the acquired slot, if acquired; if no
+   * slot was acquired, a metadata object with a null pool.
+   *
+   * @lucene.experimental -- Can cause slots to be blocked if a request borrows a slot and is itself long lived.
+   */
+  public SlotMetadata allowSlotBorrowing() throws InterruptedException {
+    if (borrowableSlotsPool.tryAcquire(rateLimiterConfig.waitForSlotAcquisition, TimeUnit.MILLISECONDS)) {
+      return borrowedSlotMetadata;
+    }
+
+    return nullSlotMetadata;
+  }
+
+  public RateLimiterConfig getRateLimiterConfig() {
+    return rateLimiterConfig;
+  }
+
+  static long getParamAndParseLong(FilterConfig config, String parameterName, long defaultValue) {
+    String tempBuffer = config.getInitParameter(parameterName);
+
+    if (tempBuffer != null) {
+      return Long.parseLong(tempBuffer);
+    }
+
+    return defaultValue;
+  }
+
+  static int getParamAndParseInt(FilterConfig config, String parameterName, int defaultValue) {
+    String tempBuffer = config.getInitParameter(parameterName);
+
+    if (tempBuffer != null) {
+      return Integer.parseInt(tempBuffer);
+    }
+
+    return defaultValue;
+  }
+
+  static boolean getParamAndParseBoolean(FilterConfig config, String parameterName, boolean defaultValue) {
+    String tempBuffer = config.getInitParameter(parameterName);
+
+    if (tempBuffer != null) {
+      return Boolean.parseBoolean(tempBuffer);
+    }
+
+    return defaultValue;
+  }
+
+  /* Rate limiter config for a specific request rate limiter instance */
+  static class RateLimiterConfig {
+    public SolrRequest.SolrRequestType requestType;
+    public boolean isEnabled;
+    public long waitForSlotAcquisition;
+    public int allowedRequests;
+    public boolean isSlotBorrowingEnabled;
+    public int guaranteedSlotsThreshold;
+
+    public RateLimiterConfig() { }
+
+    public RateLimiterConfig(SolrRequest.SolrRequestType requestType, boolean isEnabled, int guaranteedSlotsThreshold,
+                             long waitForSlotAcquisition, int allowedRequests, boolean isSlotBorrowingEnabled) {
+      this.requestType = requestType;
+      this.isEnabled = isEnabled;
+      this.guaranteedSlotsThreshold = guaranteedSlotsThreshold;
+      this.waitForSlotAcquisition = waitForSlotAcquisition;
+      this.allowedRequests = allowedRequests;
+      this.isSlotBorrowingEnabled = isSlotBorrowingEnabled;
+    }
+  }
+
+  // Represents the metadata for a slot
+  static class SlotMetadata {
+    private final Semaphore usedPool;
+
+    public SlotMetadata(Semaphore usedPool) {
+      this.usedPool = usedPool;
+    }
+
+    public void decrementRequest() {
+      if (usedPool != null) {
+        usedPool.release();
+      }
+    }
+
+    public boolean isReleasable() {
+      return usedPool != null;
+    }
+  }
+}
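
To make the two-pool split concrete: with `allowedRequests = 10` and `guaranteedSlotsThreshold = 4`, the constructor above creates 4 guaranteed permits for this limiter's own type and 6 borrowable permits that other types may compete for. A minimal lifecycle sketch (same package assumed, values illustrative, not part of the patch):

```java
package org.apache.solr.servlet;

import org.apache.solr.client.solrj.SolrRequest;

class SlotLifecycleSketch {
  static void demo() throws InterruptedException {
    RequestRateLimiter.RateLimiterConfig cfg = new RequestRateLimiter.RateLimiterConfig(
        SolrRequest.SolrRequestType.QUERY, true, /* guaranteed */ 4,
        /* wait ms */ 100L, /* allowed */ 10, /* borrowing */ true);
    RequestRateLimiter limiter = new RequestRateLimiter(cfg);

    RequestRateLimiter.SlotMetadata slot = limiter.handleRequest();
    if (slot != null && slot.isReleasable()) {
      try {
        // ... serve the request ...
      } finally {
        slot.decrementRequest(); // the permit returns to whichever pool it came from
      }
    }
  }
}
```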
diff --git a/solr/core/src/java/org/apache/solr/servlet/SolrDispatchFilter.java b/solr/core/src/java/org/apache/solr/servlet/SolrDispatchFilter.java
index a7d8905..b6c0f42 100644
--- a/solr/core/src/java/org/apache/solr/servlet/SolrDispatchFilter.java
+++ b/solr/core/src/java/org/apache/solr/servlet/SolrDispatchFilter.java
@@ -53,6 +53,7 @@
 import com.codahale.metrics.jvm.GarbageCollectorMetricSet;
 import com.codahale.metrics.jvm.MemoryUsageGaugeSet;
 import com.codahale.metrics.jvm.ThreadStatesGaugeSet;
+import com.google.common.annotations.VisibleForTesting;
 import io.opentracing.Scope;
 import io.opentracing.Span;
 import io.opentracing.SpanContext;
@@ -71,8 +72,8 @@
 import org.apache.solr.core.NodeConfig;
 import org.apache.solr.core.SolrCore;
 import org.apache.solr.core.SolrInfoBean;
-import org.apache.solr.core.SolrXmlConfig;
 import org.apache.solr.core.SolrPaths;
+import org.apache.solr.core.SolrXmlConfig;
 import org.apache.solr.metrics.AltBufferPoolMetricSet;
 import org.apache.solr.metrics.MetricsMap;
 import org.apache.solr.metrics.OperatingSystemMetricSet;
@@ -83,9 +84,9 @@
 import org.apache.solr.security.AuthenticationPlugin;
 import org.apache.solr.security.PKIAuthenticationPlugin;
 import org.apache.solr.security.PublicKeyHandler;
-import org.apache.solr.util.tracing.GlobalTracer;
 import org.apache.solr.util.StartupLoggingUtils;
 import org.apache.solr.util.configuration.SSLConfigurationsFactory;
+import org.apache.solr.util.tracing.GlobalTracer;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
@@ -114,6 +115,8 @@
   private String registryName;
   private volatile boolean closeOnDestroy = true;
 
+  private RateLimitManager rateLimitManager;
+
   /**
    * Enum to define action that needs to be processed.
    * PASSTHROUGH: Pass through to Restlet via webapp.
@@ -184,6 +187,9 @@
       coresInit = createCoreContainer(solrHomePath, extraProperties);
       this.httpClient = coresInit.getUpdateShardHandler().getDefaultHttpClient();
       setupJvmMetrics(coresInit);
+      RateLimitManager.Builder builder = new RateLimitManager.Builder(config);
+
+      this.rateLimitManager = builder.build();
       if (log.isDebugEnabled()) {
         log.debug("user.dir={}", System.getProperty("user.dir"));
       }
@@ -351,6 +357,7 @@
     HttpServletResponse response = closeShield((HttpServletResponse)_response, retry);
     Scope scope = null;
     Span span = null;
+    boolean accepted = false;
     try {
 
       if (cores == null || cores.isShutDown()) {
@@ -377,6 +384,21 @@
         }
       }
 
+      try {
+        accepted = rateLimitManager.handleRequest(request);
+      } catch (InterruptedException e) {
+        Thread.currentThread().interrupt();
+        throw new SolrException(ErrorCode.SERVER_ERROR, e.getMessage());
+      }
+
+      if (!accepted) {
+        String errorMessage = "Too many requests for this request type. " +
+            "Please try after some time or increase the quota for this request type";
+
+        response.sendError(429, errorMessage);
+        return;
+      }
+
       SpanContext parentSpan = GlobalTracer.get().extract(request);
       Tracer tracer = GlobalTracer.getTracer();
 
@@ -399,6 +420,7 @@
       if (!authenticateRequest(request, response, wrappedRequest)) { // the response and status code have already been sent
         return;
       }
+
       if (wrappedRequest.get() != null) {
         request = wrappedRequest.get();
       }
@@ -441,6 +463,10 @@
       consumeInputFully(request, response);
       SolrRequestInfo.reset();
       SolrRequestParsers.cleanupMultipartFiles(request);
+
+      if (accepted) {
+        rateLimitManager.decrementActiveRequests(request);
+      }
     }
   }
   
@@ -664,8 +690,13 @@
       return response;
     }
   }
-  
+
   public void closeOnDestroy(boolean closeOnDestroy) {
     this.closeOnDestroy = closeOnDestroy;
   }
+
+  @VisibleForTesting
+  void replaceRateLimitManager(RateLimitManager rateLimitManager) {
+    this.rateLimitManager = rateLimitManager;
+  }
 }
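
On the client side, a request rejected by this path surfaces as HTTP 429. A minimal sketch of honoring that with a simple backoff (the URL and timings are illustrative assumptions, not part of the patch):

```java
import java.net.HttpURLConnection;
import java.net.URL;

class RateLimited429Sketch {
  static void queryWithBackoff() throws Exception {
    // Hypothetical endpoint; any Solr query URL would do.
    URL url = new URL("http://localhost:8983/solr/techproducts/select?q=*:*");
    for (int attempt = 1; attempt <= 3; attempt++) {
      HttpURLConnection conn = (HttpURLConnection) url.openConnection();
      if (conn.getResponseCode() != 429) {
        return; // not throttled; consume the response as usual
      }
      Thread.sleep(1000L * attempt); // back off before retrying
    }
  }
}
```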
diff --git a/solr/core/src/java/org/apache/solr/update/CdcrTransactionLog.java b/solr/core/src/java/org/apache/solr/update/CdcrTransactionLog.java
deleted file mode 100644
index 86cee71..0000000
--- a/solr/core/src/java/org/apache/solr/update/CdcrTransactionLog.java
+++ /dev/null
@@ -1,401 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.solr.update;
-
-import java.io.File;
-import java.io.IOException;
-import java.io.RandomAccessFile;
-import java.lang.invoke.MethodHandles;
-import java.nio.channels.Channels;
-import java.nio.file.Files;
-import java.util.Collection;
-
-import org.apache.lucene.util.BytesRef;
-import org.apache.solr.common.SolrException;
-import org.apache.solr.common.SolrInputDocument;
-import org.apache.solr.common.util.FastOutputStream;
-import org.apache.solr.common.util.JavaBinCodec;
-import org.apache.solr.common.util.ObjectReleaseTracker;
-import org.apache.solr.update.processor.CdcrUpdateProcessor;
-import org.slf4j.Logger;
-import org.slf4j.LoggerFactory;
-
-/**
- * Extends {@link org.apache.solr.update.TransactionLog} to:
- * <ul>
- * <li>reopen automatically the output stream if its reference count reached 0. This is achieved by extending
- * methods {@link #incref()}, {@link #close()} and {@link #reopenOutputStream()}.</li>
- * <li>encode the number of records in the tlog file in the last commit record. The number of records will be
- * decoded and reuse if the tlog file is reopened. This is achieved by extending the constructor, and the
- * methods {@link #writeCommit(CommitUpdateCommand)} and {@link #getReader(long)}.</li>
- * </ul>
- * @deprecated since 8.6
- */
-@Deprecated(since = "8.6")
-public class CdcrTransactionLog extends TransactionLog {
-
-  private boolean isReplaying;
-  long startVersion; // (absolute) version of the first element of this transaction log
-
-  private static final Logger log = LoggerFactory.getLogger(MethodHandles.lookup().lookupClass());
-  private boolean debug = log.isDebugEnabled();
-
-  CdcrTransactionLog(File tlogFile, Collection<String> globalStrings) {
-    super(tlogFile, globalStrings);
-
-    // The starting version number will be used to seek more efficiently tlogs
-    // and to filter out tlog files during replication (in ReplicationHandler#getTlogFileList)
-    String filename = tlogFile.getName();
-    startVersion = Math.abs(Long.parseLong(filename.substring(filename.lastIndexOf('.') + 1)));
-
-    isReplaying = false;
-  }
-
-  CdcrTransactionLog(File tlogFile, Collection<String> globalStrings, boolean openExisting) {
-    super(tlogFile, globalStrings, openExisting);
-
-    // The starting version number will be used to seek more efficiently tlogs
-    String filename = tlogFile.getName();
-    startVersion = Math.abs(Long.parseLong(filename.substring(filename.lastIndexOf('.') + 1)));
-
-    numRecords = openExisting ? this.readNumRecords() : 0;
-    // if we try to reopen an existing tlog file and that the number of records is equal to 0, then we are replaying
-    // the log and we will append a commit
-    if (openExisting && numRecords == 0) {
-      isReplaying = true;
-    }
-  }
-
-  /**
-   * Returns the number of records in the log (currently includes the header and an optional commit).
-   */
-  public int numRecords() {
-    return super.numRecords();
-  }
-
-  /**
-   * The last record of the transaction log file is expected to be a commit with a 4 byte integer that encodes the
-   * number of records in the file.
-   */
-  private int readNumRecords() {
-    try {
-      if (endsWithCommit()) {
-        long size = fos.size();
-        // 4 bytes for the record size, the lenght of the end message + 1 byte for its value tag,
-        // and 4 bytes for the number of records
-        long pos = size - 4 - END_MESSAGE.length() - 1 - 4;
-        if (pos < 0) return 0;
-        try (ChannelFastInputStream is = new ChannelFastInputStream(channel, pos)) {
-          return is.readInt();
-        }
-      }
-    } catch (IOException e) {
-      log.error("Error while reading number of records in tlog {}", this, e);
-    }
-    return 0;
-  }
-
-  @Override
-  public long write(AddUpdateCommand cmd, long prevPointer) {
-    assert (-1 <= prevPointer && (cmd.isInPlaceUpdate() || (-1 == prevPointer)));
-
-    LogCodec codec = new LogCodec(resolver);
-    SolrInputDocument sdoc = cmd.getSolrInputDocument();
-
-    try {
-      checkWriteHeader(codec, sdoc);
-
-      // adaptive buffer sizing
-      int bufSize = lastAddSize;    // unsynchronized access of lastAddSize should be fine
-      bufSize = Math.min(1024*1024, bufSize+(bufSize>>3)+256);
-
-      MemOutputStream out = new MemOutputStream(new byte[bufSize]);
-      codec.init(out);
-      if (cmd.isInPlaceUpdate()) {
-        codec.writeTag(JavaBinCodec.ARR, 6);
-        codec.writeInt(UpdateLog.UPDATE_INPLACE);  // should just take one byte
-        codec.writeLong(cmd.getVersion());
-        codec.writeLong(prevPointer);
-        codec.writeLong(cmd.prevVersion);
-        if (cmd.getReq().getParamString().contains(CdcrUpdateProcessor.CDCR_UPDATE)) {
-          // if the update is received via cdcr source; add boolean entry
-          // CdcrReplicator.isTargetCluster() checks that particular boolean to accept or discard the update
-          // to forward to its own target cluster
-          codec.writePrimitive(true);
-        } else {
-          codec.writePrimitive(false);
-        }
-        codec.writeSolrInputDocument(cmd.getSolrInputDocument());
-
-      } else {
-        codec.writeTag(JavaBinCodec.ARR, 4);
-        codec.writeInt(UpdateLog.ADD);  // should just take one byte
-        codec.writeLong(cmd.getVersion());
-        if (cmd.getReq().getParamString().contains(CdcrUpdateProcessor.CDCR_UPDATE)) {
-          // if the update is received via cdcr source; add extra boolean entry
-          // CdcrReplicator.isTargetCluster() checks that particular boolean to accept or discard the update
-          // to forward to its own target cluster
-          codec.writePrimitive(true);
-        } else {
-          codec.writePrimitive(false);
-        }
-        codec.writeSolrInputDocument(cmd.getSolrInputDocument());
-      }
-      lastAddSize = (int)out.size();
-
-      synchronized (this) {
-        long pos = fos.size();   // if we had flushed, this should be equal to channel.position()
-        assert pos != 0;
-
-        /***
-         System.out.println("###writing at " + pos + " fos.size()=" + fos.size() + " raf.length()=" + raf.length());
-         if (pos != fos.size()) {
-         throw new RuntimeException("ERROR" + "###writing at " + pos + " fos.size()=" + fos.size() + " raf.length()=" + raf.length());
-         }
-         ***/
-
-        out.writeAll(fos);
-        endRecord(pos);
-        // fos.flushBuffer();  // flush later
-        return pos;
-      }
-
-    } catch (IOException e) {
-      // TODO: reset our file pointer back to "pos", the start of this record.
-      throw new SolrException(SolrException.ErrorCode.SERVER_ERROR, "Error logging add", e);
-    }
-  }
-
-  @Override
-  public long writeDelete(DeleteUpdateCommand cmd) {
-    LogCodec codec = new LogCodec(resolver);
-
-    try {
-      checkWriteHeader(codec, null);
-
-      BytesRef br = cmd.getIndexedId();
-
-      MemOutputStream out = new MemOutputStream(new byte[20 + br.length]);
-      codec.init(out);
-      codec.writeTag(JavaBinCodec.ARR, 4);
-      codec.writeInt(UpdateLog.DELETE);  // should just take one byte
-      codec.writeLong(cmd.getVersion());
-      codec.writeByteArray(br.bytes, br.offset, br.length);
-      if (cmd.getReq().getParamString().contains(CdcrUpdateProcessor.CDCR_UPDATE)) {
-        // if the update is received via cdcr source; add extra boolean entry
-        // CdcrReplicator.isTargetCluster() checks that particular boolean to accept or discard the update
-        // to forward to its own target cluster
-        codec.writePrimitive(true);
-      } else {
-        codec.writePrimitive(false);
-      }
-
-      synchronized (this) {
-        long pos = fos.size();   // if we had flushed, this should be equal to channel.position()
-        assert pos != 0;
-        out.writeAll(fos);
-        endRecord(pos);
-        // fos.flushBuffer();  // flush later
-        return pos;
-      }
-
-    } catch (IOException e) {
-      throw new SolrException(SolrException.ErrorCode.SERVER_ERROR, e);
-    }
-  }
-
-  @Override
-  public long writeDeleteByQuery(DeleteUpdateCommand cmd) {
-    LogCodec codec = new LogCodec(resolver);
-    try {
-      checkWriteHeader(codec, null);
-
-      MemOutputStream out = new MemOutputStream(new byte[20 + (cmd.query.length())]);
-      codec.init(out);
-      codec.writeTag(JavaBinCodec.ARR, 4);
-      codec.writeInt(UpdateLog.DELETE_BY_QUERY);  // should just take one byte
-      codec.writeLong(cmd.getVersion());
-      codec.writeStr(cmd.query);
-      if (cmd.getReq().getParamString().contains(CdcrUpdateProcessor.CDCR_UPDATE)) {
-        // if the update is received via cdcr source; add extra boolean entry
-        // CdcrReplicator.isTargetCluster() checks that particular boolean to accept or discard the update
-        // to forward to its own target cluster
-        codec.writePrimitive(true);
-      } else {
-        codec.writePrimitive(false);
-      }
-      synchronized (this) {
-        long pos = fos.size();   // if we had flushed, this should be equal to channel.position()
-        out.writeAll(fos);
-        endRecord(pos);
-        // fos.flushBuffer();  // flush later
-        return pos;
-      }
-    } catch (IOException e) {
-      throw new SolrException(SolrException.ErrorCode.SERVER_ERROR, e);
-    }
-  }
-
-  @Override
-  public long writeCommit(CommitUpdateCommand cmd) {
-    LogCodec codec = new LogCodec(resolver);
-    synchronized (this) {
-      try {
-        long pos = fos.size();   // if we had flushed, this should be equal to channel.position()
-
-        if (pos == 0) {
-          writeLogHeader(codec);
-          pos = fos.size();
-        }
-        codec.init(fos);
-        codec.writeTag(JavaBinCodec.ARR, 4);
-        codec.writeInt(UpdateLog.COMMIT);  // should just take one byte
-        codec.writeLong(cmd.getVersion());
-        codec.writeTag(JavaBinCodec.INT); // Enforce the encoding of a plain integer, to simplify decoding
-        fos.writeInt(numRecords + 1); // the number of records in the file - +1 to account for the commit operation being written
-        codec.writeStr(END_MESSAGE);  // ensure these bytes are (almost) last in the file
-
-        endRecord(pos);
-
-        fos.flush();  // flush since this will be the last record in a log fill
-        assert fos.size() == channel.size();
-
-        isReplaying = false; // we have replayed and appended a commit record with the number of records in the file
-
-        return pos;
-      } catch (IOException e) {
-        throw new SolrException(SolrException.ErrorCode.SERVER_ERROR, e);
-      }
-    }
-  }
-
-  /**
-   * Returns a reader that can be used while a log is still in use.
-   * Currently only *one* LogReader may be outstanding, and that log may only
-   * be used from a single thread.
-   */
-  @Override
-  public LogReader getReader(long startingPos) {
-    return new CdcrLogReader(startingPos);
-  }
-
-  public class CdcrLogReader extends LogReader {
-
-    private int numRecords = 1; // start at 1 to account for the header record
-
-    public CdcrLogReader(long startingPos) {
-      super(startingPos);
-    }
-
-    @Override
-    public Object next() throws IOException, InterruptedException {
-      Object o = super.next();
-      if (o != null) {
-        this.numRecords++;
-        // We are replaying the log. We need to update the number of records for the writeCommit.
-        if (isReplaying) {
-          synchronized (CdcrTransactionLog.this) {
-            CdcrTransactionLog.this.numRecords = this.numRecords;
-          }
-        }
-      }
-      return o;
-    }
-
-  }
-
-  @Override
-  public void incref() {
-    // if the refcount is 0, we need to reopen the output stream
-    if (refcount.getAndIncrement() == 0) {
-      reopenOutputStream(); // synchronised with this
-    }
-  }
-
-  /**
-   * Modified to act like {@link #incref()} in order to be compatible with {@link UpdateLog#recoverFromLog()}.
-   * Otherwise, we would have to duplicate the method {@link UpdateLog#recoverFromLog()} in
-   * {@link org.apache.solr.update.CdcrUpdateLog} and change the call
-   * {@code if (!ll.try_incref()) continue; } to {@code incref(); }.
-   */
-  @Override
-  public boolean try_incref() {
-    this.incref();
-    return true;
-  }
-
-  @Override
-  public void close() {
-    try {
-      if (debug) {
-        log.debug("Closing tlog {}", this);
-      }
-
-      synchronized (this) {
-        if (fos != null) {
-          fos.flush();
-          fos.close();
-
-          // dereference these variables for GC
-          fos = null;
-          os = null;
-          channel = null;
-          raf = null;
-        }
-      }
-
-      if (deleteOnClose) {
-        try {
-          Files.deleteIfExists(tlogFile.toPath());
-        } catch (IOException e) {
-          // TODO: should this class care if a file couldnt be deleted?
-          // this just emulates previous behavior, where only SecurityException would be handled.
-        }
-      }
-    } catch (IOException e) {
-      throw new SolrException(SolrException.ErrorCode.SERVER_ERROR, e);
-    } finally {
-      assert ObjectReleaseTracker.release(this);
-    }
-  }
-
-  /**
-   * Re-open the output stream of the tlog and position
-   * the file pointer at the end of the file. It assumes
-   * that the tlog is non-empty and that the tlog's header
-   * has been already read.
-   */
-  synchronized void reopenOutputStream() {
-    try {
-      if (debug) {
-        log.debug("Re-opening tlog's output stream: {}", this);
-      }
-
-      raf = new RandomAccessFile(this.tlogFile, "rw");
-      channel = raf.getChannel();
-      long start = raf.length();
-      raf.seek(start);
-      os = Channels.newOutputStream(channel);
-      fos = new FastOutputStream(os, new byte[65536], 0);
-      fos.setWritten(start);    // reflect that we aren't starting at the beginning
-    } catch (IOException e) {
-      throw new SolrException(SolrException.ErrorCode.SERVER_ERROR, e);
-    }
-  }
-
-}
-
diff --git a/solr/core/src/java/org/apache/solr/update/CdcrUpdateLog.java b/solr/core/src/java/org/apache/solr/update/CdcrUpdateLog.java
deleted file mode 100644
index eee3127..0000000
--- a/solr/core/src/java/org/apache/solr/update/CdcrUpdateLog.java
+++ /dev/null
@@ -1,796 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.solr.update;
-
-import java.io.File;
-import java.io.IOException;
-import java.lang.invoke.MethodHandles;
-import java.util.ArrayList;
-import java.util.Arrays;
-import java.util.Collection;
-import java.util.List;
-import java.util.Locale;
-import java.util.Map;
-import java.util.concurrent.ConcurrentHashMap;
-import java.util.concurrent.LinkedBlockingDeque;
-
-import org.apache.lucene.util.BytesRef;
-import org.apache.solr.common.SolrException;
-import org.apache.solr.common.SolrInputDocument;
-import org.apache.solr.common.params.ModifiableSolrParams;
-import org.apache.solr.core.SolrCore;
-import org.apache.solr.request.LocalSolrQueryRequest;
-import org.apache.solr.request.SolrQueryRequest;
-import org.apache.solr.update.processor.DistributedUpdateProcessor;
-import org.apache.solr.update.processor.DistributingUpdateProcessorFactory;
-import org.slf4j.Logger;
-import org.slf4j.LoggerFactory;
-
-/**
- * An extension of the {@link org.apache.solr.update.UpdateLog} for the CDCR scenario.<br>
- * Compared to the original update log implementation, transaction logs are removed based on
- * pointers instead of a fixed size limit. Pointers are created by the CDC replicators and
- * correspond to replication checkpoints. If all pointers are ahead of a transaction log,
- * this transaction log is removed.<br>
- * Given that the number of transaction logs can become considerable if some pointers are
- * lagging behind, the {@link org.apache.solr.update.CdcrUpdateLog.CdcrLogReader} provides
- * a {@link org.apache.solr.update.CdcrUpdateLog.CdcrLogReader#seek(long)} method to
- * efficiently lookup a particular transaction log file given a version number.
- */
-public class CdcrUpdateLog extends UpdateLog {
-
-  protected final Map<CdcrLogReader, CdcrLogPointer> logPointers = new ConcurrentHashMap<>();
-
-  /**
-   * A reader that will be used as toggle to turn on/off the buffering of tlogs
-   */
-  private CdcrLogReader bufferToggle;
-
-  public static String LOG_FILENAME_PATTERN = "%s.%019d.%1d";
-
-  private static final Logger log = LoggerFactory.getLogger(MethodHandles.lookup().lookupClass());
-  private boolean debug = log.isDebugEnabled();
-
-  @Override
-  public void init(UpdateHandler uhandler, SolrCore core) {
-    // remove dangling readers
-    for (CdcrLogReader reader : logPointers.keySet()) {
-      reader.close();
-    }
-    logPointers.clear();
-
-    // init
-    super.init(uhandler, core);
-  }
-
-  @Override
-  public TransactionLog newTransactionLog(File tlogFile, Collection<String> globalStrings, boolean openExisting) {
-    return new CdcrTransactionLog(tlogFile, globalStrings, openExisting);
-  }
-
-  @Override
-  protected void addOldLog(TransactionLog oldLog, boolean removeOld) {
-    if (oldLog == null) return;
-
-    numOldRecords += oldLog.numRecords();
-
-    int currRecords = numOldRecords;
-
-    if (oldLog != tlog && tlog != null) {
-      currRecords += tlog.numRecords();
-    }
-
-    while (removeOld && logs.size() > 0) {
-      TransactionLog log = logs.peekLast();
-      int nrec = log.numRecords();
-
-      // remove oldest log if we don't need it to keep at least numRecordsToKeep, or if
-      // we already have the limit of 10 log files.
-      if (currRecords - nrec >= numRecordsToKeep || logs.size() >= 10) {
-        // remove the oldest log if nobody points to it
-        if (!this.hasLogPointer(log)) {
-          currRecords -= nrec;
-          numOldRecords -= nrec;
-          TransactionLog last = logs.removeLast();
-          last.deleteOnClose = true;
-          last.close();  // it will be deleted if no longer in use
-          continue;
-        }
-        // we have one log with one pointer, we should stop removing logs
-        break;
-      }
-
-      break;
-    }
-
-    // Decref old log as we do not write to it anymore
-    // If the oldlog is uncapped, i.e., a write commit has to be performed
-    // during recovery, the output stream will be automatically re-open when
-    // TransaactionLog#incref will be called.
-    oldLog.deleteOnClose = false;
-    oldLog.decref();
-
-    // don't incref... we are taking ownership from the caller.
-    logs.addFirst(oldLog);
-  }
-
-  /**
-   * Checks if one of the log pointer is pointing to the given tlog.
-   */
-  private boolean hasLogPointer(TransactionLog tlog) {
-    for (CdcrLogPointer pointer : logPointers.values()) {
-      // if we have a pointer that is not initialised, then do not remove the old tlogs
-      // as we have a log reader that didn't pick them up yet.
-      if (!pointer.isInitialised()) {
-        return true;
-      }
-
-      if (pointer.tlogFile == tlog.tlogFile) {
-        return true;
-      }
-    }
-    return false;
-  }
-
-  @Override
-  public long getLastLogId() {
-    if (id != -1) return id;
-    if (tlogFiles.length == 0) return -1;
-    String last = tlogFiles[tlogFiles.length - 1];
-    if (TLOG_NAME.length() + 1 > last.lastIndexOf('.'))  {
-      // old tlog created by default UpdateLog impl
-      return Long.parseLong(last.substring(TLOG_NAME.length() + 1));
-    } else  {
-      return Long.parseLong(last.substring(TLOG_NAME.length() + 1, last.lastIndexOf('.')));
-    }
-  }
-
-  @Override
-  public void add(AddUpdateCommand cmd, boolean clearCaches) {
-    // Ensure we create a new tlog file following our filename format,
-    // the variable tlog will be not null, and the ensureLog of the parent will be skipped
-    synchronized (this) {
-      if ((cmd.getFlags() & UpdateCommand.REPLAY) == 0) {
-        ensureLog(cmd.getVersion());
-      }
-    }
-    // Then delegate to parent method
-    super.add(cmd, clearCaches);
-  }
-
-  @Override
-  public void delete(DeleteUpdateCommand cmd) {
-    // Ensure we create a new tlog file following our filename format
-    // the variable tlog will be not null, and the ensureLog of the parent will be skipped
-    synchronized (this) {
-      if ((cmd.getFlags() & UpdateCommand.REPLAY) == 0) {
-        ensureLog(cmd.getVersion());
-      }
-    }
-    // Then delegate to parent method
-    super.delete(cmd);
-  }
-
-  @Override
-  public void deleteByQuery(DeleteUpdateCommand cmd) {
-    // Ensure we create a new tlog file following our filename format
-    // the variable tlog will be not null, and the ensureLog of the parent will be skipped
-    synchronized (this) {
-      if ((cmd.getFlags() & UpdateCommand.REPLAY) == 0) {
-        ensureLog(cmd.getVersion());
-      }
-    }
-    // Then delegate to parent method
-    super.deleteByQuery(cmd);
-  }
-
-  /**
-   * Creates a new {@link org.apache.solr.update.CdcrUpdateLog.CdcrLogReader}
-   * initialised with the current list of tlogs.
-   */
-  @SuppressWarnings({"unchecked", "rawtypes"})
-  public CdcrLogReader newLogReader() {
-    return new CdcrLogReader(new ArrayList(logs), tlog);
-  }
-
-  /**
-   * Enable the buffering of the tlogs. When buffering is activated, the update logs will not remove any
-   * old transaction log files.
-   */
-  public void enableBuffer() {
-    if (bufferToggle == null) {
-      bufferToggle = this.newLogReader();
-    }
-  }
-
-  /**
-   * Disable the buffering of the tlogs.
-   */
-  public void disableBuffer() {
-    if (bufferToggle != null) {
-      bufferToggle.close();
-      bufferToggle = null;
-    }
-  }
-
-  public CdcrLogReader getBufferToggle() {
-    return bufferToggle;
-  }
-
-  /**
-   * Is the update log buffering the tlogs ?
-   */
-  public boolean isBuffering() {
-    return bufferToggle == null ? false : true;
-  }
-
-  protected void ensureLog(long startVersion) {
-    if (tlog == null) {
-      long absoluteVersion = Math.abs(startVersion); // version is negative for deletes
-      if (tlog == null) {
-        String newLogName = String.format(Locale.ROOT, LOG_FILENAME_PATTERN, TLOG_NAME, id, absoluteVersion);
-        tlog = new CdcrTransactionLog(new File(tlogDir, newLogName), globalStrings);
-      }
-
-      // push the new tlog to the opened readers
-      for (CdcrLogReader reader : logPointers.keySet()) {
-        reader.push(tlog);
-      }
-    }
-  }
-
-  /**
-   * expert: Reset the update log before initialisation. This is called by
-   * {@link org.apache.solr.handler.IndexFetcher#moveTlogFiles(File)} during a
-   * a Recovery operation in order to re-initialise the UpdateLog with a new set of tlog files.
-   * @see #initForRecovery(File, long)
-   */
-  public BufferedUpdates resetForRecovery() {
-    synchronized (this) { // since we blocked updates in IndexFetcher, this synchronization shouldn't strictly be necessary.
-      // If we are buffering, we need to return the related information to the index fetcher
-      // for properly initialising the new update log - SOLR-8263
-      BufferedUpdates bufferedUpdates = new BufferedUpdates();
-      if (state == State.BUFFERING && tlog != null) {
-        bufferedUpdates.tlog = tlog.tlogFile; // file to keep
-        bufferedUpdates.offset = this.recoveryInfo.positionOfStart;
-      }
-
-      // Close readers
-      for (CdcrLogReader reader : logPointers.keySet()) {
-        reader.close();
-      }
-      logPointers.clear();
-
-      // Close and clear logs
-      doClose(prevTlog);
-      doClose(tlog);
-
-      for (TransactionLog log : logs) {
-        if (log == prevTlog || log == tlog) continue;
-        doClose(log);
-      }
-
-      logs.clear();
-      newestLogsOnStartup.clear();
-      tlog = prevTlog = null;
-      prevMapLog = prevMapLog2 = null;
-
-      map.clear();
-      if (prevMap != null) prevMap.clear();
-      if (prevMap2 != null) prevMap2.clear();
-
-      tlogFiles = null;
-      numOldRecords = 0;
-
-      oldDeletes.clear();
-      deleteByQueries.clear();
-
-      return bufferedUpdates;
-    }
-  }
-
-  public static class BufferedUpdates {
-    public File tlog;
-    public long offset;
-  }
-
-  /**
-   * <p>
-   *   expert: Initialise the update log with a tlog file containing buffered updates. This is called by
-   *   {@link org.apache.solr.handler.IndexFetcher#moveTlogFiles(File)} during a Recovery operation.
-   *   This is mainly a copy of the original {@link UpdateLog#init(UpdateHandler, SolrCore)} method, but modified
-   *   to:
-   *   <ul>
-   *     <li>preserve the same {@link VersionInfo} instance in order to not "unblock" updates, since the
-   *     {@link org.apache.solr.handler.IndexFetcher#moveTlogFiles(File)} acquired a write lock from this instance.</li>
-   *     <li>copy the buffered updates.</li>
-   *   </ul>
-   * @see #resetForRecovery()
-   */
-  public void initForRecovery(File bufferedTlog, long offset) {
-    tlogFiles = getLogList(tlogDir);
-    id = getLastLogId() + 1;   // add 1 since we will create a new log for the next update
-
-    if (debug) {
-      log.debug("UpdateHandler init: tlogDir={}, existing tlogs={}, next id={}", tlogDir, Arrays.asList(tlogFiles), id);
-    }
-
-    TransactionLog oldLog = null;
-    for (String oldLogName : tlogFiles) {
-      File f = new File(tlogDir, oldLogName);
-      try {
-        oldLog = newTransactionLog(f, null, true);
-        addOldLog(oldLog, false);  // don't remove old logs on startup since more than one may be uncapped.
-      } catch (Exception e) {
-        SolrException.log(log, "Failure to open existing log file (non fatal) " + f, e);
-        deleteFile(f);
-      }
-    }
-
-    // Record first two logs (oldest first) at startup for potential tlog recovery.
-    // It's possible that at abnormal close both "tlog" and "prevTlog" were uncapped.
-    for (TransactionLog ll : logs) {
-      newestLogsOnStartup.addFirst(ll);
-      if (newestLogsOnStartup.size() >= 2) break;
-    }
-
-    // TODO: these startingVersions assume that we successfully recover from all non-complete tlogs.
-    UpdateLog.RecentUpdates startingUpdates = getRecentUpdates();
-    long latestVersion = startingUpdates.getMaxRecentVersion();
-    try {
-      startingVersions = startingUpdates.getVersions(numRecordsToKeep);
-
-      // populate recent deletes list (since we can't get that info from the index)
-      for (int i=startingUpdates.deleteList.size()-1; i>=0; i--) {
-        DeleteUpdate du = startingUpdates.deleteList.get(i);
-        oldDeletes.put(new BytesRef(du.id), new LogPtr(-1,du.version));
-      }
-
-      // populate recent deleteByQuery commands
-      for (int i=startingUpdates.deleteByQueryList.size()-1; i>=0; i--) {
-        Update update = startingUpdates.deleteByQueryList.get(i);
-        @SuppressWarnings({"unchecked"})
-        List<Object> dbq = (List<Object>) update.log.lookup(update.pointer);
-        long version = (Long) dbq.get(1);
-        String q = (String) dbq.get(2);
-        trackDeleteByQuery(q, version);
-      }
-
-    } finally {
-      startingUpdates.close();
-    }
-
-    // Copy buffered updates
-    if (bufferedTlog != null) {
-      this.copyBufferedUpdates(bufferedTlog, offset, latestVersion);
-    }
-  }
-
-  /**
-   * <p>
-   *   Read the entries from the given tlog file and replay them as buffered updates.
-   *   The buffered tlog that we are trying to copy might contain duplicate operations with the
-   *   current update log. During the tlog replication process, the replica might buffer update operations
-   *   that will be present also in the tlog files downloaded from the leader. In order to remove these
-   *   duplicates, it will skip any operations with a version inferior to the latest know version.
-   */
-  private void copyBufferedUpdates(File tlogSrc, long offsetSrc, long latestVersion) {
-    recoveryInfo = new RecoveryInfo();
-    state = State.BUFFERING;
-
-    ModifiableSolrParams params = new ModifiableSolrParams();
-    params.set(DistributingUpdateProcessorFactory.DISTRIB_UPDATE_PARAM, DistributedUpdateProcessor.DistribPhase.FROMLEADER.toString());
-    SolrQueryRequest req = new LocalSolrQueryRequest(uhandler.core, params);
-
-    CdcrTransactionLog src = new CdcrTransactionLog(tlogSrc, null, true);
-    TransactionLog.LogReader tlogReader = src.getReader(offsetSrc);
-    try {
-      int operationAndFlags = 0;
-      for (; ; ) {
-        Object o = tlogReader.next();
-        if (o == null) break; // we reached the end of the tlog
-        // should currently be a List<Oper,Ver,Doc/Id>
-        @SuppressWarnings({"rawtypes"})
-        List entry = (List) o;
-        operationAndFlags = (Integer) entry.get(0);
-        int oper = operationAndFlags & OPERATION_MASK;
-        long version = (Long) entry.get(1);
-        if (version <= latestVersion) {
-          // probably a buffered update that is also present in a tlog file coming from the leader,
-          // skip it.
-          log.debug("Dropping buffered operation - version {} < {}", version, latestVersion);
-          continue;
-        }
-
-        switch (oper) {
-          case UpdateLog.ADD: {
-            SolrInputDocument sdoc = (SolrInputDocument) entry.get(entry.size() - 1);
-            AddUpdateCommand cmd = new AddUpdateCommand(req);
-            cmd.solrDoc = sdoc;
-            cmd.setVersion(version);
-            cmd.setFlags(UpdateCommand.BUFFERING);
-            this.add(cmd);
-            break;
-          }
-          case UpdateLog.DELETE: {
-            byte[] idBytes = (byte[]) entry.get(2);
-            DeleteUpdateCommand cmd = new DeleteUpdateCommand(req);
-            cmd.setIndexedId(new BytesRef(idBytes));
-            cmd.setVersion(version);
-            cmd.setFlags(UpdateCommand.BUFFERING);
-            this.delete(cmd);
-            break;
-          }
-
-          case UpdateLog.DELETE_BY_QUERY: {
-            String query = (String) entry.get(2);
-            DeleteUpdateCommand cmd = new DeleteUpdateCommand(req);
-            cmd.query = query;
-            cmd.setVersion(version);
-            cmd.setFlags(UpdateCommand.BUFFERING);
-            this.deleteByQuery(cmd);
-            break;
-          }
-
-          default:
-            throw new SolrException(SolrException.ErrorCode.SERVER_ERROR, "Invalid Operation! " + oper);
-        }
-
-      }
-    }
-    catch (Exception e) {
-      throw new SolrException(SolrException.ErrorCode.SERVER_ERROR, "Unable to copy buffered updates", e);
-    }
-    finally {
-      try {
-        tlogReader.close();
-      }
-      finally {
-        this.doClose(src);
-      }
-    }
-  }
-
-  private void doClose(TransactionLog theLog) {
-    if (theLog != null) {
-      theLog.deleteOnClose = false;
-      theLog.decref();
-      theLog.forceClose();
-    }
-  }
-
-  @Override
-  public void close(boolean committed, boolean deleteOnClose) {
-    for (CdcrLogReader reader : new ArrayList<>(logPointers.keySet())) {
-      reader.close();
-    }
-    super.close(committed, deleteOnClose);
-  }
-
-  private static class CdcrLogPointer {
-
-    File tlogFile = null;
-
-    private CdcrLogPointer() {
-    }
-
-    private void set(File tlogFile) {
-      this.tlogFile = tlogFile;
-    }
-
-    private boolean isInitialised() {
-      return tlogFile == null ? false : true;
-    }
-
-    @Override
-    public String toString() {
-      return "CdcrLogPointer(" + tlogFile + ")";
-    }
-
-  }
-
-  public class CdcrLogReader {
-
-    private TransactionLog currentTlog;
-    private TransactionLog.LogReader tlogReader;
-
-    // we need to use a blocking deque because of #getNumberOfRemainingRecords
-    private final LinkedBlockingDeque<TransactionLog> tlogs;
-    private final CdcrLogPointer pointer;
-
-    /**
-     * Used to record the last position of the tlog
-     */
-    private long lastPositionInTLog = 0;
-
-    /**
-     * lastVersion is used to get nextToLastVersion
-     */
-    private long lastVersion = -1;
-
-    /**
-     * nextToLastVersion is communicated by leader to replicas so that they can remove no longer needed tlogs
-     * <p>
-     * nextToLastVersion is used because thanks to {@link #resetToLastPosition()} lastVersion can become the current version
-     */
-    private long nextToLastVersion = -1;
-
-    /**
-     * Used to record the number of records read in the current tlog
-     */
-    private long numRecordsReadInCurrentTlog = 0;
-
-    private CdcrLogReader(List<TransactionLog> tlogs, TransactionLog tlog) {
-      this.tlogs = new LinkedBlockingDeque<>();
-      this.tlogs.addAll(tlogs);
-      if (tlog != null) this.tlogs.push(tlog); // ensure that the tlog being written is pushed
-
-      // Register the pointer in the parent UpdateLog
-      pointer = new CdcrLogPointer();
-      logPointers.put(this, pointer);
-
-      // If the reader is initialised while the updates log is empty, do nothing
-      if ((currentTlog = this.tlogs.peekLast()) != null) {
-        tlogReader = currentTlog.getReader(0);
-        pointer.set(currentTlog.tlogFile);
-        numRecordsReadInCurrentTlog = 0;
-        log.debug("Init new tlog reader for {} - tlogReader = {}", currentTlog.tlogFile, tlogReader);
-      }
-    }
-
-    private void push(TransactionLog tlog) {
-      this.tlogs.push(tlog);
-
-      // The reader was initialised while the update logs was empty, or reader was exhausted previously,
-      // we have to update the current tlog and the associated tlog reader.
-      if (currentTlog == null && !tlogs.isEmpty()) {
-        currentTlog = tlogs.peekLast();
-        tlogReader = currentTlog.getReader(0);
-        pointer.set(currentTlog.tlogFile);
-        numRecordsReadInCurrentTlog = 0;
-        log.debug("Init new tlog reader for {} - tlogReader = {}", currentTlog.tlogFile, tlogReader);
-      }
-    }
-
-    /**
-     * Expert: Instantiate a sub-reader. A sub-reader is used for batch updates. It allows to iterates over the
-     * update logs entries without modifying the state of the parent log reader. If the batch update fails, the state
-     * of the sub-reader is discarded and the state of the parent reader is not modified. If the batch update
-     * is successful, the sub-reader is used to fast forward the parent reader with the method
-     * {@link #forwardSeek(org.apache.solr.update.CdcrUpdateLog.CdcrLogReader)}.
-     */
-    public CdcrLogReader getSubReader() {
-      // Add the last element of the queue to properly initialise the pointer and log reader
-      CdcrLogReader clone = new CdcrLogReader(new ArrayList<TransactionLog>(), this.tlogs.peekLast());
-      clone.tlogs.clear(); // clear queue before copy
-      clone.tlogs.addAll(tlogs); // perform a copy of the list
-      clone.lastPositionInTLog = this.lastPositionInTLog;
-      clone.numRecordsReadInCurrentTlog = this.numRecordsReadInCurrentTlog;
-      clone.lastVersion = this.lastVersion;
-      clone.nextToLastVersion = this.nextToLastVersion;
-
-      // If the update log is not empty, we need to initialise the tlog reader
-      // NB: the tlogReader is equal to null if the update log is empty
-      if (tlogReader != null) {
-        clone.tlogReader.close();
-        clone.tlogReader = currentTlog.getReader(this.tlogReader.currentPos());
-      }
-
-      return clone;
-    }
-
-    /**
-     * Expert: Fast forward this log reader with a log subreader. The subreader will be closed after calling this
-     * method. In order to avoid unexpected results, the log
-     * subreader must be created from this reader with the method {@link #getSubReader()}.
-     */
-    public void forwardSeek(CdcrLogReader subReader) {
-      // If a subreader has a null tlog reader, does nothing
-      // This can happened if a subreader is instantiated from a non-initialised parent reader, or if the subreader
-      // has been closed.
-      if (subReader.tlogReader == null) {
-        return;
-      }
-
-      tlogReader.close(); // close the existing reader, a new one will be created
-      while (this.tlogs.peekLast().id < subReader.tlogs.peekLast().id) {
-        tlogs.removeLast();
-        currentTlog = tlogs.peekLast();
-      }
-      assert this.tlogs.peekLast().id == subReader.tlogs.peekLast().id : this.tlogs.peekLast().id+" != "+subReader.tlogs.peekLast().id;
-      this.pointer.set(currentTlog.tlogFile);
-      this.lastPositionInTLog = subReader.lastPositionInTLog;
-      this.numRecordsReadInCurrentTlog = subReader.numRecordsReadInCurrentTlog;
-      this.lastVersion = subReader.lastVersion;
-      this.nextToLastVersion = subReader.nextToLastVersion;
-      this.tlogReader = currentTlog.getReader(subReader.tlogReader.currentPos());
-    }
-
-    /**
-     * Advances to the next log entry in the updates log and returns the log entry itself.
-     * Returns null if there are no more log entries in the updates log.<br>
-     * <p>
-     * <b>NOTE:</b> after the reader has exhausted, you can call again this method since the updates
-     * log might have been updated with new entries.
-     */
-    public Object next() throws IOException, InterruptedException {
-      while (!tlogs.isEmpty()) {
-        lastPositionInTLog = tlogReader.currentPos();
-        Object o = tlogReader.next();
-
-        if (o != null) {
-          pointer.set(currentTlog.tlogFile);
-          nextToLastVersion = lastVersion;
-          lastVersion = getVersion(o);
-          numRecordsReadInCurrentTlog++;
-          return o;
-        }
-
-        if (tlogs.size() > 1) { // if the current tlog is not the newest one, we can advance to the next one
-          tlogReader.close();
-          tlogs.removeLast();
-          currentTlog = tlogs.peekLast();
-          tlogReader = currentTlog.getReader(0);
-          pointer.set(currentTlog.tlogFile);
-          numRecordsReadInCurrentTlog = 0;
-          log.debug("Init new tlog reader for {} - tlogReader = {}", currentTlog.tlogFile, tlogReader);
-        } else {
-          // the only tlog left is the new tlog which is currently being written,
-          // we should not remove it as we have to try to read it again later.
-          return null;
-        }
-      }
-
-      return null;
-    }
-
-    /**
-     * Advances to the first beyond the current whose version number is greater
-     * than or equal to <i>targetVersion</i>.<br>
-     * Returns true if the reader has been advanced. If <i>targetVersion</i> is
-     * greater than the highest version number in the updates log, the reader
-     * has been advanced to the end of the current tlog, and a call to
-     * {@link #next()} will probably return null.<br>
-     * Returns false if <i>targetVersion</i> is lower than the oldest known entry.
-     * In this scenario, it probably means that there is a gap in the updates log.<br>
-     * <p>
-     * <b>NOTE:</b> This method must be called before the first call to {@link #next()}.
-     */
-    public boolean seek(long targetVersion) throws IOException, InterruptedException {
-      Object o;
-      // version is negative for deletes - ensure that we are manipulating absolute version numbers.
-      targetVersion = Math.abs(targetVersion);
-
-      if (tlogs.isEmpty() || !this.seekTLog(targetVersion)) {
-        return false;
-      }
-
-      // now that we might be on the right tlog, iterates over the entries to find the one we are looking for
-      while ((o = this.next()) != null) {
-        if (this.getVersion(o) >= targetVersion) {
-          this.resetToLastPosition();
-          return true;
-        }
-      }
-
-      return true;
-    }
-
-    /**
-     * Seeks the tlog associated to the target version by using the updates log index,
-     * and initialises the log reader to the start of the tlog. Returns true if it was able
-     * to seek the corresponding tlog, false if the <i>targetVersion</i> is lower than the
-     * oldest known entry (which probably indicates a gap).<br>
-     * <p>
-     * <b>NOTE:</b> This method might modify the tlog queue by removing tlogs that are older
-     * than the target version.
-     */
-    private boolean seekTLog(long targetVersion) {
-      // if the target version is lower than the oldest known entry, we have probably a gap.
-      if (targetVersion < ((CdcrTransactionLog) tlogs.peekLast()).startVersion) {
-        return false;
-      }
-
-      // closes existing reader before performing seek and possibly modifying the queue;
-      tlogReader.close();
-
-      // iterates over the queue and removes old tlogs
-      TransactionLog last = null;
-      while (tlogs.size() > 1) {
-        if (((CdcrTransactionLog) tlogs.peekLast()).startVersion >= targetVersion) {
-          break;
-        }
-        last = tlogs.pollLast();
-      }
-
-      // the last tlog removed is the one we look for, add it back to the queue
-      if (last != null) tlogs.addLast(last);
-
-      currentTlog = tlogs.peekLast();
-      tlogReader = currentTlog.getReader(0);
-      pointer.set(currentTlog.tlogFile);
-      numRecordsReadInCurrentTlog = 0;
-
-      return true;
-    }
-
-    /**
-     * Extracts the version number and converts it to its absolute form.
-     */
-    private long getVersion(Object o) {
-      @SuppressWarnings({"rawtypes"})
-      List entry = (List) o;
-      // version is negative for delete, ensure that we are manipulating absolute version numbers
-      return Math.abs((Long) entry.get(1));
-    }
-
-    /**
-     * If called after {@link #next()}, it resets the reader to its last position.
-     */
-    public void resetToLastPosition() {
-      try {
-        if (tlogReader != null) {
-          tlogReader.fis.seek(lastPositionInTLog);
-          numRecordsReadInCurrentTlog--;
-          lastVersion = nextToLastVersion;
-        }
-      } catch (IOException e) {
-        log.error("Failed to seek last position in tlog", e);
-        throw new SolrException(SolrException.ErrorCode.SERVER_ERROR, "Failed to seek last position in tlog", e);
-      }
-    }
-
-    /**
-     * Returns the number of remaining records (including commit but excluding header) to be read in the logs.
-     */
-    public long getNumberOfRemainingRecords() {
-      long numRemainingRecords = 0;
-
-      synchronized (tlogs) {
-        for (TransactionLog tlog : tlogs) {
-          numRemainingRecords += tlog.numRecords() - 1; // minus 1 as the number of records returned by the tlog includes the header
-        }
-      }
-
-      return numRemainingRecords - numRecordsReadInCurrentTlog;
-    }
-
-    /**
-     * Closes streams and remove the associated {@link org.apache.solr.update.CdcrUpdateLog.CdcrLogPointer} from the
-     * parent {@link org.apache.solr.update.CdcrUpdateLog}.
-     */
-    public void close() {
-      if (tlogReader != null) {
-        tlogReader.close();
-        tlogReader = null;
-        currentTlog = null;
-      }
-      tlogs.clear();
-      logPointers.remove(this);
-    }
-
-    /**
-     * Returns the absolute form of the version number of the last entry read. If the current version is equal
-     * to 0 (because of a commit), it will return the next to last version number.
-     */
-    public long getLastVersion() {
-      return lastVersion == 0 ? nextToLastVersion : lastVersion;
-    }
-  }
-
-}
-
diff --git a/solr/core/src/java/org/apache/solr/update/DefaultSolrCoreState.java b/solr/core/src/java/org/apache/solr/update/DefaultSolrCoreState.java
index 53dcb3e..2454dc4 100644
--- a/solr/core/src/java/org/apache/solr/update/DefaultSolrCoreState.java
+++ b/solr/core/src/java/org/apache/solr/update/DefaultSolrCoreState.java
@@ -18,12 +18,10 @@
 
 import java.io.IOException;
 import java.lang.invoke.MethodHandles;
-import java.util.concurrent.Callable;
 import java.util.concurrent.ExecutionException;
 import java.util.concurrent.Future;
 import java.util.concurrent.RejectedExecutionException;
 import java.util.concurrent.TimeUnit;
-import java.util.concurrent.atomic.AtomicBoolean;
 import java.util.concurrent.atomic.AtomicInteger;
 import java.util.concurrent.locks.Lock;
 import java.util.concurrent.locks.ReentrantLock;
@@ -79,14 +77,6 @@
   
   protected final ReentrantLock commitLock = new ReentrantLock();
 
-
-  private AtomicBoolean cdcrRunning = new AtomicBoolean();
-
-  private volatile Future<Boolean> cdcrBootstrapFuture;
-
-  @SuppressWarnings({"rawtypes"})
-  private volatile Callable cdcrBootstrapCallable;
-
   @Deprecated
   public DefaultSolrCoreState(DirectoryFactory directoryFactory) {
     this(directoryFactory, new RecoveryStrategy.Builder());
@@ -427,35 +417,4 @@
   public Lock getRecoveryLock() {
     return recoveryLock;
   }
-
-  @Override
-  public boolean getCdcrBootstrapRunning() {
-    return cdcrRunning.get();
-  }
-
-  @Override
-  public void setCdcrBootstrapRunning(boolean cdcrRunning) {
-    this.cdcrRunning.set(cdcrRunning);
-  }
-
-  @Override
-  public Future<Boolean> getCdcrBootstrapFuture() {
-    return cdcrBootstrapFuture;
-  }
-
-  @Override
-  public void setCdcrBootstrapFuture(Future<Boolean> cdcrBootstrapFuture) {
-    this.cdcrBootstrapFuture = cdcrBootstrapFuture;
-  }
-
-  @Override
-  @SuppressWarnings({"rawtypes"})
-  public Callable getCdcrBootstrapCallable() {
-    return cdcrBootstrapCallable;
-  }
-
-  @Override
-  public void setCdcrBootstrapCallable(@SuppressWarnings({"rawtypes"})Callable cdcrBootstrapCallable) {
-    this.cdcrBootstrapCallable = cdcrBootstrapCallable;
-  }
 }
diff --git a/solr/core/src/java/org/apache/solr/update/PeerSync.java b/solr/core/src/java/org/apache/solr/update/PeerSync.java
index 09d2ce0..c23bfb1 100644
--- a/solr/core/src/java/org/apache/solr/update/PeerSync.java
+++ b/solr/core/src/java/org/apache/solr/update/PeerSync.java
@@ -22,17 +22,13 @@
 import java.net.SocketTimeoutException;
 import java.util.ArrayList;
 import java.util.Comparator;
-import java.util.HashSet;
 import java.util.List;
 import java.util.Optional;
-import java.util.Set;
-import java.util.function.Supplier;
 import java.util.stream.Collectors;
 
 import com.codahale.metrics.Counter;
 import com.codahale.metrics.Timer;
 import org.apache.http.NoHttpResponseException;
-import org.apache.http.client.HttpClient;
 import org.apache.http.conn.ConnectTimeoutException;
 import org.apache.lucene.util.BytesRef;
 import org.apache.solr.client.solrj.SolrServerException;
@@ -41,7 +37,6 @@
 import org.apache.solr.common.SolrInputDocument;
 import org.apache.solr.common.params.ModifiableSolrParams;
 import org.apache.solr.common.util.IOUtils;
-import org.apache.solr.common.util.StrUtils;
 import org.apache.solr.core.SolrCore;
 import org.apache.solr.core.SolrInfoBean;
 import org.apache.solr.handler.component.ShardHandler;
@@ -84,7 +79,6 @@
 
   private final boolean cantReachIsSuccess;
   private final boolean doFingerprint;
-  private final HttpClient client;
   private final boolean onlyIfActive;
   private SolrCore core;
   private Updater updater;
@@ -117,7 +111,6 @@
     this.nUpdates = nUpdates;
     this.cantReachIsSuccess = cantReachIsSuccess;
     this.doFingerprint = doFingerprint && !("true".equals(System.getProperty("solr.disableFingerprint")));
-    this.client = core.getCoreContainer().getUpdateShardHandler().getDefaultHttpClient();
     this.onlyIfActive = onlyIfActive;
     
     uhandler = core.getUpdateHandler();
@@ -406,31 +399,6 @@
     }
   }
 
-  private boolean canHandleVersionRanges(String replica) {
-    SyncShardRequest sreq = new SyncShardRequest();
-    requests.add(sreq);
-
-    // determine if leader can handle version ranges
-    sreq.shards = new String[] {replica};
-    sreq.actualShards = sreq.shards;
-    sreq.params = new ModifiableSolrParams();
-    sreq.params.set("qt", "/get");
-    sreq.params.set(DISTRIB, false);
-    sreq.params.set("checkCanHandleVersionRanges", false);
-
-    ShardHandler sh = shardHandlerFactory.getShardHandler();
-    sh.submit(sreq, replica, sreq.params);
-
-    ShardResponse srsp = sh.takeCompletedIncludingErrors();
-    Boolean canHandleVersionRanges = srsp.getSolrResponse().getResponse().getBooleanArg("canHandleVersionRanges");
-
-    if (canHandleVersionRanges == null || canHandleVersionRanges.booleanValue() == false) {
-      return false;
-    }
-
-    return true;
-  }
-
   private boolean handleVersions(ShardResponse srsp) {
     // we retrieved the last N updates from the replica
     @SuppressWarnings({"unchecked"})
@@ -453,8 +421,7 @@
     }
     
     MissedUpdatesRequest updatesRequest = missedUpdatesFinder.find(
-        otherVersions, sreq.shards[0],
-        () -> core.getSolrConfig().useRangeVersionsForPeerSync && canHandleVersionRanges(sreq.shards[0]));
+        otherVersions, sreq.shards[0]);
 
     if (updatesRequest == MissedUpdatesRequest.ALREADY_IN_SYNC) {
       return true;
@@ -717,16 +684,12 @@
   }
 
   static abstract class MissedUpdatesFinderBase {
-    private Set<Long> ourUpdateSet;
-    private Set<Long> requestedUpdateSet = new HashSet<>();
-
     long ourLowThreshold;  // 20th percentile
     List<Long> ourUpdates;
 
     MissedUpdatesFinderBase(List<Long> ourUpdates, long ourLowThreshold) {
       assert sorted(ourUpdates);
       this.ourUpdates = ourUpdates;
-      this.ourUpdateSet = new HashSet<>(ourUpdates);
       this.ourLowThreshold = ourLowThreshold;
     }
 
@@ -783,26 +746,6 @@
       String rangesToRequestStr = rangesToRequest.stream().collect(Collectors.joining(","));
       return MissedUpdatesRequest.of(rangesToRequestStr, totalRequestedVersions);
     }
-
-    MissedUpdatesRequest handleIndividualVersions(List<Long> otherVersions, boolean completeList) {
-      List<Long> toRequest = new ArrayList<>();
-      for (Long otherVersion : otherVersions) {
-        // stop when the entries get old enough that reorders may lead us to see updates we don't need
-        if (!completeList && Math.abs(otherVersion) < ourLowThreshold) break;
-
-        if (ourUpdateSet.contains(otherVersion) || requestedUpdateSet.contains(otherVersion)) {
-          // we either have this update, or already requested it
-          // TODO: what if the shard we previously requested this from returns failure (because it goes
-          // down)
-          continue;
-        }
-
-        toRequest.add(otherVersion);
-        requestedUpdateSet.add(otherVersion);
-      }
-
-      return MissedUpdatesRequest.of(StrUtils.join(toRequest, ','), toRequest.size());
-    }
   }
 
   /**
@@ -824,7 +767,7 @@
       this.nUpdates = nUpdates;
     }
 
-    public MissedUpdatesRequest find(List<Long> otherVersions, Object updateFrom, Supplier<Boolean> canHandleVersionRanges) {
+    public MissedUpdatesRequest find(List<Long> otherVersions, Object updateFrom) {
       otherVersions.sort(absComparator);
       if (debug) {
         log.debug("{} sorted versions from {} = {}", logPrefix, otherVersions, updateFrom);
@@ -858,12 +801,7 @@
 
       boolean completeList = otherVersions.size() < nUpdates;
 
-      MissedUpdatesRequest updatesRequest;
-      if (canHandleVersionRanges.get()) {
-        updatesRequest = handleVersionsWithRanges(otherVersions, completeList);
-      } else {
-        updatesRequest = handleIndividualVersions(otherVersions, completeList);
-      }
+      MissedUpdatesRequest updatesRequest = handleVersionsWithRanges(otherVersions, completeList);
 
       if (updatesRequest.totalRequestedUpdates > nUpdates) {
         log.info("{} PeerSync will fail because number of missed updates is more than:{}", logPrefix, nUpdates);
diff --git a/solr/core/src/java/org/apache/solr/update/PeerSyncWithLeader.java b/solr/core/src/java/org/apache/solr/update/PeerSyncWithLeader.java
index 5e81b9d..0414fd3 100644
--- a/solr/core/src/java/org/apache/solr/update/PeerSyncWithLeader.java
+++ b/solr/core/src/java/org/apache/solr/update/PeerSyncWithLeader.java
@@ -21,7 +21,6 @@
 import java.lang.invoke.MethodHandles;
 import java.util.List;
 import java.util.Set;
-import java.util.function.Supplier;
 
 import com.codahale.metrics.Counter;
 import com.codahale.metrics.Timer;
@@ -238,7 +237,7 @@
       return MissedUpdatesRequest.UNABLE_TO_SYNC;
     }
 
-    MissedUpdatesRequest updatesRequest = missedUpdatesFinder.find(otherVersions, leaderUrl, () -> core.getSolrConfig().useRangeVersionsForPeerSync && canHandleVersionRanges());
+    MissedUpdatesRequest updatesRequest = missedUpdatesFinder.find(otherVersions, leaderUrl);
     if (updatesRequest == MissedUpdatesRequest.EMPTY) {
       if (doFingerprint) return MissedUpdatesRequest.UNABLE_TO_SYNC;
       return MissedUpdatesRequest.ALREADY_IN_SYNC;
@@ -313,19 +312,6 @@
     return true;
   }
 
-  // determine if leader can handle version ranges
-  private boolean canHandleVersionRanges() {
-    ModifiableSolrParams params = new ModifiableSolrParams();
-    params.set("qt", "/get");
-    params.set(DISTRIB, false);
-    params.set("checkCanHandleVersionRanges", false);
-
-    NamedList<Object> rsp = request(params, "Failed on determine if leader can handle version ranges");
-    Boolean canHandleVersionRanges = rsp.getBooleanArg("canHandleVersionRanges");
-
-    return canHandleVersionRanges != null && canHandleVersionRanges;
-  }
-
   private NamedList<Object> request(ModifiableSolrParams params, String onFail) {
     try {
       QueryResponse rsp = new QueryRequest(params, SolrRequest.METHOD.POST).process(clientToLeader);
@@ -404,7 +390,7 @@
       this.nUpdates = nUpdates;
     }
 
-    public MissedUpdatesRequest find(List<Long> leaderVersions, Object updateFrom, Supplier<Boolean> canHandleVersionRanges) {
+    public MissedUpdatesRequest find(List<Long> leaderVersions, Object updateFrom) {
       leaderVersions.sort(absComparator);
       log.debug("{} sorted versions from {} = {}", logPrefix, leaderVersions, updateFrom);
 
@@ -418,12 +404,7 @@
       // In that case, we will fail on compute fingerprint with the current leader and start segments replication
 
       boolean completeList = leaderVersions.size() < nUpdates;
-      MissedUpdatesRequest updatesRequest;
-      if (canHandleVersionRanges.get()) {
-        updatesRequest = handleVersionsWithRanges(leaderVersions, completeList);
-      } else {
-        updatesRequest = handleIndividualVersions(leaderVersions, completeList);
-      }
+      MissedUpdatesRequest updatesRequest = handleVersionsWithRanges(leaderVersions, completeList);
 
       if (updatesRequest.totalRequestedUpdates > nUpdates) {
         log.info("{} PeerSync will fail because number of missed updates is more than:{}", logPrefix, nUpdates);
diff --git a/solr/core/src/java/org/apache/solr/update/SolrCoreState.java b/solr/core/src/java/org/apache/solr/update/SolrCoreState.java
index eddd5b7..c8f61d5 100644
--- a/solr/core/src/java/org/apache/solr/update/SolrCoreState.java
+++ b/solr/core/src/java/org/apache/solr/update/SolrCoreState.java
@@ -18,8 +18,6 @@
 
 import java.io.IOException;
 import java.lang.invoke.MethodHandles;
-import java.util.concurrent.Callable;
-import java.util.concurrent.Future;
 import java.util.concurrent.locks.Lock;
 
 import org.apache.lucene.index.IndexWriter;
@@ -186,21 +184,6 @@
 
   public abstract Lock getRecoveryLock();
 
-  // These are needed to properly synchronize the bootstrapping when the
-  // in the target DC require a full sync.
-  public abstract boolean getCdcrBootstrapRunning();
-
-  public abstract void setCdcrBootstrapRunning(boolean cdcrRunning);
-
-  public abstract Future<Boolean> getCdcrBootstrapFuture();
-
-  public abstract void setCdcrBootstrapFuture(Future<Boolean> cdcrBootstrapFuture);
-
-  @SuppressWarnings("rawtypes")
-  public abstract Callable getCdcrBootstrapCallable();
-
-  public abstract void setCdcrBootstrapCallable(@SuppressWarnings("rawtypes") Callable cdcrBootstrapCallable);
-
   public Throwable getTragicException() throws IOException {
     RefCounted<IndexWriter> ref = getIndexWriter(null);
     if (ref == null) return null;
diff --git a/solr/core/src/java/org/apache/solr/update/SolrIndexConfig.java b/solr/core/src/java/org/apache/solr/update/SolrIndexConfig.java
index e189ad1..a364757 100644
--- a/solr/core/src/java/org/apache/solr/update/SolrIndexConfig.java
+++ b/solr/core/src/java/org/apache/solr/update/SolrIndexConfig.java
@@ -68,6 +68,19 @@
 
   public final double ramBufferSizeMB;
   public final int ramPerThreadHardLimitMB;
+  /**
+   * <p>
+   * When using a custom merge policy that allows triggering synchronous merges on commit
+   * (see {@link MergePolicy#findFullFlushMerges(org.apache.lucene.index.MergeTrigger, org.apache.lucene.index.SegmentInfos, org.apache.lucene.index.MergePolicy.MergeContext)}),
+   * a timeout (in milliseconds) can be set for those merges to finish. Use {@code <maxCommitMergeWaitTime>1000</maxCommitMergeWaitTime>} in the {@code <indexConfig>} section.
+   * See {@link IndexWriterConfig#setMaxFullFlushMergeWaitMillis(long)}.
+   * </p>
+   * <p>
+   * Note that as of Solr 8.6, no {@code MergePolicy} shipped with Lucene/Solr makes use of
+   * {@code MergePolicy.findFullFlushMerges}, which means this setting has no effect unless a custom {@code MergePolicy} is used.
+   * </p>
+   */
+  public final int maxCommitMergeWaitMillis;
 
   public final int writeLockTimeout;
   public final String lockType;
@@ -87,6 +100,7 @@
     maxBufferedDocs = -1;
     ramBufferSizeMB = 100;
     ramPerThreadHardLimitMB = -1;
+    maxCommitMergeWaitMillis = -1;
     writeLockTimeout = -1;
     lockType = DirectoryFactory.LOCK_TYPE_NATIVE;
     mergePolicyFactoryInfo = null;
@@ -129,8 +143,9 @@
         true);
 
     useCompoundFile = solrConfig.getBool(prefix+"/useCompoundFile", def.useCompoundFile);
-    maxBufferedDocs=solrConfig.getInt(prefix+"/maxBufferedDocs",def.maxBufferedDocs);
+    maxBufferedDocs = solrConfig.getInt(prefix+"/maxBufferedDocs", def.maxBufferedDocs);
     ramBufferSizeMB = solrConfig.getDouble(prefix+"/ramBufferSizeMB", def.ramBufferSizeMB);
+    maxCommitMergeWaitMillis = solrConfig.getInt(prefix+"/maxCommitMergeWaitTime", def.maxCommitMergeWaitMillis);
 
     // how do we validate the value??
     ramPerThreadHardLimitMB = solrConfig.getInt(prefix+"/ramPerThreadHardLimitMB", def.ramPerThreadHardLimitMB);
@@ -185,6 +200,7 @@
         "maxBufferedDocs", maxBufferedDocs,
         "ramBufferSizeMB", ramBufferSizeMB,
         "ramPerThreadHardLimitMB", ramPerThreadHardLimitMB,
+        "maxCommitMergeWaitTime", maxCommitMergeWaitMillis,
         "writeLockTimeout", writeLockTimeout,
         "lockType", lockType,
         "infoStreamEnabled", infoStream != InfoStream.NO_OUTPUT);
@@ -230,6 +246,10 @@
     if (ramPerThreadHardLimitMB != -1) {
       iwc.setRAMPerThreadHardLimitMB(ramPerThreadHardLimitMB);
     }
+    
+    if (maxCommitMergeWaitMillis > 0) {
+      iwc.setMaxFullFlushMergeWaitMillis(maxCommitMergeWaitMillis);
+    }
 
     iwc.setSimilarity(schema.getSimilarity());
     MergePolicy mergePolicy = buildMergePolicy(core.getResourceLoader(), schema);
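The hunk above wires the new maxCommitMergeWaitMillis setting through to IndexWriterConfig.setMaxFullFlushMergeWaitMillis, but only when the value is positive. A minimal solrconfig.xml sketch of the setting described in the javadoc (the 1000 ms value is just an example):

    <indexConfig>
      <!-- Wait up to 1000 ms for merges a custom MergePolicy triggers on commit.
           No MergePolicy shipped as of Solr 8.6 implements findFullFlushMerges,
           so this is a no-op without a custom policy. -->
      <maxCommitMergeWaitTime>1000</maxCommitMergeWaitTime>
    </indexConfig>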
diff --git a/solr/core/src/java/org/apache/solr/update/UpdateLog.java b/solr/core/src/java/org/apache/solr/update/UpdateLog.java
index 85ee40a..b928878 100644
--- a/solr/core/src/java/org/apache/solr/update/UpdateLog.java
+++ b/solr/core/src/java/org/apache/solr/update/UpdateLog.java
@@ -1517,8 +1517,7 @@
                   update.version = version;
 
                   if (oper == UpdateLog.UPDATE_INPLACE) {
-                    if ((update.log instanceof CdcrTransactionLog && entry.size() == 6) ||
-                        (!(update.log instanceof CdcrTransactionLog) && entry.size() == 5)) {
+                    if (entry.size() == 5) {
                       update.previousVersion = (Long) entry.get(UpdateLog.PREV_VERSION_IDX);
                     }
                   }
diff --git a/solr/core/src/java/org/apache/solr/update/processor/CdcrUpdateProcessor.java b/solr/core/src/java/org/apache/solr/update/processor/CdcrUpdateProcessor.java
deleted file mode 100644
index 180784a..0000000
--- a/solr/core/src/java/org/apache/solr/update/processor/CdcrUpdateProcessor.java
+++ /dev/null
@@ -1,132 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.solr.update.processor;
-
-import java.io.IOException;
-import java.lang.invoke.MethodHandles;
-
-import org.apache.solr.common.params.CommonParams;
-import org.apache.solr.common.params.ModifiableSolrParams;
-import org.apache.solr.common.params.SolrParams;
-import org.apache.solr.request.SolrQueryRequest;
-import org.apache.solr.response.SolrQueryResponse;
-import org.apache.solr.update.AddUpdateCommand;
-import org.apache.solr.update.DeleteUpdateCommand;
-import org.apache.solr.update.UpdateCommand;
-import org.slf4j.Logger;
-import org.slf4j.LoggerFactory;
-
-/**
- * <p>
- * Extends {@link org.apache.solr.update.processor.DistributedUpdateProcessor} to force peer sync logic
- * for every updates. This ensures that the version parameter sent by the source cluster is kept
- * by the target cluster.
- * </p>
- * @deprecated since 8.6
- */
-@Deprecated(since = "8.6")
-public class CdcrUpdateProcessor extends DistributedZkUpdateProcessor {
-
-  public static final String CDCR_UPDATE = "cdcr.update";
-
-  private static final Logger log = LoggerFactory.getLogger(MethodHandles.lookup().lookupClass());
-
-  public CdcrUpdateProcessor(SolrQueryRequest req, SolrQueryResponse rsp, UpdateRequestProcessor next) {
-    super(req, rsp, next);
-  }
-
-  @Override
-  protected boolean versionAdd(AddUpdateCommand cmd) throws IOException {
-    /*
-    temporarily set the PEER_SYNC flag so that DistributedUpdateProcessor.versionAdd doesn't execute leader logic
-    but the else part of that if. That way version remains preserved.
-
-    we cannot set the flag for the whole processAdd method because DistributedUpdateProcessor.setupRequest() would set
-    isLeader to false which wouldn't work
-     */
-    if (cmd.getReq().getParams().get(CDCR_UPDATE) != null) {
-      cmd.setFlags(cmd.getFlags() | UpdateCommand.PEER_SYNC); // we need super.versionAdd() to set leaderLogic to false
-    }
-
-    boolean result = super.versionAdd(cmd);
-
-    // unset the flag to avoid unintended consequences down the chain
-    if (cmd.getReq().getParams().get(CDCR_UPDATE) != null) {
-      cmd.setFlags(cmd.getFlags() & ~UpdateCommand.PEER_SYNC);
-    }
-
-    return result;
-  }
-
-  @Override
-  protected boolean versionDelete(DeleteUpdateCommand cmd) throws IOException {
-    /*
-    temporarily set the PEER_SYNC flag so that DistributedUpdateProcessor.deleteAdd doesn't execute leader logic
-    but the else part of that if. That way version remains preserved.
-
-    we cannot set the flag for the whole processDelete method because DistributedUpdateProcessor.setupRequest() would set
-    isLeader to false which wouldn't work
-     */
-    if (cmd.getReq().getParams().get(CDCR_UPDATE) != null) {
-      cmd.setFlags(cmd.getFlags() | UpdateCommand.PEER_SYNC); // we need super.versionAdd() to set leaderLogic to false
-    }
-
-    boolean result = super.versionDelete(cmd);
-
-    // unset the flag to avoid unintended consequences down the chain
-    if (cmd.getReq().getParams().get(CDCR_UPDATE) != null) {
-      cmd.setFlags(cmd.getFlags() & ~UpdateCommand.PEER_SYNC);
-    }
-
-    return result;
-  }
-
-  protected ModifiableSolrParams filterParams(SolrParams params) {
-    ModifiableSolrParams result = super.filterParams(params);
-    if (params.get(CDCR_UPDATE) != null) {
-      result.set(CDCR_UPDATE, "");
-//      if (params.get(DistributedUpdateProcessor.VERSION_FIELD) == null) {
-//        log.warn("+++ cdcr.update but no version field, params are: " + params);
-//      } else {
-//        log.info("+++ cdcr.update version present, params are: " + params);
-//      }
-      result.set(CommonParams.VERSION_FIELD, params.get(CommonParams.VERSION_FIELD));
-    }
-
-    return result;
-  }
-
-  @Override
-  protected void versionDeleteByQuery(DeleteUpdateCommand cmd) throws IOException {
-    /*
-    temporarily set the PEER_SYNC flag so that DistributedUpdateProcessor.versionDeleteByQuery doesn't execute leader logic
-    That way version remains preserved.
-
-     */
-    if (cmd.getReq().getParams().get(CDCR_UPDATE) != null) {
-      cmd.setFlags(cmd.getFlags() | UpdateCommand.PEER_SYNC); // we need super.versionDeleteByQuery() to set leaderLogic to false
-    }
-
-    super.versionDeleteByQuery(cmd);
-
-    // unset the flag to avoid unintended consequences down the chain
-    if (cmd.getReq().getParams().get(CDCR_UPDATE) != null) {
-      cmd.setFlags(cmd.getFlags() & ~UpdateCommand.PEER_SYNC);
-    }
-  }
-}
-
diff --git a/solr/core/src/java/org/apache/solr/update/processor/CdcrUpdateProcessorFactory.java b/solr/core/src/java/org/apache/solr/update/processor/CdcrUpdateProcessorFactory.java
deleted file mode 100644
index 08cec4f..0000000
--- a/solr/core/src/java/org/apache/solr/update/processor/CdcrUpdateProcessorFactory.java
+++ /dev/null
@@ -1,46 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.solr.update.processor;
-
-import org.apache.solr.common.util.NamedList;
-import org.apache.solr.request.SolrQueryRequest;
-import org.apache.solr.response.SolrQueryResponse;
-
-/**
- * Factory for {@link org.apache.solr.update.processor.CdcrUpdateProcessor}.
- *
- * @see org.apache.solr.update.processor.CdcrUpdateProcessor
- * @since 6.0.0
- */
-public class CdcrUpdateProcessorFactory
-    extends UpdateRequestProcessorFactory
-    implements DistributingUpdateProcessorFactory {
-
-  @Override
-  public void init(@SuppressWarnings({"rawtypes"})NamedList args) {
-
-  }
-
-  @Override
-  public CdcrUpdateProcessor getInstance(SolrQueryRequest req,
-                                         SolrQueryResponse rsp, UpdateRequestProcessor next) {
-
-    return new CdcrUpdateProcessor(req, rsp, next);
-  }
-
-}
-
diff --git a/solr/core/src/java/org/apache/solr/util/PackageTool.java b/solr/core/src/java/org/apache/solr/util/PackageTool.java
index b92f21e..90d13a1 100644
--- a/solr/core/src/java/org/apache/solr/util/PackageTool.java
+++ b/solr/core/src/java/org/apache/solr/util/PackageTool.java
@@ -176,6 +176,16 @@
                 }
                 break;
               }
+              case "uninstall": {
+                Pair<String, String> parsedVersion = parsePackageVersion(cli.getArgList().get(1).toString());
+                if (parsedVersion.second() == null) {
+                  throw new SolrException(ErrorCode.BAD_REQUEST, "Package name and version are both required. Actual: " + cli.getArgList().get(1));
+                }
+                String packageName = parsedVersion.first();
+                String version = parsedVersion.second();
+                packageManager.uninstall(packageName, version);
+                break;
+              }
               case "help":
               case "usage":
                 print("Package Manager\n---------------");
diff --git a/solr/core/src/java/org/apache/solr/util/SolrCLI.java b/solr/core/src/java/org/apache/solr/util/SolrCLI.java
index 19de535..5e37ccb 100755
--- a/solr/core/src/java/org/apache/solr/util/SolrCLI.java
+++ b/solr/core/src/java/org/apache/solr/util/SolrCLI.java
@@ -2637,7 +2637,7 @@
               .argName("NAME")
               .hasArg()
               .required(true)
-              .desc("Name of the example to launch, one of: cloud, techproducts, dih, schemaless")
+              .desc("Name of the example to launch, one of: cloud, techproducts, schemaless")
               .longOpt("example")
               .build(),
           Option.builder("script")
@@ -2753,34 +2753,14 @@
       String exampleType = cli.getOptionValue("example");
       if ("cloud".equals(exampleType)) {
         runCloudExample(cli);
-      } else if ("dih".equals(exampleType)) {
-        runDihExample(cli);
       } else if ("techproducts".equals(exampleType) || "schemaless".equals(exampleType)) {
         runExample(cli, exampleType);
       } else {
         throw new IllegalArgumentException("Unsupported example "+exampleType+
-            "! Please choose one of: cloud, dih, schemaless, or techproducts");
+            "! Please choose one of: cloud, schemaless, or techproducts");
       }
     }
 
-    protected void runDihExample(CommandLine cli) throws Exception {
-      File dihSolrHome = new File(exampleDir, "example-DIH/solr");
-      if (!dihSolrHome.isDirectory()) {
-        dihSolrHome = new File(serverDir.getParentFile(), "example/example-DIH/solr");
-        if (!dihSolrHome.isDirectory()) {
-          throw new Exception("example/example-DIH/solr directory not found");
-        }
-      }
-
-      boolean isCloudMode = cli.hasOption('c');
-      String zkHost = cli.getOptionValue('z');
-      int port = Integer.parseInt(cli.getOptionValue('p', "8983"));
-
-      Map<String,Object> nodeStatus = startSolr(dihSolrHome, isCloudMode, cli, port, zkHost, 30);
-      String solrUrl = (String)nodeStatus.get("baseUrl");
-      echo("\nSolr dih example launched successfully. Direct your Web browser to "+solrUrl+" to visit the Solr Admin UI");
-    }
-
     protected void runExample(CommandLine cli, String exampleName) throws Exception {
       File exDir = setupExampleDir(serverDir, exampleDir, exampleName);
       String collectionName = "schemaless".equals(exampleName) ? "gettingstarted" : exampleName;
diff --git a/solr/core/src/java/org/apache/solr/util/TestInjection.java b/solr/core/src/java/org/apache/solr/util/TestInjection.java
index 651d26c1..95696bc 100644
--- a/solr/core/src/java/org/apache/solr/util/TestInjection.java
+++ b/solr/core/src/java/org/apache/solr/util/TestInjection.java
@@ -143,7 +143,7 @@
 
   private volatile static AtomicInteger countPrepRecoveryOpPauseForever = new AtomicInteger(0);
 
-  public volatile static Integer delayBeforeSlaveCommitRefresh=null;
+  public volatile static Integer delayBeforeFollowerCommitRefresh=null;
 
   public volatile static Integer delayInExecutePlanAction=null;
 
@@ -185,7 +185,7 @@
     countPrepRecoveryOpPauseForever = new AtomicInteger(0);
     failIndexFingerprintRequests = null;
     wrongIndexFingerprint = null;
-    delayBeforeSlaveCommitRefresh = null;
+    delayBeforeFollowerCommitRefresh = null;
     delayInExecutePlanAction = null;
     failInExecutePlanAction = false;
     skipIndexWriterCommitOnClose = false;
@@ -521,11 +521,11 @@
     return new Pair<>(Boolean.parseBoolean(val), Integer.parseInt(percent));
   }
 
-  public static boolean injectDelayBeforeSlaveCommitRefresh() {
-    if (delayBeforeSlaveCommitRefresh!=null) {
+  public static boolean injectDelayBeforeFollowerCommitRefresh() {
+    if (delayBeforeFollowerCommitRefresh!=null) {
       try {
-        log.info("Pausing IndexFetcher for {}ms", delayBeforeSlaveCommitRefresh);
-        Thread.sleep(delayBeforeSlaveCommitRefresh);
+        log.info("Pausing IndexFetcher for {}ms", delayBeforeFollowerCommitRefresh);
+        Thread.sleep(delayBeforeFollowerCommitRefresh);
       } catch (InterruptedException e) {
         Thread.currentThread().interrupt();
       }
diff --git a/solr/core/src/java/org/apache/solr/util/circuitbreaker/CPUCircuitBreaker.java b/solr/core/src/java/org/apache/solr/util/circuitbreaker/CPUCircuitBreaker.java
new file mode 100644
index 0000000..ffef4e5
--- /dev/null
+++ b/solr/core/src/java/org/apache/solr/util/circuitbreaker/CPUCircuitBreaker.java
@@ -0,0 +1,115 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.solr.util.circuitbreaker;
+
+import java.lang.invoke.MethodHandles;
+import java.lang.management.ManagementFactory;
+import java.lang.management.OperatingSystemMXBean;
+
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+/**
+ * <p>
+ * Tracks current CPU usage and triggers if the specified threshold is breached.
+ *
+ * This circuit breaker reads the average CPU load over the last minute and uses
+ * that value to make its decision. It depends on OperatingSystemMXBean, which does
+ * not allow configuring the interval over which the load data is collected.
+ * //TODO: Use Codahale Meter to calculate the value locally.
+ * </p>
+ *
+ * <p>
+ * The mode to use and the trigger threshold are defined in solrconfig.xml.
+ * </p>
+ */
+public class CPUCircuitBreaker extends CircuitBreaker {
+  private static final Logger log = LoggerFactory.getLogger(MethodHandles.lookup().lookupClass());
+  private static final OperatingSystemMXBean operatingSystemMXBean = ManagementFactory.getOperatingSystemMXBean();
+
+  private final boolean enabled;
+  private final double cpuUsageThreshold;
+
+  // Assumption -- the value of these parameters will be set correctly before invoking getDebugInfo()
+  private static final ThreadLocal<Double> seenCPUUsage = ThreadLocal.withInitial(() -> 0.0);
+
+  private static final ThreadLocal<Double> allowedCPUUsage = ThreadLocal.withInitial(() -> 0.0);
+
+  public CPUCircuitBreaker(CircuitBreakerConfig config) {
+    super(config);
+
+    this.enabled = config.getCpuCBEnabled();
+    this.cpuUsageThreshold = config.getCpuCBThreshold();
+  }
+
+  @Override
+  public boolean isTripped() {
+    if (!isEnabled()) {
+      return false;
+    }
+
+    if (!enabled) {
+      return false;
+    }
+
+    double localAllowedCPUUsage = getCpuUsageThreshold();
+    double localSeenCPUUsage = calculateLiveCPUUsage();
+
+    if (localSeenCPUUsage < 0) {
+      if (log.isWarnEnabled()) {
+        String msg = "Unable to get CPU usage";
+
+        log.warn(msg);
+      }
+
+      return false;
+    }
+
+    allowedCPUUsage.set(localAllowedCPUUsage);
+    seenCPUUsage.set(localSeenCPUUsage);
+
+    return (localSeenCPUUsage >= localAllowedCPUUsage);
+  }
+
+  @Override
+  public String getDebugInfo() {
+    if (seenCPUUsage.get() == 0.0 || allowedCPUUsage.get() == 0.0) {
+      log.warn("CPUCircuitBreaker's monitored values (seenCPUUsage, allowedCPUUsage) not set");
+    }
+
+    return "seenCPUUsage=" + seenCPUUsage.get() + " allowedCPUUsage=" + allowedCPUUsage.get();
+  }
+
+  @Override
+  public String getErrorMessage() {
+    return "CPU Circuit Breaker triggered as seen CPU usage is above allowed threshold." +
+        "Seen CPU usage " + seenCPUUsage.get() + " and allocated threshold " +
+        allowedCPUUsage.get();
+  }
+
+  public double getCpuUsageThreshold() {
+    return cpuUsageThreshold;
+  }
+
+  protected double calculateLiveCPUUsage() {
+    return operatingSystemMXBean.getSystemLoadAverage();
+  }
+}
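CPUCircuitBreaker's calculateLiveCPUUsage() delegates to the JVM's OperatingSystemMXBean. A standalone sketch of the same probe, showing why isTripped() treats negative readings as "no data" (getSystemLoadAverage returns a negative value on platforms where the load average is unavailable):

    import java.lang.management.ManagementFactory;

    public class LoadAverageProbe {
      public static void main(String[] args) {
        // One-minute system load average, the same value CPUCircuitBreaker compares
        // against its cpuThreshold
        double load = ManagementFactory.getOperatingSystemMXBean().getSystemLoadAverage();
        if (load < 0) {
          System.out.println("Load average not available on this platform");
        } else {
          System.out.println("1-minute system load average: " + load);
        }
      }
    }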
diff --git a/solr/core/src/java/org/apache/solr/util/circuitbreaker/CircuitBreaker.java b/solr/core/src/java/org/apache/solr/util/circuitbreaker/CircuitBreaker.java
index f56f81e..90fda7a 100644
--- a/solr/core/src/java/org/apache/solr/util/circuitbreaker/CircuitBreaker.java
+++ b/solr/core/src/java/org/apache/solr/util/circuitbreaker/CircuitBreaker.java
@@ -17,8 +17,6 @@
 
 package org.apache.solr.util.circuitbreaker;
 
-import org.apache.solr.core.SolrConfig;
-
 /**
  * Default class to define circuit breakers for Solr.
  * <p>
@@ -27,21 +25,24 @@
  *  2. Use the circuit breaker in a specific code path(s).
  *
 * TODO: This class should be grown as the scope of circuit breakers grows.
+ *
+ * The class and its derivatives raise a standard exception when a circuit breaker is triggered.
+ * We should make it into a dedicated exception (https://issues.apache.org/jira/browse/SOLR-14755).
  * </p>
  */
 public abstract class CircuitBreaker {
   public static final String NAME = "circuitbreaker";
 
-  protected final SolrConfig solrConfig;
+  protected final CircuitBreakerConfig config;
 
-  public CircuitBreaker(SolrConfig solrConfig) {
-    this.solrConfig = solrConfig;
+  public CircuitBreaker(CircuitBreakerConfig config) {
+    this.config = config;
   }
 
   // Global config for all circuit breakers. For specific circuit breaker configs, define
   // your own config.
   protected boolean isEnabled() {
-    return solrConfig.useCircuitBreakers;
+    return config.isEnabled();
   }
 
   /**
@@ -53,4 +54,46 @@
    * Get debug useful info.
    */
   public abstract String getDebugInfo();
+
+  /**
+   * Get error message when the circuit breaker triggers
+   */
+  public abstract String getErrorMessage();
+
+  public static class CircuitBreakerConfig {
+    private final boolean enabled;
+    private final boolean memCBEnabled;
+    private final int memCBThreshold;
+    private final boolean cpuCBEnabled;
+    private final int cpuCBThreshold;
+
+    public CircuitBreakerConfig(final boolean enabled, final boolean memCBEnabled, final int memCBThreshold,
+                                final boolean cpuCBEnabled, final int cpuCBThreshold) {
+      this.enabled = enabled;
+      this.memCBEnabled = memCBEnabled;
+      this.memCBThreshold = memCBThreshold;
+      this.cpuCBEnabled = cpuCBEnabled;
+      this.cpuCBThreshold = cpuCBThreshold;
+    }
+
+    public boolean isEnabled() {
+      return enabled;
+    }
+
+    public boolean getMemCBEnabled() {
+      return memCBEnabled;
+    }
+
+    public int getMemCBThreshold() {
+      return memCBThreshold;
+    }
+
+    public boolean getCpuCBEnabled() {
+      return cpuCBEnabled;
+    }
+
+    public int getCpuCBThreshold() {
+      return cpuCBThreshold;
+    }
+  }
 }
\ No newline at end of file
diff --git a/solr/core/src/java/org/apache/solr/util/circuitbreaker/CircuitBreakerManager.java b/solr/core/src/java/org/apache/solr/util/circuitbreaker/CircuitBreakerManager.java
index 584b933..2f6f93d 100644
--- a/solr/core/src/java/org/apache/solr/util/circuitbreaker/CircuitBreakerManager.java
+++ b/solr/core/src/java/org/apache/solr/util/circuitbreaker/CircuitBreakerManager.java
@@ -20,7 +20,10 @@
 import java.util.ArrayList;
 import java.util.List;
 
-import org.apache.solr.core.SolrConfig;
+import com.google.common.annotations.VisibleForTesting;
+import org.apache.solr.common.util.NamedList;
+import org.apache.solr.core.PluginInfo;
+import org.apache.solr.util.plugin.PluginInfoInitialized;
 
 /**
  * Manages all registered circuit breaker instances. Responsible for a holistic view
@@ -36,7 +39,7 @@
  * NOTE: The current way of registering new default circuit breakers is minimal and not a long term
  * solution. There will be a follow up with a SIP for a schema API design.
  */
-public class CircuitBreakerManager {
+public class CircuitBreakerManager implements PluginInfoInitialized {
   // Class private to potentially allow "family" of circuit breakers to be enabled or disabled
   private final boolean enableCircuitBreakerManager;
 
@@ -46,6 +49,18 @@
     this.enableCircuitBreakerManager = enableCircuitBreakerManager;
   }
 
+  @Override
+  public void init(PluginInfo pluginInfo) {
+    CircuitBreaker.CircuitBreakerConfig circuitBreakerConfig = buildCBConfig(pluginInfo);
+
+    // Install the default circuit breakers
+    CircuitBreaker memoryCircuitBreaker = new MemoryCircuitBreaker(circuitBreakerConfig);
+    CircuitBreaker cpuCircuitBreaker = new CPUCircuitBreaker(circuitBreakerConfig);
+
+    register(memoryCircuitBreaker);
+    register(cpuCircuitBreaker);
+  }
+
   public void register(CircuitBreaker circuitBreaker) {
     circuitBreakerList.add(circuitBreaker);
   }
@@ -107,9 +122,7 @@
     StringBuilder sb = new StringBuilder();
 
     for (CircuitBreaker circuitBreaker : circuitBreakerList) {
-      sb.append(circuitBreaker.getClass().getName());
-      sb.append(" ");
-      sb.append(circuitBreaker.getDebugInfo());
+      sb.append(circuitBreaker.getErrorMessage());
       sb.append("\n");
     }
 
@@ -122,13 +135,48 @@
    *
    * Any default circuit breakers should be registered here.
    */
-  public static CircuitBreakerManager build(SolrConfig solrConfig) {
-    CircuitBreakerManager circuitBreakerManager = new CircuitBreakerManager(solrConfig.useCircuitBreakers);
+  @SuppressWarnings({"rawtypes"})
+  public static CircuitBreakerManager build(PluginInfo pluginInfo) {
+    boolean enabled = pluginInfo == null ? false : Boolean.parseBoolean(pluginInfo.attributes.getOrDefault("enabled", "false"));
+    CircuitBreakerManager circuitBreakerManager = new CircuitBreakerManager(enabled);
 
-    // Install the default circuit breakers
-    CircuitBreaker memoryCircuitBreaker = new MemoryCircuitBreaker(solrConfig);
-    circuitBreakerManager.register(memoryCircuitBreaker);
+    circuitBreakerManager.init(pluginInfo);
 
     return circuitBreakerManager;
   }
+
+  @VisibleForTesting
+  @SuppressWarnings({"rawtypes"})
+  public static CircuitBreaker.CircuitBreakerConfig buildCBConfig(PluginInfo pluginInfo) {
+    boolean enabled = false;
+    boolean cpuCBEnabled = false;
+    boolean memCBEnabled = false;
+    int memCBThreshold = 100;
+    int cpuCBThreshold = 100;
+
+    if (pluginInfo != null) {
+      NamedList args = pluginInfo.initArgs;
+
+      enabled = Boolean.parseBoolean(pluginInfo.attributes.getOrDefault("enabled", "false"));
+
+      if (args != null) {
+        cpuCBEnabled = Boolean.parseBoolean(args._getStr("cpuEnabled", "false"));
+        memCBEnabled = Boolean.parseBoolean(args._getStr("memEnabled", "false"));
+        memCBThreshold = Integer.parseInt(args._getStr("memThreshold", "100"));
+        cpuCBThreshold = Integer.parseInt(args._getStr("cpuThreshold", "100"));
+      }
+    }
+
+    return new CircuitBreaker.CircuitBreakerConfig(enabled, memCBEnabled, memCBThreshold, cpuCBEnabled, cpuCBThreshold);
+  }
+
+  public boolean isEnabled() {
+    return enableCircuitBreakerManager;
+  }
+
+  @VisibleForTesting
+  public List<CircuitBreaker> getRegisteredCircuitBreakers() {
+    return circuitBreakerList;
+  }
 }
diff --git a/solr/core/src/java/org/apache/solr/util/circuitbreaker/MemoryCircuitBreaker.java b/solr/core/src/java/org/apache/solr/util/circuitbreaker/MemoryCircuitBreaker.java
index 629d84a..0fc7b01 100644
--- a/solr/core/src/java/org/apache/solr/util/circuitbreaker/MemoryCircuitBreaker.java
+++ b/solr/core/src/java/org/apache/solr/util/circuitbreaker/MemoryCircuitBreaker.java
@@ -21,7 +21,6 @@
 import java.lang.management.ManagementFactory;
 import java.lang.management.MemoryMXBean;
 
-import org.apache.solr.core.SolrConfig;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
@@ -43,14 +42,17 @@
   private static final Logger log = LoggerFactory.getLogger(MethodHandles.lookup().lookupClass());
   private static final MemoryMXBean MEMORY_MX_BEAN = ManagementFactory.getMemoryMXBean();
 
+  private final boolean enabled;
   private final long heapMemoryThreshold;
 
   // Assumption -- the value of these parameters will be set correctly before invoking getDebugInfo()
-  private final ThreadLocal<Long> seenMemory = new ThreadLocal<>();
-  private final ThreadLocal<Long> allowedMemory = new ThreadLocal<>();
+  private static final ThreadLocal<Long> seenMemory = ThreadLocal.withInitial(() -> 0L);
+  private static final ThreadLocal<Long> allowedMemory = ThreadLocal.withInitial(() -> 0L);
 
-  public MemoryCircuitBreaker(SolrConfig solrConfig) {
-    super(solrConfig);
+  public MemoryCircuitBreaker(CircuitBreakerConfig config) {
+    super(config);
+
+    this.enabled = config.getMemCBEnabled();
 
     long currentMaxHeap = MEMORY_MX_BEAN.getHeapMemoryUsage().getMax();
 
@@ -58,7 +60,7 @@
       throw new IllegalArgumentException("Invalid JVM state for the max heap usage");
     }
 
-    int thresholdValueInPercentage = solrConfig.memoryCircuitBreakerThresholdPct;
+    int thresholdValueInPercentage = config.getMemCBThreshold();
     double thresholdInFraction = thresholdValueInPercentage / (double) 100;
     heapMemoryThreshold = (long) (currentMaxHeap * thresholdInFraction);
 
@@ -76,6 +78,10 @@
       return false;
     }
 
+    if (!enabled) {
+      return false;
+    }
+
     long localAllowedMemory = getCurrentMemoryThreshold();
     long localSeenMemory = calculateLiveMemoryUsage();
 
@@ -95,6 +101,13 @@
     return "seenMemory=" + seenMemory.get() + " allowedMemory=" + allowedMemory.get();
   }
 
+  @Override
+  public String getErrorMessage() {
+    return "Memory Circuit Breaker triggered as JVM heap usage values are greater than allocated threshold." +
+        "Seen JVM heap memory usage " + seenMemory.get() + " and allocated threshold " +
+        allowedMemory.get();
+  }
+
   private long getCurrentMemoryThreshold() {
     return heapMemoryThreshold;
   }
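MemoryCircuitBreaker interprets memThreshold as a percentage of the JVM's maximum heap and computes the trip point once in the constructor. A worked example of that arithmetic, assuming a 2 GiB max heap and memThreshold=75:

    // heapMemoryThreshold = currentMaxHeap * (memThreshold / 100.0)
    long currentMaxHeap = 2L * 1024 * 1024 * 1024;   // 2 GiB max heap (example value)
    int thresholdValueInPercentage = 75;             // memThreshold from the plugin config
    long heapMemoryThreshold =
        (long) (currentMaxHeap * (thresholdValueInPercentage / (double) 100));
    // heapMemoryThreshold == 1_610_612_736 bytes (1.5 GiB);
    // isTripped() fires once live heap usage reaches this value.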
diff --git a/solr/core/src/java/org/apache/solr/util/plugin/AbstractPluginLoader.java b/solr/core/src/java/org/apache/solr/util/plugin/AbstractPluginLoader.java
index 568338a..77fd1f7 100644
--- a/solr/core/src/java/org/apache/solr/util/plugin/AbstractPluginLoader.java
+++ b/solr/core/src/java/org/apache/solr/util/plugin/AbstractPluginLoader.java
@@ -23,7 +23,7 @@
 
 import org.apache.solr.common.SolrException;
 import org.apache.solr.common.SolrException.ErrorCode;
-import org.apache.solr.core.SolrClassLoader;
+import org.apache.solr.common.cloud.SolrClassLoader;
 import org.apache.solr.util.DOMUtil;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
diff --git a/solr/core/src/resources/EditableSolrConfigAttributes.json b/solr/core/src/resources/EditableSolrConfigAttributes.json
index c441c30..03bb1b6 100644
--- a/solr/core/src/resources/EditableSolrConfigAttributes.json
+++ b/solr/core/src/resources/EditableSolrConfigAttributes.json
@@ -55,8 +55,6 @@
     "queryResultMaxDocsCached":1,
     "enableLazyFieldLoading":1,
     "boolTofilterOptimizer":1,
-    "useCircuitBreakers":10,
-    "memoryCircuitBreakerThresholdPct":20,
     "maxBooleanClauses":1},
   "jmx":{
     "agentId":0,
@@ -70,7 +68,4 @@
       "enableRemoteStreaming":10,
       "enableStreamBody":10,
       "addHttpRequestToContext":0}},
-  "peerSync":{
-    "useRangeVersions":11
-  }
 }
diff --git a/solr/core/src/resources/ImplicitPlugins.json b/solr/core/src/resources/ImplicitPlugins.json
index 1796477..0cfc009 100644
--- a/solr/core/src/resources/ImplicitPlugins.json
+++ b/solr/core/src/resources/ImplicitPlugins.json
@@ -89,10 +89,6 @@
       "class": "solr.LoggingHandler",
       "useParams":"_ADMIN_LOGGING"
     },
-    "/admin/health": {
-      "class": "solr.HealthCheckHandler",
-      "useParams":"_ADMIN_HEALTH"
-    },
     "/admin/file": {
       "class": "solr.ShowFileRequestHandler",
       "useParams":"_ADMIN_FILE"
diff --git a/solr/core/src/test-files/solr/collection1/conf/schema-booleansimilarity.xml b/solr/core/src/test-files/solr/collection1/conf/schema-booleansimilarity.xml
new file mode 100644
index 0000000..3f7a2c1
--- /dev/null
+++ b/solr/core/src/test-files/solr/collection1/conf/schema-booleansimilarity.xml
@@ -0,0 +1,35 @@
+<?xml version="1.0" encoding="UTF-8" ?>
+<!--
+ Licensed to the Apache Software Foundation (ASF) under one or more
+ contributor license agreements.  See the NOTICE file distributed with
+ this work for additional information regarding copyright ownership.
+ The ASF licenses this file to You under the Apache License, Version 2.0
+ (the "License"); you may not use this file except in compliance with
+ the License.  You may obtain a copy of the License at
+
+     http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+-->
+
+<!-- Test schema file for BooleanSimilarityFactory -->
+
+<schema name="booleansimilarity" version="1.0">
+  <fieldType name="string" class="solr.StrField" omitNorms="true" positionIncrementGap="0"/>
+
+  <fieldType name="text" class="solr.TextField">
+    <analyzer class="org.apache.lucene.analysis.standard.StandardAnalyzer"/>
+    <similarity class="solr.BooleanSimilarityFactory"/>
+  </fieldType>
+
+  <field name="id" type="string" indexed="true" stored="true" multiValued="false" required="false"/>
+  <field name="text" type="text" indexed="true" stored="false"/>
+
+  <uniqueKey>id</uniqueKey>
+
+</schema>
+
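
Note: the new test schema above attaches solr.BooleanSimilarityFactory to a single field type. Lucene's BooleanSimilarity scores matching documents by their query boost alone, ignoring tf/idf and length normalization. If the same behavior were wanted for the whole schema rather than one field type, a top-level declaration is the usual alternative; a sketch, not part of this patch:

  <!-- hypothetical: applies BooleanSimilarity as the schema-wide default -->
  <similarity class="solr.BooleanSimilarityFactory"/>
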
diff --git a/solr/core/src/test-files/solr/collection1/conf/solrconfig-cdcr.xml b/solr/core/src/test-files/solr/collection1/conf/solrconfig-cdcr.xml
deleted file mode 100644
index c6b360c..0000000
--- a/solr/core/src/test-files/solr/collection1/conf/solrconfig-cdcr.xml
+++ /dev/null
@@ -1,77 +0,0 @@
-<?xml version="1.0" ?>
-
-<!--
- Licensed to the Apache Software Foundation (ASF) under one or more
- contributor license agreements.  See the NOTICE file distributed with
- this work for additional information regarding copyright ownership.
- The ASF licenses this file to You under the Apache License, Version 2.0
- (the "License"); you may not use this file except in compliance with
- the License.  You may obtain a copy of the License at
-
-     http://www.apache.org/licenses/LICENSE-2.0
-
- Unless required by applicable law or agreed to in writing, software
- distributed under the License is distributed on an "AS IS" BASIS,
- WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- See the License for the specific language governing permissions and
- limitations under the License.
--->
-
-<config>
-  <jmx/>
-
-  <luceneMatchVersion>${tests.luceneMatchVersion:LATEST}</luceneMatchVersion>
-
-  <directoryFactory name="DirectoryFactory" class="${solr.directoryFactory:solr.RAMDirectoryFactory}">
-    <!-- used to keep RAM reqs down for HdfsDirectoryFactory -->
-    <bool name="solr.hdfs.blockcache.enabled">${solr.hdfs.blockcache.enabled:true}</bool>
-    <int name="solr.hdfs.blockcache.blocksperbank">${solr.hdfs.blockcache.blocksperbank:1024}</int>
-    <str name="solr.hdfs.home">${solr.hdfs.home:}</str>
-    <str name="solr.hdfs.confdir">${solr.hdfs.confdir:}</str>
-    <str name="solr.hdfs.blockcache.global">${solr.hdfs.blockcache.global:false}</str>
-  </directoryFactory>
-
-  <schemaFactory class="ClassicIndexSchemaFactory"/>
-
-  <dataDir>${solr.data.dir:}</dataDir>
-
-  <xi:include href="solrconfig.snippet.randomindexconfig.xml" xmlns:xi="http://www.w3.org/2001/XInclude"/>
-
-  <requestHandler name="/select" class="solr.SearchHandler">
-  </requestHandler>
-
-  <requestHandler name="/update" class="solr.UpdateRequestHandler">
-    <lst name="defaults">
-      <str name="update.chain">cdcr-processor-chain</str>
-    </lst>
-  </requestHandler>
-
-  <updateRequestProcessorChain name="cdcr-processor-chain">
-    <processor class="solr.CdcrUpdateProcessorFactory"/>
-    <processor class="solr.RunUpdateProcessorFactory"/>
-  </updateRequestProcessorChain>
-
-  <requestHandler name="/cdcr" class="solr.CdcrRequestHandler">
-    <lst name="replica">
-      <str name="zkHost">${zkHost}</str>
-      <str name="source">source_collection</str>
-      <str name="target">target_collection</str>
-    </lst>
-    <lst name="replicator">
-      <str name="threadPoolSize">8</str>
-      <str name="schedule">1000</str>
-      <str name="batchSize">64</str>
-    </lst>
-    <lst name="updateLogSynchronizer">
-      <str name="schedule">1000</str>
-    </lst>
-  </requestHandler>
-
-  <updateHandler class="solr.DirectUpdateHandler2">
-    <updateLog class="solr.CdcrUpdateLog">
-      <str name="dir">${solr.ulog.dir:}</str>
-    </updateLog>
-  </updateHandler>
-
-</config>
-
diff --git a/solr/core/src/test-files/solr/collection1/conf/solrconfig-cdcrupdatelog.xml b/solr/core/src/test-files/solr/collection1/conf/solrconfig-cdcrupdatelog.xml
deleted file mode 100644
index 86b2d2b..0000000
--- a/solr/core/src/test-files/solr/collection1/conf/solrconfig-cdcrupdatelog.xml
+++ /dev/null
@@ -1,49 +0,0 @@
-<?xml version="1.0" ?>
-
-<!--
- Licensed to the Apache Software Foundation (ASF) under one or more
- contributor license agreements.  See the NOTICE file distributed with
- this work for additional information regarding copyright ownership.
- The ASF licenses this file to You under the Apache License, Version 2.0
- (the "License"); you may not use this file except in compliance with
- the License.  You may obtain a copy of the License at
-
-     http://www.apache.org/licenses/LICENSE-2.0
-
- Unless required by applicable law or agreed to in writing, software
- distributed under the License is distributed on an "AS IS" BASIS,
- WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- See the License for the specific language governing permissions and
- limitations under the License.
--->
-
-<config>
-  <jmx/>
-
-  <luceneMatchVersion>${tests.luceneMatchVersion:LATEST}</luceneMatchVersion>
-
-  <directoryFactory name="DirectoryFactory" class="${solr.directoryFactory:solr.RAMDirectoryFactory}">
-    <!-- used to keep RAM reqs down for HdfsDirectoryFactory -->
-    <bool name="solr.hdfs.blockcache.enabled">${solr.hdfs.blockcache.enabled:true}</bool>
-    <int name="solr.hdfs.blockcache.blocksperbank">${solr.hdfs.blockcache.blocksperbank:1024}</int>
-    <str name="solr.hdfs.home">${solr.hdfs.home:}</str>
-    <str name="solr.hdfs.confdir">${solr.hdfs.confdir:}</str>
-    <str name="solr.hdfs.blockcache.global">${solr.hdfs.blockcache.global:false}</str>
-  </directoryFactory>
-
-  <schemaFactory class="ClassicIndexSchemaFactory"/>
-
-  <dataDir>${solr.data.dir:}</dataDir>
-
-  <xi:include href="solrconfig.snippet.randomindexconfig.xml" xmlns:xi="http://www.w3.org/2001/XInclude"/>
-
-  <requestHandler name="/select" class="solr.SearchHandler">
-  </requestHandler>
-
-  <updateHandler class="solr.DirectUpdateHandler2">
-    <updateLog class="solr.CdcrUpdateLog">
-      <str name="dir">${solr.ulog.dir:}</str>
-    </updateLog>
-  </updateHandler>
-
-</config>
diff --git a/solr/core/src/test-files/solr/collection1/conf/solrconfig-follower.xml b/solr/core/src/test-files/solr/collection1/conf/solrconfig-follower.xml
new file mode 100644
index 0000000..217d568
--- /dev/null
+++ b/solr/core/src/test-files/solr/collection1/conf/solrconfig-follower.xml
@@ -0,0 +1,59 @@
+<?xml version="1.0" ?>
+
+<!--
+ Licensed to the Apache Software Foundation (ASF) under one or more
+ contributor license agreements.  See the NOTICE file distributed with
+ this work for additional information regarding copyright ownership.
+ The ASF licenses this file to You under the Apache License, Version 2.0
+ (the "License"); you may not use this file except in compliance with
+ the License.  You may obtain a copy of the License at
+
+     http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+-->
+
+<config>
+  <luceneMatchVersion>${tests.luceneMatchVersion:LATEST}</luceneMatchVersion>
+  <directoryFactory name="DirectoryFactory" class="${solr.directoryFactory:solr.RAMDirectoryFactory}"/>
+  <schemaFactory class="ClassicIndexSchemaFactory"/>
+  <dataDir>${solr.data.dir:}</dataDir>
+
+  <xi:include href="solrconfig.snippet.randomindexconfig.xml" xmlns:xi="http://www.w3.org/2001/XInclude"/>
+
+  <updateHandler class="solr.DirectUpdateHandler2">
+  </updateHandler>
+
+  <requestHandler name="/select" class="solr.SearchHandler">
+    <bool name="httpCaching">true</bool>
+  </requestHandler>
+
+  <!-- test query parameter defaults -->
+  <requestHandler name="/defaults" class="solr.SearchHandler">
+
+  </requestHandler>
+
+  <!-- test query parameter defaults -->
+  <requestHandler name="/lazy" class="solr.SearchHandler" startup="lazy">
+  </requestHandler>
+
+  <requestHandler name="/replication" class="solr.ReplicationHandler">
+    <lst name="follower">
+      <str name="leaderUrl">http://127.0.0.1:TEST_PORT/solr/collection1</str>
+      <str name="pollInterval">00:00:01</str>
+      <str name="compression">COMPRESSION</str>
+    </lst>
+  </requestHandler>
+
+  <requestDispatcher>
+    <requestParsers enableRemoteStreaming="false" multipartUploadLimitInKB="-1"/>
+    <httpCaching lastModifiedFrom="openTime" etagSeed="Solr" never304="false">
+      <cacheControl>max-age=30, public</cacheControl>
+    </httpCaching>
+  </requestDispatcher>
+
+</config>
diff --git a/solr/core/src/test-files/solr/collection1/conf/solrconfig-slave1.xml b/solr/core/src/test-files/solr/collection1/conf/solrconfig-follower1.xml
similarity index 100%
rename from solr/core/src/test-files/solr/collection1/conf/solrconfig-slave1.xml
rename to solr/core/src/test-files/solr/collection1/conf/solrconfig-follower1.xml
diff --git a/solr/core/src/test-files/solr/collection1/conf/solrconfig-master-throttled.xml b/solr/core/src/test-files/solr/collection1/conf/solrconfig-leader-throttled.xml
similarity index 100%
rename from solr/core/src/test-files/solr/collection1/conf/solrconfig-master-throttled.xml
rename to solr/core/src/test-files/solr/collection1/conf/solrconfig-leader-throttled.xml
diff --git a/solr/core/src/test-files/solr/collection1/conf/solrconfig-leader.xml b/solr/core/src/test-files/solr/collection1/conf/solrconfig-leader.xml
new file mode 100644
index 0000000..374c413
--- /dev/null
+++ b/solr/core/src/test-files/solr/collection1/conf/solrconfig-leader.xml
@@ -0,0 +1,70 @@
+<?xml version="1.0" ?>
+
+<!--
+ Licensed to the Apache Software Foundation (ASF) under one or more
+ contributor license agreements.  See the NOTICE file distributed with
+ this work for additional information regarding copyright ownership.
+ The ASF licenses this file to You under the Apache License, Version 2.0
+ (the "License"); you may not use this file except in compliance with
+ the License.  You may obtain a copy of the License at
+
+     http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+-->
+
+<config>
+  <luceneMatchVersion>${tests.luceneMatchVersion:LATEST}</luceneMatchVersion>  
+  <directoryFactory name="DirectoryFactory" class="${solr.directoryFactory:solr.RAMDirectoryFactory}"/>
+  <schemaFactory class="ClassicIndexSchemaFactory"/>
+  <dataDir>${solr.data.dir:}</dataDir>
+
+  <xi:include href="solrconfig.snippet.randomindexconfig.xml" xmlns:xi="http://www.w3.org/2001/XInclude"/>
+
+  <updateHandler class="solr.DirectUpdateHandler2">
+  </updateHandler>
+
+  <requestHandler name="/select" class="solr.SearchHandler">
+    <bool name="httpCaching">true</bool>
+  </requestHandler>
+
+  <requestHandler name="/replication" class="solr.ReplicationHandler">
+    <lst name="leader">
+      <str name="replicateAfter">commit</str>
+      <!-- we don't really need dummy.xsl, but we want to be sure subdir 
+           files replicate (see SOLR-3809)
+      -->
+      <str name="confFiles">schema.xml,xslt/dummy.xsl</str>
+    </lst>
+  </requestHandler>
+
+  <!-- test query parameter defaults -->
+  <requestHandler name="/defaults" class="solr.SearchHandler">
+    <lst name="defaults">
+      <int name="rows">4</int>
+      <bool name="hl">true</bool>
+      <str name="hl.fl">text,name,subject,title,whitetok</str>
+    </lst>
+  </requestHandler>
+
+  <!-- test query parameter defaults -->
+  <requestHandler name="/lazy" class="solr.SearchHandler" startup="lazy">
+    <lst name="defaults">
+      <int name="rows">4</int>
+      <bool name="hl">true</bool>
+      <str name="hl.fl">text,name,subject,title,whitetok</str>
+    </lst>
+  </requestHandler>
+
+  <requestDispatcher>
+    <requestParsers enableRemoteStreaming="false" multipartUploadLimitInKB="-1"/>
+    <httpCaching lastModifiedFrom="openTime" etagSeed="Solr" never304="false">
+      <cacheControl>max-age=30, public</cacheControl>
+    </httpCaching>
+  </requestDispatcher>
+
+</config>
diff --git a/solr/core/src/test-files/solr/collection1/conf/solrconfig-leader1-keepOneBackup.xml b/solr/core/src/test-files/solr/collection1/conf/solrconfig-leader1-keepOneBackup.xml
new file mode 100644
index 0000000..101ba30
--- /dev/null
+++ b/solr/core/src/test-files/solr/collection1/conf/solrconfig-leader1-keepOneBackup.xml
@@ -0,0 +1,49 @@
+<?xml version="1.0" ?>
+
+<!--
+ Licensed to the Apache Software Foundation (ASF) under one or more
+ contributor license agreements.  See the NOTICE file distributed with
+ this work for additional information regarding copyright ownership.
+ The ASF licenses this file to You under the Apache License, Version 2.0
+ (the "License"); you may not use this file except in compliance with
+ the License.  You may obtain a copy of the License at
+
+     http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+-->
+<config>
+  <luceneMatchVersion>${tests.luceneMatchVersion:LATEST}</luceneMatchVersion>
+  <dataDir>${solr.data.dir:}</dataDir>
+  <directoryFactory name="DirectoryFactory" class="${solr.directoryFactory:solr.RAMDirectoryFactory}"/>
+  <schemaFactory class="ClassicIndexSchemaFactory"/>
+
+  <xi:include href="solrconfig.snippet.randomindexconfig.xml" xmlns:xi="http://www.w3.org/2001/XInclude"/>
+
+  <updateHandler class="solr.DirectUpdateHandler2">
+  </updateHandler>
+
+  <requestHandler name="/select" class="solr.SearchHandler" />
+
+  <requestHandler name="/replication" class="solr.ReplicationHandler">
+    <lst name="leader">
+      <str name="replicateAfter">commit</str>
+      <str name="confFiles">schema-replication2.xml:schema.xml</str>
+      <str name="backupAfter">commit</str>
+    </lst>    
+    <str name="maxNumberOfBackups">1</str>
+  </requestHandler>
+  
+
+  <requestDispatcher>
+    <requestParsers enableRemoteStreaming="false" multipartUploadLimitInKB="-1"/>
+    <httpCaching lastModifiedFrom="openTime" etagSeed="Solr" never304="false">
+      <cacheControl>max-age=30, public</cacheControl>
+    </httpCaching>
+  </requestDispatcher>
+
+</config>
diff --git a/solr/core/src/test-files/solr/collection1/conf/solrconfig-leader1.xml b/solr/core/src/test-files/solr/collection1/conf/solrconfig-leader1.xml
new file mode 100644
index 0000000..0c3eb86
--- /dev/null
+++ b/solr/core/src/test-files/solr/collection1/conf/solrconfig-leader1.xml
@@ -0,0 +1,68 @@
+<?xml version="1.0" ?>
+
+<!--
+ Licensed to the Apache Software Foundation (ASF) under one or more
+ contributor license agreements.  See the NOTICE file distributed with
+ this work for additional information regarding copyright ownership.
+ The ASF licenses this file to You under the Apache License, Version 2.0
+ (the "License"); you may not use this file except in compliance with
+ the License.  You may obtain a copy of the License at
+
+     http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+-->
+
+<config>
+  <luceneMatchVersion>${tests.luceneMatchVersion:LATEST}</luceneMatchVersion>
+  <dataDir>${solr.data.dir:}</dataDir>
+  <directoryFactory name="DirectoryFactory" class="${solr.directoryFactory:solr.RAMDirectoryFactory}"/>
+  <schemaFactory class="ClassicIndexSchemaFactory"/>
+
+  <xi:include href="solrconfig.snippet.randomindexconfig.xml" xmlns:xi="http://www.w3.org/2001/XInclude"/>
+
+  <updateHandler class="solr.DirectUpdateHandler2">
+  </updateHandler>
+
+  <requestHandler name="/select" class="solr.SearchHandler">
+    <bool name="httpCaching">true</bool>
+  </requestHandler>
+
+  <requestHandler name="/replication" class="solr.ReplicationHandler">
+    <lst name="leader">
+      <str name="replicateAfter">commit</str>
+      <str name="backupAfter">commit</str>
+      <str name="confFiles">schema-replication2.xml:schema.xml</str>
+    </lst>
+  </requestHandler>
+
+  <!-- test query parameter defaults -->
+  <requestHandler name="/defaults" class="solr.SearchHandler">
+    <lst name="defaults">
+      <int name="rows">4</int>
+      <bool name="hl">true</bool>
+      <str name="hl.fl">text,name,subject,title,whitetok</str>
+    </lst>
+  </requestHandler>
+
+  <!-- test query parameter defaults -->
+  <requestHandler name="/lazy" class="solr.SearchHandler" startup="lazy">
+    <lst name="defaults">
+      <int name="rows">4</int>
+      <bool name="hl">true</bool>
+      <str name="hl.fl">text,name,subject,title,whitetok</str>
+    </lst>
+  </requestHandler>
+
+  <requestDispatcher>
+    <requestParsers enableRemoteStreaming="false" multipartUploadLimitInKB="-1"/>
+    <httpCaching lastModifiedFrom="openTime" etagSeed="Solr" never304="false">
+      <cacheControl>max-age=30, public</cacheControl>
+    </httpCaching>
+  </requestDispatcher>
+
+</config>
diff --git a/solr/core/src/test-files/solr/collection1/conf/solrconfig-leader2.xml b/solr/core/src/test-files/solr/collection1/conf/solrconfig-leader2.xml
new file mode 100644
index 0000000..5eca462
--- /dev/null
+++ b/solr/core/src/test-files/solr/collection1/conf/solrconfig-leader2.xml
@@ -0,0 +1,66 @@
+<?xml version="1.0" ?>
+
+<!--
+ Licensed to the Apache Software Foundation (ASF) under one or more
+ contributor license agreements.  See the NOTICE file distributed with
+ this work for additional information regarding copyright ownership.
+ The ASF licenses this file to You under the Apache License, Version 2.0
+ (the "License"); you may not use this file except in compliance with
+ the License.  You may obtain a copy of the License at
+
+     http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+-->
+
+<config>
+  <luceneMatchVersion>${tests.luceneMatchVersion:LATEST}</luceneMatchVersion>
+  <dataDir>${solr.data.dir:}</dataDir>
+  <directoryFactory name="DirectoryFactory" class="${solr.directoryFactory:solr.RAMDirectoryFactory}"/>
+  <schemaFactory class="ClassicIndexSchemaFactory"/>
+  <xi:include href="solrconfig.snippet.randomindexconfig.xml" xmlns:xi="http://www.w3.org/2001/XInclude"/>
+
+  <updateHandler class="solr.DirectUpdateHandler2">
+  </updateHandler>
+
+  <requestHandler name="/select" class="solr.SearchHandler">
+    <bool name="httpCaching">true</bool>
+  </requestHandler>
+
+  <requestHandler name="/replication" class="solr.ReplicationHandler">
+    <lst name="leader">
+      <str name="replicateAfter">startup</str>
+      <str name="confFiles">schema.xml</str>
+    </lst>
+  </requestHandler>
+
+  <!-- test query parameter defaults -->
+  <requestHandler name="/defaults" class="solr.SearchHandler">
+    <lst name="defaults">
+      <int name="rows">4</int>
+      <bool name="hl">true</bool>
+      <str name="hl.fl">text,name,subject,title,whitetok</str>
+    </lst>
+  </requestHandler>
+
+  <!-- test query parameter defaults -->
+  <requestHandler name="/lazy" class="solr.SearchHandler" startup="lazy">
+    <lst name="defaults">
+      <int name="rows">4</int>
+      <bool name="hl">true</bool>
+      <str name="hl.fl">text,name,subject,title,whitetok</str>
+    </lst>
+  </requestHandler>
+
+  <requestDispatcher>
+    <requestParsers enableRemoteStreaming="false" multipartUploadLimitInKB="-1"/>
+    <httpCaching lastModifiedFrom="openTime" etagSeed="Solr" never304="false">
+      <cacheControl>max-age=30, public</cacheControl>
+    </httpCaching>
+  </requestDispatcher>
+
+</config>
diff --git a/solr/core/src/test-files/solr/collection1/conf/solrconfig-leader3.xml b/solr/core/src/test-files/solr/collection1/conf/solrconfig-leader3.xml
new file mode 100644
index 0000000..5d97350
--- /dev/null
+++ b/solr/core/src/test-files/solr/collection1/conf/solrconfig-leader3.xml
@@ -0,0 +1,67 @@
+<?xml version="1.0" ?>
+
+<!--
+ Licensed to the Apache Software Foundation (ASF) under one or more
+ contributor license agreements.  See the NOTICE file distributed with
+ this work for additional information regarding copyright ownership.
+ The ASF licenses this file to You under the Apache License, Version 2.0
+ (the "License"); you may not use this file except in compliance with
+ the License.  You may obtain a copy of the License at
+
+     http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+-->
+
+<config>
+  <luceneMatchVersion>${tests.luceneMatchVersion:LATEST}</luceneMatchVersion>
+  <dataDir>${solr.data.dir:}</dataDir>
+  <directoryFactory name="DirectoryFactory" class="${solr.directoryFactory:solr.RAMDirectoryFactory}"/>
+  <schemaFactory class="ClassicIndexSchemaFactory"/>
+  <xi:include href="solrconfig.snippet.randomindexconfig.xml" xmlns:xi="http://www.w3.org/2001/XInclude"/>
+
+  <updateHandler class="solr.DirectUpdateHandler2">
+  </updateHandler>
+
+  <requestHandler name="/select" class="solr.SearchHandler">
+    <bool name="httpCaching">true</bool>
+  </requestHandler>
+
+  <requestHandler name="/replication" class="solr.ReplicationHandler">
+    <lst name="leader">
+      <str name="replicateAfter">commit</str>
+      <str name="replicateAfter">startup</str>
+      <str name="confFiles">schema.xml</str>
+    </lst>
+  </requestHandler>
+
+  <!-- test query parameter defaults -->
+  <requestHandler name="/defaults" class="solr.SearchHandler">
+    <lst name="defaults">
+      <int name="rows">4</int>
+      <bool name="hl">true</bool>
+      <str name="hl.fl">text,name,subject,title,whitetok</str>
+    </lst>
+  </requestHandler>
+
+  <!-- test query parameter defaults -->
+  <requestHandler name="/lazy" class="solr.SearchHandler" startup="lazy">
+    <lst name="defaults">
+      <int name="rows">4</int>
+      <bool name="hl">true</bool>
+      <str name="hl.fl">text,name,subject,title,whitetok</str>
+    </lst>
+  </requestHandler>
+
+  <requestDispatcher>
+    <requestParsers enableRemoteStreaming="false" multipartUploadLimitInKB="-1"/>
+    <httpCaching lastModifiedFrom="openTime" etagSeed="Solr" never304="false">
+      <cacheControl>max-age=30, public</cacheControl>
+    </httpCaching>
+  </requestDispatcher>
+
+</config>
diff --git a/solr/core/src/test-files/solr/collection1/conf/solrconfig-master.xml b/solr/core/src/test-files/solr/collection1/conf/solrconfig-master.xml
deleted file mode 100644
index e501af2..0000000
--- a/solr/core/src/test-files/solr/collection1/conf/solrconfig-master.xml
+++ /dev/null
@@ -1,70 +0,0 @@
-<?xml version="1.0" ?>
-
-<!--
- Licensed to the Apache Software Foundation (ASF) under one or more
- contributor license agreements.  See the NOTICE file distributed with
- this work for additional information regarding copyright ownership.
- The ASF licenses this file to You under the Apache License, Version 2.0
- (the "License"); you may not use this file except in compliance with
- the License.  You may obtain a copy of the License at
-
-     http://www.apache.org/licenses/LICENSE-2.0
-
- Unless required by applicable law or agreed to in writing, software
- distributed under the License is distributed on an "AS IS" BASIS,
- WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- See the License for the specific language governing permissions and
- limitations under the License.
--->
-
-<config>
-  <luceneMatchVersion>${tests.luceneMatchVersion:LATEST}</luceneMatchVersion>  
-  <directoryFactory name="DirectoryFactory" class="${solr.directoryFactory:solr.RAMDirectoryFactory}"/>
-  <schemaFactory class="ClassicIndexSchemaFactory"/>
-  <dataDir>${solr.data.dir:}</dataDir>
-
-  <xi:include href="solrconfig.snippet.randomindexconfig.xml" xmlns:xi="http://www.w3.org/2001/XInclude"/>
-
-  <updateHandler class="solr.DirectUpdateHandler2">
-  </updateHandler>
-
-  <requestHandler name="/select" class="solr.SearchHandler">
-    <bool name="httpCaching">true</bool>
-  </requestHandler>
-
-  <requestHandler name="/replication" class="solr.ReplicationHandler">
-    <lst name="master">
-      <str name="replicateAfter">commit</str>
-      <!-- we don't really need dummy.xsl, but we want to be sure subdir 
-           files replicate (see SOLR-3809)
-      -->
-      <str name="confFiles">schema.xml,xslt/dummy.xsl</str>
-    </lst>
-  </requestHandler>
-
-  <!-- test query parameter defaults -->
-  <requestHandler name="/defaults" class="solr.SearchHandler">
-    <lst name="defaults">
-      <int name="rows">4</int>
-      <bool name="hl">true</bool>
-      <str name="hl.fl">text,name,subject,title,whitetok</str>
-    </lst>
-  </requestHandler>
-
-  <!-- test query parameter defaults -->
-  <requestHandler name="/lazy" class="solr.SearchHandler" startup="lazy">
-    <lst name="defaults">
-      <int name="rows">4</int>
-      <bool name="hl">true</bool>
-      <str name="hl.fl">text,name,subject,title,whitetok</str>
-    </lst>
-  </requestHandler>
-
-  <requestDispatcher>
-    <requestParsers enableRemoteStreaming="false" multipartUploadLimitInKB="-1"/>
-    <httpCaching lastModifiedFrom="openTime" etagSeed="Solr" never304="false">
-      <cacheControl>max-age=30, public</cacheControl>
-    </httpCaching>
-  </requestDispatcher>
-
-</config>
diff --git a/solr/core/src/test-files/solr/collection1/conf/solrconfig-master1-keepOneBackup.xml b/solr/core/src/test-files/solr/collection1/conf/solrconfig-master1-keepOneBackup.xml
deleted file mode 100644
index bcd7874..0000000
--- a/solr/core/src/test-files/solr/collection1/conf/solrconfig-master1-keepOneBackup.xml
+++ /dev/null
@@ -1,49 +0,0 @@
-<?xml version="1.0" ?>
-
-<!--
- Licensed to the Apache Software Foundation (ASF) under one or more
- contributor license agreements.  See the NOTICE file distributed with
- this work for additional information regarding copyright ownership.
- The ASF licenses this file to You under the Apache License, Version 2.0
- (the "License"); you may not use this file except in compliance with
- the License.  You may obtain a copy of the License at
-
-     http://www.apache.org/licenses/LICENSE-2.0
-
- Unless required by applicable law or agreed to in writing, software
- distributed under the License is distributed on an "AS IS" BASIS,
- WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- See the License for the specific language governing permissions and
- limitations under the License.
--->
-<config>
-  <luceneMatchVersion>${tests.luceneMatchVersion:LATEST}</luceneMatchVersion>
-  <dataDir>${solr.data.dir:}</dataDir>
-  <directoryFactory name="DirectoryFactory" class="${solr.directoryFactory:solr.RAMDirectoryFactory}"/>
-  <schemaFactory class="ClassicIndexSchemaFactory"/>
-
-  <xi:include href="solrconfig.snippet.randomindexconfig.xml" xmlns:xi="http://www.w3.org/2001/XInclude"/>
-
-  <updateHandler class="solr.DirectUpdateHandler2">
-  </updateHandler>
-
-  <requestHandler name="/select" class="solr.SearchHandler" />
-
-  <requestHandler name="/replication" class="solr.ReplicationHandler">
-    <lst name="master">
-      <str name="replicateAfter">commit</str>
-      <str name="confFiles">schema-replication2.xml:schema.xml</str>
-      <str name="backupAfter">commit</str>
-    </lst>    
-    <str name="maxNumberOfBackups">1</str>
-  </requestHandler>
-  
-
-  <requestDispatcher>
-    <requestParsers enableRemoteStreaming="false" multipartUploadLimitInKB="-1"/>
-    <httpCaching lastModifiedFrom="openTime" etagSeed="Solr" never304="false">
-      <cacheControl>max-age=30, public</cacheControl>
-    </httpCaching>
-  </requestDispatcher>
-
-</config>
diff --git a/solr/core/src/test-files/solr/collection1/conf/solrconfig-master1.xml b/solr/core/src/test-files/solr/collection1/conf/solrconfig-master1.xml
deleted file mode 100644
index 9271686..0000000
--- a/solr/core/src/test-files/solr/collection1/conf/solrconfig-master1.xml
+++ /dev/null
@@ -1,68 +0,0 @@
-<?xml version="1.0" ?>
-
-<!--
- Licensed to the Apache Software Foundation (ASF) under one or more
- contributor license agreements.  See the NOTICE file distributed with
- this work for additional information regarding copyright ownership.
- The ASF licenses this file to You under the Apache License, Version 2.0
- (the "License"); you may not use this file except in compliance with
- the License.  You may obtain a copy of the License at
-
-     http://www.apache.org/licenses/LICENSE-2.0
-
- Unless required by applicable law or agreed to in writing, software
- distributed under the License is distributed on an "AS IS" BASIS,
- WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- See the License for the specific language governing permissions and
- limitations under the License.
--->
-
-<config>
-  <luceneMatchVersion>${tests.luceneMatchVersion:LATEST}</luceneMatchVersion>
-  <dataDir>${solr.data.dir:}</dataDir>
-  <directoryFactory name="DirectoryFactory" class="${solr.directoryFactory:solr.RAMDirectoryFactory}"/>
-  <schemaFactory class="ClassicIndexSchemaFactory"/>
-
-  <xi:include href="solrconfig.snippet.randomindexconfig.xml" xmlns:xi="http://www.w3.org/2001/XInclude"/>
-
-  <updateHandler class="solr.DirectUpdateHandler2">
-  </updateHandler>
-
-  <requestHandler name="/select" class="solr.SearchHandler">
-    <bool name="httpCaching">true</bool>
-  </requestHandler>
-
-  <requestHandler name="/replication" class="solr.ReplicationHandler">
-    <lst name="master">
-      <str name="replicateAfter">commit</str>
-      <str name="backupAfter">commit</str>
-      <str name="confFiles">schema-replication2.xml:schema.xml</str>
-    </lst>
-  </requestHandler>
-
-  <!-- test query parameter defaults -->
-  <requestHandler name="/defaults" class="solr.SearchHandler">
-    <lst name="defaults">
-      <int name="rows">4</int>
-      <bool name="hl">true</bool>
-      <str name="hl.fl">text,name,subject,title,whitetok</str>
-    </lst>
-  </requestHandler>
-
-  <!-- test query parameter defaults -->
-  <requestHandler name="/lazy" class="solr.SearchHandler" startup="lazy">
-    <lst name="defaults">
-      <int name="rows">4</int>
-      <bool name="hl">true</bool>
-      <str name="hl.fl">text,name,subject,title,whitetok</str>
-    </lst>
-  </requestHandler>
-
-  <requestDispatcher>
-    <requestParsers enableRemoteStreaming="false" multipartUploadLimitInKB="-1"/>
-    <httpCaching lastModifiedFrom="openTime" etagSeed="Solr" never304="false">
-      <cacheControl>max-age=30, public</cacheControl>
-    </httpCaching>
-  </requestDispatcher>
-
-</config>
diff --git a/solr/core/src/test-files/solr/collection1/conf/solrconfig-master2.xml b/solr/core/src/test-files/solr/collection1/conf/solrconfig-master2.xml
deleted file mode 100644
index 55301c2..0000000
--- a/solr/core/src/test-files/solr/collection1/conf/solrconfig-master2.xml
+++ /dev/null
@@ -1,66 +0,0 @@
-<?xml version="1.0" ?>
-
-<!--
- Licensed to the Apache Software Foundation (ASF) under one or more
- contributor license agreements.  See the NOTICE file distributed with
- this work for additional information regarding copyright ownership.
- The ASF licenses this file to You under the Apache License, Version 2.0
- (the "License"); you may not use this file except in compliance with
- the License.  You may obtain a copy of the License at
-
-     http://www.apache.org/licenses/LICENSE-2.0
-
- Unless required by applicable law or agreed to in writing, software
- distributed under the License is distributed on an "AS IS" BASIS,
- WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- See the License for the specific language governing permissions and
- limitations under the License.
--->
-
-<config>
-  <luceneMatchVersion>${tests.luceneMatchVersion:LATEST}</luceneMatchVersion>
-  <dataDir>${solr.data.dir:}</dataDir>
-  <directoryFactory name="DirectoryFactory" class="${solr.directoryFactory:solr.RAMDirectoryFactory}"/>
-  <schemaFactory class="ClassicIndexSchemaFactory"/>
-  <xi:include href="solrconfig.snippet.randomindexconfig.xml" xmlns:xi="http://www.w3.org/2001/XInclude"/>
-
-  <updateHandler class="solr.DirectUpdateHandler2">
-  </updateHandler>
-
-  <requestHandler name="/select" class="solr.SearchHandler">
-    <bool name="httpCaching">true</bool>
-  </requestHandler>
-
-  <requestHandler name="/replication" class="solr.ReplicationHandler">
-    <lst name="master">
-      <str name="replicateAfter">startup</str>
-      <str name="confFiles">schema.xml</str>
-    </lst>
-  </requestHandler>
-
-  <!-- test query parameter defaults -->
-  <requestHandler name="/defaults" class="solr.SearchHandler">
-    <lst name="defaults">
-      <int name="rows">4</int>
-      <bool name="hl">true</bool>
-      <str name="hl.fl">text,name,subject,title,whitetok</str>
-    </lst>
-  </requestHandler>
-
-  <!-- test query parameter defaults -->
-  <requestHandler name="/lazy" class="solr.SearchHandler" startup="lazy">
-    <lst name="defaults">
-      <int name="rows">4</int>
-      <bool name="hl">true</bool>
-      <str name="hl.fl">text,name,subject,title,whitetok</str>
-    </lst>
-  </requestHandler>
-
-  <requestDispatcher>
-    <requestParsers enableRemoteStreaming="false" multipartUploadLimitInKB="-1"/>
-    <httpCaching lastModifiedFrom="openTime" etagSeed="Solr" never304="false">
-      <cacheControl>max-age=30, public</cacheControl>
-    </httpCaching>
-  </requestDispatcher>
-
-</config>
diff --git a/solr/core/src/test-files/solr/collection1/conf/solrconfig-master3.xml b/solr/core/src/test-files/solr/collection1/conf/solrconfig-master3.xml
deleted file mode 100644
index 1c1dd40..0000000
--- a/solr/core/src/test-files/solr/collection1/conf/solrconfig-master3.xml
+++ /dev/null
@@ -1,67 +0,0 @@
-<?xml version="1.0" ?>
-
-<!--
- Licensed to the Apache Software Foundation (ASF) under one or more
- contributor license agreements.  See the NOTICE file distributed with
- this work for additional information regarding copyright ownership.
- The ASF licenses this file to You under the Apache License, Version 2.0
- (the "License"); you may not use this file except in compliance with
- the License.  You may obtain a copy of the License at
-
-     http://www.apache.org/licenses/LICENSE-2.0
-
- Unless required by applicable law or agreed to in writing, software
- distributed under the License is distributed on an "AS IS" BASIS,
- WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- See the License for the specific language governing permissions and
- limitations under the License.
--->
-
-<config>
-  <luceneMatchVersion>${tests.luceneMatchVersion:LATEST}</luceneMatchVersion>
-  <dataDir>${solr.data.dir:}</dataDir>
-  <directoryFactory name="DirectoryFactory" class="${solr.directoryFactory:solr.RAMDirectoryFactory}"/>
-  <schemaFactory class="ClassicIndexSchemaFactory"/>
-  <xi:include href="solrconfig.snippet.randomindexconfig.xml" xmlns:xi="http://www.w3.org/2001/XInclude"/>
-
-  <updateHandler class="solr.DirectUpdateHandler2">
-  </updateHandler>
-
-  <requestHandler name="/select" class="solr.SearchHandler">
-    <bool name="httpCaching">true</bool>
-  </requestHandler>
-
-  <requestHandler name="/replication" class="solr.ReplicationHandler">
-    <lst name="master">
-      <str name="replicateAfter">commit</str>
-      <str name="replicateAfter">startup</str>
-      <str name="confFiles">schema.xml</str>
-    </lst>
-  </requestHandler>
-
-  <!-- test query parameter defaults -->
-  <requestHandler name="/defaults" class="solr.SearchHandler">
-    <lst name="defaults">
-      <int name="rows">4</int>
-      <bool name="hl">true</bool>
-      <str name="hl.fl">text,name,subject,title,whitetok</str>
-    </lst>
-  </requestHandler>
-
-  <!-- test query parameter defaults -->
-  <requestHandler name="/lazy" class="solr.SearchHandler" startup="lazy">
-    <lst name="defaults">
-      <int name="rows">4</int>
-      <bool name="hl">true</bool>
-      <str name="hl.fl">text,name,subject,title,whitetok</str>
-    </lst>
-  </requestHandler>
-
-  <requestDispatcher>
-    <requestParsers enableRemoteStreaming="false" multipartUploadLimitInKB="-1"/>
-    <httpCaching lastModifiedFrom="openTime" etagSeed="Solr" never304="false">
-      <cacheControl>max-age=30, public</cacheControl>
-    </httpCaching>
-  </requestDispatcher>
-
-</config>
diff --git a/solr/core/src/test-files/solr/collection1/conf/solrconfig-memory-circuitbreaker.xml b/solr/core/src/test-files/solr/collection1/conf/solrconfig-memory-circuitbreaker.xml
index b6b20ff..699a7bd 100644
--- a/solr/core/src/test-files/solr/collection1/conf/solrconfig-memory-circuitbreaker.xml
+++ b/solr/core/src/test-files/solr/collection1/conf/solrconfig-memory-circuitbreaker.xml
@@ -78,12 +78,11 @@
 
   </query>
 
-  <circuitBreaker>
-
-    <useCircuitBreakers>true</useCircuitBreakers>
-
-    <memoryCircuitBreakerThresholdPct>75</memoryCircuitBreakerThresholdPct>
-
+  <circuitBreaker class="solr.CircuitBreakerManager" enabled="true">
+    <str name="memEnabled">true</str>
+    <str name="memThreshold">75</str>
+    <str name="cpuEnabled">true</str>
+    <str name="cpuThreshold">75</str>
   </circuitBreaker>
 
   <initParams path="/select">
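
Note: the rewritten block above moves circuit breakers from free-standing flags to a plugin-style element: the implementation class and a global enabled switch sit on <circuitBreaker>, and each breaker is toggled and tuned through named parameters. A trimmed sketch for a config that only wants the JVM-heap breaker, assuming a breaker that is omitted defaults to disabled:

  <circuitBreaker class="solr.CircuitBreakerManager" enabled="true">
    <str name="memEnabled">true</str>
    <!-- trip when heap usage crosses 80%; the threshold is a percentage,
         matching the old memoryCircuitBreakerThresholdPct element -->
    <str name="memThreshold">80</str>
  </circuitBreaker>
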
diff --git a/solr/core/src/test-files/solr/collection1/conf/solrconfig-repeater.xml b/solr/core/src/test-files/solr/collection1/conf/solrconfig-repeater.xml
index 761d45d..160bc4e 100644
--- a/solr/core/src/test-files/solr/collection1/conf/solrconfig-repeater.xml
+++ b/solr/core/src/test-files/solr/collection1/conf/solrconfig-repeater.xml
@@ -42,12 +42,12 @@
   </requestHandler>
 
   <requestHandler name="/replication" class="solr.ReplicationHandler">
-    <lst name="master">
+    <lst name="leader">
       <str name="replicateAfter">commit</str>
       <str name="confFiles">schema.xml</str>
     </lst>
-    <lst name="slave">
-      <str name="masterUrl">http://127.0.0.1:TEST_PORT/solr/replication</str>
+    <lst name="follower">
+      <str name="leaderUrl">http://127.0.0.1:TEST_PORT/solr/replication</str>
     </lst>
   </requestHandler>
 
diff --git a/solr/core/src/test-files/solr/collection1/conf/solrconfig-replication-legacy.xml b/solr/core/src/test-files/solr/collection1/conf/solrconfig-replication-legacy.xml
new file mode 100644
index 0000000..c2f25ba
--- /dev/null
+++ b/solr/core/src/test-files/solr/collection1/conf/solrconfig-replication-legacy.xml
@@ -0,0 +1,62 @@
+<?xml version="1.0" ?>
+
+<!--
+ Licensed to the Apache Software Foundation (ASF) under one or more
+ contributor license agreements.  See the NOTICE file distributed with
+ this work for additional information regarding copyright ownership.
+ The ASF licenses this file to You under the Apache License, Version 2.0
+ (the "License"); you may not use this file except in compliance with
+ the License.  You may obtain a copy of the License at
+
+     http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+-->
+
+<config>
+  <luceneMatchVersion>${tests.luceneMatchVersion:LATEST}</luceneMatchVersion>
+  <directoryFactory name="DirectoryFactory" class="${solr.directoryFactory:solr.RAMDirectoryFactory}"/>
+  <schemaFactory class="ClassicIndexSchemaFactory"/>
+  <dataDir>${solr.data.dir:}</dataDir>
+
+  <xi:include href="solrconfig.snippet.randomindexconfig.xml" xmlns:xi="http://www.w3.org/2001/XInclude"/>
+
+  <updateHandler class="solr.DirectUpdateHandler2">
+  </updateHandler>
+
+  <requestHandler name="/select" class="solr.SearchHandler">
+    <bool name="httpCaching">true</bool>
+  </requestHandler>
+
+  <!-- test query parameter defaults -->
+  <requestHandler name="/defaults" class="solr.SearchHandler">
+
+  </requestHandler>
+
+  <!-- test query parameter defaults -->
+  <requestHandler name="/lazy" class="solr.SearchHandler" startup="lazy">
+  </requestHandler>
+
+  <requestHandler name="/replication" class="solr.ReplicationHandler">
+    <lst name="master">
+      <str name="replicateAfter">commit</str>
+    </lst>
+    <lst name="slave">
+      <str name="masterUrl">http://127.0.0.1:TEST_PORT/solr/collection1</str>
+      <str name="pollInterval">00:00:01</str>
+      <str name="compression">COMPRESSION</str>
+    </lst>
+  </requestHandler>
+
+  <requestDispatcher>
+    <requestParsers enableRemoteStreaming="false" multipartUploadLimitInKB="-1"/>
+    <httpCaching lastModifiedFrom="openTime" etagSeed="Solr" never304="false">
+      <cacheControl>max-age=30, public</cacheControl>
+    </httpCaching>
+  </requestDispatcher>
+
+</config>
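
Note: this legacy fixture keeps the pre-rename element names (master/slave/masterUrl), presumably so the back-compat parsing path stays covered, while the other configs in this patch use the new vocabulary. For reference, the same handler expressed with the new names used elsewhere in this patch (leader/follower/leaderUrl; pollInterval is in HH:MM:SS format):

  <requestHandler name="/replication" class="solr.ReplicationHandler">
    <lst name="leader">
      <str name="replicateAfter">commit</str>
    </lst>
    <lst name="follower">
      <!-- TEST_PORT and COMPRESSION appear to be placeholders substituted by the test harness -->
      <str name="leaderUrl">http://127.0.0.1:TEST_PORT/solr/collection1</str>
      <str name="pollInterval">00:00:01</str>
      <str name="compression">COMPRESSION</str>
    </lst>
  </requestHandler>
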
diff --git a/solr/core/src/test-files/solr/collection1/conf/solrconfig-slave.xml b/solr/core/src/test-files/solr/collection1/conf/solrconfig-slave.xml
deleted file mode 100644
index 39a7870..0000000
--- a/solr/core/src/test-files/solr/collection1/conf/solrconfig-slave.xml
+++ /dev/null
@@ -1,59 +0,0 @@
-<?xml version="1.0" ?>
-
-<!--
- Licensed to the Apache Software Foundation (ASF) under one or more
- contributor license agreements.  See the NOTICE file distributed with
- this work for additional information regarding copyright ownership.
- The ASF licenses this file to You under the Apache License, Version 2.0
- (the "License"); you may not use this file except in compliance with
- the License.  You may obtain a copy of the License at
-
-     http://www.apache.org/licenses/LICENSE-2.0
-
- Unless required by applicable law or agreed to in writing, software
- distributed under the License is distributed on an "AS IS" BASIS,
- WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- See the License for the specific language governing permissions and
- limitations under the License.
--->
-
-<config>
-  <luceneMatchVersion>${tests.luceneMatchVersion:LATEST}</luceneMatchVersion>
-  <directoryFactory name="DirectoryFactory" class="${solr.directoryFactory:solr.RAMDirectoryFactory}"/>
-  <schemaFactory class="ClassicIndexSchemaFactory"/>
-  <dataDir>${solr.data.dir:}</dataDir>
-
-  <xi:include href="solrconfig.snippet.randomindexconfig.xml" xmlns:xi="http://www.w3.org/2001/XInclude"/>
-
-  <updateHandler class="solr.DirectUpdateHandler2">
-  </updateHandler>
-
-  <requestHandler name="/select" class="solr.SearchHandler">
-    <bool name="httpCaching">true</bool>
-  </requestHandler>
-
-  <!-- test query parameter defaults -->
-  <requestHandler name="/defaults" class="solr.SearchHandler">
-
-  </requestHandler>
-
-  <!-- test query parameter defaults -->
-  <requestHandler name="/lazy" class="solr.SearchHandler" startup="lazy">
-  </requestHandler>
-
-  <requestHandler name="/replication" class="solr.ReplicationHandler">
-    <lst name="slave">
-      <str name="masterUrl">http://127.0.0.1:TEST_PORT/solr/collection1</str>
-      <str name="pollInterval">00:00:01</str>
-      <str name="compression">COMPRESSION</str>
-    </lst>
-  </requestHandler>
-
-  <requestDispatcher>
-    <requestParsers enableRemoteStreaming="false" multipartUploadLimitInKB="-1"/>
-    <httpCaching lastModifiedFrom="openTime" etagSeed="Solr" never304="false">
-      <cacheControl>max-age=30, public</cacheControl>
-    </httpCaching>
-  </requestDispatcher>
-
-</config>
diff --git a/solr/core/src/test-files/solr/collection1/conf/solrconfig-tlog.xml b/solr/core/src/test-files/solr/collection1/conf/solrconfig-tlog.xml
index 2fc3c52..f2cad61 100644
--- a/solr/core/src/test-files/solr/collection1/conf/solrconfig-tlog.xml
+++ b/solr/core/src/test-files/solr/collection1/conf/solrconfig-tlog.xml
@@ -49,10 +49,6 @@
 
   <requestHandler name="/select" class="solr.SearchHandler" />
   
-  <peerSync>
-    <useRangeVersions>${solr.peerSync.useRangeVersions:true}</useRangeVersions>
-  </peerSync>
-
   <updateHandler class="solr.DirectUpdateHandler2">
     <!-- autocommit pending docs if certain criteria are met -->
     <autoCommit>
diff --git a/solr/core/src/test-files/solr/collection1/conf/solrconfig.snippet.randomindexconfig.xml b/solr/core/src/test-files/solr/collection1/conf/solrconfig.snippet.randomindexconfig.xml
index 20ddf96..de5c714 100644
--- a/solr/core/src/test-files/solr/collection1/conf/solrconfig.snippet.randomindexconfig.xml
+++ b/solr/core/src/test-files/solr/collection1/conf/solrconfig.snippet.randomindexconfig.xml
@@ -32,6 +32,7 @@
 
   <maxBufferedDocs>${solr.tests.maxBufferedDocs}</maxBufferedDocs>
   <ramBufferSizeMB>${solr.tests.ramBufferSizeMB}</ramBufferSizeMB>
+  <maxCommitMergeWaitTime>${solr.tests.maxCommitMergeWaitTime:-1}</maxCommitMergeWaitTime>
   <ramPerThreadHardLimitMB>${solr.tests.ramPerThreadHardLimitMB}</ramPerThreadHardLimitMB>
 
   <mergeScheduler class="${solr.tests.mergeScheduler}" />
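
Note: the new maxCommitMergeWaitTime knob above is fed from a test system property and falls back to -1. In a hand-written solrconfig.xml the element would sit inside <indexConfig> alongside the other index-writer settings shown in this snippet; a sketch, assuming the value is the number of milliseconds to wait for commit-triggered merges and that -1 keeps the default behavior:

  <indexConfig>
    <!-- assumption: milliseconds to wait for merges triggered by a commit; -1 = default -->
    <maxCommitMergeWaitTime>1000</maxCommitMergeWaitTime>
  </indexConfig>
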
diff --git a/solr/core/src/test-files/solr/configsets/cdcr-cluster1/conf/managed-schema b/solr/core/src/test-files/solr/configsets/cdcr-cluster1/conf/managed-schema
deleted file mode 100644
index 2df6c0a..0000000
--- a/solr/core/src/test-files/solr/configsets/cdcr-cluster1/conf/managed-schema
+++ /dev/null
@@ -1,29 +0,0 @@
-<?xml version="1.0" encoding="UTF-8" ?>
-<!--
- Licensed to the Apache Software Foundation (ASF) under one or more
- contributor license agreements.  See the NOTICE file distributed with
- this work for additional information regarding copyright ownership.
- The ASF licenses this file to You under the Apache License, Version 2.0
- (the "License"); you may not use this file except in compliance with
- the License.  You may obtain a copy of the License at
-
-     http://www.apache.org/licenses/LICENSE-2.0
-
- Unless required by applicable law or agreed to in writing, software
- distributed under the License is distributed on an "AS IS" BASIS,
- WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- See the License for the specific language governing permissions and
- limitations under the License.
--->
-<schema name="minimal" version="1.1">
-  <types>
-    <fieldType name="string" class="solr.StrField"/>
-    <fieldType name="long" class="${solr.tests.LongFieldType}" docValues="${solr.tests.numeric.dv}" precisionStep="0" positionIncrementGap="0"/>
-  </types>
-  <fields>
-    <field name="id" type="string" indexed="true" stored="true"/>
-    <field name="_version_" type="long" indexed="true" stored="true"/>
-    <dynamicField name="*" type="string" indexed="true" stored="true"/>
-  </fields>
-  <uniqueKey>id</uniqueKey>
-</schema>
diff --git a/solr/core/src/test-files/solr/configsets/cdcr-cluster1/conf/solrconfig.xml b/solr/core/src/test-files/solr/configsets/cdcr-cluster1/conf/solrconfig.xml
deleted file mode 100644
index da548c4..0000000
--- a/solr/core/src/test-files/solr/configsets/cdcr-cluster1/conf/solrconfig.xml
+++ /dev/null
@@ -1,80 +0,0 @@
-<?xml version="1.0" ?>
-
-<!--
- Licensed to the Apache Software Foundation (ASF) under one or more
- contributor license agreements.  See the NOTICE file distributed with
- this work for additional information regarding copyright ownership.
- The ASF licenses this file to You under the Apache License, Version 2.0
- (the "License"); you may not use this file except in compliance with
- the License.  You may obtain a copy of the License at
-
-     http://www.apache.org/licenses/LICENSE-2.0
-
- Unless required by applicable law or agreed to in writing, software
- distributed under the License is distributed on an "AS IS" BASIS,
- WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- See the License for the specific language governing permissions and
- limitations under the License.
--->
-
-<!-- This is a "kitchen sink" config file that tests can use.
-     When writting a new test, feel free to add *new* items (plugins,
-     config options, etc...) as long as they don't break any existing
-     tests.  if you need to test something esoteric please add a new
-     "solrconfig-your-esoteric-purpose.xml" config file.
-
-     Note in particular that this test is used by MinimalSchemaTest so
-     Anything added to this file needs to work correctly even if there
-     is now uniqueKey or defaultSearch Field.
-  -->
-
-<config>
-
-  <dataDir>${solr.data.dir:}</dataDir>
-
-  <directoryFactory name="DirectoryFactory"
-                    class="${solr.directoryFactory:solr.NRTCachingDirectoryFactory}"/>
-
-  <luceneMatchVersion>${tests.luceneMatchVersion:LATEST}</luceneMatchVersion>
-
-  <updateRequestProcessorChain name="cdcr-processor-chain">
-    <processor class="solr.CdcrUpdateProcessorFactory"/>
-    <processor class="solr.RunUpdateProcessorFactory"/>
-  </updateRequestProcessorChain>
-
-  <requestHandler name="/cdcr" class="solr.CdcrRequestHandler">
-    <lst name="replica">
-      <str name="zkHost">${cdcr.cluster2.zkHost}</str>
-      <str name="source">cdcr-cluster1</str>
-      <str name="target">cdcr-cluster2</str>
-    </lst>
-    <lst name="replicator">
-      <str name="threadPoolSize">1</str>
-      <str name="schedule">1000</str>
-      <str name="batchSize">1000</str>
-    </lst>
-    <lst name="updateLogSynchronizer">
-      <str name="schedule">1000</str>
-    </lst>
-  </requestHandler>
-
-  <updateHandler class="solr.DirectUpdateHandler2">
-    <updateLog class="solr.CdcrUpdateLog">
-      <str name="dir">${solr.ulog.dir:}</str>
-    </updateLog>
-  </updateHandler>
-
-  <requestHandler name="/select" class="solr.SearchHandler" />
-
-  <initParams path="/update/**,/query,/select,/tvrh,/elevate,/spell,/browse">
-    <lst name="defaults">
-      <str name="df">_text_</str>
-    </lst>
-  </initParams>
-
-  <requestHandler name="/update" class="solr.UpdateRequestHandler">
-    <lst name="defaults">
-      <str name="update.chain">cdcr-processor-chain</str>
-    </lst>
-  </requestHandler>
-</config>
\ No newline at end of file
diff --git a/solr/core/src/test-files/solr/configsets/cdcr-cluster2/conf/managed-schema b/solr/core/src/test-files/solr/configsets/cdcr-cluster2/conf/managed-schema
deleted file mode 100644
index 2df6c0a..0000000
--- a/solr/core/src/test-files/solr/configsets/cdcr-cluster2/conf/managed-schema
+++ /dev/null
@@ -1,29 +0,0 @@
-<?xml version="1.0" encoding="UTF-8" ?>
-<!--
- Licensed to the Apache Software Foundation (ASF) under one or more
- contributor license agreements.  See the NOTICE file distributed with
- this work for additional information regarding copyright ownership.
- The ASF licenses this file to You under the Apache License, Version 2.0
- (the "License"); you may not use this file except in compliance with
- the License.  You may obtain a copy of the License at
-
-     http://www.apache.org/licenses/LICENSE-2.0
-
- Unless required by applicable law or agreed to in writing, software
- distributed under the License is distributed on an "AS IS" BASIS,
- WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- See the License for the specific language governing permissions and
- limitations under the License.
--->
-<schema name="minimal" version="1.1">
-  <types>
-    <fieldType name="string" class="solr.StrField"/>
-    <fieldType name="long" class="${solr.tests.LongFieldType}" docValues="${solr.tests.numeric.dv}" precisionStep="0" positionIncrementGap="0"/>
-  </types>
-  <fields>
-    <field name="id" type="string" indexed="true" stored="true"/>
-    <field name="_version_" type="long" indexed="true" stored="true"/>
-    <dynamicField name="*" type="string" indexed="true" stored="true"/>
-  </fields>
-  <uniqueKey>id</uniqueKey>
-</schema>
diff --git a/solr/core/src/test-files/solr/configsets/cdcr-cluster2/conf/solrconfig.xml b/solr/core/src/test-files/solr/configsets/cdcr-cluster2/conf/solrconfig.xml
deleted file mode 100644
index 8e26d45..0000000
--- a/solr/core/src/test-files/solr/configsets/cdcr-cluster2/conf/solrconfig.xml
+++ /dev/null
@@ -1,80 +0,0 @@
-<?xml version="1.0" ?>
-
-<!--
- Licensed to the Apache Software Foundation (ASF) under one or more
- contributor license agreements.  See the NOTICE file distributed with
- this work for additional information regarding copyright ownership.
- The ASF licenses this file to You under the Apache License, Version 2.0
- (the "License"); you may not use this file except in compliance with
- the License.  You may obtain a copy of the License at
-
-     http://www.apache.org/licenses/LICENSE-2.0
-
- Unless required by applicable law or agreed to in writing, software
- distributed under the License is distributed on an "AS IS" BASIS,
- WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- See the License for the specific language governing permissions and
- limitations under the License.
--->
-
-<!-- This is a "kitchen sink" config file that tests can use.
-     When writting a new test, feel free to add *new* items (plugins,
-     config options, etc...) as long as they don't break any existing
-     tests.  if you need to test something esoteric please add a new
-     "solrconfig-your-esoteric-purpose.xml" config file.
-
-     Note in particular that this test is used by MinimalSchemaTest so
-     Anything added to this file needs to work correctly even if there
-     is now uniqueKey or defaultSearch Field.
-  -->
-
-<config>
-
-  <dataDir>${solr.data.dir:}</dataDir>
-
-  <directoryFactory name="DirectoryFactory"
-                    class="${solr.directoryFactory:solr.NRTCachingDirectoryFactory}"/>
-
-  <luceneMatchVersion>${tests.luceneMatchVersion:LATEST}</luceneMatchVersion>
-
-  <updateRequestProcessorChain name="cdcr-processor-chain">
-    <processor class="solr.CdcrUpdateProcessorFactory"/>
-    <processor class="solr.RunUpdateProcessorFactory"/>
-  </updateRequestProcessorChain>
-
-  <requestHandler name="/cdcr" class="solr.CdcrRequestHandler">
-    <lst name="replica">
-      <str name="zkHost">${cdcr.cluster1.zkHost}</str>
-      <str name="source">cdcr-cluster2</str>
-      <str name="target">cdcr-cluster1</str>
-    </lst>
-    <lst name="replicator">
-      <str name="threadPoolSize">1</str>
-      <str name="schedule">1000</str>
-      <str name="batchSize">1000</str>
-    </lst>
-    <lst name="updateLogSynchronizer">
-      <str name="schedule">1000</str>
-    </lst>
-  </requestHandler>
-
-  <updateHandler class="solr.DirectUpdateHandler2">
-    <updateLog class="solr.CdcrUpdateLog">
-      <str name="dir">${solr.ulog.dir:}</str>
-    </updateLog>
-  </updateHandler>
-
-  <requestHandler name="/select" class="solr.SearchHandler" />
-
-  <initParams path="/update/**,/query,/select,/tvrh,/elevate,/spell,/browse">
-    <lst name="defaults">
-      <str name="df">_text_</str>
-    </lst>
-  </initParams>
-
-  <requestHandler name="/update" class="solr.UpdateRequestHandler">
-    <lst name="defaults">
-      <str name="update.chain">cdcr-processor-chain</str>
-    </lst>
-  </requestHandler>
-</config>
\ No newline at end of file
diff --git a/solr/core/src/test-files/solr/configsets/cdcr-source-disabled/conf/schema.xml b/solr/core/src/test-files/solr/configsets/cdcr-source-disabled/conf/schema.xml
deleted file mode 100644
index 2df6c0a..0000000
--- a/solr/core/src/test-files/solr/configsets/cdcr-source-disabled/conf/schema.xml
+++ /dev/null
@@ -1,29 +0,0 @@
-<?xml version="1.0" encoding="UTF-8" ?>
-<!--
- Licensed to the Apache Software Foundation (ASF) under one or more
- contributor license agreements.  See the NOTICE file distributed with
- this work for additional information regarding copyright ownership.
- The ASF licenses this file to You under the Apache License, Version 2.0
- (the "License"); you may not use this file except in compliance with
- the License.  You may obtain a copy of the License at
-
-     http://www.apache.org/licenses/LICENSE-2.0
-
- Unless required by applicable law or agreed to in writing, software
- distributed under the License is distributed on an "AS IS" BASIS,
- WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- See the License for the specific language governing permissions and
- limitations under the License.
--->
-<schema name="minimal" version="1.1">
-  <types>
-    <fieldType name="string" class="solr.StrField"/>
-    <fieldType name="long" class="${solr.tests.LongFieldType}" docValues="${solr.tests.numeric.dv}" precisionStep="0" positionIncrementGap="0"/>
-  </types>
-  <fields>
-    <field name="id" type="string" indexed="true" stored="true"/>
-    <field name="_version_" type="long" indexed="true" stored="true"/>
-    <dynamicField name="*" type="string" indexed="true" stored="true"/>
-  </fields>
-  <uniqueKey>id</uniqueKey>
-</schema>
diff --git a/solr/core/src/test-files/solr/configsets/cdcr-source-disabled/conf/solrconfig.xml b/solr/core/src/test-files/solr/configsets/cdcr-source-disabled/conf/solrconfig.xml
deleted file mode 100644
index e63d9a6..0000000
--- a/solr/core/src/test-files/solr/configsets/cdcr-source-disabled/conf/solrconfig.xml
+++ /dev/null
@@ -1,60 +0,0 @@
-<?xml version="1.0" ?>
-
-<!--
- Licensed to the Apache Software Foundation (ASF) under one or more
- contributor license agreements.  See the NOTICE file distributed with
- this work for additional information regarding copyright ownership.
- The ASF licenses this file to You under the Apache License, Version 2.0
- (the "License"); you may not use this file except in compliance with
- the License.  You may obtain a copy of the License at
-
-     http://www.apache.org/licenses/LICENSE-2.0
-
- Unless required by applicable law or agreed to in writing, software
- distributed under the License is distributed on an "AS IS" BASIS,
- WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- See the License for the specific language governing permissions and
- limitations under the License.
--->
-
-<!-- This is a "kitchen sink" config file that tests can use.
-     When writing a new test, feel free to add *new* items (plugins,
-     config options, etc.) as long as they don't break any existing
-     tests. If you need to test something esoteric, please add a new
-     "solrconfig-your-esoteric-purpose.xml" config file.
-
-     Note in particular that this config file is used by MinimalSchemaTest, so
-     anything added to this file needs to work correctly even if there
-     is no uniqueKey or defaultSearchField.
-  -->
-
-<config>
-
-  <dataDir>${solr.data.dir:}</dataDir>
-
-  <directoryFactory name="DirectoryFactory"
-                    class="${solr.directoryFactory:solr.NRTCachingDirectoryFactory}"/>
-  <schemaFactory class="ClassicIndexSchemaFactory"/>
-
-  <luceneMatchVersion>${tests.luceneMatchVersion:LATEST}</luceneMatchVersion>
-
-  <updateHandler class="solr.DirectUpdateHandler2">
-    <commitWithin>
-      <softCommit>${solr.commitwithin.softcommit:true}</softCommit>
-    </commitWithin>
-
-    <updateLog>
-      <str name="dir">${solr.ulog.dir:}</str>
-    </updateLog>
-
-  </updateHandler>
-  <requestHandler name="/select" class="solr.SearchHandler">
-    <lst name="defaults">
-      <str name="echoParams">explicit</str>
-      <str name="indent">true</str>
-      <str name="df">text</str>
-    </lst>
-
-  </requestHandler>
-</config>
-
diff --git a/solr/core/src/test-files/solr/configsets/cdcr-source/conf/schema.xml b/solr/core/src/test-files/solr/configsets/cdcr-source/conf/schema.xml
deleted file mode 100644
index 2df6c0a..0000000
--- a/solr/core/src/test-files/solr/configsets/cdcr-source/conf/schema.xml
+++ /dev/null
@@ -1,29 +0,0 @@
-<?xml version="1.0" encoding="UTF-8" ?>
-<!--
- Licensed to the Apache Software Foundation (ASF) under one or more
- contributor license agreements.  See the NOTICE file distributed with
- this work for additional information regarding copyright ownership.
- The ASF licenses this file to You under the Apache License, Version 2.0
- (the "License"); you may not use this file except in compliance with
- the License.  You may obtain a copy of the License at
-
-     http://www.apache.org/licenses/LICENSE-2.0
-
- Unless required by applicable law or agreed to in writing, software
- distributed under the License is distributed on an "AS IS" BASIS,
- WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- See the License for the specific language governing permissions and
- limitations under the License.
--->
-<schema name="minimal" version="1.1">
-  <types>
-    <fieldType name="string" class="solr.StrField"/>
-    <fieldType name="long" class="${solr.tests.LongFieldType}" docValues="${solr.tests.numeric.dv}" precisionStep="0" positionIncrementGap="0"/>
-  </types>
-  <fields>
-    <field name="id" type="string" indexed="true" stored="true"/>
-    <field name="_version_" type="long" indexed="true" stored="true"/>
-    <dynamicField name="*" type="string" indexed="true" stored="true"/>
-  </fields>
-  <uniqueKey>id</uniqueKey>
-</schema>
diff --git a/solr/core/src/test-files/solr/configsets/cdcr-source/conf/solrconfig.xml b/solr/core/src/test-files/solr/configsets/cdcr-source/conf/solrconfig.xml
deleted file mode 100644
index 6469038..0000000
--- a/solr/core/src/test-files/solr/configsets/cdcr-source/conf/solrconfig.xml
+++ /dev/null
@@ -1,75 +0,0 @@
-<?xml version="1.0" ?>
-
-<!--
- Licensed to the Apache Software Foundation (ASF) under one or more
- contributor license agreements.  See the NOTICE file distributed with
- this work for additional information regarding copyright ownership.
- The ASF licenses this file to You under the Apache License, Version 2.0
- (the "License"); you may not use this file except in compliance with
- the License.  You may obtain a copy of the License at
-
-     http://www.apache.org/licenses/LICENSE-2.0
-
- Unless required by applicable law or agreed to in writing, software
- distributed under the License is distributed on an "AS IS" BASIS,
- WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- See the License for the specific language governing permissions and
- limitations under the License.
--->
-
-<!-- This is a "kitchen sink" config file that tests can use.
-     When writing a new test, feel free to add *new* items (plugins,
-     config options, etc.) as long as they don't break any existing
-     tests. If you need to test something esoteric, please add a new
-     "solrconfig-your-esoteric-purpose.xml" config file.
-
-     Note in particular that this config file is used by MinimalSchemaTest, so
-     anything added to this file needs to work correctly even if there
-     is no uniqueKey or defaultSearchField.
-  -->
-
-<config>
-
-  <dataDir>${solr.data.dir:}</dataDir>
-
-  <directoryFactory name="DirectoryFactory"
-                    class="${solr.directoryFactory:solr.NRTCachingDirectoryFactory}"/>
-
-  <luceneMatchVersion>${tests.luceneMatchVersion:LATEST}</luceneMatchVersion>
-
-  <updateRequestProcessorChain name="cdcr-processor-chain">
-    <processor class="solr.CdcrUpdateProcessorFactory"/>
-    <processor class="solr.RunUpdateProcessorFactory"/>
-  </updateRequestProcessorChain>
-
-  <requestHandler name="/cdcr" class="solr.CdcrRequestHandler">
-    <lst name="replica">
-      <str name="zkHost">${cdcr.target.zkHost}</str>
-      <str name="source">cdcr-source</str>
-      <str name="target">cdcr-target</str>
-    </lst>
-    <lst name="replicator">
-      <str name="threadPoolSize">1</str>
-      <str name="schedule">1000</str>
-      <str name="batchSize">1000</str>
-    </lst>
-    <lst name="updateLogSynchronizer">
-      <str name="schedule">1000</str>
-    </lst>
-  </requestHandler>
-
-  <updateHandler class="solr.DirectUpdateHandler2">
-    <updateLog class="solr.CdcrUpdateLog">
-      <str name="dir">${solr.ulog.dir:}</str>
-    </updateLog>
-  </updateHandler>
-
-  <requestHandler name="/select" class="solr.SearchHandler" />
-
-  <requestHandler name="/update" class="solr.UpdateRequestHandler">
-    <lst name="defaults">
-      <str name="update.chain">cdcr-processor-chain</str>
-    </lst>
-  </requestHandler>
-</config>
-
diff --git a/solr/core/src/test-files/solr/configsets/cdcr-target/conf/schema.xml b/solr/core/src/test-files/solr/configsets/cdcr-target/conf/schema.xml
deleted file mode 100644
index 2df6c0a..0000000
--- a/solr/core/src/test-files/solr/configsets/cdcr-target/conf/schema.xml
+++ /dev/null
@@ -1,29 +0,0 @@
-<?xml version="1.0" encoding="UTF-8" ?>
-<!--
- Licensed to the Apache Software Foundation (ASF) under one or more
- contributor license agreements.  See the NOTICE file distributed with
- this work for additional information regarding copyright ownership.
- The ASF licenses this file to You under the Apache License, Version 2.0
- (the "License"); you may not use this file except in compliance with
- the License.  You may obtain a copy of the License at
-
-     http://www.apache.org/licenses/LICENSE-2.0
-
- Unless required by applicable law or agreed to in writing, software
- distributed under the License is distributed on an "AS IS" BASIS,
- WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- See the License for the specific language governing permissions and
- limitations under the License.
--->
-<schema name="minimal" version="1.1">
-  <types>
-    <fieldType name="string" class="solr.StrField"/>
-    <fieldType name="long" class="${solr.tests.LongFieldType}" docValues="${solr.tests.numeric.dv}" precisionStep="0" positionIncrementGap="0"/>
-  </types>
-  <fields>
-    <field name="id" type="string" indexed="true" stored="true"/>
-    <field name="_version_" type="long" indexed="true" stored="true"/>
-    <dynamicField name="*" type="string" indexed="true" stored="true"/>
-  </fields>
-  <uniqueKey>id</uniqueKey>
-</schema>
diff --git a/solr/core/src/test-files/solr/configsets/cdcr-target/conf/solrconfig.xml b/solr/core/src/test-files/solr/configsets/cdcr-target/conf/solrconfig.xml
deleted file mode 100644
index bb4a774..0000000
--- a/solr/core/src/test-files/solr/configsets/cdcr-target/conf/solrconfig.xml
+++ /dev/null
@@ -1,62 +0,0 @@
-<?xml version="1.0" ?>
-
-<!--
- Licensed to the Apache Software Foundation (ASF) under one or more
- contributor license agreements.  See the NOTICE file distributed with
- this work for additional information regarding copyright ownership.
- The ASF licenses this file to You under the Apache License, Version 2.0
- (the "License"); you may not use this file except in compliance with
- the License.  You may obtain a copy of the License at
-
-     http://www.apache.org/licenses/LICENSE-2.0
-
- Unless required by applicable law or agreed to in writing, software
- distributed under the License is distributed on an "AS IS" BASIS,
- WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- See the License for the specific language governing permissions and
- limitations under the License.
--->
-
-<!-- This is a "kitchen sink" config file that tests can use.
-     When writing a new test, feel free to add *new* items (plugins,
-     config options, etc.) as long as they don't break any existing
-     tests. If you need to test something esoteric, please add a new
-     "solrconfig-your-esoteric-purpose.xml" config file.
-
-     Note in particular that this config file is used by MinimalSchemaTest, so
-     anything added to this file needs to work correctly even if there
-     is no uniqueKey or defaultSearchField.
-  -->
-
-<config>
-
-  <dataDir>${solr.data.dir:}</dataDir>
-
-  <directoryFactory name="DirectoryFactory"
-                    class="${solr.directoryFactory:solr.NRTCachingDirectoryFactory}"/>
-
-  <luceneMatchVersion>${tests.luceneMatchVersion:LATEST}</luceneMatchVersion>
-
-  <updateRequestProcessorChain name="cdcr-processor-chain">
-    <processor class="solr.CdcrUpdateProcessorFactory"/>
-    <processor class="solr.RunUpdateProcessorFactory"/>
-  </updateRequestProcessorChain>
-
-  <requestHandler name="/cdcr" class="solr.CdcrRequestHandler">
-  </requestHandler>
-
-  <updateHandler class="solr.DirectUpdateHandler2">
-    <updateLog class="solr.CdcrUpdateLog">
-      <str name="dir">${solr.ulog.dir:}</str>
-    </updateLog>
-  </updateHandler>
-
-  <requestHandler name="/select" class="solr.SearchHandler" />
-
-  <requestHandler name="/update" class="solr.UpdateRequestHandler">
-    <lst name="defaults">
-      <str name="update.chain">cdcr-processor-chain</str>
-    </lst>
-  </requestHandler>
-</config>
-
diff --git a/solr/core/src/test-files/solr/configsets/upload/dih-script-transformer/managed-schema b/solr/core/src/test-files/solr/configsets/upload/dih-script-transformer/managed-schema
deleted file mode 100644
index 9e2f947..0000000
--- a/solr/core/src/test-files/solr/configsets/upload/dih-script-transformer/managed-schema
+++ /dev/null
@@ -1,25 +0,0 @@
-<?xml version="1.0" encoding="UTF-8" ?>
-<!--
- Licensed to the Apache Software Foundation (ASF) under one or more
- contributor license agreements.  See the NOTICE file distributed with
- this work for additional information regarding copyright ownership.
- The ASF licenses this file to You under the Apache License, Version 2.0
- (the "License"); you may not use this file except in compliance with
- the License.  You may obtain a copy of the License at
-
-     http://www.apache.org/licenses/LICENSE-2.0
-
- Unless required by applicable law or agreed to in writing, software
- distributed under the License is distributed on an "AS IS" BASIS,
- WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- See the License for the specific language governing permissions and
- limitations under the License.
--->
-<schema name="minimal" version="1.1">
- <types>
-  <fieldType name="string" class="solr.StrField"/>
- </types>
- <fields>
-   <dynamicField name="*" type="string" indexed="true" stored="true" />
- </fields>
-</schema>
diff --git a/solr/core/src/test-files/solr/configsets/upload/dih-script-transformer/solrconfig.xml b/solr/core/src/test-files/solr/configsets/upload/dih-script-transformer/solrconfig.xml
deleted file mode 100644
index 82d0cc9..0000000
--- a/solr/core/src/test-files/solr/configsets/upload/dih-script-transformer/solrconfig.xml
+++ /dev/null
@@ -1,61 +0,0 @@
-<?xml version="1.0" ?>
-
-<!--
- Licensed to the Apache Software Foundation (ASF) under one or more
- contributor license agreements.  See the NOTICE file distributed with
- this work for additional information regarding copyright ownership.
- The ASF licenses this file to You under the Apache License, Version 2.0
- (the "License"); you may not use this file except in compliance with
- the License.  You may obtain a copy of the License at
-
-     http://www.apache.org/licenses/LICENSE-2.0
-
- Unless required by applicable law or agreed to in writing, software
- distributed under the License is distributed on an "AS IS" BASIS,
- WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- See the License for the specific language governing permissions and
- limitations under the License.
--->
-
-<!-- This is a "kitchen sink" config file that tests can use.
-     When writing a new test, feel free to add *new* items (plugins,
-     config options, etc.) as long as they don't break any existing
-     tests. If you need to test something esoteric, please add a new
-     "solrconfig-your-esoteric-purpose.xml" config file.
-
-     Note in particular that this config file is used by MinimalSchemaTest, so
-     anything added to this file needs to work correctly even if there
-     is no uniqueKey or defaultSearchField.
-  -->
-
-<config>
-
-  <dataDir>${solr.data.dir:}</dataDir>
-
-  <directoryFactory name="DirectoryFactory"
-                    class="${solr.directoryFactory:solr.NRTCachingDirectoryFactory}"/>
-
-  <luceneMatchVersion>${tests.luceneMatchVersion:LATEST}</luceneMatchVersion>
-
-  <updateHandler class="solr.DirectUpdateHandler2">
-    <commitWithin>
-      <softCommit>${solr.commitwithin.softcommit:true}</softCommit>
-    </commitWithin>
-
-  </updateHandler>
-  <requestHandler name="/select" class="solr.SearchHandler">
-    <lst name="defaults">
-      <str name="echoParams">explicit</str>
-      <str name="indent">true</str>
-      <str name="df">text</str>
-    </lst>
-
-  </requestHandler>
-
-  <requestHandler name="/update/xslt"
-                   startup="lazy"
-                   class="solr.XsltUpdateRequestHandler"/>
-
-  <requestHandler name="/update" class="solr.UpdateRequestHandler"  />
-</config>
-
diff --git a/solr/core/src/test/org/apache/solr/analysis/TestDeprecatedFilters.java b/solr/core/src/test/org/apache/solr/analysis/TestDeprecatedFilters.java
index 120fda1..fea1ca8 100644
--- a/solr/core/src/test/org/apache/solr/analysis/TestDeprecatedFilters.java
+++ b/solr/core/src/test/org/apache/solr/analysis/TestDeprecatedFilters.java
@@ -24,7 +24,7 @@
 
   @BeforeClass
   public static void beforeClass() throws Exception {
-    initCore("solrconfig-master.xml","schema-deprecations.xml");
+    initCore("solrconfig-leader.xml","schema-deprecations.xml");
   }
 
   public void testLowerCaseTokenizer() {
diff --git a/solr/core/src/test/org/apache/solr/client/solrj/embedded/TestEmbeddedSolrServerAdminHandler.java b/solr/core/src/test/org/apache/solr/client/solrj/embedded/TestEmbeddedSolrServerAdminHandler.java
index b5faba2..19d6dbe 100644
--- a/solr/core/src/test/org/apache/solr/client/solrj/embedded/TestEmbeddedSolrServerAdminHandler.java
+++ b/solr/core/src/test/org/apache/solr/client/solrj/embedded/TestEmbeddedSolrServerAdminHandler.java
@@ -61,6 +61,11 @@
         protected QueryResponse createResponse(final SolrClient client) {
             return new QueryResponse();
         }
+
+        @Override
+        public String getRequestType() {
+            return SolrRequest.SolrRequestType.ADMIN.toString();
+        }
     }
 
 }
diff --git a/solr/core/src/test/org/apache/solr/cloud/TestConfigSetsAPI.java b/solr/core/src/test/org/apache/solr/cloud/TestConfigSetsAPI.java
index 86742fa..a3c8a55 100644
--- a/solr/core/src/test/org/apache/solr/cloud/TestConfigSetsAPI.java
+++ b/solr/core/src/test/org/apache/solr/cloud/TestConfigSetsAPI.java
@@ -77,7 +77,7 @@
 import org.apache.solr.common.util.NamedList;
 import org.apache.solr.common.util.Utils;
 import org.apache.solr.core.ConfigSetProperties;
-import org.apache.solr.core.TestDynamicLoading;
+import org.apache.solr.core.TestSolrConfigHandler;
 import org.apache.solr.security.BasicAuthIntegrationTest;
 import org.apache.solr.servlet.SolrDispatchFilter;
 import org.apache.solr.util.ExternalPaths;
@@ -485,7 +485,7 @@
   private long uploadConfigSet(String configSetName, String suffix, String username, String password,
       SolrZkClient zkClient) throws IOException {
     // Read zipped sample config
-    ByteBuffer sampleZippedConfig = TestDynamicLoading
+    ByteBuffer sampleZippedConfig = TestSolrConfigHandler
         .getFileContent(
             createTempZipFile("solr/configsets/upload/"+configSetName), false);
 
@@ -509,7 +509,7 @@
     File zipFile = new File(solrCluster.getBaseDir().toFile().getAbsolutePath() +
         File.separator + TestUtil.randomSimpleString(random(), 6, 8) + ".zip");
 
-    File directory = TestDynamicLoading.getFile(directoryPath);
+    File directory = SolrTestCaseJ4.getFile(directoryPath);
     if (log.isInfoEnabled()) {
       log.info("Directory: {}", directory.getAbsolutePath());
     }
diff --git a/solr/core/src/test/org/apache/solr/cloud/TestCryptoKeys.java b/solr/core/src/test/org/apache/solr/cloud/TestCryptoKeys.java
deleted file mode 100644
index 321e208..0000000
--- a/solr/core/src/test/org/apache/solr/cloud/TestCryptoKeys.java
+++ /dev/null
@@ -1,209 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.solr.cloud;
-
-import java.io.FileInputStream;
-import java.io.IOException;
-import java.nio.ByteBuffer;
-import java.util.Arrays;
-import java.util.Map;
-
-import org.apache.solr.client.solrj.impl.HttpSolrClient;
-import org.apache.solr.common.LinkedHashMapWriter;
-import org.apache.solr.common.cloud.SolrZkClient;
-import org.apache.solr.common.util.Utils;
-import org.apache.solr.core.MemClassLoader;
-import org.apache.solr.core.TestDynamicLoading;
-import org.apache.solr.core.TestSolrConfigHandler;
-import org.apache.solr.handler.TestBlobHandler;
-import org.apache.solr.util.CryptoKeys;
-import org.apache.solr.util.RestTestHarness;
-import org.apache.zookeeper.CreateMode;
-import org.junit.Test;
-
-import static java.util.Arrays.asList;
-import static org.apache.solr.handler.TestSolrConfigHandlerCloud.compareValues;
-
-public class TestCryptoKeys extends AbstractFullDistribZkTestBase {
-
-  public TestCryptoKeys() {
-    super();
-    sliceCount = 1;
-  }
-
-  @Test
-  public void test() throws Exception {
-    System.setProperty("enable.runtime.lib", "true");
-    setupRestTestHarnesses();
-    String pk1sig = "G8LEW7uJ1is81Aqqfl3Sld3qDtOxPuVFeTLJHFJWecgDvUkmJNFXmf7nkHOVlXnDWahp1vqZf0W02VHXg37lBw==";
-    String pk2sig = "pCyBQycB/0YvLVZfKLDIIqG1tFwM/awqzkp2QNpO7R3ThTqmmrj11wEJFDRLkY79efuFuQPHt40EE7jrOKoj9jLNELsfEqvU3jw9sZKiDONY+rV9Bj9QPeW8Pgt+F9Y1";
-    String wrongKeySig = "xTk2hTipfpb+J5s4x3YZGOXkmHWtnJz05Vvd8RTm/Q1fbQVszR7vMk6dQ1URxX08fcg4HvxOo8g9bG2TSMOGjg==";
-    String result = null;
-    CryptoKeys cryptoKeys = null;
-    SolrZkClient zk = getCommonCloudSolrClient().getZkStateReader().getZkClient();
-    cryptoKeys = new CryptoKeys(CloudUtil.getTrustedKeys(zk, "exe"));
-    ByteBuffer samplefile = ByteBuffer.wrap(readFile("cryptokeys/samplefile.bin"));
-    //there are no keys yet created in ZK
-
-    result = cryptoKeys.verify( pk1sig,samplefile);
-    assertNull(result);
-
-    zk.makePath("/keys/exe", true);
-    zk.create("/keys/exe/pubk1.der", readFile("cryptokeys/pubk1.der"), CreateMode.PERSISTENT, true);
-    zk.create("/keys/exe/pubk2.der", readFile("cryptokeys/pubk2.der"), CreateMode.PERSISTENT, true);
-    Map<String, byte[]> trustedKeys = CloudUtil.getTrustedKeys(zk, "exe");
-
-    cryptoKeys = new CryptoKeys(trustedKeys);
-    result = cryptoKeys.verify(pk2sig, samplefile);
-    assertEquals("pubk2.der", result);
-
-
-    result = cryptoKeys.verify(pk1sig, samplefile);
-    assertEquals("pubk1.der", result);
-
-    try {
-      result = cryptoKeys.verify(wrongKeySig,samplefile);
-      assertNull(result);
-    } catch (Exception e) {
-      //pass
-    }
-    try {
-      result = cryptoKeys.verify( "SGVsbG8gV29ybGQhCg==", samplefile);
-      assertNull(result);
-    } catch (Exception e) {
-      //pass
-    }
-
-
-    HttpSolrClient randomClient = (HttpSolrClient) clients.get(random().nextInt(clients.size()));
-    String baseURL = randomClient.getBaseURL();
-    baseURL = baseURL.substring(0, baseURL.lastIndexOf('/'));
-
-    TestBlobHandler.createSystemCollection(getHttpSolrClient(baseURL, randomClient.getHttpClient()));
-    waitForRecoveriesToFinish(".system", true);
-
-    ByteBuffer jar = TestDynamicLoading.getFileContent("runtimecode/runtimelibs.jar.bin");
-    String blobName = "signedjar";
-    TestBlobHandler.postAndCheck(cloudClient, baseURL, blobName, jar, 1);
-
-    String payload = "{\n" +
-        "'create-requesthandler' : { 'name' : '/runtime', 'class': 'org.apache.solr.core.RuntimeLibReqHandler' , 'runtimeLib':true }" +
-        "}";
-    RestTestHarness client = randomRestTestHarness();
-    TestSolrConfigHandler.runConfigCommand(client, "/config", payload);
-
-    TestSolrConfigHandler.testForResponseElement(client,
-        null,
-        "/config/overlay",
-        null,
-        Arrays.asList("overlay", "requestHandler", "/runtime", "class"),
-        "org.apache.solr.core.RuntimeLibReqHandler", 10);
-
-
-    payload = "{\n" +
-        "'add-runtimelib' : { 'name' : 'signedjar' ,'version':1}\n" +
-        "}";
-    client = randomRestTestHarness();
-    TestSolrConfigHandler.runConfigCommand(client, "/config", payload);
-    TestSolrConfigHandler.testForResponseElement(client,
-        null,
-        "/config/overlay",
-        null,
-        Arrays.asList("overlay", "runtimeLib", blobName, "version"),
-        1l, 10);
-
-    @SuppressWarnings({"rawtypes"})
-    LinkedHashMapWriter map = TestSolrConfigHandler.getRespMap("/runtime", client);
-    String s = map._getStr( "error/msg",null);
-    assertNotNull(map.toString(), s);
-    assertTrue(map.toString(), s.contains("should be signed with one of the keys in ZK /keys/exe"));
-
-    String wrongSig = "QKqHtd37QN02iMW9UEgvAO9g9qOOuG5vEBNkbUsN7noc2hhXKic/ABFIOYJA9PKw61mNX2EmNFXOcO3WClYdSw==";
-
-    payload = "{\n" +
-        "'update-runtimelib' : { 'name' : 'signedjar' ,'version':1, 'sig': 'QKqHtd37QN02iMW9UEgvAO9g9qOOuG5vEBNkbUsN7noc2hhXKic/ABFIOYJA9PKw61mNX2EmNFXOcO3WClYdSw=='}\n" +
-        "}";
-    client = randomRestTestHarness();
-    TestSolrConfigHandler.runConfigCommand(client, "/config", payload);
-    TestSolrConfigHandler.testForResponseElement(client,
-        null,
-        "/config/overlay",
-        null,
-        Arrays.asList("overlay", "runtimeLib", blobName, "sig"),
-        wrongSig, 10);
-
-    map = TestSolrConfigHandler.getRespMap("/runtime", client);
-    s = (String) Utils.getObjectByPath(map, false, Arrays.asList("error", "msg"));
-    assertNotNull(map.toString(), s);//No key matched signature for jar
-    assertTrue(map.toString(), s.contains("No key matched signature for jar"));
-
-    String rightSig = "YkTQgOtvcM/H/5EQdABGl3wjjrPhonAGlouIx59vppBy2cZEofX3qX1yZu5sPNRmJisNXEuhHN2149dxeUmk2Q==";
-
-    payload = "{\n" +
-        "'update-runtimelib' : { 'name' : 'signedjar' ,'version':1, 'sig': 'YkTQgOtvcM/H/5EQdABGl3wjjrPhonAGlouIx59vppBy2cZEofX3qX1yZu5sPNRmJisNXEuhHN2149dxeUmk2Q=='}\n" +
-        "}";
-    client = randomRestTestHarness();
-    TestSolrConfigHandler.runConfigCommand(client, "/config", payload);
-    TestSolrConfigHandler.testForResponseElement(client,
-        null,
-        "/config/overlay",
-        null,
-        Arrays.asList("overlay", "runtimeLib", blobName, "sig"),
-        rightSig, 10);
-
-    map = TestSolrConfigHandler.testForResponseElement(client,
-        null,
-        "/runtime",
-        null,
-        Arrays.asList("class"),
-        "org.apache.solr.core.RuntimeLibReqHandler", 10);
-    compareValues(map, MemClassLoader.class.getName(), asList("loader"));
-
-    rightSig = "VJPMTxDf8Km3IBj2B5HWkIOqeM/o+HHNobOYCNA3WjrEVfOMZbMMqS1Lo7uLUUp//RZwOGkOhrUhuPNY1z2CGEIKX2/m8VGH64L14d52oSvFiwhoTDDuuyjW1TFGu35D";
-    payload = "{\n" +
-        "'update-runtimelib' : { 'name' : 'signedjar' ,'version':1, 'sig': 'VJPMTxDf8Km3IBj2B5HWkIOqeM/o+HHNobOYCNA3WjrEVfOMZbMMqS1Lo7uLUUp//RZwOGkOhrUhuPNY1z2CGEIKX2/m8VGH64L14d52oSvFiwhoTDDuuyjW1TFGu35D'}\n" +
-        "}";
-    client = randomRestTestHarness();
-    TestSolrConfigHandler.runConfigCommand(client, "/config", payload);
-    TestSolrConfigHandler.testForResponseElement(client,
-        null,
-        "/config/overlay",
-        null,
-        Arrays.asList("overlay", "runtimeLib", blobName, "sig"),
-        rightSig, 10);
-
-    map = TestSolrConfigHandler.testForResponseElement(client,
-        null,
-        "/runtime",
-        null,
-        Arrays.asList("class"),
-        "org.apache.solr.core.RuntimeLibReqHandler", 10);
-    compareValues(map, MemClassLoader.class.getName(), asList("loader"));
-  }
-
-
-  private byte[] readFile(String fname) throws IOException {
-    byte[] buf = null;
-    try (FileInputStream fis = new FileInputStream(getFile(fname))) {
-      buf = new byte[fis.available()];
-      fis.read(buf);
-    }
-    return buf;
-  }
-
-
-}
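For reference, the key-verification flow this deleted test covered boils down to loading the trusted public keys from the /keys/exe znode and asking CryptoKeys which key, if any, signed the payload. A minimal sketch using only the calls the test made; the class and method names are illustrative.

    import java.nio.ByteBuffer;
    import java.util.Map;

    import org.apache.solr.cloud.CloudUtil;
    import org.apache.solr.common.cloud.SolrZkClient;
    import org.apache.solr.util.CryptoKeys;

    public class SignatureCheckSketch {
      // Returns the name of the trusted key file under /keys/exe that
      // matches the signature, or null when no key verifies the payload.
      static String verify(SolrZkClient zk, String base64Sig, ByteBuffer payload) throws Exception {
        Map<String, byte[]> trustedKeys = CloudUtil.getTrustedKeys(zk, "exe");
        return new CryptoKeys(trustedKeys).verify(base64Sig, payload);
      }
    }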
diff --git a/solr/core/src/test/org/apache/solr/cloud/TestLazySolrCluster.java b/solr/core/src/test/org/apache/solr/cloud/TestLazySolrCluster.java
new file mode 100644
index 0000000..0414f6c
--- /dev/null
+++ b/solr/core/src/test/org/apache/solr/cloud/TestLazySolrCluster.java
@@ -0,0 +1,97 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.solr.cloud;
+
+import org.apache.solr.client.solrj.impl.CloudSolrClient;
+import org.apache.solr.client.solrj.request.CollectionAdminRequest;
+import org.apache.solr.cluster.api.CollectionConfig;
+import org.apache.solr.cluster.api.SimpleMap;
+import org.apache.solr.cluster.api.SolrCollection;
+import org.apache.solr.common.LazySolrCluster;
+import org.apache.solr.common.cloud.SolrZkClient;
+import org.apache.solr.common.cloud.ZkStateReader;
+import org.apache.zookeeper.CreateMode;
+import org.junit.BeforeClass;
+
+import java.util.ArrayList;
+import java.util.List;
+
+
+public class TestLazySolrCluster extends SolrCloudTestCase {
+    @BeforeClass
+    public static void setupCluster() throws Exception {
+        configureCluster(5)
+                .addConfig("conf1", TEST_PATH().resolve("configsets").resolve("cloud-minimal").resolve("conf"))
+                .configure();
+    }
+
+    public void test() throws Exception {
+        CloudSolrClient cloudClient = cluster.getSolrClient();
+        String collection = "testLazyCluster1";
+        cloudClient.request(CollectionAdminRequest.createCollection(collection, "conf1", 2, 2));
+        cluster.waitForActiveCollection(collection, 2, 4);
+        collection = "testLazyCluster2";
+        cloudClient.request(CollectionAdminRequest.createCollection(collection, "conf1", 2, 2));
+        cluster.waitForActiveCollection(collection, 2, 4);
+
+        LazySolrCluster solrCluster = new LazySolrCluster(cluster.getSolrClient().getZkStateReader());
+        SimpleMap<SolrCollection> colls = solrCluster.collections();
+
+        SolrCollection c = colls.get("testLazyCluster1");
+        assertNotNull(c);
+        c = colls.get("testLazyCluster2");
+        assertNotNull(c);
+        int[] count = new int[1];
+        solrCluster.collections().forEachEntry((s, solrCollection) -> count[0]++);
+        assertEquals(2, count[0]);
+
+        count[0] = 0;
+
+        assertEquals(2, solrCluster.collections().get("testLazyCluster1").shards().size());
+        solrCluster.collections().get("testLazyCluster1").shards()
+                .forEachEntry((s, shard) -> shard.replicas().forEachEntry((s1, replica) -> count[0]++));
+        assertEquals(4, count[0]);
+
+        assertEquals(5, solrCluster.nodes().size());
+        SolrZkClient zkClient = cloudClient.getZkStateReader().getZkClient();
+        zkClient.create(ZkStateReader.CONFIGS_ZKNODE + "/conf1/a", null, CreateMode.PERSISTENT, true);
+        zkClient.create(ZkStateReader.CONFIGS_ZKNODE + "/conf1/a/aa1", new byte[1024], CreateMode.PERSISTENT, true);
+        zkClient.create(ZkStateReader.CONFIGS_ZKNODE + "/conf1/a/aa2", new byte[1024 * 2], CreateMode.PERSISTENT, true);
+
+        List<String> allFiles =  new ArrayList<>();
+        byte[] buf = new byte[3*1024];
+        CollectionConfig conf1 = solrCluster.configs().get("conf1");
+        conf1.resources().abortableForEach((s, resource) -> {
+            allFiles.add(s);
+            if("a/aa1".equals(s)) {
+                resource.get(is -> assertEquals(1024,  is.read(buf)));
+            }
+            if("a/aa2".equals(s)) {
+                resource.get(is -> assertEquals(2*1024,  is.read(buf)));
+            }
+            if("a".equals(s)) {
+                resource.get(is -> assertEquals(-1, is.read()));
+            }
+            return Boolean.TRUE;
+        });
+        assertEquals(5, allFiles.size());
+
+    }
+
+
+}
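The new test above exercises the lazy cluster-state API: collections(), shards(), replicas(), configs() and resources() are all SimpleMap views that read from ZooKeeper only on first access. A minimal traversal sketch restricted to the methods the test uses; the class and method names are illustrative.

    import org.apache.solr.common.LazySolrCluster;
    import org.apache.solr.common.cloud.ZkStateReader;

    public class LazyClusterTraversalSketch {
      // Counts replicas by walking collections -> shards -> replicas;
      // each level is fetched lazily when it is first touched.
      static int countReplicas(ZkStateReader zkStateReader) {
        LazySolrCluster cluster = new LazySolrCluster(zkStateReader);
        int[] count = new int[1];
        cluster.collections().forEachEntry((name, coll) ->
            coll.shards().forEachEntry((shardName, shard) ->
                shard.replicas().forEachEntry((replicaName, replica) -> count[0]++)));
        return count[0];
      }
    }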
diff --git a/solr/core/src/test/org/apache/solr/cloud/api/collections/SimpleCollectionCreateDeleteTest.java b/solr/core/src/test/org/apache/solr/cloud/api/collections/SimpleCollectionCreateDeleteTest.java
index 54b0368..8cd835f 100644
--- a/solr/core/src/test/org/apache/solr/cloud/api/collections/SimpleCollectionCreateDeleteTest.java
+++ b/solr/core/src/test/org/apache/solr/cloud/api/collections/SimpleCollectionCreateDeleteTest.java
@@ -16,10 +16,6 @@
  */
 package org.apache.solr.cloud.api.collections;
 
-import java.util.Collection;
-import java.util.concurrent.TimeUnit;
-import java.util.concurrent.TimeoutException;
-
 import org.apache.solr.client.solrj.embedded.JettySolrRunner;
 import org.apache.solr.client.solrj.request.CollectionAdminRequest;
 import org.apache.solr.cloud.AbstractFullDistribZkTestBase;
@@ -32,69 +28,151 @@
 import org.apache.solr.util.TimeOut;
 import org.junit.Test;
 
+import java.util.Collection;
+import java.util.concurrent.TimeUnit;
+import java.util.concurrent.TimeoutException;
+
 public class SimpleCollectionCreateDeleteTest extends AbstractFullDistribZkTestBase {
 
-  public SimpleCollectionCreateDeleteTest() {
-    sliceCount = 1;
-  }
-
-  @Test
-  @ShardsFixed(num = 1)
-  public void test() throws Exception {
-    String overseerNode = OverseerCollectionConfigSetProcessor.getLeaderNode(cloudClient.getZkStateReader().getZkClient());
-    String notOverseerNode = null;
-    for (CloudJettyRunner cloudJetty : cloudJettys) {
-      if (!overseerNode.equals(cloudJetty.nodeName)) {
-        notOverseerNode = cloudJetty.nodeName;
-        break;
-      }
+    public SimpleCollectionCreateDeleteTest() {
+        sliceCount = 1;
     }
-    String collectionName = "SimpleCollectionCreateDeleteTest";
-    CollectionAdminRequest.Create create = CollectionAdminRequest.createCollection(collectionName,1,1)
-            .setCreateNodeSet(overseerNode);
 
-    NamedList<Object> request = create.process(cloudClient).getResponse();
-
-    if (request.get("success") != null) {
-      assertTrue(cloudClient.getZkStateReader().getZkClient().exists(ZkStateReader.COLLECTIONS_ZKNODE + "/" + collectionName, false));
-
-      @SuppressWarnings({"rawtypes"})
-      CollectionAdminRequest delete = CollectionAdminRequest.deleteCollection(collectionName);
-      cloudClient.request(delete);
-
-      assertFalse(cloudClient.getZkStateReader().getZkClient().exists(ZkStateReader.COLLECTIONS_ZKNODE + "/" + collectionName, false));
-      
-      // currently, removing a collection does not wait for cores to be unloaded
-      TimeOut timeout = new TimeOut(30, TimeUnit.SECONDS, TimeSource.NANO_TIME);
-      while (true) {
-        
-        if( timeout.hasTimedOut() ) {
-          throw new TimeoutException("Timed out waiting for all collections to be fully removed.");
-        }
-        
-        boolean allContainersEmpty = true;
-        for(JettySolrRunner jetty : jettys) {
-          
-          Collection<SolrCore> cores = jetty.getCoreContainer().getCores();
-          for (SolrCore core : cores) {
-            CoreDescriptor cd = core.getCoreDescriptor();
-            if (cd != null) {
-              if (cd.getCloudDescriptor().getCollectionName().equals(collectionName)) {
-                allContainersEmpty = false;
-              }
+    @Test
+    @ShardsFixed(num = 1)
+    public void testCreateAndDeleteThenCreateAgain() throws Exception {
+        String overseerNode = OverseerCollectionConfigSetProcessor.getLeaderNode(cloudClient.getZkStateReader().getZkClient());
+        String notOverseerNode = null;
+        for (CloudJettyRunner cloudJetty : cloudJettys) {
+            if (!overseerNode.equals(cloudJetty.nodeName)) {
+                notOverseerNode = cloudJetty.nodeName;
+                break;
             }
-          }
         }
-        if (allContainersEmpty) {
-          break;
-        }
-      }
+        String collectionName = "SimpleCollectionCreateDeleteTest";
+        CollectionAdminRequest.Create create = CollectionAdminRequest.createCollection(collectionName, 1, 1)
+                .setCreateNodeSet(overseerNode);
 
-      // create collection again on a node other than the overseer leader
-      create = CollectionAdminRequest.createCollection(collectionName,1,1)
-              .setCreateNodeSet(notOverseerNode);
-      request = create.process(cloudClient).getResponse();
-      assertTrue("Collection creation should not have failed", request.get("success") != null);
+        NamedList<Object> request = create.process(cloudClient).getResponse();
+
+        if (request.get("success") != null) {
+            assertTrue(cloudClient.getZkStateReader().getZkClient().exists(ZkStateReader.COLLECTIONS_ZKNODE + "/" + collectionName, false));
+
+            CollectionAdminRequest.Delete delete = CollectionAdminRequest.deleteCollection(collectionName);
+            cloudClient.request(delete);
+
+            assertFalse(cloudClient.getZkStateReader().getZkClient().exists(ZkStateReader.COLLECTIONS_ZKNODE + "/" + collectionName, false));
+
+            // currently, removing a collection does not wait for cores to be unloaded
+            TimeOut timeout = new TimeOut(30, TimeUnit.SECONDS, TimeSource.NANO_TIME);
+            while (true) {
+
+                if (timeout.hasTimedOut()) {
+                    throw new TimeoutException("Timed out waiting for all collections to be fully removed.");
+                }
+
+                boolean allContainersEmpty = true;
+                for (JettySolrRunner jetty : jettys) {
+
+                    Collection<SolrCore> cores = jetty.getCoreContainer().getCores();
+                    for (SolrCore core : cores) {
+                        CoreDescriptor cd = core.getCoreDescriptor();
+                        if (cd != null) {
+                            if (cd.getCloudDescriptor().getCollectionName().equals(collectionName)) {
+                                allContainersEmpty = false;
+                            }
+                        }
+                    }
+                }
+                if (allContainersEmpty) {
+                    break;
+                }
+            }
+
+            // create collection again on a node other than the overseer leader
+            create = CollectionAdminRequest.createCollection(collectionName, 1, 1)
+                    .setCreateNodeSet(notOverseerNode);
+            request = create.process(cloudClient).getResponse();
+            assertTrue("Collection creation should not have failed", request.get("success") != null);
+        }
     }
-  }
+
+    @Test
+    @ShardsFixed(num = 1)
+    public void testDeleteAlsoDeletesAutocreatedConfigSet() throws Exception {
+        String collectionName = "SimpleCollectionCreateDeleteTest.testDeleteAlsoDeletesAutocreatedConfigSet";
+        CollectionAdminRequest.Create create = CollectionAdminRequest.createCollection(collectionName, 1, 1);
+
+        NamedList<Object> request = create.process(cloudClient).getResponse();
+
+        if (request.get("success") != null) {
+            // collection exists now
+            assertTrue(cloudClient.getZkStateReader().getZkClient().exists(ZkStateReader.COLLECTIONS_ZKNODE + "/" + collectionName, false));
+
+            String configName = cloudClient.getZkStateReader().readConfigName(collectionName);
+
+            // config for this collection is '.AUTOCREATED', and exists globally
+            assertTrue(configName.endsWith(".AUTOCREATED"));
+            assertTrue(cloudClient.getZkStateReader().getConfigManager().listConfigs().contains(configName));
+
+            CollectionAdminRequest.Delete delete = CollectionAdminRequest.deleteCollection(collectionName);
+            cloudClient.request(delete);
+
+            // collection has been deleted
+            assertFalse(cloudClient.getZkStateReader().getZkClient().exists(ZkStateReader.COLLECTIONS_ZKNODE + "/" + collectionName, false));
+            // ... and so has its autocreated config set
+            assertFalse("The auto-created config set should have been deleted with its collection", cloudClient.getZkStateReader().getConfigManager().listConfigs().contains(configName));
+        }
+    }
+
+    @Test
+    @ShardsFixed(num = 1)
+    public void testDeleteDoesNotDeleteSharedAutocreatedConfigSet() throws Exception {
+        String collectionNameInitial = "SimpleCollectionCreateDeleteTest.initialCollection";
+        CollectionAdminRequest.Create createInitial = CollectionAdminRequest.createCollection(collectionNameInitial, 1, 1);
+
+        NamedList<Object> requestInitial = createInitial.process(cloudClient).getResponse();
+
+        if (requestInitial.get("success") != null) {
+            // collection exists now
+            assertTrue(cloudClient.getZkStateReader().getZkClient().exists(ZkStateReader.COLLECTIONS_ZKNODE + "/" + collectionNameInitial, false));
+
+            String configName = cloudClient.getZkStateReader().readConfigName(collectionNameInitial);
+
+            // config for this collection is '.AUTOCREATED', and exists globally
+            assertTrue(configName.endsWith(".AUTOCREATED"));
+            assertTrue(cloudClient.getZkStateReader().getConfigManager().listConfigs().contains(configName));
+
+            // create a second collection, sharing the same configSet
+            String collectionNameWithSharedConfig = "SimpleCollectionCreateDeleteTest.collectionSharingAutocreatedConfigSet";
+            CollectionAdminRequest.Create createWithSharedConfig = CollectionAdminRequest.createCollection(collectionNameWithSharedConfig, configName, 1, 1);
+
+            NamedList<Object> requestWithSharedConfig = createWithSharedConfig.process(cloudClient).getResponse();
+            assertTrue("The collection with shared config set should have been created", requestWithSharedConfig.get("success") != null);
+            assertTrue("The new collection should exist after a successful creation", cloudClient.getZkStateReader().getZkClient().exists(ZkStateReader.COLLECTIONS_ZKNODE + "/" + collectionNameWithSharedConfig, false));
+
+            String configNameOfSecondCollection = cloudClient.getZkStateReader().readConfigName(collectionNameWithSharedConfig);
+
+            assertEquals("Both collections should be using the same config", configName, configNameOfSecondCollection);
+
+            // delete the initial collection - the config set should stay, since it is shared with the other collection
+            CollectionAdminRequest.Delete deleteInitialCollection = CollectionAdminRequest.deleteCollection(collectionNameInitial);
+            cloudClient.request(deleteInitialCollection);
+
+            // initial collection has been deleted
+            assertFalse(cloudClient.getZkStateReader().getZkClient().exists(ZkStateReader.COLLECTIONS_ZKNODE + "/" + collectionNameInitial, false));
+            // ... but not its autocreated config set, since it is shared with another collection
+            assertTrue("The auto-created config set should NOT have been deleted. Another collection is using it.", cloudClient.getZkStateReader().getConfigManager().listConfigs().contains(configName));
+
+            // delete the second collection - the config set should now be deleted, since it is no longer shared with any other collection
+            CollectionAdminRequest.Delete deleteSecondCollection = CollectionAdminRequest.deleteCollection(collectionNameWithSharedConfig);
+            cloudClient.request(deleteSecondCollection);
+
+            // the collection has been deleted
+            assertFalse(cloudClient.getZkStateReader().getZkClient().exists(ZkStateReader.COLLECTIONS_ZKNODE + "/" + collectionNameWithSharedConfig, false));
+            // ... and the config set is now also deleted, since it is no longer referenced by any collection
+            assertFalse("The auto-created config set should have been deleted now. No collection is referencing it.", cloudClient.getZkStateReader().getConfigManager().listConfigs().contains(configName));
+        }
+    }
+
 }
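The two new assertions above hinge on one rule: a ".AUTOCREATED" config set is removed together with its collection only when no other collection still references it. A small hypothetical helper, grounded in the same ZkStateReader calls the tests use, makes the precondition explicit:

    import org.apache.solr.client.solrj.impl.CloudSolrClient;

    public class AutocreatedConfigSketch {
      // True when the collection runs on an auto-created config set that
      // still exists globally; such a set is deleted with the collection
      // unless another collection shares it.
      static boolean usesAutocreatedConfig(CloudSolrClient client, String collection) throws Exception {
        String configName = client.getZkStateReader().readConfigName(collection);
        return configName.endsWith(".AUTOCREATED")
            && client.getZkStateReader().getConfigManager().listConfigs().contains(configName);
      }
    }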
diff --git a/solr/core/src/test/org/apache/solr/cloud/cdcr/BaseCdcrDistributedZkTest.java b/solr/core/src/test/org/apache/solr/cloud/cdcr/BaseCdcrDistributedZkTest.java
deleted file mode 100644
index aa35de1..0000000
--- a/solr/core/src/test/org/apache/solr/cloud/cdcr/BaseCdcrDistributedZkTest.java
+++ /dev/null
@@ -1,906 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.solr.cloud.cdcr;
-
-import java.io.File;
-import java.io.IOException;
-import java.lang.invoke.MethodHandles;
-import java.util.ArrayList;
-import java.util.Collections;
-import java.util.HashMap;
-import java.util.HashSet;
-import java.util.List;
-import java.util.Locale;
-import java.util.Map;
-import java.util.Properties;
-import java.util.Set;
-import java.util.concurrent.TimeUnit;
-
-import org.apache.solr.client.solrj.SolrClient;
-import org.apache.solr.client.solrj.SolrQuery;
-import org.apache.solr.client.solrj.SolrRequest;
-import org.apache.solr.client.solrj.SolrServerException;
-import org.apache.solr.client.solrj.embedded.JettySolrRunner;
-import org.apache.solr.client.solrj.impl.CloudSolrClient;
-import org.apache.solr.client.solrj.impl.HttpSolrClient;
-import org.apache.solr.client.solrj.request.CollectionAdminRequest;
-import org.apache.solr.client.solrj.request.QueryRequest;
-import org.apache.solr.client.solrj.response.CollectionAdminResponse;
-import org.apache.solr.cloud.AbstractDistribZkTestBase;
-import org.apache.solr.cloud.AbstractZkTestCase;
-import org.apache.solr.cloud.api.collections.OverseerCollectionMessageHandler;
-import org.apache.solr.common.SolrInputDocument;
-import org.apache.solr.common.cloud.ClusterState;
-import org.apache.solr.common.cloud.DocCollection;
-import org.apache.solr.common.cloud.Replica;
-import org.apache.solr.common.cloud.Slice;
-import org.apache.solr.common.cloud.ZkCoreNodeProps;
-import org.apache.solr.common.cloud.ZkNodeProps;
-import org.apache.solr.common.cloud.ZkStateReader;
-import org.apache.solr.common.params.CollectionParams;
-import org.apache.solr.common.params.CommonParams;
-import org.apache.solr.common.params.ModifiableSolrParams;
-import org.apache.solr.common.util.IOUtils;
-import org.apache.solr.common.util.NamedList;
-import org.apache.solr.common.util.StrUtils;
-import org.apache.solr.common.util.TimeSource;
-import org.apache.solr.common.util.Utils;
-import org.apache.solr.core.CoreDescriptor;
-import org.apache.solr.core.SolrCore;
-import org.apache.solr.handler.CdcrParams;
-import org.apache.solr.util.TimeOut;
-import org.apache.zookeeper.CreateMode;
-import org.apache.zookeeper.KeeperException;
-import org.junit.After;
-import org.junit.AfterClass;
-import org.junit.Before;
-import org.junit.BeforeClass;
-import org.slf4j.Logger;
-import org.slf4j.LoggerFactory;
-
-import static org.apache.solr.cloud.api.collections.OverseerCollectionMessageHandler.CREATE_NODE_SET;
-import static org.apache.solr.cloud.api.collections.OverseerCollectionMessageHandler.NUM_SLICES;
-import static org.apache.solr.common.cloud.ZkStateReader.CLUSTER_PROPS;
-import static org.apache.solr.common.cloud.ZkStateReader.REPLICATION_FACTOR;
-import static org.apache.solr.handler.admin.CoreAdminHandler.COMPLETED;
-import static org.apache.solr.handler.admin.CoreAdminHandler.RESPONSE_STATUS;
-
-/**
- * <p>
- * Abstract class for CDCR unit testing. This class emulates two clusters, a source and target, by using different
- * collections in the same SolrCloud cluster. Therefore, the two clusters will share the same ZooKeeper cluster. In
- * a real scenario, the two collections/clusters would likely each have their own ZooKeeper cluster.
- * </p>
- * <p>
- * This class will automatically create two collections, the source and the target. Each collection will have
- * {@link #shardCount} shards, and {@link #replicationFactor} replicas per shard. One jetty instance will
- * be created per core.
- * </p>
- * <p>
- * The source and target collection can be reinitialised at will by calling {@link #clearSourceCollection()} and
- * {@link #clearTargetCollection()}. After reinitialisation, a collection will have a fresh index and a new update log.
- * </p>
- * <p>
- * Servers can be restarted at will by calling
- * {@link #restartServer(BaseCdcrDistributedZkTest.CloudJettyRunner)} or
- * {@link #restartServers(java.util.List)}.
- * </p>
- * <p>
- * The creation of the target collection can be disabled with the flag {@link #createTargetCollection}.
- * </p>
- * <p>
- * NB: We cannot use multiple cores per jetty instance, as jetty will load only one core when restarting. This seems
- * to be a limitation of {@link org.apache.solr.client.solrj.embedded.JettySolrRunner}. This class
- * tries to ensure that there is always a single core per jetty instance.
- * </p>
- */
-public class BaseCdcrDistributedZkTest extends AbstractDistribZkTestBase {
-
-  private static final Logger log = LoggerFactory.getLogger(MethodHandles.lookup().lookupClass());
-
-  protected int shardCount = 2;
-  protected int replicationFactor = 2;
-  protected boolean createTargetCollection = true;
-
-  private static final String CDCR_PATH = "/cdcr";
-
-  protected static final String SOURCE_COLLECTION = "source_collection";
-  protected static final String TARGET_COLLECTION = "target_collection";
-
-  public static final String SHARD1 = "shard1";
-  public static final String SHARD2 = "shard2";
-
-  @Override
-  protected String getCloudSolrConfig() {
-    return "solrconfig-cdcr.xml";
-  }
-
-  @Override
-  public void distribSetUp() throws Exception {
-    super.distribSetUp();
-
-    if (shardCount > 0) {
-      System.setProperty("numShards", Integer.toString(shardCount));
-    } else {
-      System.clearProperty("numShards");
-    }
-
-    if (isSSLMode()) {
-      System.clearProperty("urlScheme");
-      ZkStateReader zkStateReader = new ZkStateReader(zkServer.getZkAddress(),
-          AbstractZkTestCase.TIMEOUT, AbstractZkTestCase.TIMEOUT);
-      try {
-        zkStateReader.getZkClient().create(ZkStateReader.CLUSTER_PROPS,
-            Utils.toJSON(Collections.singletonMap("urlScheme", "https")),
-            CreateMode.PERSISTENT, true);
-      } catch (KeeperException.NodeExistsException e) {
-        ZkNodeProps props = ZkNodeProps.load(zkStateReader.getZkClient().getData(ZkStateReader.CLUSTER_PROPS,
-            null, null, true));
-        props = props.plus("urlScheme", "https");
-        zkStateReader.getZkClient().setData(CLUSTER_PROPS, Utils.toJSON(props), true);
-      } finally {
-        zkStateReader.close();
-      }
-    }
-  }
-
-  @Override
-  protected void createServers(int numServers) throws Exception {
-  }
-
-  @BeforeClass
-  public static void beforeClass() {
-    System.setProperty("solrcloud.update.delay", "0");
-  }
-
-  @AfterClass
-  public static void afterClass() throws Exception {
-    System.clearProperty("solrcloud.update.delay");
-  }
-
-  @Before
-  @SuppressWarnings({"rawtypes"})
-  public void baseBefore() throws Exception {
-    this.createSourceCollection();
-    if (this.createTargetCollection) this.createTargetCollection();
-    RandVal.uniqueValues = new HashSet(); //reset random values
-  }
-
-  @After
-  public void baseAfter() throws Exception {
-    for (List<CloudJettyRunner> runners : cloudJettys.values()) {
-      for (CloudJettyRunner runner : runners) {
-        runner.client.close();
-      }
-    }
-    destroyServers();
-  }
-
-  protected CloudSolrClient createCloudClient(String defaultCollection) {
-    CloudSolrClient server = getCloudSolrClient(zkServer.getZkAddress(), random().nextBoolean());
-    if (defaultCollection != null) server.setDefaultCollection(defaultCollection);
-    return server;
-  }
-
-  protected SolrInputDocument getDoc(Object... fields) throws Exception {
-    SolrInputDocument doc = new SolrInputDocument();
-    addFields(doc, fields);
-    return doc;
-  }
-
-  protected void index(String collection, SolrInputDocument doc) throws IOException, SolrServerException {
-    CloudSolrClient client = createCloudClient(collection);
-    try {
-      client.add(doc);
-      client.commit(true, true);
-    } finally {
-      client.close();
-    }
-  }
-
-  protected void index(String collection, List<SolrInputDocument> docs) throws IOException, SolrServerException {
-    CloudSolrClient client = createCloudClient(collection);
-    try {
-      client.add(docs);
-      client.commit(true, true);
-    } finally {
-      client.close();
-    }
-  }
-
-  protected void deleteById(String collection, List<String> ids) throws IOException, SolrServerException {
-    CloudSolrClient client = createCloudClient(collection);
-    try {
-      client.deleteById(ids);
-      client.commit(true, true);
-    } finally {
-      client.close();
-    }
-  }
-
-  protected void deleteByQuery(String collection, String q) throws IOException, SolrServerException {
-    CloudSolrClient client = createCloudClient(collection);
-    try {
-      client.deleteByQuery(q);
-      client.commit(true, true);
-    } finally {
-      client.close();
-    }
-  }
-
-  /**
-   * Invokes a commit on the given collection.
-   */
-  protected void commit(String collection) throws IOException, SolrServerException {
-    CloudSolrClient client = createCloudClient(collection);
-    try {
-      client.commit(true, true);
-    } finally {
-      client.close();
-    }
-  }
-
-  /**
-   * Assert the number of documents in a given collection
-   */
-  protected void assertNumDocs(int expectedNumDocs, String collection)
-  throws SolrServerException, IOException, InterruptedException {
-    CloudSolrClient client = createCloudClient(collection);
-    try {
-      int cnt = 30; // timeout after 15 seconds
-      AssertionError lastAssertionError = null;
-      while (cnt > 0) {
-        try {
-          assertEquals(expectedNumDocs, client.query(new SolrQuery("*:*")).getResults().getNumFound());
-          return;
-        }
-        catch (AssertionError e) {
-          lastAssertionError = e;
-          cnt--;
-          Thread.sleep(500);
-        }
-      }
-      throw new AssertionError("Timeout while trying to assert number of documents @ " + collection, lastAssertionError);
-    } finally {
-      client.close();
-    }
-  }
-
-  /**
-   * Invokes a CDCR action on a given node.
-   */
-  @SuppressWarnings({"rawtypes"})
-  protected NamedList invokeCdcrAction(CloudJettyRunner jetty, CdcrParams.CdcrAction action) throws Exception {
-    ModifiableSolrParams params = new ModifiableSolrParams();
-    params.set(CommonParams.ACTION, action.toString());
-
-    SolrRequest request = new QueryRequest(params);
-    request.setPath(CDCR_PATH);
-
-    return jetty.client.request(request);
-  }
-
-  protected void waitForCdcrStateReplication(String collection) throws Exception {
-    log.info("Wait for CDCR state to replicate - collection: {}", collection);
-
-    int cnt = 30;
-    while (cnt > 0) {
-      @SuppressWarnings({"rawtypes"})
-      NamedList status = null;
-      boolean allEquals = true;
-      for (CloudJettyRunner jetty : cloudJettys.get(collection)) { // check all replicas
-        @SuppressWarnings({"rawtypes"})
-        NamedList rsp = invokeCdcrAction(jetty, CdcrParams.CdcrAction.STATUS);
-        if (status == null) {
-          status = (NamedList) rsp.get(CdcrParams.CdcrAction.STATUS.toLower());
-          continue;
-        }
-        allEquals &= status.equals(rsp.get(CdcrParams.CdcrAction.STATUS.toLower()));
-      }
-
-      if (allEquals) {
-        break;
-      }
-      cnt--;
-      if (cnt == 0) {
-        throw new RuntimeException("Timeout waiting for CDCR state to replicate: collection=" + collection);
-      }
-      Thread.sleep(500);
-    }
-
-    log.info("CDCR state is identical across nodes - collection: {}", collection);
-  }
-
-  /**
-   * Asserts the state of CDCR on each node of the given collection.
-   */
-  protected void assertState(String collection, CdcrParams.ProcessState processState, CdcrParams.BufferState bufferState)
-  throws Exception {
-    this.waitForCdcrStateReplication(collection); // ensure that cdcr state is replicated and stable
-    for (CloudJettyRunner jetty : cloudJettys.get(collection)) { // check all replicas
-      @SuppressWarnings({"rawtypes"})
-      NamedList rsp = invokeCdcrAction(jetty, CdcrParams.CdcrAction.STATUS);
-      @SuppressWarnings({"rawtypes"})
-      NamedList status = (NamedList) rsp.get(CdcrParams.CdcrAction.STATUS.toLower());
-      assertEquals(processState.toLower(), status.get(CdcrParams.ProcessState.getParam()));
-      assertEquals(bufferState.toLower(), status.get(CdcrParams.BufferState.getParam()));
-    }
-  }
-
-  /**
-   * A mapping between collection and node names. This is used when creating the collection in
-   * {@link #createCollection(String)}.
-   */
-  private Map<String, List<String>> collectionToNodeNames = new HashMap<>();
-
-  /**
-   * Starts the servers, saves and associates the node names to the source collection,
-   * and finally creates the source collection.
-   */
-  private void createSourceCollection() throws Exception {
-    List<String> nodeNames = this.startServers(shardCount * replicationFactor);
-    this.collectionToNodeNames.put(SOURCE_COLLECTION, nodeNames);
-    this.createCollection(SOURCE_COLLECTION);
-    this.waitForRecoveriesToFinish(SOURCE_COLLECTION, true);
-    this.updateMappingsFromZk(SOURCE_COLLECTION);
-  }
-
-  /**
-   * Clears the source collection. It will delete then re-create the collection through the Collections API.
-   * The collection will have a fresh index, i.e., including a new update log.
-   */
-  protected void clearSourceCollection() throws Exception {
-    this.deleteCollection(SOURCE_COLLECTION);
-    this.waitForCollectionToDisappear(SOURCE_COLLECTION);
-    this.createCollection(SOURCE_COLLECTION);
-    this.waitForRecoveriesToFinish(SOURCE_COLLECTION, true);
-    this.updateMappingsFromZk(SOURCE_COLLECTION);
-  }
-
-  /**
-   * Starts the servers, saves and associates the node names to the target collection,
-   * and finally creates the target collection.
-   */
-  private void createTargetCollection() throws Exception {
-    List<String> nodeNames = this.startServers(shardCount * replicationFactor);
-    this.collectionToNodeNames.put(TARGET_COLLECTION, nodeNames);
-    this.createCollection(TARGET_COLLECTION);
-    this.waitForRecoveriesToFinish(TARGET_COLLECTION, true);
-    this.updateMappingsFromZk(TARGET_COLLECTION);
-  }
-
-  /**
-   * Clears the target collection. It will delete then re-create the collection through the Collections API.
-   * The collection will have a fresh index, i.e., including a new update log.
-   */
-  protected void clearTargetCollection() throws Exception {
-    this.deleteCollection(TARGET_COLLECTION);
-    this.waitForCollectionToDisappear(TARGET_COLLECTION);
-    this.createCollection(TARGET_COLLECTION);
-    this.waitForRecoveriesToFinish(TARGET_COLLECTION, true);
-    this.updateMappingsFromZk(TARGET_COLLECTION);
-  }
-
-  /**
-   * Creates a new collection through the Collections API. It enforces a maximum of one shard per node.
-   * It defines the nodes to spread the new collection across by using the mapping {@link #collectionToNodeNames},
-   * to ensure that a node will not host more than one core (which would create problems when trying to restart servers).
-   */
-  private void createCollection(String name) throws Exception {
-    CloudSolrClient client = createCloudClient(null);
-    try {
-      // Create the target collection
-      Map<String, List<Integer>> collectionInfos = new HashMap<>();
-
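-      // build the createNodeSet parameter: a comma-separated list of the node names reserved for this collection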
-      StringBuilder sb = new StringBuilder();
-      for (String nodeName : collectionToNodeNames.get(name)) {
-        sb.append(nodeName);
-        sb.append(',');
-      }
-      sb.deleteCharAt(sb.length() - 1);
-
-      createCollection(collectionInfos, name, shardCount, replicationFactor, client, sb.toString());
-    } finally {
-      client.close();
-    }
-  }
-
-  private CollectionAdminResponse createCollection(Map<String, List<Integer>> collectionInfos,
-                                                   String collectionName, int numShards, int replicationFactor,
-                                                   SolrClient client, String createNodeSetStr)
-      throws SolrServerException, IOException {
-    return createCollection(collectionInfos, collectionName,
-        Utils.makeMap(
-            NUM_SLICES, numShards,
-            REPLICATION_FACTOR, replicationFactor,
-            CREATE_NODE_SET, createNodeSetStr),
-        client, "conf1");
-  }
-
-  private CollectionAdminResponse createCollection(Map<String, List<Integer>> collectionInfos, String collectionName,
-                                                   Map<String, Object> collectionProps, SolrClient client,
-                                                   String confSetName)
-      throws SolrServerException, IOException {
-    ModifiableSolrParams params = new ModifiableSolrParams();
-    params.set("action", CollectionParams.CollectionAction.CREATE.toString());
-    for (Map.Entry<String, Object> entry : collectionProps.entrySet()) {
-      if (entry.getValue() != null) params.set(entry.getKey(), String.valueOf(entry.getValue()));
-    }
-    Integer numShards = (Integer) collectionProps.get(OverseerCollectionMessageHandler.NUM_SLICES);
-    if (numShards == null) {
-      String shardNames = (String) collectionProps.get(OverseerCollectionMessageHandler.SHARDS_PROP);
-      numShards = StrUtils.splitSmart(shardNames, ',').size();
-    }
-    Integer replicationFactor = (Integer) collectionProps.get(REPLICATION_FACTOR);
-    if (replicationFactor == null) {
-      replicationFactor = (Integer) OverseerCollectionMessageHandler.COLLECTION_PROPS_AND_DEFAULTS.get(REPLICATION_FACTOR);
-    }
-
-    if (confSetName != null) {
-      params.set("collection.configName", confSetName);
-    }
-
-    List<Integer> list = new ArrayList<>();
-    list.add(numShards);
-    list.add(replicationFactor);
-    if (collectionInfos != null) {
-      collectionInfos.put(collectionName, list);
-    }
-    params.set("name", collectionName);
-    @SuppressWarnings({"rawtypes"})
-    SolrRequest request = new QueryRequest(params);
-    request.setPath("/admin/collections");
-
-    CollectionAdminResponse res = new CollectionAdminResponse();
-    res.setResponse(client.request(request));
-    return res;
-  }
-
-  /**
-   * Delete a collection through the Collection API.
-   */
-  protected CollectionAdminResponse deleteCollection(String collectionName) throws Exception {
-    SolrClient client = createCloudClient(null);
-    CollectionAdminResponse res;
-
-    try {
-      ModifiableSolrParams params = new ModifiableSolrParams();
-      params.set("action", CollectionParams.CollectionAction.DELETE.toString());
-      params.set("name", collectionName);
-      QueryRequest request = new QueryRequest(params);
-      request.setPath("/admin/collections");
-
-      res = new CollectionAdminResponse();
-      res.setResponse(client.request(request));
-    } catch (Exception e) {
-      log.warn("Error while deleting the collection {}", collectionName, e);
-      return new CollectionAdminResponse();
-    } finally {
-      client.close();
-    }
-
-    return res;
-  }
-
-  private void waitForCollectionToDisappear(String collection) throws Exception {
-    CloudSolrClient client = this.createCloudClient(null);
-    try {
-      client.connect();
-      ZkStateReader zkStateReader = client.getZkStateReader();
-      AbstractDistribZkTestBase.waitForCollectionToDisappear(collection, zkStateReader, true, 15);
-    } finally {
-      client.close();
-    }
-  }
-
-  private void waitForRecoveriesToFinish(String collection, boolean verbose) throws Exception {
-    CloudSolrClient client = this.createCloudClient(null);
-    try {
-      client.connect();
-      ZkStateReader zkStateReader = client.getZkStateReader();
-      super.waitForRecoveriesToFinish(collection, zkStateReader, verbose);
-    } finally {
-      client.close();
-    }
-  }
-
-  /**
-   * Asserts that the collection has the correct number of shards and replicas
-   */
-  protected void assertCollectionExpectations(String collectionName) throws Exception {
-    CloudSolrClient client = this.createCloudClient(null);
-    try {
-      client.connect();
-      ClusterState clusterState = client.getZkStateReader().getClusterState();
-
-      assertTrue("Could not find new collection " + collectionName, clusterState.hasCollection(collectionName));
-      Map<String, Slice> shards = clusterState.getCollection(collectionName).getSlicesMap();
-      // did we find the expected number of shards?
-      assertEquals("Found new collection " + collectionName + ", but mismatch on number of shards.", shardCount, shards.size());
-      int totalReplicas = 0;
-      for (String shardName : shards.keySet()) {
-        totalReplicas += shards.get(shardName).getReplicas().size();
-      }
-      int expectedTotalReplicas = shardCount * replicationFactor;
-      assertEquals("Found new collection " + collectionName + " with correct number of shards, but mismatch on number " +
-          "of replicas.", expectedTotalReplicas, totalReplicas);
-    } finally {
-      client.close();
-    }
-  }
-
-  /**
-   * Restart a server.
-   */
-  protected void restartServer(CloudJettyRunner server) throws Exception {
-    // it seems we need to set the collection property to have the jetty properly restarted
-    System.setProperty("collection", server.collection);
-    JettySolrRunner jetty = server.jetty;
-    jetty.stop();
-    jetty.start();
-    System.clearProperty("collection");
-    waitForRecoveriesToFinish(server.collection, true);
-    updateMappingsFromZk(server.collection); // must update the mapping as the core node name might have changed
-  }
-
-  /**
-   * Restarts a list of servers.
-   */
-  protected void restartServers(List<CloudJettyRunner> servers) throws Exception {
-    for (CloudJettyRunner server : servers) {
-      this.restartServer(server);
-    }
-  }
-
-  private List<JettySolrRunner> jettys = new ArrayList<>();
-
-  /**
-   * Creates and starts a given number of servers.
-   */
-  protected List<String> startServers(int nServer) throws Exception {
-    String temporaryCollection = "tmp_collection";
-
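-    // Bootstrap trick: create an empty collection (createNodeSet="") and then add one replica per
-    // jetty, so that every node registers itself and its name can be read from the cluster state.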
-    for (int i = 1; i <= nServer; i++) {
-      // give each jetty its own solr home
-      File jettyDir = createTempDir("jetty").toFile();
-      jettyDir.mkdirs();
-      setupJettySolrHome(jettyDir);
-      JettySolrRunner jetty = createJetty(jettyDir, null, "shard" + i);
-      jetty.start();
-      jettys.add(jetty);
-    }
-
-    try (SolrClient client = createCloudClient(temporaryCollection)) {
-      assertEquals(0, CollectionAdminRequest
-          .createCollection(temporaryCollection, "conf1", shardCount, 1)
-          .setCreateNodeSet("")
-          .process(client).getStatus());
-      for (int i = 0; i < jettys.size(); i++) {
-        assertTrue(CollectionAdminRequest
-            .addReplicaToShard(temporaryCollection, "shard"+((i % shardCount) + 1))
-            .setNode(jettys.get(i).getNodeName())
-            .process(client).isSuccess());
-      }
-    }
-
-    ZkStateReader zkStateReader = jettys.get(0).getCoreContainer().getZkController().getZkStateReader();
-
-    // now wait till we see the leader for each shard
-    for (int i = 1; i <= shardCount; i++) {
-      zkStateReader.getLeaderRetry(temporaryCollection, "shard" + i, 15000);
-    }
-
-    // store the node names
-    List<String> nodeNames = new ArrayList<>();
-    for (Slice shard : zkStateReader.getClusterState().getCollection(temporaryCollection).getSlices()) {
-      for (Replica replica : shard.getReplicas()) {
-        nodeNames.add(replica.getNodeName());
-      }
-    }
-
-    this.waitForRecoveriesToFinish(temporaryCollection,zkStateReader, true);
-    // delete the temporary collection - we will create our own collections later
-    this.deleteCollection(temporaryCollection);
-    this.waitForCollectionToDisappear(temporaryCollection);
-    System.clearProperty("collection");
-
-    return nodeNames;
-  }
-
-  @Override
-  protected void destroyServers() throws Exception {
-    for (JettySolrRunner runner : jettys) {
-      try {
-        runner.stop();
-      } catch (Exception e) {
-        log.error("", e);
-      }
-    }
-
-    jettys.clear();
-  }
-
-  /**
-   * Mapping from collection to jettys
-   */
-  protected Map<String, List<CloudJettyRunner>> cloudJettys = new HashMap<>();
-
-  /**
-   * Mapping from collection/shard to jettys
-   */
-  protected Map<String, Map<String, List<CloudJettyRunner>>> shardToJetty = new HashMap<>();
-
-  /**
-   * Mapping from collection/shard leader to jettys
-   */
-  protected Map<String, Map<String, CloudJettyRunner>> shardToLeaderJetty = new HashMap<>();
-
-  /**
-   * Updates the mappings between the jetty instances and the zookeeper cluster state.
-   */
-  protected void updateMappingsFromZk(String collection) throws Exception {
-    List<CloudJettyRunner> cloudJettys = new ArrayList<>();
-    Map<String, List<CloudJettyRunner>> shardToJetty = new HashMap<>();
-    Map<String, CloudJettyRunner> shardToLeaderJetty = new HashMap<>();
-
-    CloudSolrClient cloudClient = this.createCloudClient(null);
-    try {
-      cloudClient.connect();
-      ZkStateReader zkStateReader = cloudClient.getZkStateReader();
-      ClusterState clusterState = zkStateReader.getClusterState();
-      DocCollection coll = clusterState.getCollection(collection);
-
-      for (JettySolrRunner jetty : jettys) {
-        int port = jetty.getLocalPort();
-        if (port == -1) {
-          throw new RuntimeException("Cannot find the port for jetty");
-        }
-
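-        // match this jetty to its replica by looking for the jetty's local port in the replica's base URL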
-        nextJetty:
-        for (Slice shard : coll.getSlices()) {
-          Set<Map.Entry<String, Replica>> entries = shard.getReplicasMap().entrySet();
-          for (Map.Entry<String, Replica> entry : entries) {
-            Replica replica = entry.getValue();
-            if (replica.getStr(ZkStateReader.BASE_URL_PROP).contains(":" + port)) {
-              if (!shardToJetty.containsKey(shard.getName())) {
-                shardToJetty.put(shard.getName(), new ArrayList<CloudJettyRunner>());
-              }
-              boolean isLeader = shard.getLeader() == replica;
-              CloudJettyRunner cjr = new CloudJettyRunner(jetty, replica, collection, shard.getName(), entry.getKey());
-              shardToJetty.get(shard.getName()).add(cjr);
-              if (isLeader) {
-                shardToLeaderJetty.put(shard.getName(), cjr);
-              }
-              cloudJettys.add(cjr);
-              break nextJetty;
-            }
-          }
-        }
-      }
-
-      // replace any previous mapping and close the clients of the old runners
-      List<CloudJettyRunner> oldRunners = this.cloudJettys.put(collection, cloudJettys);
-      if (oldRunners != null) {
-        for (CloudJettyRunner oldRunner : oldRunners) {
-          IOUtils.closeQuietly(oldRunner.client);
-        }
-      }
-
-      this.shardToJetty.put(collection, shardToJetty);
-      this.shardToLeaderJetty.put(collection, shardToLeaderJetty);
-    } finally {
-      cloudClient.close();
-    }
-  }
-
-  /**
-   * Wrapper around a {@link org.apache.solr.client.solrj.embedded.JettySolrRunner} that maps the jetty
-   * instance to various information about the cloud cluster, such as the collection and shard
-   * served by the jetty instance, the node name, core node name, url, etc.
-   */
-  public static class CloudJettyRunner {
-
-    public JettySolrRunner jetty;
-    public String nodeName;
-    public String coreNodeName;
-    public String url;
-    public SolrClient client;
-    public Replica info;
-    public String shard;
-    public String collection;
-
-    public CloudJettyRunner(JettySolrRunner jetty, Replica replica,
-                            String collection, String shard, String coreNodeName) {
-      this.jetty = jetty;
-      this.info = replica;
-      this.collection = collection;
-      
-      Properties nodeProperties = jetty.getNodeProperties();
-
-      // we need to update the jetty's shard so that it registers itself to the right shard when restarted
-      this.shard = shard;
-      nodeProperties.setProperty(CoreDescriptor.CORE_SHARD, this.shard);
-
-      // we need to update the jetty's core node name so that it registers itself under the right core name when restarted
-      this.coreNodeName = coreNodeName;
-      nodeProperties.setProperty(CoreDescriptor.CORE_NODE_NAME, this.coreNodeName);
-
-      this.nodeName = replica.getNodeName();
-
-      ZkCoreNodeProps coreNodeProps = new ZkCoreNodeProps(info);
-      this.url = coreNodeProps.getCoreUrl();
-
-      // strip the trailing slash as this can cause issues when executing requests
-      this.client = createNewSolrServer(this.url.substring(0, this.url.length() - 1));
-    }
-
-    @Override
-    public int hashCode() {
-      final int prime = 31;
-      int result = 1;
-      result = prime * result + ((url == null) ? 0 : url.hashCode());
-      return result;
-    }
-
-    @Override
-    public boolean equals(Object obj) {
-      if (this == obj) return true;
-      if (obj == null) return false;
-      if (getClass() != obj.getClass()) return false;
-      CloudJettyRunner other = (CloudJettyRunner) obj;
-      if (url == null) {
-        if (other.url != null) return false;
-      } else if (!url.equals(other.url)) return false;
-      return true;
-    }
-
-    @Override
-    public String toString() {
-      return "CloudJettyRunner [url=" + url + "]";
-    }
-
-  }
-
-  protected static SolrClient createNewSolrServer(String baseUrl) {
-    try {
-      // set up the client with the default connection timeout
-      return getHttpSolrClient(baseUrl, DEFAULT_CONNECTION_TIMEOUT);
-    } catch (Exception ex) {
-      throw new RuntimeException(ex);
-    }
-  }
-
-  protected void waitForBootstrapToComplete(String collectionName, String shardId) throws Exception {
-    @SuppressWarnings({"rawtypes"})
-    NamedList rsp;// we need to wait until bootstrap is complete otherwise the replicator thread will never start
-    TimeOut timeOut = new TimeOut(60, TimeUnit.SECONDS, TimeSource.NANO_TIME);
-    while (!timeOut.hasTimedOut())  {
-      rsp = invokeCdcrAction(shardToLeaderJetty.get(collectionName).get(shardId), CdcrParams.CdcrAction.BOOTSTRAP_STATUS);
-      if (rsp.get(RESPONSE_STATUS).toString().equals(COMPLETED))  {
-        break;
-      }
-      Thread.sleep(1000);
-    }
-  }
-
-  protected void waitForReplicationToComplete(String collectionName, String shardId) throws Exception {
-    int cnt = 15;
-    while (cnt > 0) {
-      log.info("Checking queue size @ {}:{}", collectionName, shardId);
-      long size = this.getQueueSize(collectionName, shardId);
-      if (size == 0) { // queue drained; note that -1 means the log reader is not yet initialised, so we keep waiting
-        return;
-      }
-      log.info("Waiting for replication to complete. Queue size: {} @ {}:{}", size, collectionName, shardId);
-      cnt--;
-      Thread.sleep(1000); // wait a bit for the replication to complete
-    }
-    throw new RuntimeException("Timeout waiting for CDCR replication to complete @" + collectionName + ":"  + shardId);
-  }
-
-  protected long getQueueSize(String collectionName, String shardId) throws Exception {
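-    // the QUEUES response is nested: queues -> <target zk host> -> <target collection> -> queue attributes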
-    @SuppressWarnings({"rawtypes"})
-    NamedList rsp = this.invokeCdcrAction(shardToLeaderJetty.get(collectionName).get(shardId), CdcrParams.CdcrAction.QUEUES);
-    @SuppressWarnings({"rawtypes"})
-    NamedList host = (NamedList) ((NamedList) rsp.get(CdcrParams.QUEUES)).getVal(0);
-    @SuppressWarnings({"rawtypes"})
-    NamedList status = (NamedList) host.get(TARGET_COLLECTION);
-    return (Long) status.get(CdcrParams.QUEUE_SIZE);
-  }
-
-  protected CollectionInfo collectInfo(String collection) throws Exception {
-    CollectionInfo info = new CollectionInfo(collection);
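-    // walk every core hosted by every jetty of this collection, recording whether the core's jetty is the shard leader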
-    for (String shard : shardToJetty.get(collection).keySet()) {
-      List<CloudJettyRunner> jettyRunners = shardToJetty.get(collection).get(shard);
-      for (CloudJettyRunner jettyRunner : jettyRunners) {
-        for (SolrCore core : jettyRunner.jetty.getCoreContainer().getCores()) {
-          info.addCore(core, shard, shardToLeaderJetty.get(collection).containsValue(jettyRunner));
-        }
-      }
-    }
-
-    return info;
-  }
-
-  protected static class CollectionInfo {
-
-    List<CoreInfo> coreInfos = new ArrayList<>();
-
-    String collection;
-
-    CollectionInfo(String collection) {
-      this.collection = collection;
-    }
-
-    /**
-     * @return a map from shard name to its list of cores
-     */
-    Map<String, List<CoreInfo>> getShardToCoresMap() {
-      Map<String, List<CoreInfo>> map = new HashMap<>();
-      for (CoreInfo info : coreInfos) {
-        map.computeIfAbsent(info.shard, k -> new ArrayList<>()).add(info);
-      }
-      return map;
-    }
-
-    CoreInfo getLeader(String shard) {
-      List<CoreInfo> coreInfos = getShardToCoresMap().get(shard);
-      for (CoreInfo info : coreInfos) {
-        if (info.isLeader) {
-          return info;
-        }
-      }
-      fail(String.format(Locale.ENGLISH, "There is no leader for collection %s shard %s", collection, shard));
-      return null;
-    }
-
-    List<CoreInfo> getReplicas(String shard) {
-      List<CoreInfo> coreInfos = getShardToCoresMap().get(shard);
-      coreInfos.remove(getLeader(shard));
-      return coreInfos;
-    }
-
-    void addCore(SolrCore core, String shard, boolean isLeader) throws Exception {
-      CoreInfo info = new CoreInfo();
-      info.collectionName = core.getName();
-      info.shard = shard;
-      info.isLeader = isLeader;
-      info.ulogDir = core.getUpdateHandler().getUpdateLog().getLogDir();
-
-      this.coreInfos.add(info);
-    }
-
-    public static class CoreInfo {
-      String collectionName;
-      String shard;
-      boolean isLeader;
-      String ulogDir;
-    }
-
-  }
-
-}
-
diff --git a/solr/core/src/test/org/apache/solr/cloud/cdcr/CdcrBidirectionalTest.java b/solr/core/src/test/org/apache/solr/cloud/cdcr/CdcrBidirectionalTest.java
deleted file mode 100644
index 7f1db84..0000000
--- a/solr/core/src/test/org/apache/solr/cloud/cdcr/CdcrBidirectionalTest.java
+++ /dev/null
@@ -1,244 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package org.apache.solr.cloud.cdcr;
-
-import java.lang.invoke.MethodHandles;
-import java.util.concurrent.TimeUnit;
-
-import com.google.common.collect.ImmutableMap;
-import org.apache.solr.SolrTestCaseJ4;
-import org.apache.solr.client.solrj.SolrQuery;
-import org.apache.solr.client.solrj.impl.CloudSolrClient;
-import org.apache.solr.client.solrj.request.AbstractUpdateRequest;
-import org.apache.solr.client.solrj.request.CollectionAdminRequest;
-import org.apache.solr.client.solrj.request.UpdateRequest;
-import org.apache.solr.client.solrj.response.QueryResponse;
-import org.apache.solr.cloud.MiniSolrCloudCluster;
-import org.apache.solr.common.SolrInputDocument;
-import org.apache.solr.common.params.CommonParams;
-import org.apache.solr.common.params.ModifiableSolrParams;
-import org.apache.solr.common.util.TimeSource;
-import org.apache.solr.handler.CdcrParams;
-import org.apache.solr.util.TimeOut;
-import org.junit.Test;
-import org.slf4j.Logger;
-import org.slf4j.LoggerFactory;
-
-public class CdcrBidirectionalTest extends SolrTestCaseJ4 {
-
-  private static final Logger log = LoggerFactory.getLogger(MethodHandles.lookup().lookupClass());
-
-  @Test
-  @AwaitsFix(bugUrl = "https://issues.apache.org/jira/browse/SOLR-12524")
-  public void testBiDir() throws Exception {
-    MiniSolrCloudCluster cluster2 = new MiniSolrCloudCluster(1, createTempDir("cdcr-cluster2"), buildJettyConfig("/solr"));
-    MiniSolrCloudCluster cluster1 = new MiniSolrCloudCluster(1, createTempDir("cdcr-cluster1"), buildJettyConfig("/solr"));
-    try {
-      if (log.isInfoEnabled()) {
-        log.info("cluster2 zkHost = {}", cluster2.getZkServer().getZkAddress());
-      }
-      System.setProperty("cdcr.cluster2.zkHost", cluster2.getZkServer().getZkAddress());
-
-      if (log.isInfoEnabled()) {
-        log.info("cluster1 zkHost = {}", cluster1.getZkServer().getZkAddress());
-      }
-      System.setProperty("cdcr.cluster1.zkHost", cluster1.getZkServer().getZkAddress());
-
-
-      cluster1.uploadConfigSet(configset("cdcr-cluster1"), "cdcr-cluster1");
-      CollectionAdminRequest.createCollection("cdcr-cluster1", "cdcr-cluster1", 2, 1)
-          .withProperty("solr.directoryFactory", "solr.StandardDirectoryFactory")
-          .process(cluster1.getSolrClient());
-      CloudSolrClient cluster1SolrClient = cluster1.getSolrClient();
-      cluster1SolrClient.setDefaultCollection("cdcr-cluster1");
-
-      cluster2.uploadConfigSet(configset("cdcr-cluster2"), "cdcr-cluster2");
-      CollectionAdminRequest.createCollection("cdcr-cluster2", "cdcr-cluster2", 2, 1)
-          .withProperty("solr.directoryFactory", "solr.StandardDirectoryFactory")
-          .process(cluster2.getSolrClient());
-      CloudSolrClient cluster2SolrClient = cluster2.getSolrClient();
-      cluster2SolrClient.setDefaultCollection("cdcr-cluster2");
-
-      UpdateRequest req = null;
-
-      CdcrTestsUtil.cdcrStart(cluster1SolrClient);
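-      // brief pause, presumably to give the CDCR state manager and replicator threads time to start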
-      Thread.sleep(2000);
-
-      // ADD operation on cluster 1
-      int docs = (TEST_NIGHTLY ? 100 : 10);
-      int numDocs_c1 = 0;
-      for (int k = 0; k < docs; k++) {
-        req = new UpdateRequest();
-        for (; numDocs_c1 < (k + 1) * 100; numDocs_c1++) {
-          SolrInputDocument doc = new SolrInputDocument();
-          doc.addField("id", "cluster1_" + numDocs_c1);
-          doc.addField("xyz", numDocs_c1);
-          req.add(doc);
-        }
-        req.setAction(AbstractUpdateRequest.ACTION.COMMIT, true, true);
-        log.info("Adding {} docs with commit=true, numDocs={}", docs, numDocs_c1);
-        req.process(cluster1SolrClient);
-      }
-
-      QueryResponse response = cluster1SolrClient.query(new SolrQuery("*:*"));
-      assertEquals("cluster 1 docs mismatch", numDocs_c1, response.getResults().getNumFound());
-
-      assertEquals("cluster 2 docs mismatch", numDocs_c1, CdcrTestsUtil.waitForClusterToSync(numDocs_c1, cluster2SolrClient));
-
-      CdcrTestsUtil.cdcrStart(cluster2SolrClient); // FULL BI-DIRECTIONAL CDCR FORWARDING ON
-      Thread.sleep(2000);
-
-      // ADD operation on cluster 2
-      int numDocs_c2 = 0;
-      for (int k = 0; k < docs; k++) {
-        req = new UpdateRequest();
-        for (; numDocs_c2 < (k + 1) * 100; numDocs_c2++) {
-          SolrInputDocument doc = new SolrInputDocument();
-          doc.addField("id", "cluster2_" + numDocs_c2);
-          doc.addField("xyz", numDocs_c2);
-          req.add(doc);
-        }
-        req.setAction(AbstractUpdateRequest.ACTION.COMMIT, true, true);
-        log.info("Adding {} docs with commit=true, numDocs= {}", docs, numDocs_c2);
-        req.process(cluster2SolrClient);
-      }
-
-      int numDocs = numDocs_c1 + numDocs_c2;
-
-      response = cluster2SolrClient.query(new SolrQuery("*:*"));
-      assertEquals("cluster 2 docs mismatch", numDocs, response.getResults().getNumFound());
-
-      assertEquals("cluster 1 docs mismatch", numDocs, CdcrTestsUtil.waitForClusterToSync(numDocs, cluster1SolrClient));
-
-      // logging cdcr clusters queue response
-      response = CdcrTestsUtil.getCdcrQueue(cluster1SolrClient);
-      if (log.isInfoEnabled()) {
-        log.info("Cdcr cluster1 queue response: {}", response.getResponse());
-      }
-      response = CdcrTestsUtil.getCdcrQueue(cluster2SolrClient);
-      if (log.isInfoEnabled()) {
-        log.info("Cdcr cluster2 queue response: {}", response.getResponse());
-      }
-
-      // compute each cluster's expected collection checkpoint: the minimum, across shards, of the maximum version assigned per shard
-
-      long maxVersion_c1 = Math.min((long)CdcrTestsUtil.getFingerPrintMaxVersion(cluster1SolrClient, "shard1", numDocs),
-          (long)CdcrTestsUtil.getFingerPrintMaxVersion(cluster1SolrClient, "shard2", numDocs));
-      long maxVersion_c2 = Math.min((long)CdcrTestsUtil.getFingerPrintMaxVersion(cluster2SolrClient, "shard1", numDocs),
-          (long)CdcrTestsUtil.getFingerPrintMaxVersion(cluster2SolrClient, "shard2", numDocs));
-
-      ModifiableSolrParams params = new ModifiableSolrParams();
-      params.set(CommonParams.ACTION, CdcrParams.CdcrAction.COLLECTIONCHECKPOINT.toString());
-      params.set(CommonParams.QT, "/cdcr");
-      response = cluster2SolrClient.query(params);
-      Long checkpoint_2 = (Long) response.getResponse().get(CdcrParams.CHECKPOINT);
-      assertNotNull(checkpoint_2);
-
-      params = new ModifiableSolrParams();
-      params.set(CommonParams.ACTION, CdcrParams.CdcrAction.COLLECTIONCHECKPOINT.toString());
-      params.set(CommonParams.QT, "/cdcr");
-      response = cluster1SolrClient.query(params);
-      Long checkpoint_1 = (Long) response.getResponse().get(CdcrParams.CHECKPOINT);
-      assertNotNull(checkpoint_1);
-
-      log.info("v1: {}\tv2: {}\tcheckpoint1: {}\tcheckpoint2: {}"
-          , maxVersion_c1, maxVersion_c2, checkpoint_1, checkpoint_2);
-
-      assertEquals("COLLECTIONCHECKPOINT from cluster2 should have returned the maximum " +
-          "version across all updates made to cluster1", maxVersion_c1, checkpoint_2.longValue());
-      assertEquals("COLLECTIONCHECKPOINT from cluster1 should have returned the maximum " +
-          "version across all updates made to cluster2", maxVersion_c2, checkpoint_1.longValue());
-      assertEquals("max versions of updates in both clusters should be same", maxVersion_c1, maxVersion_c2);
-
-      // DELETE BY QUERY
-      String deleteByQuery = "id:cluster1_" +String.valueOf(random().nextInt(numDocs_c1));
-      response = cluster1SolrClient.query(new SolrQuery(deleteByQuery));
-      assertEquals("should match exactly one doc", 1, response.getResults().getNumFound());
-      cluster1SolrClient.deleteByQuery(deleteByQuery);
-      cluster1SolrClient.commit();
-      numDocs--;
-      numDocs_c1--;
-
-      response = cluster1SolrClient.query(new SolrQuery("*:*"));
-      assertEquals("cluster 1 docs mismatch", numDocs, response.getResults().getNumFound());
-      assertEquals("cluster 2 docs mismatch", numDocs, CdcrTestsUtil.waitForClusterToSync(numDocs, cluster2SolrClient));
-
-      // DELETE BY ID
-      SolrInputDocument doc;
-      String delete_id_query = "cluster2_" + random().nextInt(numDocs_c2);
-      cluster2SolrClient.deleteById(delete_id_query);
-      cluster2SolrClient.commit();
-      numDocs--;
-      numDocs_c2--;
-      response = cluster2SolrClient.query(new SolrQuery("*:*"));
-      assertEquals("cluster 2 docs mismatch", numDocs, response.getResults().getNumFound());
-      assertEquals("cluster 1 docs mismatch", numDocs, CdcrTestsUtil.waitForClusterToSync(numDocs, cluster1SolrClient));
-
-      // ATOMIC UPDATES
-      req = new UpdateRequest();
-      doc = new SolrInputDocument();
-      String atomicFieldName = "abc";
-      String atomicUpdateId = "cluster2_" + random().nextInt(numDocs_c2);
-      doc.addField("id", atomicUpdateId);
-      doc.addField("xyz", ImmutableMap.of("delete", ""));
-      doc.addField(atomicFieldName, ImmutableMap.of("set", "ABC"));
-      req.add(doc);
-      req.process(cluster2SolrClient);
-      cluster2SolrClient.commit();
-
-      String atomicQuery = "id:" + atomicUpdateId;
-      response = cluster2SolrClient.query(new SolrQuery(atomicQuery));
-      assertEquals("cluster 2 wrong doc", "ABC", response.getResults().get(0).get(atomicFieldName));
-      assertEquals("cluster 1 wrong doc", "ABC", getDocFieldValue(cluster1SolrClient, atomicQuery, "ABC", atomicFieldName ));
-
-
-      // logging cdcr clusters queue response
-      response = CdcrTestsUtil.getCdcrQueue(cluster1SolrClient);
-      if (log.isInfoEnabled()) {
-        log.info("Cdcr cluster1 queue response at end of testcase: {}", response.getResponse());
-      }
-      response = CdcrTestsUtil.getCdcrQueue(cluster2SolrClient);
-      if (log.isInfoEnabled()) {
-        log.info("Cdcr cluster2 queue response at end of testcase: {}", response.getResponse());
-      }
-
-      CdcrTestsUtil.cdcrStop(cluster1SolrClient);
-      CdcrTestsUtil.cdcrStop(cluster2SolrClient);
-    } finally {
-      if (cluster1 != null) {
-        cluster1.shutdown();
-      }
-      if (cluster2 != null) {
-        cluster2.shutdown();
-      }
-    }
-  }
-
-  private String getDocFieldValue(CloudSolrClient clusterSolrClient, String query, String match, String field) throws Exception {
-    TimeOut waitTimeOut = new TimeOut(30, TimeUnit.SECONDS, TimeSource.NANO_TIME);
-    while (!waitTimeOut.hasTimedOut()) {
-      clusterSolrClient.commit();
-      QueryResponse response = clusterSolrClient.query(new SolrQuery(query));
-      if (response.getResults().size() > 0 && match.equals(response.getResults().get(0).get(field))) {
-        return (String) response.getResults().get(0).get(field);
-      }
-      Thread.sleep(1000);
-    }
-    return null;
-  }
-}
diff --git a/solr/core/src/test/org/apache/solr/cloud/cdcr/CdcrBootstrapTest.java b/solr/core/src/test/org/apache/solr/cloud/cdcr/CdcrBootstrapTest.java
deleted file mode 100644
index 34d8287..0000000
--- a/solr/core/src/test/org/apache/solr/cloud/cdcr/CdcrBootstrapTest.java
+++ /dev/null
@@ -1,373 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package org.apache.solr.cloud.cdcr;
-
-import java.io.IOException;
-import java.lang.invoke.MethodHandles;
-import java.util.LinkedHashMap;
-
-import org.apache.lucene.store.FSDirectory;
-import org.apache.solr.SolrTestCaseJ4;
-import org.apache.solr.client.solrj.SolrQuery;
-import org.apache.solr.client.solrj.SolrServerException;
-import org.apache.solr.client.solrj.embedded.JettySolrRunner;
-import org.apache.solr.client.solrj.impl.CloudSolrClient;
-import org.apache.solr.client.solrj.request.AbstractUpdateRequest;
-import org.apache.solr.client.solrj.request.CollectionAdminRequest;
-import org.apache.solr.client.solrj.request.UpdateRequest;
-import org.apache.solr.client.solrj.response.QueryResponse;
-import org.apache.solr.cloud.AbstractDistribZkTestBase;
-import org.apache.solr.cloud.MiniSolrCloudCluster;
-import org.apache.solr.common.SolrInputDocument;
-import org.apache.solr.common.params.CommonParams;
-import org.apache.solr.common.params.ModifiableSolrParams;
-import org.apache.solr.handler.CdcrParams;
-import org.junit.Test;
-import org.slf4j.Logger;
-import org.slf4j.LoggerFactory;
-
-public class CdcrBootstrapTest extends SolrTestCaseJ4 {
-
-  private static final Logger log = LoggerFactory.getLogger(MethodHandles.lookup().lookupClass());
-
-  /**
-   * Starts a source cluster with no CDCR configuration and indexes enough documents that at
-   * least one old tlog is closed and discarded, so that the source cluster does not have
-   * all updates available in tlogs only.
-   * <p>
-   * Then we start a target cluster with CDCR configuration and we change the source cluster configuration
-   * to use CDCR (i.e. CdcrUpdateLog, CdcrRequestHandler and CdcrUpdateProcessor) and restart it.
-   * <p>
-   * We test that all updates eventually make it to the target cluster and that the collectioncheckpoint
-   * call returns the same version as the last update indexed on the source.
-   */
-  @Test
-  // commented 4-Sep-2018 @LuceneTestCase.BadApple(bugUrl="https://issues.apache.org/jira/browse/SOLR-12028") // 2-Aug-2018
-  // commented out on: 17-Feb-2019   @BadApple(bugUrl="https://issues.apache.org/jira/browse/SOLR-12028") // 14-Oct-2018
-  public void testConvertClusterToCdcrAndBootstrap() throws Exception {
-    // start the target first so that we know its zkhost
-    MiniSolrCloudCluster target = new MiniSolrCloudCluster(1, createTempDir("cdcr-target"), buildJettyConfig("/solr"));
-    try {
-      if (log.isInfoEnabled()) {
-        log.info("Target zkHost = {}", target.getZkServer().getZkAddress());
-      }
-      System.setProperty("cdcr.target.zkHost", target.getZkServer().getZkAddress());
-
-      // start a cluster with no cdcr
-      MiniSolrCloudCluster source = new MiniSolrCloudCluster(1, createTempDir("cdcr-source"), buildJettyConfig("/solr"));
-      try {
-        source.uploadConfigSet(configset("cdcr-source-disabled"), "cdcr-source");
-
-        // create a collection with the cdcr-source-disabled configset
-        CollectionAdminRequest.createCollection("cdcr-source", "cdcr-source", 1, 1)
-            // TODO: investigate why this is necessary; by default a RAM directory is selected, which deletes the tlogs on reload?
-            .withProperty("solr.directoryFactory", "solr.StandardDirectoryFactory")
-            .process(source.getSolrClient());
-        source.waitForActiveCollection("cdcr-source", 1, 1);
-        CloudSolrClient sourceSolrClient = source.getSolrClient();
-        int docs = (TEST_NIGHTLY ? 100 : 10);
-        int numDocs = indexDocs(sourceSolrClient, "cdcr-source", docs);
-
-        QueryResponse response = sourceSolrClient.query(new SolrQuery("*:*"));
-        assertEquals("", numDocs, response.getResults().getNumFound());
-
-        // let's find and keep the maximum version assigned by the source cluster across all our updates
-        long maxVersion = Long.MIN_VALUE;
-        ModifiableSolrParams params = new ModifiableSolrParams();
-        params.set(CommonParams.QT, "/get");
-        params.set("getVersions", numDocs);
-        params.set("fingerprint", true);
-        response = sourceSolrClient.query(params);
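-        // the fingerprint section of the /get response reports maxVersionEncountered, the highest update version seen in the index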
-        maxVersion = (long)(((LinkedHashMap)response.getResponse().get("fingerprint")).get("maxVersionEncountered"));
-
-        // upload the cdcr-enabled config and restart the source cluster
-        source.uploadConfigSet(configset("cdcr-source"), "cdcr-source");
-        JettySolrRunner runner = source.stopJettySolrRunner(0);
-        source.waitForJettyToStop(runner);
-        
-        source.startJettySolrRunner(runner);
-        source.waitForAllNodes(30);
-        assertTrue(runner.isRunning());
-        AbstractDistribZkTestBase.waitForRecoveriesToFinish("cdcr-source", source.getSolrClient().getZkStateReader(), true, true, 330);
-
-        response = sourceSolrClient.query(new SolrQuery("*:*"));
-        assertEquals("Document mismatch on source after restart", numDocs, response.getResults().getNumFound());
-
-        // setup the target cluster
-        target.uploadConfigSet(configset("cdcr-target"), "cdcr-target");
-        CollectionAdminRequest.createCollection("cdcr-target", "cdcr-target", 1, 2)
-            .process(target.getSolrClient());
-        target.waitForActiveCollection("cdcr-target", 1, 2);
-        CloudSolrClient targetSolrClient = target.getSolrClient();
-        targetSolrClient.setDefaultCollection("cdcr-target");
-        Thread.sleep(6000);
-
-        CdcrTestsUtil.cdcrStart(targetSolrClient);
-        CdcrTestsUtil.cdcrStart(sourceSolrClient);
-
-        response = CdcrTestsUtil.getCdcrQueue(sourceSolrClient);
-        if (log.isInfoEnabled()) {
-          log.info("Cdcr queue response: {}", response.getResponse());
-        }
-        long foundDocs = CdcrTestsUtil.waitForClusterToSync(numDocs, targetSolrClient);
-        assertEquals("Document mismatch on target after sync", numDocs, foundDocs);
-        assertTrue(CdcrTestsUtil.assertShardInSync("cdcr-target", "shard1", targetSolrClient)); // with more than 1 replica
-
-        params = new ModifiableSolrParams();
-        params.set(CommonParams.ACTION, CdcrParams.CdcrAction.COLLECTIONCHECKPOINT.toString());
-        params.set(CommonParams.QT, "/cdcr");
-        response = targetSolrClient.query(params);
-        Long checkpoint = (Long) response.getResponse().get(CdcrParams.CHECKPOINT);
-        assertNotNull(checkpoint);
-        assertEquals("COLLECTIONCHECKPOINT from target cluster should have returned the maximum " +
-            "version across all updates made to source", maxVersion, checkpoint.longValue());
-      } finally {
-        source.shutdown();
-      }
-    } finally {
-      target.shutdown();
-    }
-  }
-
-  private int indexDocs(CloudSolrClient sourceSolrClient, String collection, int batches) throws IOException, SolrServerException {
-    sourceSolrClient.setDefaultCollection(collection);
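-    // index documents in batches of 100, committing after each batch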
-    int numDocs = 0;
-    for (int k = 0; k < batches; k++) {
-      UpdateRequest req = new UpdateRequest();
-      for (; numDocs < (k + 1) * 100; numDocs++) {
-        SolrInputDocument doc = new SolrInputDocument();
-        doc.addField("id", "source_" + numDocs);
-        doc.addField("xyz", numDocs);
-        req.add(doc);
-      }
-      req.setAction(AbstractUpdateRequest.ACTION.COMMIT, true, true);
-      req.process(sourceSolrClient);
-    }
-    log.info("Adding numDocs={}", numDocs);
-    return numDocs;
-  }
-
-  /**
-   * This test starts the cdcr source, adds data, starts the target cluster, verifies replication,
-   * stops cdcr replication and buffering, adds more data, re-enables cdcr, and verifies replication.
-   */
-  public void testBootstrapWithSourceCluster() throws Exception {
-    // start the target first so that we know its zkhost
-    MiniSolrCloudCluster target = new MiniSolrCloudCluster(1, createTempDir("cdcr-target"), buildJettyConfig("/solr"));
-    try {
-      System.out.println("Target zkHost = " + target.getZkServer().getZkAddress());
-      System.setProperty("cdcr.target.zkHost", target.getZkServer().getZkAddress());
-
-      MiniSolrCloudCluster source = new MiniSolrCloudCluster(1, createTempDir("cdcr-source"), buildJettyConfig("/solr"));
-      try {
-        source.uploadConfigSet(configset("cdcr-source"), "cdcr-source");
-
-        CollectionAdminRequest.createCollection("cdcr-source", "cdcr-source", 1, 1)
-            .withProperty("solr.directoryFactory", "solr.StandardDirectoryFactory")
-            .process(source.getSolrClient());
-        source.waitForActiveCollection("cdcr-source", 1, 1);
-
-        CloudSolrClient sourceSolrClient = source.getSolrClient();
-        int docs = (TEST_NIGHTLY ? 100 : 10);
-        int numDocs = indexDocs(sourceSolrClient, "cdcr-source", docs);
-
-        QueryResponse response = sourceSolrClient.query(new SolrQuery("*:*"));
-        assertEquals("", numDocs, response.getResults().getNumFound());
-
-        // setup the target cluster
-        target.uploadConfigSet(configset("cdcr-target"), "cdcr-target");
-        CollectionAdminRequest.createCollection("cdcr-target", "cdcr-target", 1, 1)
-            .process(target.getSolrClient());
-        target.waitForActiveCollection("cdcr-target", 1, 1);
-        CloudSolrClient targetSolrClient = target.getSolrClient();
-        targetSolrClient.setDefaultCollection("cdcr-target");
-
-        CdcrTestsUtil.cdcrStart(targetSolrClient);
-        CdcrTestsUtil.cdcrStart(sourceSolrClient);
-
-        response = CdcrTestsUtil.getCdcrQueue(sourceSolrClient);
-        if (log.isInfoEnabled()) {
-          log.info("Cdcr queue response: {}", response.getResponse());
-        }
-        long foundDocs = CdcrTestsUtil.waitForClusterToSync(numDocs, targetSolrClient);
-        assertEquals("Document mismatch on target after sync", numDocs, foundDocs);
-
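-        // after bootstrap the target replica should have received a copy of the index only, hence an empty tlog directory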
-        int total_tlogs_in_index = FSDirectory.open(target.getBaseDir().resolve("node1").
-            resolve("cdcr-target_shard1_replica_n1").resolve("data").
-            resolve("tlog")).listAll().length;
-
-        assertEquals("tlogs count should be ZERO",0, total_tlogs_in_index);
-
-        CdcrTestsUtil.cdcrStop(sourceSolrClient);
-        CdcrTestsUtil.cdcrDisableBuffer(sourceSolrClient);
-
-        int c = 0;
-        for (int k = 0; k < 10; k++) {
-          UpdateRequest req = new UpdateRequest();
-          for (; c < (k + 1) * 100; c++, numDocs++) {
-            SolrInputDocument doc = new SolrInputDocument();
-            doc.addField("id", "source_" + numDocs);
-            doc.addField("xyz", numDocs);
-            req.add(doc);
-          }
-          req.setAction(AbstractUpdateRequest.ACTION.COMMIT, true, true);
-          log.info("Adding 100 docs with commit=true, numDocs={}", numDocs);
-          req.process(sourceSolrClient);
-        }
-
-        response = sourceSolrClient.query(new SolrQuery("*:*"));
-        assertEquals("", numDocs, response.getResults().getNumFound());
-
-        CdcrTestsUtil.cdcrStart(sourceSolrClient);
-        CdcrTestsUtil.cdcrEnableBuffer(sourceSolrClient);
-
-        foundDocs = CdcrTestsUtil.waitForClusterToSync(numDocs, targetSolrClient);
-        assertEquals("Document mismatch on target after sync", numDocs, foundDocs);
-
-      } finally {
-        source.shutdown();
-      }
-    } finally {
-      target.shutdown();
-    }
-  }
-
-  /**
-   * This test validates that the follower nodes on the target cluster copy content
-   * from their respective leaders.
-   */
-  public void testBootstrapWithMultipleReplicas() throws Exception {
-    // start the target first so that we know its zkhost
-    MiniSolrCloudCluster target = new MiniSolrCloudCluster(3, createTempDir("cdcr-target"), buildJettyConfig("/solr"));
-    try {
-      System.out.println("Target zkHost = " + target.getZkServer().getZkAddress());
-      System.setProperty("cdcr.target.zkHost", target.getZkServer().getZkAddress());
-
-      MiniSolrCloudCluster source = new MiniSolrCloudCluster(3, createTempDir("cdcr-source"), buildJettyConfig("/solr"));
-      try {
-        source.uploadConfigSet(configset("cdcr-source"), "cdcr-source");
-
-        CollectionAdminRequest.createCollection("cdcr-source", "cdcr-source", 1, 3)
-            .withProperty("solr.directoryFactory", "solr.StandardDirectoryFactory")
-            .process(source.getSolrClient());
-        source.waitForActiveCollection("cdcr-source", 1, 3);
-
-        CloudSolrClient sourceSolrClient = source.getSolrClient();
-        int docs = (TEST_NIGHTLY ? 100 : 10);
-        int numDocs = indexDocs(sourceSolrClient, "cdcr-source", docs);
-
-        QueryResponse response = sourceSolrClient.query(new SolrQuery("*:*"));
-        assertEquals("", numDocs, response.getResults().getNumFound());
-
-        // setup the target cluster
-        target.uploadConfigSet(configset("cdcr-target"), "cdcr-target");
-        CollectionAdminRequest.createCollection("cdcr-target", "cdcr-target", 1, 3)
-            .process(target.getSolrClient());
-        target.waitForActiveCollection("cdcr-target", 1, 3);
-        CloudSolrClient targetSolrClient = target.getSolrClient();
-        targetSolrClient.setDefaultCollection("cdcr-target");
-
-        CdcrTestsUtil.cdcrStart(targetSolrClient);
-        CdcrTestsUtil.cdcrStart(sourceSolrClient);
-
-        response = CdcrTestsUtil.getCdcrQueue(sourceSolrClient);
-        if (log.isInfoEnabled()) {
-          log.info("Cdcr queue response: {}", response.getResponse());
-        }
-        long foundDocs = CdcrTestsUtil.waitForClusterToSync(numDocs, targetSolrClient);
-        assertEquals("Document mismatch on target after sync", numDocs, foundDocs);
-        assertTrue("leader followers didnt' match", CdcrTestsUtil.assertShardInSync("cdcr-target", "shard1", targetSolrClient)); // with more than 1 replica
-
-      } finally {
-        source.shutdown();
-      }
-    } finally {
-      target.shutdown();
-    }
-  }
-
-  // 29-June-2018 @BadApple(bugUrl="https://issues.apache.org/jira/browse/SOLR-12028")
-  @BadApple(bugUrl="https://issues.apache.org/jira/browse/SOLR-12028") // 6-Sep-2018
-  @Test
-  @AwaitsFix(bugUrl = "https://issues.apache.org/jira/browse/SOLR-12028")
-  public void testBootstrapWithContinousIndexingOnSourceCluster() throws Exception {
-    // start the target first so that we know its zkhost
-    MiniSolrCloudCluster target = new MiniSolrCloudCluster(1, createTempDir("cdcr-target"), buildJettyConfig("/solr"));
-    try {
-      if (log.isInfoEnabled()) {
-        log.info("Target zkHost = {}", target.getZkServer().getZkAddress());
-      }
-      System.setProperty("cdcr.target.zkHost", target.getZkServer().getZkAddress());
-
-      MiniSolrCloudCluster source = new MiniSolrCloudCluster(1, createTempDir("cdcr-source"), buildJettyConfig("/solr"));
-      try {
-        source.uploadConfigSet(configset("cdcr-source"), "cdcr-source");
-
-        CollectionAdminRequest.createCollection("cdcr-source", "cdcr-source", 1, 1)
-            .withProperty("solr.directoryFactory", "solr.StandardDirectoryFactory")
-            .process(source.getSolrClient());
-        source.waitForActiveCollection("cdcr-source", 1, 1);
-        CloudSolrClient sourceSolrClient = source.getSolrClient();
-        int docs = (TEST_NIGHTLY ? 100 : 10);
-        int numDocs = indexDocs(sourceSolrClient, "cdcr-source", docs);
-
-        QueryResponse response = sourceSolrClient.query(new SolrQuery("*:*"));
-        assertEquals("", numDocs, response.getResults().getNumFound());
-
-        // setup the target cluster
-        target.uploadConfigSet(configset("cdcr-target"), "cdcr-target");
-        CollectionAdminRequest.createCollection("cdcr-target", "cdcr-target", 1, 1)
-            .process(target.getSolrClient());
-        target.waitForActiveCollection("cdcr-target", 1, 1);
-        CloudSolrClient targetSolrClient = target.getSolrClient();
-        targetSolrClient.setDefaultCollection("cdcr-target");
-        Thread.sleep(1000);
-
-        CdcrTestsUtil.cdcrStart(targetSolrClient);
-        CdcrTestsUtil.cdcrStart(sourceSolrClient);
-        int c = 0;
-        for (int k = 0; k < docs; k++) {
-          UpdateRequest req = new UpdateRequest();
-          for (; c < (k + 1) * 100; c++, numDocs++) {
-            SolrInputDocument doc = new SolrInputDocument();
-            doc.addField("id", "source_" + numDocs);
-            doc.addField("xyz", numDocs);
-            req.add(doc);
-          }
-          req.setAction(AbstractUpdateRequest.ACTION.COMMIT, true, true);
-          log.info("Adding {} docs with commit=true, numDocs={}", docs, numDocs);
-          req.process(sourceSolrClient);
-        }
-
-        response = sourceSolrClient.query(new SolrQuery("*:*"));
-        assertEquals("", numDocs, response.getResults().getNumFound());
-
-        response = CdcrTestsUtil.getCdcrQueue(sourceSolrClient);
-        if (log.isInfoEnabled()) {
-          log.info("Cdcr queue response: {}", response.getResponse());
-        }
-        long foundDocs = CdcrTestsUtil.waitForClusterToSync(numDocs, targetSolrClient);
-        assertEquals("Document mismatch on target after sync", numDocs, foundDocs);
-
-      } finally {
-        source.shutdown();
-      }
-    } finally {
-      target.shutdown();
-    }
-  }
-}
diff --git a/solr/core/src/test/org/apache/solr/cloud/cdcr/CdcrOpsAndBoundariesTest.java b/solr/core/src/test/org/apache/solr/cloud/cdcr/CdcrOpsAndBoundariesTest.java
deleted file mode 100644
index 2eb8d9f..0000000
--- a/solr/core/src/test/org/apache/solr/cloud/cdcr/CdcrOpsAndBoundariesTest.java
+++ /dev/null
@@ -1,332 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.solr.cloud.cdcr;
-
-import java.lang.invoke.MethodHandles;
-import java.util.ArrayList;
-import java.util.List;
-
-import org.apache.solr.SolrTestCaseJ4;
-import org.apache.solr.client.solrj.SolrQuery;
-import org.apache.solr.client.solrj.impl.CloudSolrClient;
-import org.apache.solr.client.solrj.request.CollectionAdminRequest;
-import org.apache.solr.client.solrj.response.QueryResponse;
-import org.apache.solr.cloud.MiniSolrCloudCluster;
-import org.apache.solr.common.util.NamedList;
-import org.apache.solr.handler.CdcrParams;
-import org.junit.After;
-import org.junit.Before;
-import org.junit.Test;
-import org.slf4j.Logger;
-import org.slf4j.LoggerFactory;
-
-import com.carrotsearch.randomizedtesting.annotations.Nightly;
-
-@Nightly // test is too long for non-nightly runs
-public class CdcrOpsAndBoundariesTest extends SolrTestCaseJ4 {
-
-  private static final Logger log = LoggerFactory.getLogger(MethodHandles.lookup().lookupClass());
-
-  MiniSolrCloudCluster target, source;
-  CloudSolrClient sourceSolrClient, targetSolrClient;
-  private static String SOURCE_COLLECTION = "cdcr-source";
-  private static String TARGET_COLLECTION = "cdcr-target";
-  private static String ALL_Q = "*:*";
-
-  @Before
-  public void before() throws Exception {
-    target = new MiniSolrCloudCluster(1, createTempDir(TARGET_COLLECTION), buildJettyConfig("/solr"));
-    System.setProperty("cdcr.target.zkHost", target.getZkServer().getZkAddress());
-    source = new MiniSolrCloudCluster(1, createTempDir(SOURCE_COLLECTION), buildJettyConfig("/solr"));
-  }
-
-  @After
-  public void after() throws Exception {
-    if (null != target) {
-      target.shutdown();
-      target = null;
-    }
-    if (null != source) {
-      source.shutdown();
-      source = null;
-    }
-  }
-
-  /**
-   * Check the ops statistics.
-   */
-  @Test
-  @SuppressWarnings({"rawtypes"})
-  public void testOps() throws Exception {
-    createCollections();
-
-    // Start CDCR
-    CdcrTestsUtil.cdcrRestart(sourceSolrClient);
-
-    // Index documents
-    CdcrTestsUtil.indexRandomDocs(100, sourceSolrClient);
-    double opsAll = 0.0;
-    NamedList ops = null;
-
-    // calculate ops
-    int itr = 10;
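-    // the OPS response is nested: operationsPerSecond -> <target zk host> -> <target collection> -> {all, adds, deletes} counters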
-    while (itr-- > 0 && opsAll == 0.0) {
-      NamedList rsp = CdcrTestsUtil.invokeCdcrAction(sourceSolrClient, CdcrParams.CdcrAction.OPS).getResponse();
-      NamedList collections = (NamedList) ((NamedList) rsp.get(CdcrParams.OPERATIONS_PER_SECOND)).getVal(0);
-      ops = (NamedList) collections.get(TARGET_COLLECTION);
-      opsAll = (Double) ops.get(CdcrParams.COUNTER_ALL);
-      Thread.sleep(250); // wait for cdcr to complete and check
-    }
-    // asserts ops values
-    double opsAdds = (Double) ops.get(CdcrParams.COUNTER_ADDS);
-    assertTrue(opsAll > 0);
-    assertTrue(opsAdds > 0);
-    double opsDeletes = (Double) ops.get(CdcrParams.COUNTER_DELETES);
-    assertEquals(0, opsDeletes, 0);
-
-    // Delete 100 documents: ids 0-49 via delete-by-id, ids 50-99 via delete-by-query
-    List<String> ids;
-    for (int id = 0; id < 50; id++) {
-      ids = new ArrayList<>();
-      ids.add(Integer.toString(id));
-      sourceSolrClient.deleteById(ids, 1);
-      int dbq_id = 50 + id;
-      sourceSolrClient.deleteByQuery("id:" + dbq_id, 1);
-    }
-
-    itr = 10;
-    while (itr-- > 0) {
-      NamedList rsp = CdcrTestsUtil.invokeCdcrAction(sourceSolrClient, CdcrParams.CdcrAction.OPS).getResponse();
-      NamedList collections = (NamedList) ((NamedList) rsp.get(CdcrParams.OPERATIONS_PER_SECOND)).getVal(0);
-      ops = (NamedList) collections.get(TARGET_COLLECTION);
-      opsAll = (Double) ops.get(CdcrParams.COUNTER_ALL);
-      Thread.sleep(250); // wait for cdcr to complete and check
-    }
-    // asserts ops values
-    opsAdds = (Double) ops.get(CdcrParams.COUNTER_ADDS);
-    opsDeletes = (Double) ops.get(CdcrParams.COUNTER_DELETES);
-    assertTrue(opsAll > 0);
-    assertTrue(opsAdds > 0);
-    assertTrue(opsDeletes > 0);
-
-    deleteCollections();
-  }
-
-  @Test
-  public void testTargetCollectionNotAvailable() throws Exception {
-    createCollections();
-
-    // send start action to first shard
-    CdcrTestsUtil.cdcrStart(sourceSolrClient);
-
-    assertNotSame(null, CdcrTestsUtil.waitForClusterToSync
-        (sourceSolrClient.query(new SolrQuery(ALL_Q)).getResults().getNumFound(), targetSolrClient));
-
-    // sleep for a bit to ensure that replicator threads are started
-    Thread.sleep(3000);
-
-    target.deleteAllCollections();
-
-    CdcrTestsUtil.indexRandomDocs(6, sourceSolrClient);
-    assertEquals(6L, sourceSolrClient.query(new SolrQuery(ALL_Q)).getResults().getNumFound());
-
-    // we need to wait until the replicator thread is triggered
-    int cnt = 15; // timeout after 15 seconds
-    AssertionError lastAssertionError = null;
-    while (cnt > 0) {
-      try {
-        QueryResponse rsp = CdcrTestsUtil.invokeCdcrAction(sourceSolrClient, CdcrParams.CdcrAction.ERRORS);
-        @SuppressWarnings({"rawtypes"})
-        NamedList collections = (NamedList) ((NamedList) rsp.getResponse().get(CdcrParams.ERRORS)).getVal(0);
-        @SuppressWarnings({"rawtypes"})
-        NamedList errors = (NamedList) collections.get(TARGET_COLLECTION);
-        assertTrue(0 < (Long) errors.get(CdcrParams.CONSECUTIVE_ERRORS));
-        @SuppressWarnings({"rawtypes"})
-        NamedList lastErrors = (NamedList) errors.get(CdcrParams.LAST);
-        assertNotNull(lastErrors);
-        assertTrue(0 < lastErrors.size());
-        deleteCollections();
-        return;
-      } catch (AssertionError e) {
-        lastAssertionError = e;
-        cnt--;
-        Thread.sleep(1000);
-      }
-    }
-
-    deleteCollections();
-    throw new AssertionError("Timeout while trying to assert replication errors", lastAssertionError);
-  }
-
-  @Test
-  public void testReplicationStartStop() throws Exception {
-    createCollections();
-
-    CdcrTestsUtil.indexRandomDocs(10, sourceSolrClient);
-    CdcrTestsUtil.cdcrStart(sourceSolrClient);
-
-    assertEquals(10, CdcrTestsUtil.waitForClusterToSync
-        (sourceSolrClient.query(new SolrQuery(ALL_Q)).getResults().getNumFound(), targetSolrClient));
-
-    CdcrTestsUtil.cdcrStop(sourceSolrClient);
-
-    CdcrTestsUtil.indexRandomDocs(110, sourceSolrClient);
-
-    // Start CDCR again; the source cluster should reinitialise its log readers
-    // with the latest checkpoints
-
-    CdcrTestsUtil.cdcrRestart(sourceSolrClient);
-
-    assertEquals(110, CdcrTestsUtil.waitForClusterToSync
-        (sourceSolrClient.query(new SolrQuery(ALL_Q)).getResults().getNumFound(), targetSolrClient));
-
-    deleteCollections();
-  }
-
-  /**
-   * Checks that batch updates interleaved with deletes are correctly replicated.
-   */
-  @Test
-  public void testBatchAddsWithDelete() throws Exception {
-    createCollections();
-
-    // Start CDCR
-    CdcrTestsUtil.cdcrRestart(sourceSolrClient);
-    // Index 50 documents
-    CdcrTestsUtil.indexRandomDocs(50, sourceSolrClient);
-
-    // Delete 10 documents: 10-19
-    List<String> ids = new ArrayList<>();
-    for (int id = 10; id < 20; id++) {
-      ids.add(Integer.toString(id));
-    }
-    sourceSolrClient.deleteById(ids, 10);
-
-    CdcrTestsUtil.indexRandomDocs(50, 60, sourceSolrClient);
-
-    // Delete 1 document: 50
-    ids = new ArrayList<>();
-    ids.add(Integer.toString(50));
-    sourceSolrClient.deleteById(ids, 10);
-
-    CdcrTestsUtil.indexRandomDocs(60, 70, sourceSolrClient);
-
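-    // expected count: 50 + 10 + 10 indexed, minus 10 + 1 deleted = 59 documents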
-    assertEquals(59, CdcrTestsUtil.waitForClusterToSync(59, sourceSolrClient));
-    assertEquals(59, CdcrTestsUtil.waitForClusterToSync
-        (sourceSolrClient.query(new SolrQuery(ALL_Q)).getResults().getNumFound(), targetSolrClient));
-
-    deleteCollections();
-  }
-
-  /**
-   * Checks that batches are correctly constructed when batch boundaries are reached.
-   */
-  @Test
-  public void testBatchBoundaries() throws Exception {
-    createCollections();
-
-    // Start CDCR
-    CdcrTestsUtil.cdcrRestart(sourceSolrClient);
-
-    log.info("Indexing documents");
-
-    CdcrTestsUtil.indexRandomDocs(1000, sourceSolrClient);
-
-    assertEquals(1000, CdcrTestsUtil.waitForClusterToSync
-        (sourceSolrClient.query(new SolrQuery(ALL_Q)).getResults().getNumFound(), targetSolrClient));
-
-    deleteCollections();
-  }
-
-  /**
-   * Checks the resilience of replication when delete-by-query is executed on the target.
-   */
-  @Test
-  public void testResilienceWithDeleteByQueryOnTarget() throws Exception {
-    createCollections();
-
-    // Start CDCR
-    CdcrTestsUtil.cdcrRestart(sourceSolrClient);
-
-    CdcrTestsUtil.indexRandomDocs(50, sourceSolrClient);
-
-    assertEquals(50, CdcrTestsUtil.waitForClusterToSync
-        (sourceSolrClient.query(new SolrQuery(ALL_Q)).getResults().getNumFound(), targetSolrClient));
-
-    sourceSolrClient.deleteByQuery(ALL_Q, 1);
-
-    assertEquals(0, CdcrTestsUtil.waitForClusterToSync(0, sourceSolrClient));
-    assertEquals(0, CdcrTestsUtil.waitForClusterToSync
-        (sourceSolrClient.query(new SolrQuery(ALL_Q)).getResults().getNumFound(), targetSolrClient));
-
-    CdcrTestsUtil.indexRandomDocs(51, 101, sourceSolrClient);
-
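-    // ids 51-100 are indexed on the source (50 docs) and must replicate to the target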
-    assertEquals(50, CdcrTestsUtil.waitForClusterToSync
-        (sourceSolrClient.query(new SolrQuery(ALL_Q)).getResults().getNumFound(), targetSolrClient));
-
-    targetSolrClient.deleteByQuery(ALL_Q, 1);
-
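-    // a delete-by-query issued directly on the target is not propagated back to the source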
-    assertEquals(50, sourceSolrClient.query(new SolrQuery(ALL_Q)).getResults().getNumFound());
-    assertEquals(0, CdcrTestsUtil.waitForClusterToSync(0, targetSolrClient));
-
-    CdcrTestsUtil.indexRandomDocs(102, 152, sourceSolrClient);
-
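-    // only the 50 updates issued after the target delete-by-query reach the target,
-    // so the source holds 100 documents while the target holds 50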
-    assertEquals(100, sourceSolrClient.query(new SolrQuery(ALL_Q)).getResults().getNumFound());
-    assertEquals(50, CdcrTestsUtil.waitForClusterToSync(50, targetSolrClient));
-
-    deleteCollections();
-  }
-
-  private void createSourceCollection() throws Exception {
-    source.uploadConfigSet(configset(SOURCE_COLLECTION), SOURCE_COLLECTION);
-    CollectionAdminRequest.createCollection(SOURCE_COLLECTION, SOURCE_COLLECTION, 1, 1)
-        .withProperty("solr.directoryFactory", "solr.StandardDirectoryFactory")
-        .process(source.getSolrClient());
-    Thread.sleep(1000);
-    sourceSolrClient = source.getSolrClient();
-    sourceSolrClient.setDefaultCollection(SOURCE_COLLECTION);
-  }
-
-  private void createTargetCollection() throws Exception {
-    target.uploadConfigSet(configset(TARGET_COLLECTION), TARGET_COLLECTION);
-    CollectionAdminRequest.createCollection(TARGET_COLLECTION, TARGET_COLLECTION, 1, 1)
-        .withProperty("solr.directoryFactory", "solr.StandardDirectoryFactory")
-        .process(target.getSolrClient());
-    Thread.sleep(1000);
-    targetSolrClient = target.getSolrClient();
-    targetSolrClient.setDefaultCollection(TARGET_COLLECTION);
-  }
-
-  private void deleteSourceCollection() throws Exception {
-    source.deleteAllCollections();
-  }
-
-  private void deleteTargetCollection() throws Exception {
-    target.deleteAllCollections();
-  }
-
-  private void createCollections() throws Exception {
-    createTargetCollection();
-    createSourceCollection();
-  }
-
-  private void deleteCollections() throws Exception {
-    deleteSourceCollection();
-    deleteTargetCollection();
-  }
-
-}
diff --git a/solr/core/src/test/org/apache/solr/cloud/cdcr/CdcrReplicationHandlerTest.java b/solr/core/src/test/org/apache/solr/cloud/cdcr/CdcrReplicationHandlerTest.java
deleted file mode 100644
index 7bd371f..0000000
--- a/solr/core/src/test/org/apache/solr/cloud/cdcr/CdcrReplicationHandlerTest.java
+++ /dev/null
@@ -1,332 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.solr.cloud.cdcr;
-
-import java.io.File;
-import java.io.IOException;
-import java.lang.invoke.MethodHandles;
-import java.util.ArrayList;
-import java.util.HashMap;
-import java.util.List;
-import java.util.Map;
-import java.util.concurrent.Executors;
-import java.util.concurrent.ScheduledExecutorService;
-import java.util.concurrent.TimeUnit;
-import java.util.concurrent.atomic.AtomicInteger;
-
-import org.apache.lucene.util.LuceneTestCase.Nightly;
-import org.apache.solr.client.solrj.SolrClient;
-import org.apache.solr.client.solrj.SolrQuery;
-import org.apache.solr.client.solrj.SolrServerException;
-import org.apache.solr.client.solrj.impl.CloudSolrClient;
-import org.apache.solr.common.SolrInputDocument;
-import org.apache.solr.common.util.SolrNamedThreadFactory;
-import org.junit.Test;
-import org.slf4j.Logger;
-import org.slf4j.LoggerFactory;
-
-/**
- * This class is testing the cdcr extension to the {@link org.apache.solr.handler.ReplicationHandler} and
- * {@link org.apache.solr.handler.IndexFetcher}.
- */
-@Nightly
-public class CdcrReplicationHandlerTest extends BaseCdcrDistributedZkTest {
-
-  private static final Logger log = LoggerFactory.getLogger(MethodHandles.lookup().lookupClass());
-
-  @Override
-  public void distribSetUp() throws Exception {
-    schemaString = "schema15.xml";      // we need a string id
-    createTargetCollection = false;     // we do not need the target cluster
-    shardCount = 1; // we need only one shard
-    // we need a persistent directory, otherwise the UpdateHandler will erase existing tlog files after restarting a node
-    System.setProperty("solr.directoryFactory", "solr.StandardDirectoryFactory");
-    super.distribSetUp();
-  }
-
-  /**
-   * Test the scenario where the slave is killed from the start. The replication
-   * strategy should fetch all the missing tlog files from the leader.
-   */
-  @Test
-  @ShardsFixed(num = 2)
-  public void testFullReplication() throws Exception {
-    List<CloudJettyRunner> slaves = this.getShardToSlaveJetty(SOURCE_COLLECTION, SHARD1);
-    slaves.get(0).jetty.stop();
-
-    for (int i = 0; i < 10; i++) {
-      List<SolrInputDocument> docs = new ArrayList<>();
-      for (int j = i * 10; j < (i * 10) + 10; j++) {
-        docs.add(getDoc(id, Integer.toString(j)));
-      }
-      index(SOURCE_COLLECTION, docs);
-    }
-
-    assertNumDocs(100, SOURCE_COLLECTION);
-
-    // Restart the slave node to trigger Replication strategy
-    this.restartServer(slaves.get(0));
-
-    this.assertUpdateLogsEquals(SOURCE_COLLECTION, 10);
-  }
-
-  /**
-   * Test the scenario where the slave is killed before receiving all the documents. The replication
-   * strategy should fetch all the missing tlog files from the leader.
-   */
-  @Test
-  @ShardsFixed(num = 2)
-  public void testPartialReplication() throws Exception {
-    for (int i = 0; i < 5; i++) {
-      List<SolrInputDocument> docs = new ArrayList<>();
-      for (int j = i * 20; j < (i * 20) + 20; j++) {
-        docs.add(getDoc(id, Integer.toString(j)));
-      }
-      index(SOURCE_COLLECTION, docs);
-    }
-
-    List<CloudJettyRunner> slaves = this.getShardToSlaveJetty(SOURCE_COLLECTION, SHARD1);
-    slaves.get(0).jetty.stop();
-
-    for (int i = 5; i < 10; i++) {
-      List<SolrInputDocument> docs = new ArrayList<>();
-      for (int j = i * 20; j < (i * 20) + 20; j++) {
-        docs.add(getDoc(id, Integer.toString(j)));
-      }
-      index(SOURCE_COLLECTION, docs);
-    }
-
-    assertNumDocs(200, SOURCE_COLLECTION);
-
-    // Restart the slave node to trigger Replication strategy
-    this.restartServer(slaves.get(0));
-
-    // at this stage, the slave should have replicated the 5 missing tlog files
-    this.assertUpdateLogsEquals(SOURCE_COLLECTION, 10);
-  }
-
-  /**
-   * Test the scenario where the slave is killed before receiving a commit. This creates a truncated tlog
-   * file on the slave node. The replication strategy should detect this truncated file, and fetch the
-   * non-truncated file from the leader.
-   */
-  @Test
-  @ShardsFixed(num = 2)
-  public void testPartialReplicationWithTruncatedTlog() throws Exception {
-    CloudSolrClient client = createCloudClient(SOURCE_COLLECTION);
-    List<CloudJettyRunner> slaves = this.getShardToSlaveJetty(SOURCE_COLLECTION, SHARD1);
-
-    try {
-      for (int i = 0; i < 10; i++) {
-        for (int j = i * 20; j < (i * 20) + 20; j++) {
-          client.add(getDoc(id, Integer.toString(j)));
-
-          // Stop the slave in the middle of a batch to create a truncated tlog on the slave
-          if (j == 45) {
-            slaves.get(0).jetty.stop();
-          }
-
-        }
-        commit(SOURCE_COLLECTION);
-      }
-    } finally {
-      client.close();
-    }
-
-    assertNumDocs(200, SOURCE_COLLECTION);
-
-    // Restart the slave node to trigger Replication recovery
-    this.restartServer(slaves.get(0));
-
-    // at this stage, the slave should have detected the truncated tlog and fetched clean copies of the missing tlog files from the leader
-    this.assertUpdateLogsEquals(SOURCE_COLLECTION, 10);
-  }
-
-  /**
-   * Test the scenario where the slave first recovered with a PeerSync strategy, then with a Replication strategy.
-   * The PeerSync strategy will generate a single tlog file for all the missing updates on the slave node.
-   * If a Replication strategy occurs at a later stage, it should remove this tlog file generated by PeerSync
-   * and fetch the corresponding tlog files from the leader.
-   */
-  @Test
-  @ShardsFixed(num = 2)
-  public void testPartialReplicationAfterPeerSync() throws Exception {
-    for (int i = 0; i < 5; i++) {
-      List<SolrInputDocument> docs = new ArrayList<>();
-      for (int j = i * 10; j < (i * 10) + 10; j++) {
-        docs.add(getDoc(id, Integer.toString(j)));
-      }
-      index(SOURCE_COLLECTION, docs);
-    }
-
-    List<CloudJettyRunner> slaves = this.getShardToSlaveJetty(SOURCE_COLLECTION, SHARD1);
-    slaves.get(0).jetty.stop();
-
-    for (int i = 5; i < 10; i++) {
-      List<SolrInputDocument> docs = new ArrayList<>();
-      for (int j = i * 10; j < (i * 10) + 10; j++) {
-        docs.add(getDoc(id, Integer.toString(j)));
-      }
-      index(SOURCE_COLLECTION, docs);
-    }
-
-    assertNumDocs(100, SOURCE_COLLECTION);
-
-    // Restart the slave node to trigger PeerSync recovery
-    // (the update window between leader and slave is small enough)
-    this.restartServer(slaves.get(0));
-
-    slaves.get(0).jetty.stop();
-
-    for (int i = 10; i < 15; i++) {
-      List<SolrInputDocument> docs = new ArrayList<>();
-      for (int j = i * 20; j < (i * 20) + 20; j++) {
-        docs.add(getDoc(id, Integer.toString(j)));
-      }
-      index(SOURCE_COLLECTION, docs);
-    }
-
-    // restart the slave node to trigger Replication recovery
-    this.restartServer(slaves.get(0));
-
-    // at this stage, the slave should have removed the tlog generated by PeerSync and fetched the missing tlog files from the leader
-    this.assertUpdateLogsEquals(SOURCE_COLLECTION, 15);
-  }
-
-  /**
-   * Test the scenario where the slave is killed while the leader is still receiving updates.
-   * The slave should buffer updates while in recovery, then replay them at the end of the recovery.
-   * If updates were properly buffered and replayed, then the slave should have the same number of documents
-   * as the leader. This checks whether cdcr tlog replication interferes with buffered updates - SOLR-8263.
-   */
-  @Test
-  @ShardsFixed(num = 2)
-  public void testReplicationWithBufferedUpdates() throws Exception {
-    List<CloudJettyRunner> slaves = this.getShardToSlaveJetty(SOURCE_COLLECTION, SHARD1);
-
-    AtomicInteger numDocs = new AtomicInteger(0);
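-    // index batches of 10 documents every 10 ms in the background while the slave restarts,
-    // forcing the slave to buffer updates during recovery (see SOLR-8263)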
-    ScheduledExecutorService executor = Executors.newSingleThreadScheduledExecutor(new SolrNamedThreadFactory("cdcr-test-update-scheduler"));
-    executor.scheduleWithFixedDelay(new UpdateThread(numDocs), 10, 10, TimeUnit.MILLISECONDS);
-
-    // Restart the slave node to trigger Replication strategy
-    this.restartServer(slaves.get(0));
-
-    // shutdown the update thread and wait for its completion
-    executor.shutdown();
-    executor.awaitTermination(500, TimeUnit.MILLISECONDS);
-
-    // check that we have the expected number of documents in the cluster
-    assertNumDocs(numDocs.get(), SOURCE_COLLECTION);
-
-    // check that we have the expected number of documents on the slave
-    assertNumDocs(numDocs.get(), slaves.get(0));
-  }
-
-  private void assertNumDocs(int expectedNumDocs, CloudJettyRunner jetty)
-  throws InterruptedException, IOException, SolrServerException {
-    SolrClient client = createNewSolrServer(jetty.url);
-    try {
-      int cnt = 30; // timeout after 15 seconds
-      AssertionError lastAssertionError = null;
-      while (cnt > 0) {
-        try {
-          assertEquals(expectedNumDocs, client.query(new SolrQuery("*:*")).getResults().getNumFound());
-          return;
-        }
-        catch (AssertionError e) {
-          lastAssertionError = e;
-          cnt--;
-          Thread.sleep(500);
-        }
-      }
-      throw new AssertionError("Timeout while trying to assert number of documents @ " + jetty.url, lastAssertionError);
-    } finally {
-      client.close();
-    }
-  }
-
-  private class UpdateThread implements Runnable {
-
-    private AtomicInteger numDocs;
-
-    private UpdateThread(AtomicInteger numDocs) {
-      this.numDocs = numDocs;
-    }
-
-    @Override
-    public void run() {
-      try {
-        List<SolrInputDocument> docs = new ArrayList<>();
-        for (int j = numDocs.get(); j < (numDocs.get() + 10); j++) {
-          docs.add(getDoc(id, Integer.toString(j)));
-        }
-        index(SOURCE_COLLECTION, docs);
-        numDocs.getAndAdd(10);
-        if (log.isInfoEnabled()) {
-          log.info("Sent batch of {} updates - numDocs:{}", docs.size(), numDocs);
-        }
-      }
-      catch (Exception e) {
-        throw new RuntimeException(e);
-      }
-    }
-
-  }
-
-  private List<CloudJettyRunner> getShardToSlaveJetty(String collection, String shard) {
-    List<CloudJettyRunner> jetties = new ArrayList<>(shardToJetty.get(collection).get(shard));
-    CloudJettyRunner leader = shardToLeaderJetty.get(collection).get(shard);
-    jetties.remove(leader);
-    return jetties;
-  }
-
-  /**
-   * Asserts that the update logs are in sync between the leader and slave. The leader and the slaves
-   * must have identical tlog files.
-   */
-  protected void assertUpdateLogsEquals(String collection, int numberOfTLogs) throws Exception {
-    CollectionInfo info = collectInfo(collection);
-    Map<String, List<CollectionInfo.CoreInfo>> shardToCoresMap = info.getShardToCoresMap();
-
-    for (String shard : shardToCoresMap.keySet()) {
-      Map<Long, Long> leaderFilesMeta = this.getFilesMeta(info.getLeader(shard).ulogDir);
-      Map<Long, Long> slaveFilesMeta = this.getFilesMeta(info.getReplicas(shard).get(0).ulogDir);
-
-      assertEquals("Incorrect number of tlog files on the leader", numberOfTLogs, leaderFilesMeta.size());
-      assertEquals("Incorrect number of tlog files on the slave", numberOfTLogs, slaveFilesMeta.size());
-
-      for (Long leaderFileVersion : leaderFilesMeta.keySet()) {
-        assertTrue("Slave is missing a tlog for version " + leaderFileVersion, slaveFilesMeta.containsKey(leaderFileVersion));
-        assertEquals("Slave's tlog file size differs for version " + leaderFileVersion, leaderFilesMeta.get(leaderFileVersion), slaveFilesMeta.get(leaderFileVersion));
-      }
-    }
-  }
-
-  private Map<Long, Long> getFilesMeta(String dir) {
-    File file = new File(dir);
-    if (!file.isDirectory()) {
-      assertTrue("Path to tlog " + dir + " does not exists or it's not a directory.", false);
-    }
-
-    Map<Long, Long> filesMeta = new HashMap<>();
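-    // map each tlog file's version (the numeric suffix of its file name) to its size in bytes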
-    for (File tlogFile : file.listFiles()) {
-      filesMeta.put(Math.abs(Long.parseLong(tlogFile.getName().substring(tlogFile.getName().lastIndexOf('.') + 1))), tlogFile.length());
-    }
-    return filesMeta;
-  }
-
-}
diff --git a/solr/core/src/test/org/apache/solr/cloud/cdcr/CdcrRequestHandlerTest.java b/solr/core/src/test/org/apache/solr/cloud/cdcr/CdcrRequestHandlerTest.java
deleted file mode 100644
index 0944a61..0000000
--- a/solr/core/src/test/org/apache/solr/cloud/cdcr/CdcrRequestHandlerTest.java
+++ /dev/null
@@ -1,183 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.solr.cloud.cdcr;
-
-import java.util.Arrays;
-import com.google.common.collect.ImmutableMap;
-import org.apache.lucene.util.LuceneTestCase.Nightly;
-import org.apache.solr.common.SolrInputDocument;
-import org.apache.solr.common.util.NamedList;
-import org.apache.solr.handler.CdcrParams;
-import org.junit.Test;
-
-@Nightly
-public class CdcrRequestHandlerTest extends BaseCdcrDistributedZkTest {
-
-  @Override
-  public void distribSetUp() throws Exception {
-    schemaString = "schema15.xml";      // we need a string id
-    createTargetCollection = false;     // we do not need the target cluster
-    super.distribSetUp();
-  }
-
-  // check that the life-cycle state is properly synchronised across nodes
-  @Test
-  @ShardsFixed(num = 2)
-  public void testLifeCycleActions() throws Exception {
-    // check initial status
-    this.assertState(SOURCE_COLLECTION, CdcrParams.ProcessState.STOPPED, CdcrParams.BufferState.ENABLED);
-
-    // send start action to first shard
-    @SuppressWarnings({"rawtypes"})
-    NamedList rsp = invokeCdcrAction(shardToLeaderJetty.get(SOURCE_COLLECTION).get(SHARD1), CdcrParams.CdcrAction.START);
-    @SuppressWarnings({"rawtypes"})
-    NamedList status = (NamedList) rsp.get(CdcrParams.CdcrAction.STATUS.toLower());
-    assertEquals(CdcrParams.ProcessState.STARTED.toLower(), status.get(CdcrParams.ProcessState.getParam()));
-
-    // check status
-    this.assertState(SOURCE_COLLECTION, CdcrParams.ProcessState.STARTED, CdcrParams.BufferState.ENABLED);
-
-    // Restart the leader of shard 1
-    this.restartServer(shardToLeaderJetty.get(SOURCE_COLLECTION).get(SHARD1));
-
-    // check status - the node that died should have picked up the original state
-    this.assertState(SOURCE_COLLECTION, CdcrParams.ProcessState.STARTED, CdcrParams.BufferState.ENABLED);
-
-    // send stop action to second shard
-    rsp = invokeCdcrAction(shardToLeaderJetty.get(SOURCE_COLLECTION).get(SHARD2), CdcrParams.CdcrAction.STOP);
-    status = (NamedList) rsp.get(CdcrParams.CdcrAction.STATUS.toLower());
-    assertEquals(CdcrParams.ProcessState.STOPPED.toLower(), status.get(CdcrParams.ProcessState.getParam()));
-
-    // check status
-    this.assertState(SOURCE_COLLECTION, CdcrParams.ProcessState.STOPPED, CdcrParams.BufferState.ENABLED);
-  }
-
-  // check the checkpoint API
-  @Test
-  @ShardsFixed(num = 2)
-  public void testCheckpointActions() throws Exception {
-    // initial request on an empty index, must return -1
-    @SuppressWarnings({"rawtypes"})
-    NamedList rsp = invokeCdcrAction(shardToLeaderJetty.get(SOURCE_COLLECTION).get(SHARD1), CdcrParams.CdcrAction.COLLECTIONCHECKPOINT);
-    assertEquals(-1L, rsp.get(CdcrParams.CHECKPOINT));
-
-    index(SOURCE_COLLECTION, getDoc(id, "a","test_i_dvo",10)); // shard 2
-
-    // only one document indexed in shard 2, the checkpoint must be still -1
-    rsp = invokeCdcrAction(shardToLeaderJetty.get(SOURCE_COLLECTION).get(SHARD1), CdcrParams.CdcrAction.COLLECTIONCHECKPOINT);
-    assertEquals(-1L, rsp.get(CdcrParams.CHECKPOINT));
-
-    index(SOURCE_COLLECTION, getDoc(id, "b")); // shard 1
-
-    // a second document indexed in shard 1, the checkpoint must come from shard 2
-    rsp = invokeCdcrAction(shardToLeaderJetty.get(SOURCE_COLLECTION).get(SHARD2), CdcrParams.CdcrAction.COLLECTIONCHECKPOINT);
-    long checkpoint1 = (Long) rsp.get(CdcrParams.CHECKPOINT);
-    long expected = (Long) invokeCdcrAction(shardToLeaderJetty.get(SOURCE_COLLECTION).get(SHARD2), CdcrParams.CdcrAction.SHARDCHECKPOINT).get(CdcrParams.CHECKPOINT);
-    assertEquals(expected, checkpoint1);
-
-    index(SOURCE_COLLECTION, getDoc(id, "c")); // shard 1
-
-    // a third document indexed in shard 1, the checkpoint must still come from shard 2
-    rsp = invokeCdcrAction(shardToLeaderJetty.get(SOURCE_COLLECTION).get(SHARD1), CdcrParams.CdcrAction.COLLECTIONCHECKPOINT);
-    assertEquals(checkpoint1, rsp.get(CdcrParams.CHECKPOINT));
-
-    index(SOURCE_COLLECTION, getDoc(id, "d")); // shard 2
-
-    // a fourth document indexed in shard 2, the checkpoint must come from shard 1
-    rsp = invokeCdcrAction(shardToLeaderJetty.get(SOURCE_COLLECTION).get(SHARD2), CdcrParams.CdcrAction.COLLECTIONCHECKPOINT);
-    long checkpoint2 = (Long) rsp.get(CdcrParams.CHECKPOINT);
-    expected = (Long) invokeCdcrAction(shardToLeaderJetty.get(SOURCE_COLLECTION).get(SHARD1), CdcrParams.CdcrAction.SHARDCHECKPOINT).get(CdcrParams.CHECKPOINT);
-    assertEquals(expected, checkpoint2);
-
-    // send a delete by id
-    long pre_op = (Long) invokeCdcrAction(shardToLeaderJetty.get(SOURCE_COLLECTION).get(SHARD2), CdcrParams.CdcrAction.SHARDCHECKPOINT).get(CdcrParams.CHECKPOINT);
-    deleteById(SOURCE_COLLECTION, Arrays.asList(new String[]{"c"})); //shard1
-    // document deleted in shard1, checkpoint should come from shard2
-    rsp = invokeCdcrAction(shardToLeaderJetty.get(SOURCE_COLLECTION).get(SHARD2), CdcrParams.CdcrAction.COLLECTIONCHECKPOINT);
-    long checkpoint3 = (Long) rsp.get(CdcrParams.CHECKPOINT);
-    expected = (Long) invokeCdcrAction(shardToLeaderJetty.get(SOURCE_COLLECTION).get(SHARD2), CdcrParams.CdcrAction.SHARDCHECKPOINT).get(CdcrParams.CHECKPOINT);
-    assertEquals(pre_op, expected);
-    assertEquals(expected, checkpoint3);
-
-    // send an in-place update
-    SolrInputDocument in_place_doc = new SolrInputDocument();
-    in_place_doc.setField(id, "a");
-    in_place_doc.setField("test_i_dvo", ImmutableMap.of("inc", 10)); //shard2
-    index(SOURCE_COLLECTION, in_place_doc);
-    // document updated in shard2, checkpoint should come from shard1
-    rsp = invokeCdcrAction(shardToLeaderJetty.get(SOURCE_COLLECTION).get(SHARD1), CdcrParams.CdcrAction.COLLECTIONCHECKPOINT);
-    long checkpoint4 = (Long) rsp.get(CdcrParams.CHECKPOINT);
-    expected = (Long) invokeCdcrAction(shardToLeaderJetty.get(SOURCE_COLLECTION).get(SHARD1), CdcrParams.CdcrAction.SHARDCHECKPOINT).get(CdcrParams.CHECKPOINT);
-    assertEquals(expected, checkpoint4);
-
-    // send a delete by query
-    deleteByQuery(SOURCE_COLLECTION, "*:*");
-
-    // all the checkpoints must come from the DBQ
-    rsp = invokeCdcrAction(shardToLeaderJetty.get(SOURCE_COLLECTION).get(SHARD2), CdcrParams.CdcrAction.COLLECTIONCHECKPOINT);
-    long checkpoint5 = (Long) rsp.get(CdcrParams.CHECKPOINT);
-    assertTrue(checkpoint5 > 0); // ensure that checkpoints from deletes are in absolute form
-    checkpoint5 = (Long) invokeCdcrAction(shardToLeaderJetty.get(SOURCE_COLLECTION).get(SHARD1), CdcrParams.CdcrAction.SHARDCHECKPOINT).get(CdcrParams.CHECKPOINT);
-    assertTrue(checkpoint5 > 0); // ensure that checkpoints from deletes are in absolute form
-    checkpoint5 = (Long) invokeCdcrAction(shardToLeaderJetty.get(SOURCE_COLLECTION).get(SHARD2), CdcrParams.CdcrAction.SHARDCHECKPOINT).get(CdcrParams.CHECKPOINT);
-    assertTrue(checkpoint5 > 0); // ensure that checkpoints from deletes are in absolute form
-
-
-    // replication never started, lastProcessedVersion should be -1 for both shards
-    rsp = invokeCdcrAction(shardToLeaderJetty.get(SOURCE_COLLECTION).get(SHARD1), CdcrParams.CdcrAction.LASTPROCESSEDVERSION);
-    long lastVersion = (Long) rsp.get(CdcrParams.LAST_PROCESSED_VERSION);
-    assertEquals(-1L, lastVersion);
-
-    rsp = invokeCdcrAction(shardToLeaderJetty.get(SOURCE_COLLECTION).get(SHARD2), CdcrParams.CdcrAction.LASTPROCESSEDVERSION);
-    lastVersion = (Long) rsp.get(CdcrParams.LAST_PROCESSED_VERSION);
-    assertEquals(-1L, lastVersion);
-  }
-
-  // check that the buffer state is properly synchronised across nodes
-  @Test
-  @ShardsFixed(num = 2)
-  public void testBufferActions() throws Exception {
-    // check initial status
-    this.assertState(SOURCE_COLLECTION, CdcrParams.ProcessState.STOPPED, CdcrParams.BufferState.ENABLED);
-
-    // send disable buffer action to first shard
-    @SuppressWarnings({"rawtypes"})
-    NamedList rsp = invokeCdcrAction(shardToLeaderJetty.get(SOURCE_COLLECTION).get(SHARD1), CdcrParams.CdcrAction.DISABLEBUFFER);
-    @SuppressWarnings({"rawtypes"})
-    NamedList status = (NamedList) rsp.get(CdcrParams.CdcrAction.STATUS.toLower());
-    assertEquals(CdcrParams.BufferState.DISABLED.toLower(), status.get(CdcrParams.BufferState.getParam()));
-
-    // check status
-    this.assertState(SOURCE_COLLECTION, CdcrParams.ProcessState.STOPPED, CdcrParams.BufferState.DISABLED);
-
-    // Restart the leader of shard 1
-    this.restartServer(shardToLeaderJetty.get(SOURCE_COLLECTION).get(SHARD1));
-
-    // check status
-    this.assertState(SOURCE_COLLECTION, CdcrParams.ProcessState.STOPPED, CdcrParams.BufferState.DISABLED);
-
-    // send enable buffer action to second shard
-    rsp = invokeCdcrAction(shardToLeaderJetty.get(SOURCE_COLLECTION).get(SHARD2), CdcrParams.CdcrAction.ENABLEBUFFER);
-    status = (NamedList) rsp.get(CdcrParams.CdcrAction.STATUS.toLower());
-    assertEquals(CdcrParams.BufferState.ENABLED.toLower(), status.get(CdcrParams.BufferState.getParam()));
-
-    // check status
-    this.assertState(SOURCE_COLLECTION, CdcrParams.ProcessState.STOPPED, CdcrParams.BufferState.ENABLED);
-  }
-
-}
-
diff --git a/solr/core/src/test/org/apache/solr/cloud/cdcr/CdcrTestsUtil.java b/solr/core/src/test/org/apache/solr/cloud/cdcr/CdcrTestsUtil.java
deleted file mode 100644
index 869e5be..0000000
--- a/solr/core/src/test/org/apache/solr/cloud/cdcr/CdcrTestsUtil.java
+++ /dev/null
@@ -1,274 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package org.apache.solr.cloud.cdcr;
-
-import java.io.File;
-import java.io.IOException;
-import java.lang.invoke.MethodHandles;
-import java.util.LinkedHashMap;
-import java.util.concurrent.TimeUnit;
-
-import org.apache.solr.SolrTestCaseJ4;
-import org.apache.solr.client.solrj.SolrQuery;
-import org.apache.solr.client.solrj.SolrServerException;
-import org.apache.solr.client.solrj.embedded.JettySolrRunner;
-import org.apache.solr.client.solrj.impl.CloudSolrClient;
-import org.apache.solr.client.solrj.impl.HttpSolrClient;
-import org.apache.solr.client.solrj.request.AbstractUpdateRequest;
-import org.apache.solr.client.solrj.request.UpdateRequest;
-import org.apache.solr.client.solrj.response.QueryResponse;
-import org.apache.solr.cloud.MiniSolrCloudCluster;
-import org.apache.solr.common.SolrInputDocument;
-import org.apache.solr.common.cloud.DocCollection;
-import org.apache.solr.common.cloud.Replica;
-import org.apache.solr.common.cloud.Slice;
-import org.apache.solr.common.params.CommonParams;
-import org.apache.solr.common.params.ModifiableSolrParams;
-import org.apache.solr.common.util.NamedList;
-import org.apache.solr.common.util.TimeSource;
-import org.apache.solr.core.SolrCore;
-import org.apache.solr.handler.CdcrParams;
-import org.apache.solr.util.TimeOut;
-import org.slf4j.Logger;
-import org.slf4j.LoggerFactory;
-
-public class CdcrTestsUtil extends SolrTestCaseJ4 {
-  private static final Logger log = LoggerFactory.getLogger(MethodHandles.lookup().lookupClass());
-
-  protected static void cdcrRestart(CloudSolrClient client) throws SolrServerException, IOException {
-    cdcrStop(client);
-    cdcrStart(client);
-  }
-
-  protected static void cdcrStart(CloudSolrClient client) throws SolrServerException, IOException {
-    QueryResponse response = invokeCdcrAction(client, CdcrParams.CdcrAction.START);
-    assertEquals("started", ((NamedList) response.getResponse().get("status")).get("process"));
-  }
-
-  protected static void cdcrStop(CloudSolrClient client) throws SolrServerException, IOException {
-    QueryResponse response = invokeCdcrAction(client, CdcrParams.CdcrAction.STOP);
-    assertEquals("stopped", ((NamedList) response.getResponse().get("status")).get("process"));
-  }
-
-  protected static void cdcrEnableBuffer(CloudSolrClient client) throws IOException, SolrServerException {
-    QueryResponse response = invokeCdcrAction(client, CdcrParams.CdcrAction.ENABLEBUFFER);
-    assertEquals("enabled", ((NamedList) response.getResponse().get("status")).get("buffer"));
-  }
-
-  protected static void cdcrDisableBuffer(CloudSolrClient client) throws IOException, SolrServerException {
-    QueryResponse response = invokeCdcrAction(client, CdcrParams.CdcrAction.DISABLEBUFFER);
-    assertEquals("disabled", ((NamedList) response.getResponse().get("status")).get("buffer"));
-  }
-
-  protected static QueryResponse invokeCdcrAction(CloudSolrClient client, CdcrParams.CdcrAction action) throws IOException, SolrServerException {
-    ModifiableSolrParams params = new ModifiableSolrParams();
-    params.set(CommonParams.QT, "/cdcr");
-    params.set(CommonParams.ACTION, action.toLower());
-    return client.query(params);
-  }
-
-  protected static QueryResponse getCdcrQueue(CloudSolrClient client) throws SolrServerException, IOException {
-    ModifiableSolrParams params = new ModifiableSolrParams();
-    params.set(CommonParams.QT, "/cdcr");
-    params.set(CommonParams.ACTION, CdcrParams.QUEUES);
-    return client.query(params);
-  }
-
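-  /**
-   * Polls the /get handler for an index fingerprint for up to 20 seconds and returns the
-   * maxVersionEncountered value, or null if no fingerprint was returned in time.
-   */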
-  protected static Object getFingerPrintMaxVersion(CloudSolrClient client, String shardNames, int numDocs) throws SolrServerException, IOException, InterruptedException {
-    ModifiableSolrParams params = new ModifiableSolrParams();
-    params.set(CommonParams.QT, "/get");
-    params.set("fingerprint", true);
-    params.set("shards", shardNames);
-    params.set("getVersions", numDocs);
-
-    QueryResponse response = null;
-    long start = System.nanoTime();
-    while (System.nanoTime() - start <= TimeUnit.NANOSECONDS.convert(20, TimeUnit.SECONDS)) {
-      response = client.query(params);
-      if (response.getResponse() != null && response.getResponse().get("fingerprint") != null) {
-        return (long) ((LinkedHashMap) response.getResponse().get("fingerprint")).get("maxVersionEncountered");
-      }
-      Thread.sleep(200);
-    }
-    log.error("maxVersionEncountered not found for client : {} in 20 attempts", client);
-    return null;
-  }
-
-  protected static long waitForClusterToSync(long numDocs, CloudSolrClient clusterSolrClient) throws Exception {
-    return waitForClusterToSync((int) numDocs, clusterSolrClient, "*:*");
-  }
-
-  protected static long waitForClusterToSync(int numDocs, CloudSolrClient clusterSolrClient) throws Exception {
-    return waitForClusterToSync(numDocs, clusterSolrClient, "*:*");
-  }
-
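-  /**
-   * Commits and re-queries the cluster every second for up to 120 seconds until it reports
-   * numDocs documents for the query; returns the last observed document count.
-   */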
-  protected static long waitForClusterToSync(int numDocs, CloudSolrClient clusterSolrClient, String query) throws Exception {
-    long start = System.nanoTime();
-    QueryResponse response = null;
-    while (System.nanoTime() - start <= TimeUnit.NANOSECONDS.convert(120, TimeUnit.SECONDS)) {
-      clusterSolrClient.commit();
-      response = clusterSolrClient.query(new SolrQuery(query));
-      if (response.getResults().getNumFound() == numDocs) {
-        break;
-      }
-      Thread.sleep(1000);
-    }
-    return response != null ? response.getResults().getNumFound() : 0;
-  }
-
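-  /**
-   * Returns true once every replica of the given shard reports the same document count as
-   * the shard leader, polling for up to 30 seconds.
-   */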
-  protected static boolean assertShardInSync(String collection, String shard, CloudSolrClient client) throws IOException, SolrServerException {
-    TimeOut waitTimeOut = new TimeOut(30, TimeUnit.SECONDS, TimeSource.NANO_TIME);
-    DocCollection docCollection = client.getZkStateReader().getClusterState().getCollection(collection);
-    Slice correctSlice = null;
-    for (Slice slice : docCollection.getSlices()) {
-      if (shard.equals(slice.getName())) {
-        correctSlice = slice;
-        break;
-      }
-    }
-    assertNotNull(correctSlice);
-
-    long leaderDocCount;
-    try (HttpSolrClient leaderClient = new HttpSolrClient.Builder(correctSlice.getLeader().getCoreUrl()).withHttpClient(client.getHttpClient()).build()) {
-      leaderDocCount = leaderClient.query(new SolrQuery("*:*").setParam("distrib", "false")).getResults().getNumFound();
-    }
-
-    while (!waitTimeOut.hasTimedOut()) {
-      int replicasInSync = 0;
-      for (Replica replica : correctSlice.getReplicas()) {
-        try (HttpSolrClient leaderClient = new HttpSolrClient.Builder(replica.getCoreUrl()).withHttpClient(client.getHttpClient()).build()) {
-          long replicaDocCount = leaderClient.query(new SolrQuery("*:*").setParam("distrib", "false")).getResults().getNumFound();
-          if (replicaDocCount == leaderDocCount) replicasInSync++;
-        }
-      }
-      if (replicasInSync == correctSlice.getReplicas().size()) {
-        return true;
-      }
-    }
-    return false;
-  }
-
-  public static void indexRandomDocs(Integer start, Integer count, CloudSolrClient solrClient) throws Exception {
-    // note: ids are sequential from 'start'; 'count' is the exclusive upper bound (0 selects a nightly-dependent default)
-    int docs = 0;
-    if (count == 0) {
-      docs = (TEST_NIGHTLY ? 100 : 10);
-    } else {
-      docs = count;
-    }
-    for (int k = start; k < docs; k++) {
-      UpdateRequest req = new UpdateRequest();
-      SolrInputDocument doc = new SolrInputDocument();
-      doc.addField("id", k);
-      req.add(doc);
-
-      req.setAction(AbstractUpdateRequest.ACTION.COMMIT, true, true);
-      req.process(solrClient);
-    }
-  }
-
-  public static void indexRandomDocs(Integer count, CloudSolrClient solrClient) throws Exception {
-    indexRandomDocs(0, count, solrClient);
-  }
-
-  public static void index(MiniSolrCloudCluster cluster, String collection, SolrInputDocument doc, boolean doCommit) throws IOException, SolrServerException {
-    CloudSolrClient client = createCloudClient(cluster, collection);
-    try {
-      client.add(doc);
-      if (doCommit) {
-        client.commit(true, true);
-      } else {
-        client.commit(true, false);
-      }
-    } finally {
-      client.close();
-    }
-  }
-
-  public static void index(MiniSolrCloudCluster cluster, String collection, SolrInputDocument doc) throws IOException, SolrServerException {
-    index(cluster, collection, doc, false);
-  }
-
-  public static CloudSolrClient createCloudClient(MiniSolrCloudCluster cluster, String defaultCollection) {
-    CloudSolrClient server = getCloudSolrClient(cluster.getZkServer().getZkAddress(), random().nextBoolean());
-    if (defaultCollection != null) server.setDefaultCollection(defaultCollection);
-    return server;
-  }
-
-
-  public static void restartClusterNode(MiniSolrCloudCluster cluster, String collection, int index) throws Exception {
-    System.setProperty("collection", collection);
-    restartNode(cluster.getJettySolrRunner(index));
-    System.clearProperty("collection");
-  }
-
-  public static void restartClusterNodes(MiniSolrCloudCluster cluster, String collection) throws Exception {
-    System.setProperty("collection", collection);
-    for (JettySolrRunner jetty : cluster.getJettySolrRunners()) {
-      restartNode(jetty);
-    }
-    System.clearProperty("collection");
-  }
-
-  public static void restartNode(JettySolrRunner jetty) throws Exception {
-    jetty.stop();
-    jetty.start();
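-    // give the restarted node time to rejoin the cluster before the test proceeds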
-    Thread.sleep(10000);
-  }
-
-  public static int numberOfFiles(String dir) {
-    File file = new File(dir);
-    if (!file.isDirectory()) {
-      assertTrue("Path to tlog " + dir + " does not exists or it's not a directory.", false);
-    }
-    if (log.isDebugEnabled()) {
-      log.debug("Update log dir {} contains: {}", dir, file.listFiles());
-    }
-    return file.listFiles().length;
-  }
-
-  public static int getNumberOfTlogFilesOnReplicas(MiniSolrCloudCluster cluster) throws Exception {
-    int count = 0;
-    for (JettySolrRunner jetty : cluster.getJettySolrRunners()) {
-      for (SolrCore core : jetty.getCoreContainer().getCores()) {
-        count += numberOfFiles(core.getUlogDir() + "/tlog");
-      }
-    }
-    return count;
-  }
-
-  public static String getNonLeaderNode(MiniSolrCloudCluster cluster, String collection) throws Exception {
-    String leaderNode = getLeaderNode(cluster, collection);
-    for (JettySolrRunner jetty : cluster.getJettySolrRunners()) {
-      if (!jetty.getNodeName().equals(leaderNode)) {
-        return jetty.getNodeName();
-      }
-    }
-    return cluster.getJettySolrRunners().get(0).getNodeName();
-  }
-
-  public static String getLeaderNode(MiniSolrCloudCluster cluster, String collection) throws Exception {
-    for (Replica replica : cluster.getSolrClient().getClusterStateProvider().getCollection(collection).getReplicas()) {
-      if (cluster.getSolrClient().getClusterStateProvider().getCollection(collection).getLeader("shard1") == replica) {
-        return replica.getNodeName();
-      }
-    }
-    return "";
-  }
-
-}
\ No newline at end of file
diff --git a/solr/core/src/test/org/apache/solr/cloud/cdcr/CdcrVersionReplicationTest.java b/solr/core/src/test/org/apache/solr/cloud/cdcr/CdcrVersionReplicationTest.java
deleted file mode 100644
index 6953a32..0000000
--- a/solr/core/src/test/org/apache/solr/cloud/cdcr/CdcrVersionReplicationTest.java
+++ /dev/null
@@ -1,307 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.solr.cloud.cdcr;
-
-import java.lang.invoke.MethodHandles;
-import java.util.HashMap;
-import java.util.List;
-import java.util.Map;
-
-import org.apache.solr.client.solrj.SolrClient;
-import org.apache.solr.client.solrj.SolrServerException;
-import org.apache.solr.client.solrj.impl.CloudSolrClient;
-import org.apache.solr.client.solrj.request.UpdateRequest;
-import org.apache.solr.client.solrj.response.QueryResponse;
-import org.apache.solr.common.SolrDocument;
-import org.apache.solr.common.SolrException;
-import org.apache.solr.common.params.CommonParams;
-import org.apache.solr.common.util.StrUtils;
-import org.apache.solr.update.processor.CdcrUpdateProcessor;
-import org.junit.Test;
-import org.slf4j.Logger;
-import org.slf4j.LoggerFactory;
-
-
-public class CdcrVersionReplicationTest extends BaseCdcrDistributedZkTest {
-
-  private static final Logger log = LoggerFactory.getLogger(MethodHandles.lookup().lookupClass());
-
-  private static final String vfield = CommonParams.VERSION_FIELD;
-  SolrClient solrServer;
-
-  public CdcrVersionReplicationTest() {
-    schemaString = "schema15.xml";      // we need a string id
-    super.createTargetCollection = false;
-  }
-
-  SolrClient createClientRandomly() throws Exception {
-    int r = random().nextInt(100);
-
-    // testing the smart cloud client (requests to leaders) is more important than testing the forwarding logic
-    if (r < 80) {
-      return createCloudClient(SOURCE_COLLECTION);
-    }
-
-    if (r < 90) {
-      return createNewSolrServer(shardToJetty.get(SOURCE_COLLECTION).get(SHARD1).get(random().nextInt(2)).url);
-    }
-
-    return createNewSolrServer(shardToJetty.get(SOURCE_COLLECTION).get(SHARD2).get(random().nextInt(2)).url);
-  }
-
-  @Test
-  @ShardsFixed(num = 4)
-  public void testCdcrDocVersions() throws Exception {
-    SolrClient client = createClientRandomly();
-    try {
-      handle.clear();
-      handle.put("timestamp", SKIPVAL);
-
-      doTestCdcrDocVersions(client);
-
-      commit(SOURCE_COLLECTION); // work around SOLR-5628
-    } finally {
-      client.close();
-    }
-  }
-
-  private void doTestCdcrDocVersions(SolrClient solrClient) throws Exception {
-    this.solrServer = solrClient;
-
-    log.info("### STARTING doCdcrTestDocVersions - Add commands, client: {}", solrClient);
-
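-    // the CDCR_UPDATE parameter tells the update processor to preserve the supplied
-    // _version_ value instead of generating a new one, mimicking replicated updates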
-    vadd("doc1", 10, CdcrUpdateProcessor.CDCR_UPDATE, "", vfield, "10");
-    vadd("doc2", 11, CdcrUpdateProcessor.CDCR_UPDATE, "", vfield, "11");
-    vadd("doc3", 10, CdcrUpdateProcessor.CDCR_UPDATE, "", vfield, "10");
-    vadd("doc4", 11, CdcrUpdateProcessor.CDCR_UPDATE, "", vfield, "11");
-    commit(SOURCE_COLLECTION);
-
-    // versions are preserved and verifiable both by query and by real-time get
-    doQuery(solrClient, "doc1,10,doc2,11,doc3,10,doc4,11", "q", "*:*");
-    doRealTimeGet("doc1,doc2,doc3,doc4", "10,11,10,11");
-
-    vadd("doc1", 5, CdcrUpdateProcessor.CDCR_UPDATE, "", vfield, "5");
-    vadd("doc2", 10, CdcrUpdateProcessor.CDCR_UPDATE, "", vfield, "10");
-    vadd("doc3", 9, CdcrUpdateProcessor.CDCR_UPDATE, "", vfield, "9");
-    vadd("doc4", 8, CdcrUpdateProcessor.CDCR_UPDATE, "", vfield, "8");
-
-    // lower versions are ignored
-    doRealTimeGet("doc1,doc2,doc3,doc4", "10,11,10,11");
-
-    vadd("doc1", 12, CdcrUpdateProcessor.CDCR_UPDATE, "", vfield, "12");
-    vadd("doc2", 12, CdcrUpdateProcessor.CDCR_UPDATE, "", vfield, "12");
-    vadd("doc3", 12, CdcrUpdateProcessor.CDCR_UPDATE, "", vfield, "12");
-    vadd("doc4", 12, CdcrUpdateProcessor.CDCR_UPDATE, "", vfield, "12");
-
-    // higher versions are accepted
-    doRealTimeGet("doc1,doc2,doc3,doc4", "12,12,12,12");
-
-    // non-cdcr update requests throw a version conflict exception for non-equal versions (optimistic locking feature)
-    vaddFail("doc1", 13, 409);
-    vaddFail("doc2", 13, 409);
-    vaddFail("doc3", 13, 409);
-
-    commit(SOURCE_COLLECTION);
-
-    // versions are still as they were
-    doQuery(solrClient, "doc1,12,doc2,12,doc3,12,doc4,12", "q", "*:*");
-
-    // query all shard replicas individually
-    doQueryShardReplica(SHARD1, "doc1,12,doc2,12,doc3,12,doc4,12", "q", "*:*");
-    doQueryShardReplica(SHARD2, "doc1,12,doc2,12,doc3,12,doc4,12", "q", "*:*");
-
-    // optimistic locking update
-    vadd("doc4", 12);
-    commit(SOURCE_COLLECTION);
-
-    QueryResponse rsp = solrClient.query(params("qt", "/get", "ids", "doc4"));
-    long version = (long) rsp.getResults().get(0).get(vfield);
-
-    // update accepted and a new version number was generated
-    assertTrue(version > 1_000_000_000_000L);
-
-    log.info("### STARTING doCdcrTestDocVersions - Delete commands");
-
-    // send a delete update with an older version number
-    vdelete("doc1", 5, CdcrUpdateProcessor.CDCR_UPDATE, "", vfield, "5");
-    // must ignore the delete
-    doRealTimeGet("doc1", "12");
-
-    // send a delete update with a higher version number
-    vdelete("doc1", 13, CdcrUpdateProcessor.CDCR_UPDATE, "", vfield, "13");
-    // must be deleted
-    doRealTimeGet("doc1", "");
-
-    // send a delete update with a higher version number
-    vdelete("doc4", version + 1, CdcrUpdateProcessor.CDCR_UPDATE, "", vfield, "" + (version + 1));
-    // must be deleted
-    doRealTimeGet("doc4", "");
-
-    commit(SOURCE_COLLECTION);
-
-    // query each shard replica individually
-    doQueryShardReplica(SHARD1, "doc2,12,doc3,12", "q", "*:*");
-    doQueryShardReplica(SHARD2, "doc2,12,doc3,12", "q", "*:*");
-
-    // version conflict thanks to optimistic locking
-    if (solrClient instanceof CloudSolrClient) // TODO: it seems that optimistic locking doesn't work with forwarding, test with shard2 client
-      vdeleteFail("doc2", 50, 409);
-
-    // cleanup after ourselves for the next run
-    // deleteByQuery should work as usual with the CDCR_UPDATE param
-    doDeleteByQuery("id:doc*", CdcrUpdateProcessor.CDCR_UPDATE, "", vfield, Long.toString(1));
-    commit(SOURCE_COLLECTION);
-
-    // deleteByQuery with a version lower than anything else should have no effect
-    doQuery(solrClient, "doc2,12,doc3,12", "q", "*:*");
-
-    doDeleteByQuery("id:doc*", CdcrUpdateProcessor.CDCR_UPDATE, "", vfield, Long.toString(51));
-    commit(SOURCE_COLLECTION);
-
-    // deleteByQuery with a version higher than everything else should delete all remaining docs
-    doQuery(solrClient, "", "q", "*:*");
-
-    // check that replicas are as expected too
-    doQueryShardReplica(SHARD1, "", "q", "*:*");
-    doQueryShardReplica(SHARD2, "", "q", "*:*");
-  }
-
-
-  // ------------------ auxiliary methods ------------------
-
-
-  void doQueryShardReplica(String shard, String expectedDocs, String... queryParams) throws Exception {
-    for (CloudJettyRunner jetty : shardToJetty.get(SOURCE_COLLECTION).get(shard)) {
-      doQuery(jetty.client, expectedDocs, queryParams);
-    }
-  }
-
-  void vdelete(String id, long version, String... params) throws Exception {
-    UpdateRequest req = new UpdateRequest();
-    req.deleteById(id);
-    req.setParam(vfield, Long.toString(version));
-
-    for (int i = 0; i < params.length; i += 2) {
-      req.setParam(params[i], params[i + 1]);
-    }
-    solrServer.request(req);
-  }
-
-  void vdeleteFail(String id, long version, int errCode, String... params) throws Exception {
-    boolean failed = false;
-    try {
-      vdelete(id, version, params);
-    } catch (SolrException e) {
-      failed = true;
-      if (e.getCause() instanceof SolrException && e.getCause() != e) {
-        e = (SolrException) e.getCause();
-      }
-      assertEquals(errCode, e.code());
-    } catch (SolrServerException ex) {
-      Throwable t = ex.getCause();
-      if (t instanceof SolrException) {
-        failed = true;
-        SolrException exception = (SolrException) t;
-        assertEquals(errCode, exception.code());
-      }
-    } catch (Exception e) {
-      log.error("ERROR", e);
-    }
-    assertTrue(failed);
-  }
-
-  void vadd(String id, long version, String... params) throws Exception {
-    UpdateRequest req = new UpdateRequest();
-    req.add(sdoc("id", id, vfield, version));
-    for (int i = 0; i < params.length; i += 2) {
-      req.setParam(params[i], params[i + 1]);
-    }
-    solrServer.request(req);
-  }
-
-  void vaddFail(String id, long version, int errCode, String... params) throws Exception {
-    boolean failed = false;
-    try {
-      vadd(id, version, params);
-    } catch (SolrException e) {
-      failed = true;
-      if (e.getCause() instanceof SolrException && e.getCause() != e) {
-        e = (SolrException) e.getCause();
-      }
-      assertEquals(errCode, e.code());
-    } catch (SolrServerException ex) {
-      Throwable t = ex.getCause();
-      if (t instanceof SolrException) {
-        failed = true;
-        SolrException exception = (SolrException) t;
-        assertEquals(errCode, exception.code());
-      }
-    } catch (Exception e) {
-      log.error("ERROR", e);
-    }
-    assertTrue(failed);
-  }
-
-  void doQuery(SolrClient ss, String expectedDocs, String... queryParams) throws Exception {
-
-    List<String> strs = StrUtils.splitSmart(expectedDocs, ",", true);
-    Map<String, Object> expectedIds = new HashMap<>();
-    for (int i = 0; i < strs.size(); i += 2) {
-      String id = strs.get(i);
-      String vS = strs.get(i + 1);
-      Long v = Long.valueOf(vS);
-      expectedIds.put(id, v);
-    }
-
-    QueryResponse rsp = ss.query(params(queryParams));
-    Map<String, Object> obtainedIds = new HashMap<>();
-    for (SolrDocument doc : rsp.getResults()) {
-      obtainedIds.put((String) doc.get("id"), doc.get(vfield));
-    }
-
-    assertEquals(expectedIds, obtainedIds);
-  }
-
-
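-  /**
-   * Fetches the given ids via real-time get and asserts that their _version_ values match
-   * the comma-separated versions; an empty versions string means the ids must be absent.
-   */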
-  void doRealTimeGet(String ids, String versions) throws Exception {
-    Map<String, Object> expectedIds = new HashMap<>();
-    List<String> strs = StrUtils.splitSmart(ids, ",", true);
-    List<String> verS = StrUtils.splitSmart(versions, ",", true);
-    for (int i = 0; i < strs.size(); i++) {
-      if (!verS.isEmpty()) {
-        expectedIds.put(strs.get(i), Long.valueOf(verS.get(i)));
-      }
-    }
-
-    QueryResponse rsp = solrServer.query(params("qt", "/get", "ids", ids));
-    Map<String, Object> obtainedIds = new HashMap<>();
-    for (SolrDocument doc : rsp.getResults()) {
-      obtainedIds.put((String) doc.get("id"), doc.get(vfield));
-    }
-
-    assertEquals(expectedIds, obtainedIds);
-  }
-
-  void doDeleteByQuery(String q, String... reqParams) throws Exception {
-    UpdateRequest req = new UpdateRequest();
-    req.deleteByQuery(q);
-    req.setParams(params(reqParams));
-    req.process(solrServer);
-  }
-
-}
-
diff --git a/solr/core/src/test/org/apache/solr/cloud/cdcr/CdcrWithNodesRestartsTest.java b/solr/core/src/test/org/apache/solr/cloud/cdcr/CdcrWithNodesRestartsTest.java
deleted file mode 100644
index 22ebc9f..0000000
--- a/solr/core/src/test/org/apache/solr/cloud/cdcr/CdcrWithNodesRestartsTest.java
+++ /dev/null
@@ -1,359 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package org.apache.solr.cloud.cdcr;
-
-import java.lang.invoke.MethodHandles;
-
-import org.apache.solr.SolrTestCaseJ4;
-import org.apache.solr.client.solrj.SolrQuery;
-import org.apache.solr.client.solrj.impl.CloudSolrClient;
-import org.apache.solr.client.solrj.request.CollectionAdminRequest;
-import org.apache.solr.client.solrj.response.QueryResponse;
-import org.apache.solr.cloud.MiniSolrCloudCluster;
-import org.apache.solr.common.SolrInputDocument;
-import org.junit.After;
-import org.junit.Before;
-import org.junit.BeforeClass;
-import org.junit.Test;
-import org.slf4j.Logger;
-import org.slf4j.LoggerFactory;
-
-import com.carrotsearch.randomizedtesting.annotations.Nightly;
-
-@Nightly // test is too long for non-nightly runs
-public class CdcrWithNodesRestartsTest extends SolrTestCaseJ4 {
-
-  private static final Logger log = LoggerFactory.getLogger(MethodHandles.lookup().lookupClass());
-
-  MiniSolrCloudCluster target, source;
-  CloudSolrClient sourceSolrClient, targetSolrClient;
-  private static String SOURCE_COLLECTION = "cdcr-source";
-  private static String TARGET_COLLECTION = "cdcr-target";
-  private static String ALL_Q = "*:*";
-
-  @BeforeClass
-  public static void beforeClass() {
-    System.clearProperty("solr.httpclient.retries");
-    System.clearProperty("solr.retries.on.forward");
-    System.clearProperty("solr.retries.to.followers"); 
-  }
-  
-  @Before
-  public void before() throws Exception {
-    target = new MiniSolrCloudCluster(2, createTempDir(TARGET_COLLECTION), buildJettyConfig("/solr"));
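-    // the source cluster's CDCR config locates the target cluster through this zkHost property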
-    System.setProperty("cdcr.target.zkHost", target.getZkServer().getZkAddress());
-    source = new MiniSolrCloudCluster(2, createTempDir(SOURCE_COLLECTION), buildJettyConfig("/solr"));
-  }
-
-  @After
-  public void after() throws Exception {
-    if (null != target) {
-      target.shutdown();
-      target = null;
-    }
-    if (null != source) {
-      source.shutdown();
-      source = null;
-    }
-  }
-
-  @Test
-  public void testBufferOnNonLeader() throws Exception {
-    createCollections();
-    CdcrTestsUtil.cdcrDisableBuffer(sourceSolrClient);
-    CdcrTestsUtil.cdcrStart(sourceSolrClient);
-    Thread.sleep(2000);
-
-    // index 100 docs
-    for (int i = 0; i < 100; i++) {
-      SolrInputDocument doc = new SolrInputDocument();
-      doc.addField("id", "doc_" + i);
-      CdcrTestsUtil.index(source, "cdcr-source", doc);
-      sourceSolrClient.commit();
-    }
-    Thread.sleep(2000);
-
-    // restart all the nodes at source cluster, one by one
-    CdcrTestsUtil.restartClusterNodes(source, SOURCE_COLLECTION);
-
-    // verify cdcr has replicated docs
-    QueryResponse response = sourceSolrClient.query(new SolrQuery(ALL_Q));
-    assertEquals("source docs mismatch", 100, response.getResults().getNumFound());
-    assertEquals("target docs mismatch", 100, CdcrTestsUtil.waitForClusterToSync(100, targetSolrClient));
-    CdcrTestsUtil.assertShardInSync(SOURCE_COLLECTION, "shard1", sourceSolrClient);
-    CdcrTestsUtil.assertShardInSync(TARGET_COLLECTION, "shard1", targetSolrClient);
-
-    CdcrTestsUtil.cdcrStop(sourceSolrClient);
-    CdcrTestsUtil.cdcrStop(targetSolrClient);
-
-    deleteCollections();
-  }
-
-  @Test
-  public void testUpdateLogSynchronisation() throws Exception {
-    createCollections();
-    CdcrTestsUtil.cdcrStart(sourceSolrClient);
-    Thread.sleep(2000);
-
-    // index 100 docs
-    for (int i = 0; i < 100; i++) {
-      // will perform a commit for every document and will create one tlog file per commit
-      SolrInputDocument doc = new SolrInputDocument();
-      doc.addField("id", "doc_" + i);
-      CdcrTestsUtil.index(source, "cdcr-source", doc, true);
-    }
-    Thread.sleep(2000);
-
-    // verify cdcr has replicated docs
-    QueryResponse response = sourceSolrClient.query(new SolrQuery(ALL_Q));
-    assertEquals("source docs mismatch", 100, response.getResults().getNumFound());
-    assertEquals("target docs mismatch", 100, CdcrTestsUtil.waitForClusterToSync(100, targetSolrClient));
-
-    // Get the number of tlog files on the replicas (should be equal to the number of documents indexed)
-    int nTlogs = CdcrTestsUtil.getNumberOfTlogFilesOnReplicas(source);
-
-    // Disable the buffer - ulog sync should start on non-leader nodes
-    CdcrTestsUtil.cdcrDisableBuffer(sourceSolrClient);
-    Thread.sleep(2000);
-
-    int cnt = 15; // timeout after 15 seconds
-    int n = 0;
-    while (cnt > 0) {
-      // Index a new document with a commit to trigger update log cleaning
-      SolrInputDocument doc = new SolrInputDocument();
-      doc.addField("id", "doc_" + random().nextLong());
-      CdcrTestsUtil.index(source, "cdcr-source", doc, true);
-
-      // Check the update logs on non-leader nodes; the number of tlog files should decrease
-      n = CdcrTestsUtil.getNumberOfTlogFilesOnReplicas(source);
-      if (n < nTlogs) {
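-        // leave cnt non-zero (MIN_VALUE) so the timeout check below is skipped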
-        cnt = Integer.MIN_VALUE;
-        break;
-      }
-      cnt--;
-      Thread.sleep(1000);
-    }
-    if (cnt == 0) {
-      throw new AssertionError("Timed out waiting for update log cleanup on the source collection: current=" + n + ", initial=" + nTlogs);
-    }
-
-    CdcrTestsUtil.cdcrStop(sourceSolrClient);
-    CdcrTestsUtil.cdcrStop(targetSolrClient);
-
-    deleteCollections();
-  }
-
-  @Test
-  public void testReplicationAfterRestart() throws Exception {
-    createCollections();
-    CdcrTestsUtil.cdcrStart(sourceSolrClient); // start CDCR
-    Thread.sleep(2000);
-
-    // index 100 docs
-    for (int i = 0; i < 100; i++) {
-      SolrInputDocument doc = new SolrInputDocument();
-      doc.addField("id", "doc_" + i);
-      CdcrTestsUtil.index(source, "cdcr-source", doc);
-      sourceSolrClient.commit();
-    }
-    Thread.sleep(2000);
-
-    // verify cdcr has replicated docs
-    QueryResponse response = sourceSolrClient.query(new SolrQuery(ALL_Q));
-    assertEquals("source docs mismatch", 100, response.getResults().getNumFound());
-    assertEquals("target docs mismatch", 100, CdcrTestsUtil.waitForClusterToSync(100, targetSolrClient));
-    CdcrTestsUtil.assertShardInSync("cdcr-source", "shard1", sourceSolrClient);
-
-    // restart all the source cluster nodes
-    CdcrTestsUtil.restartClusterNodes(source, "cdcr-source");
-    sourceSolrClient = source.getSolrClient();
-    sourceSolrClient.setDefaultCollection("cdcr-source");
-
-    // verify the docs are still there
-    response = sourceSolrClient.query(new SolrQuery(ALL_Q));
-    assertEquals("source docs mismatch", 100, response.getResults().getNumFound());
-    assertEquals("target docs mismatch", 100, CdcrTestsUtil.waitForClusterToSync(100, targetSolrClient));
-
-    // index 100 more
-    for (int i = 100; i < 200; i++) {
-      SolrInputDocument doc = new SolrInputDocument();
-      doc.addField("id", "doc_" + i);
-      CdcrTestsUtil.index(source, "cdcr-source", doc);
-      sourceSolrClient.commit();
-    }
-    Thread.sleep(2000);
-
-    // verify the docs are still there
-    response = sourceSolrClient.query(new SolrQuery(ALL_Q));
-    assertEquals("source docs mismatch", 200, response.getResults().getNumFound());
-    assertEquals("target docs mismatch", 200, CdcrTestsUtil.waitForClusterToSync(200, targetSolrClient));
-
-    CdcrTestsUtil.cdcrStop(sourceSolrClient);
-    CdcrTestsUtil.cdcrStop(targetSolrClient);
-
-    deleteCollections();
-  }
-
-  @Test
-  // commented out on: 24-Dec-2018   @BadApple(bugUrl="https://issues.apache.org/jira/browse/SOLR-12028") // 14-Oct-2018
-  public void testReplicationAfterLeaderChange() throws Exception {
-    createCollections();
-    CdcrTestsUtil.cdcrStart(sourceSolrClient);
-    Thread.sleep(2000);
-
-    // index 100 docs
-    for (int i = 0; i < 100; i++) {
-      SolrInputDocument doc = new SolrInputDocument();
-      doc.addField("id", "doc_" + i);
-      CdcrTestsUtil.index(source, "cdcr-source", doc);
-      sourceSolrClient.commit();
-    }
-    Thread.sleep(2000);
-
-    // verify cdcr has replicated docs
-    QueryResponse response = sourceSolrClient.query(new SolrQuery(ALL_Q));
-    assertEquals("source docs mismatch", 100, response.getResults().getNumFound());
-    assertEquals("target docs mismatch", 100, CdcrTestsUtil.waitForClusterToSync(100, targetSolrClient));
-    CdcrTestsUtil.assertShardInSync("cdcr-source", "shard1", sourceSolrClient);
-
-    // restart one of the source cluster nodes
-    CdcrTestsUtil.restartClusterNode(source, "cdcr-source", 0);
-    sourceSolrClient = source.getSolrClient();
-    sourceSolrClient.setDefaultCollection("cdcr-source");
-
-    // add 100 more docs, 200 so far
-    for (int i = 100; i < 200; i++) {
-      SolrInputDocument doc = new SolrInputDocument();
-      doc.addField("id", "doc_" + i);
-      CdcrTestsUtil.index(source, "cdcr-source", doc);
-      sourceSolrClient.commit();
-    }
-    Thread.sleep(2000);
-
-    // verify cdcr has replicated docs
-    response = sourceSolrClient.query(new SolrQuery(ALL_Q));
-    assertEquals("source docs mismatch", 200, response.getResults().getNumFound());
-    assertEquals("target docs mismatch", 200, CdcrTestsUtil.waitForClusterToSync(200, targetSolrClient));
-
-    // restart the other source cluster node
-    CdcrTestsUtil.restartClusterNode(source, "cdcr-source", 1);
-    sourceSolrClient = source.getSolrClient();
-    sourceSolrClient.setDefaultCollection("cdcr-source");
-
-    // add 100 more docs, 300 so far
-    for (int i = 200; i < 300; i++) {
-      SolrInputDocument doc = new SolrInputDocument();
-      doc.addField("id", "doc_" + i);
-      CdcrTestsUtil.index(source, "cdcr-source", doc);
-      sourceSolrClient.commit();
-    }
-    Thread.sleep(2000);
-
-    // verify cdcr has replicated docs
-    response = sourceSolrClient.query(new SolrQuery(ALL_Q));
-    assertEquals("source docs mismatch", 300, response.getResults().getNumFound());
-    assertEquals("target docs mismatch", 300, CdcrTestsUtil.waitForClusterToSync(300, targetSolrClient));
-
-    // add a replica to 'target' collection
-    CollectionAdminRequest.addReplicaToShard(TARGET_COLLECTION, "shard1").
-        setNode(CdcrTestsUtil.getNonLeaderNode(target, TARGET_COLLECTION)).process(targetSolrClient);
-    Thread.sleep(2000);
-
-    // restart one of the target nodes
-    CdcrTestsUtil.restartClusterNode(target, "cdcr-target", 0);
-    targetSolrClient = target.getSolrClient();
-    targetSolrClient.setDefaultCollection("cdcr-target");
-
-    // add 100 more docs, 400 so far
-    for (int i = 300; i < 400; i++) {
-      SolrInputDocument doc = new SolrInputDocument();
-      doc.addField("id", "doc_" + i);
-      CdcrTestsUtil.index(source, "cdcr-source", doc);
-      sourceSolrClient.commit();
-    }
-    Thread.sleep(2000);
-
-    // verify cdcr has replicated docs
-    response = sourceSolrClient.query(new SolrQuery(ALL_Q));
-    assertEquals("source docs mismatch", 400, response.getResults().getNumFound());
-    assertEquals("target docs mismatch", 400, CdcrTestsUtil.waitForClusterToSync(400, targetSolrClient));
-
-    // restart the other target cluster node
-    CdcrTestsUtil.restartClusterNode(target, "cdcr-target", 1);
-    targetSolrClient = target.getSolrClient();
-    targetSolrClient.setDefaultCollection("cdcr-target");
-
-    // add 100 more docs, 500 so far
-    for (int i = 400; i < 500; i++) {
-      SolrInputDocument doc = new SolrInputDocument();
-      doc.addField("id", "doc_" + i);
-      CdcrTestsUtil.index(source, "cdcr-source", doc);
-      sourceSolrClient.commit();
-    }
-    Thread.sleep(2000);
-
-    // verify cdcr has replicated docs
-    response = sourceSolrClient.query(new SolrQuery(ALL_Q));
-    assertEquals("source docs mismatch", 500, response.getResults().getNumFound());
-    assertEquals("target docs mismatch", 500, CdcrTestsUtil.waitForClusterToSync(500, targetSolrClient));
-
-    CdcrTestsUtil.cdcrStop(sourceSolrClient);
-    CdcrTestsUtil.cdcrStop(targetSolrClient);
-
-    deleteCollections();
-  }
-
-  private void createSourceCollection() throws Exception {
-    source.uploadConfigSet(configset("cdcr-source"), "cdcr-source");
-    CollectionAdminRequest.createCollection("cdcr-source", "cdcr-source", 1, 2)
-        .withProperty("solr.directoryFactory", "solr.StandardDirectoryFactory")
-        .process(source.getSolrClient());
-    Thread.sleep(1000);
-    sourceSolrClient = source.getSolrClient();
-    sourceSolrClient.setDefaultCollection("cdcr-source");
-  }
-
-  private void createTargetCollection() throws Exception {
-    target.uploadConfigSet(configset("cdcr-target"), "cdcr-target");
-    CollectionAdminRequest.createCollection("cdcr-target", "cdcr-target", 1, 1)
-        .withProperty("solr.directoryFactory", "solr.StandardDirectoryFactory")
-        .process(target.getSolrClient());
-    Thread.sleep(1000);
-    targetSolrClient = target.getSolrClient();
-    targetSolrClient.setDefaultCollection("cdcr-target");
-  }
-
-  private void deleteSourceCollection() throws Exception {
-    source.deleteAllCollections();
-  }
-
-  private void deleteTargetCollection() throws Exception {
-    target.deleteAllCollections();
-  }
-
-  private void createCollections() throws Exception {
-    createTargetCollection();
-    createSourceCollection();
-  }
-
-  private void deleteCollections() throws Exception {
-    deleteSourceCollection();
-    deleteTargetCollection();
-  }
-
-}
diff --git a/solr/core/src/test/org/apache/solr/core/BlobRepositoryMockingTest.java b/solr/core/src/test/org/apache/solr/core/BlobRepositoryMockingTest.java
index 16dad97..18922f0 100644
--- a/solr/core/src/test/org/apache/solr/core/BlobRepositoryMockingTest.java
+++ b/solr/core/src/test/org/apache/solr/core/BlobRepositoryMockingTest.java
@@ -131,7 +131,7 @@
   @Test
   public void testGetBlobIncrRefByUrl() throws Exception{
     when(mockContainer.isZooKeeperAware()).thenReturn(true);
-    filecontent = TestDynamicLoading.getFileContent("runtimecode/runtimelibs_v2.jar.bin");
+    filecontent = TestSolrConfigHandler.getFileContent("runtimecode/runtimelibs_v2.jar.bin");
     url = "http://localhost:8080/myjar/location.jar";
     @SuppressWarnings({"rawtypes"})
     BlobRepository.BlobContentRef ref = repository.getBlobIncRef( "filefoo",null,url,
diff --git a/solr/core/src/test/org/apache/solr/core/SolrCoreTest.java b/solr/core/src/test/org/apache/solr/core/SolrCoreTest.java
index 8b20471..b42a4aa 100644
--- a/solr/core/src/test/org/apache/solr/core/SolrCoreTest.java
+++ b/solr/core/src/test/org/apache/solr/core/SolrCoreTest.java
@@ -87,7 +87,6 @@
 
     int ihCount = 0;
     {
-      ++ihCount; assertEquals(pathToClassMap.get("/admin/health"), "solr.HealthCheckHandler");
       ++ihCount; assertEquals(pathToClassMap.get("/admin/file"), "solr.ShowFileRequestHandler");
       ++ihCount; assertEquals(pathToClassMap.get("/admin/logging"), "solr.LoggingHandler");
       ++ihCount; assertEquals(pathToClassMap.get("/admin/luke"), "solr.LukeRequestHandler");
@@ -266,8 +265,6 @@
     assertEquals("wrong config for maxBooleanClauses", 1024, solrConfig.booleanQueryMaxClauseCount);
     assertEquals("wrong config for enableLazyFieldLoading", true, solrConfig.enableLazyFieldLoading);
     assertEquals("wrong config for queryResultWindowSize", 10, solrConfig.queryResultWindowSize);
-    assertEquals("wrong config for useCircuitBreakers", false, solrConfig.useCircuitBreakers);
-    assertEquals("wrong config for memoryCircuitBreakerThresholdPct", 95, solrConfig.memoryCircuitBreakerThresholdPct);
   }
 
   /**
diff --git a/solr/core/src/test/org/apache/solr/core/TestCodecSupport.java b/solr/core/src/test/org/apache/solr/core/TestCodecSupport.java
index 3ec43c9..58e5f55 100644
--- a/solr/core/src/test/org/apache/solr/core/TestCodecSupport.java
+++ b/solr/core/src/test/org/apache/solr/core/TestCodecSupport.java
@@ -20,8 +20,8 @@
 import java.util.Map;
 
 import org.apache.lucene.codecs.Codec;
-import org.apache.lucene.codecs.lucene50.Lucene50StoredFieldsFormat;
-import org.apache.lucene.codecs.lucene50.Lucene50StoredFieldsFormat.Mode;
+import org.apache.lucene.codecs.lucene87.Lucene87StoredFieldsFormat;
+import org.apache.lucene.codecs.lucene87.Lucene87StoredFieldsFormat.Mode;
 import org.apache.lucene.codecs.perfield.PerFieldDocValuesFormat;
 import org.apache.lucene.codecs.perfield.PerFieldPostingsFormat;
 import org.apache.lucene.index.SegmentInfo;
@@ -117,11 +117,11 @@
       SegmentInfos infos = SegmentInfos.readLatestCommit(searcher.getIndexReader().directory());
       SegmentInfo info = infos.info(infos.size() - 1).info;
       assertEquals("Expecting compression mode string to be " + expectedModeString +
-              " but got: " + info.getAttribute(Lucene50StoredFieldsFormat.MODE_KEY) +
+              " but got: " + info.getAttribute(Lucene87StoredFieldsFormat.MODE_KEY) +
               "\n SegmentInfo: " + info +
               "\n SegmentInfos: " + infos +
               "\n Codec: " + core.getCodec(),
-          expectedModeString, info.getAttribute(Lucene50StoredFieldsFormat.MODE_KEY));
+          expectedModeString, info.getAttribute(Lucene87StoredFieldsFormat.MODE_KEY));
       return null;
     });
   }
diff --git a/solr/core/src/test/org/apache/solr/core/TestConfig.java b/solr/core/src/test/org/apache/solr/core/TestConfig.java
index fe56488..a1404e1 100644
--- a/solr/core/src/test/org/apache/solr/core/TestConfig.java
+++ b/solr/core/src/test/org/apache/solr/core/TestConfig.java
@@ -184,6 +184,8 @@
 
     ++numDefaultsTested; assertNotNull("default metrics", sic.metricsInfo);
 
+    ++numDefaultsTested; assertEquals("default maxCommitMergeWaitTime", -1, sic.maxCommitMergeWaitMillis);
+
     ++numDefaultsTested; ++numNullDefaults;
     assertNull("default mergePolicyFactoryInfo", sic.mergePolicyFactoryInfo);
 
diff --git a/solr/core/src/test/org/apache/solr/core/TestConfigOverlay.java b/solr/core/src/test/org/apache/solr/core/TestConfigOverlay.java
index cfdc1e3..1c39fa9 100644
--- a/solr/core/src/test/org/apache/solr/core/TestConfigOverlay.java
+++ b/solr/core/src/test/org/apache/solr/core/TestConfigOverlay.java
@@ -46,8 +46,6 @@
     assertTrue(isEditableProp("query.queryResultMaxDocsCached", false, null));
     assertTrue(isEditableProp("query.enableLazyFieldLoading", false, null));
     assertTrue(isEditableProp("query.boolTofilterOptimizer", false, null));
-    assertTrue(isEditableProp("query.useCircuitBreakers", false, null));
-    assertTrue(isEditableProp("query.memoryCircuitBreakerThresholdPct", false, null));
     assertTrue(isEditableProp("jmx.agentId", false, null));
     assertTrue(isEditableProp("jmx.serviceUrl", false, null));
     assertTrue(isEditableProp("jmx.rootName", false, null));
diff --git a/solr/core/src/test/org/apache/solr/core/TestDynamicLoading.java b/solr/core/src/test/org/apache/solr/core/TestDynamicLoading.java
deleted file mode 100644
index d9a3bf4..0000000
--- a/solr/core/src/test/org/apache/solr/core/TestDynamicLoading.java
+++ /dev/null
@@ -1,290 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.solr.core;
-
-import java.io.File;
-import java.io.FileInputStream;
-import java.io.FileOutputStream;
-import java.io.IOException;
-import java.nio.ByteBuffer;
-import java.util.Arrays;
-import java.util.Map;
-import java.util.zip.ZipEntry;
-import java.util.zip.ZipOutputStream;
-
-import org.apache.solr.client.solrj.impl.HttpSolrClient;
-import org.apache.solr.cloud.AbstractFullDistribZkTestBase;
-import org.apache.solr.handler.TestBlobHandler;
-import org.apache.solr.util.RestTestHarness;
-import org.apache.solr.util.SimplePostTool;
-import org.junit.BeforeClass;
-import org.junit.Test;
-
-import static java.util.Arrays.asList;
-import static org.apache.solr.handler.TestSolrConfigHandlerCloud.compareValues;
-
-public class TestDynamicLoading extends AbstractFullDistribZkTestBase {
-
-  @BeforeClass
-  public static void enableRuntimeLib() throws Exception {
-    System.setProperty("enable.runtime.lib", "true");
-  }
-
-  @Test
-  // 12-Jun-2018 @BadApple(bugUrl="https://issues.apache.org/jira/browse/SOLR-12028")
-  //17-Aug-2018 commented @BadApple(bugUrl="https://issues.apache.org/jira/browse/SOLR-12028") // added 20-Jul-2018
-  public void testDynamicLoading() throws Exception {
-    System.setProperty("enable.runtime.lib", "true");
-    setupRestTestHarnesses();
-
-    String blobName = "colltest";
-    boolean success = false;
-
-
-    HttpSolrClient randomClient = (HttpSolrClient) clients.get(random().nextInt(clients.size()));
-    String baseURL = randomClient.getBaseURL();
-    baseURL = baseURL.substring(0, baseURL.lastIndexOf('/'));
-    String payload = "{\n" +
-        "'add-runtimelib' : { 'name' : 'colltest' ,'version':1}\n" +
-        "}";
-    RestTestHarness client = randomRestTestHarness();
-    TestSolrConfigHandler.runConfigCommand(client, "/config", payload);
-    TestSolrConfigHandler.testForResponseElement(client,
-        null,
-        "/config/overlay",
-        null,
-        Arrays.asList("overlay", "runtimeLib", blobName, "version"),
-        1l, 10);
-
-
-    payload = "{\n" +
-        "'create-requesthandler' : { 'name' : '/test1', 'class': 'org.apache.solr.core.BlobStoreTestRequestHandler' ,registerPath: '/solr,/v2',  'runtimeLib' : true }\n" +
-        "}";
-
-    client = randomRestTestHarness();
-    TestSolrConfigHandler.runConfigCommand(client,"/config",payload);
-    TestSolrConfigHandler.testForResponseElement(client,
-        null,
-        "/config/overlay",
-        null,
-        Arrays.asList("overlay", "requestHandler", "/test1", "class"),
-        "org.apache.solr.core.BlobStoreTestRequestHandler",10);
-
-    @SuppressWarnings({"rawtypes"})
-    Map map = TestSolrConfigHandler.getRespMap("/test1", client);
-
-    assertNotNull(map.toString(), map = (Map) map.get("error"));
-    assertTrue(map.toString(), map.get("msg").toString().contains(".system collection not available"));
-
-
-    TestBlobHandler.createSystemCollection(getHttpSolrClient(baseURL, randomClient.getHttpClient()));
-    waitForRecoveriesToFinish(".system", true);
-
-    map = TestSolrConfigHandler.getRespMap("/test1", client);
-
-
-    assertNotNull(map = (Map) map.get("error"));
-    assertTrue("full output " + map, map.get("msg").toString().contains("no such blob or version available: colltest/1" ));
-    payload = " {\n" +
-        "  'set' : {'watched': {" +
-        "                    'x':'X val',\n" +
-        "                    'y': 'Y val'}\n" +
-        "             }\n" +
-        "  }";
-
-    TestSolrConfigHandler.runConfigCommand(client,"/config/params",payload);
-    TestSolrConfigHandler.testForResponseElement(
-        client,
-        null,
-        "/config/params",
-        cloudClient,
-        Arrays.asList("response", "params", "watched", "x"),
-        "X val",
-        10);
-
-
-
-
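-    // poll up to ~10s (100 x 100ms) for the watched params to show up in the handler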
-    for(int i=0;i<100;i++) {
-      map = TestSolrConfigHandler.getRespMap("/test1", client);
-      if("X val".equals(map.get("x"))){
-         success = true;
-         break;
-      }
-      Thread.sleep(100);
-    }
-    ByteBuffer jar = null;
-
-//     jar = persistZip("/tmp/runtimelibs.jar.bin", TestDynamicLoading.class, RuntimeLibReqHandler.class, RuntimeLibResponseWriter.class, RuntimeLibSearchComponent.class);
-//    if(true) return;
-
-    jar = getFileContent("runtimecode/runtimelibs.jar.bin");
-    TestBlobHandler.postAndCheck(cloudClient, baseURL, blobName, jar, 1);
-
-    payload = "{\n" +
-        "'create-requesthandler' : { 'name' : '/runtime', 'class': 'org.apache.solr.core.RuntimeLibReqHandler' , 'runtimeLib':true }," +
-        "'create-searchcomponent' : { 'name' : 'get', 'class': 'org.apache.solr.core.RuntimeLibSearchComponent' , 'runtimeLib':true }," +
-        "'create-queryResponseWriter' : { 'name' : 'json1', 'class': 'org.apache.solr.core.RuntimeLibResponseWriter' , 'runtimeLib':true }" +
-        "}";
-    client = randomRestTestHarness();
-    TestSolrConfigHandler.runConfigCommand(client, "/config", payload);
-
-    @SuppressWarnings({"rawtypes"})
-    Map result = TestSolrConfigHandler.testForResponseElement(client,
-        null,
-        "/config/overlay",
-        null,
-        Arrays.asList("overlay", "requestHandler", "/runtime", "class"),
-        "org.apache.solr.core.RuntimeLibReqHandler", 10);
-    compareValues(result, "org.apache.solr.core.RuntimeLibResponseWriter", asList("overlay", "queryResponseWriter", "json1", "class"));
-    compareValues(result, "org.apache.solr.core.RuntimeLibSearchComponent", asList("overlay", "searchComponent", "get", "class"));
-
-    result = TestSolrConfigHandler.testForResponseElement(client,
-        null,
-        "/runtime",
-        null,
-        Arrays.asList("class"),
-        "org.apache.solr.core.RuntimeLibReqHandler", 10);
-    compareValues(result, MemClassLoader.class.getName(), asList( "loader"));
-
-    result = TestSolrConfigHandler.testForResponseElement(client,
-        null,
-        "/runtime?wt=json1",
-        null,
-        Arrays.asList("wt"),
-        "org.apache.solr.core.RuntimeLibResponseWriter", 10);
-    compareValues(result, MemClassLoader.class.getName(), asList( "loader"));
-
-    result = TestSolrConfigHandler.testForResponseElement(client,
-        null,
-        "/get?abc=xyz",
-        null,
-        Arrays.asList("get"),
-        "org.apache.solr.core.RuntimeLibSearchComponent", 10);
-    compareValues(result, MemClassLoader.class.getName(), asList( "loader"));
-
-    jar = getFileContent("runtimecode/runtimelibs_v2.jar.bin");
-    TestBlobHandler.postAndCheck(cloudClient, baseURL, blobName, jar, 2);
-    payload = "{\n" +
-        "'update-runtimelib' : { 'name' : 'colltest' ,'version':2}\n" +
-        "}";
-    client = randomRestTestHarness();
-    TestSolrConfigHandler.runConfigCommand(client, "/config", payload);
-    TestSolrConfigHandler.testForResponseElement(client,
-        null,
-        "/config/overlay",
-        null,
-        Arrays.asList("overlay", "runtimeLib", blobName, "version"),
-        2l, 10);
-
-    result = TestSolrConfigHandler.testForResponseElement(client,
-        null,
-        "/get?abc=xyz",
-        null,
-        Arrays.asList("Version"),
-        "2", 10);
-
-
-    payload = " {\n" +
-        "  'set' : {'watched': {" +
-        "                    'x':'X val',\n" +
-        "                    'y': 'Y val'}\n" +
-        "             }\n" +
-        "  }";
-
-    TestSolrConfigHandler.runConfigCommand(client,"/config/params",payload);
-    TestSolrConfigHandler.testForResponseElement(
-        client,
-        null,
-        "/config/params",
-        cloudClient,
-        Arrays.asList("response", "params", "watched", "x"),
-        "X val",
-        10);
-   result = TestSolrConfigHandler.testForResponseElement(
-        client,
-        null,
-        "/test1",
-        cloudClient,
-        Arrays.asList("x"),
-        "X val",
-        10);
-
-    payload = " {\n" +
-        "  'set' : {'watched': {" +
-        "                    'x':'X val changed',\n" +
-        "                    'y': 'Y val'}\n" +
-        "             }\n" +
-        "  }";
-
-    TestSolrConfigHandler.runConfigCommand(client,"/config/params",payload);
-    result = TestSolrConfigHandler.testForResponseElement(
-        client,
-        null,
-        "/test1",
-        cloudClient,
-        Arrays.asList("x"),
-        "X val changed",
-        10);
-  }
-
-  public static ByteBuffer getFileContent(String f) throws IOException {
-    return getFileContent(f, true);
-  }
-  /**
-   * @param loadFromClassPath if true, it will look in the classpath to find the file,
-   *        otherwise load from absolute filesystem path.
-   */
-  public static ByteBuffer getFileContent(String f, boolean loadFromClassPath) throws IOException {
-    ByteBuffer jar;
-    File file = loadFromClassPath ? getFile(f): new File(f);
-    try (FileInputStream fis = new FileInputStream(file)) {
-      byte[] buf = new byte[fis.available()];
-      fis.read(buf);
-      jar = ByteBuffer.wrap(buf);
-    }
-    return jar;
-  }
-
-  public static  ByteBuffer persistZip(String loc,
-                                       @SuppressWarnings({"rawtypes"})Class... classes) throws IOException {
-    ByteBuffer jar = generateZip(classes);
-    try (FileOutputStream fos =  new FileOutputStream(loc)){
-      fos.write(jar.array(), 0, jar.limit());
-      fos.flush();
-    }
-    return jar;
-  }
-
-
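-  /** Bundles the given classes' bytecode from the classpath into an in-memory zip (jar). */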
-  public static ByteBuffer generateZip(@SuppressWarnings({"rawtypes"})Class... classes) throws IOException {
-    SimplePostTool.BAOS bos = new SimplePostTool.BAOS();
-    try (ZipOutputStream zipOut = new ZipOutputStream(bos)) {
-      zipOut.setLevel(ZipOutputStream.DEFLATED);
-      for (@SuppressWarnings({"rawtypes"})Class c : classes) {
-        String path = c.getName().replace('.', '/').concat(".class");
-        ZipEntry entry = new ZipEntry(path);
-        ByteBuffer b = SimplePostTool.inputStreamToByteArray(c.getClassLoader().getResourceAsStream(path));
-        zipOut.putNextEntry(entry);
-        zipOut.write(b.array(), 0, b.limit());
-        zipOut.closeEntry();
-      }
-    }
-    return bos.getByteBuffer();
-  }
-
-}
diff --git a/solr/core/src/test/org/apache/solr/core/TestDynamicLoadingUrl.java b/solr/core/src/test/org/apache/solr/core/TestDynamicLoadingUrl.java
deleted file mode 100644
index b172d52..0000000
--- a/solr/core/src/test/org/apache/solr/core/TestDynamicLoadingUrl.java
+++ /dev/null
@@ -1,128 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package org.apache.solr.core;
-
-import javax.servlet.http.HttpServletRequest;
-import javax.servlet.http.HttpServletResponse;
-import java.io.IOException;
-import java.nio.ByteBuffer;
-import java.util.Arrays;
-import java.util.Map;
-
-import com.google.common.collect.ImmutableMap;
-import org.apache.solr.SolrTestCaseJ4;
-import org.apache.solr.cloud.AbstractFullDistribZkTestBase;
-import org.apache.solr.common.util.Pair;
-import org.apache.solr.util.RestTestHarness;
-import org.eclipse.jetty.server.Connector;
-import org.eclipse.jetty.server.Request;
-import org.eclipse.jetty.server.Server;
-import org.eclipse.jetty.server.ServerConnector;
-import org.eclipse.jetty.server.handler.AbstractHandler;
-import org.junit.BeforeClass;
-
-import static java.util.Arrays.asList;
-import static org.apache.solr.core.TestDynamicLoading.getFileContent;
-import static org.apache.solr.handler.TestSolrConfigHandlerCloud.compareValues;
-
-@SolrTestCaseJ4.SuppressSSL
-public class TestDynamicLoadingUrl extends AbstractFullDistribZkTestBase {
-
-  @BeforeClass
-  public static void enableRuntimeLib() throws Exception {
-    System.setProperty("enable.runtime.lib", "true");
-  }
-
-  public static Pair<Server, Integer> runHttpServer(Map<String, Object> jars) throws Exception {
-    final Server server = new Server();
-    final ServerConnector connector = new ServerConnector(server);
-    server.setConnectors(new Connector[] { connector });
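-    // serve the ByteBuffer registered under each request path; unknown paths are left unhandled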
-    server.setHandler(new AbstractHandler() {
-      @Override
-      public void handle(String s, Request request, HttpServletRequest req, HttpServletResponse rsp)
-        throws IOException {
-        ByteBuffer b = (ByteBuffer) jars.get(s);
-        if (b != null) {
-          rsp.getOutputStream().write(b.array(), 0, b.limit());
-          rsp.setContentType("application/octet-stream");
-          rsp.setStatus(HttpServletResponse.SC_OK);
-          request.setHandled(true);
-        }
-      }
-    });
-    server.start();
-    return new Pair<>(server, connector.getLocalPort());
-  }
-
-  public void testDynamicLoadingUrl() throws Exception {
-    setupRestTestHarnesses();
-    Pair<Server, Integer> pair = runHttpServer(ImmutableMap.of("/jar1.jar", getFileContent("runtimecode/runtimelibs.jar.bin")));
-    Integer port = pair.second();
-
-    try {
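-      // the first payload carries a deliberately wrong sha512, so adding the jar must be rejected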
-      String payload = "{\n" +
-          "'add-runtimelib' : { 'name' : 'urljar', url : 'http://localhost:" + port + "/jar1.jar'" +
-          "  'sha512':'e01b51de67ae1680a84a813983b1de3b592fc32f1a22b662fc9057da5953abd1b72476388ba342cad21671cd0b805503c78ab9075ff2f3951fdf75fa16981420'}" +
-          "}";
-      RestTestHarness client = randomRestTestHarness();
-      TestSolrConfigHandler.runConfigCommandExpectFailure(client, "/config", payload, "Invalid jar");
-
-
-      payload = "{\n" +
-          "'add-runtimelib' : { 'name' : 'urljar', url : 'http://localhost:" + port + "/jar1.jar'" +
-          "  'sha512':'d01b51de67ae1680a84a813983b1de3b592fc32f1a22b662fc9057da5953abd1b72476388ba342cad21671cd0b805503c78ab9075ff2f3951fdf75fa16981420'}" +
-          "}";
-      client = randomRestTestHarness();
-      TestSolrConfigHandler.runConfigCommand(client, "/config", payload);
-      TestSolrConfigHandler.testForResponseElement(client,
-          null,
-          "/config/overlay",
-          null,
-          Arrays.asList("overlay", "runtimeLib", "urljar", "sha512"),
-          "d01b51de67ae1680a84a813983b1de3b592fc32f1a22b662fc9057da5953abd1b72476388ba342cad21671cd0b805503c78ab9075ff2f3951fdf75fa16981420", 120);
-
-      payload = "{\n" +
-          "'create-requesthandler' : { 'name' : '/runtime', 'class': 'org.apache.solr.core.RuntimeLibReqHandler', 'runtimeLib' : true}" +
-          "}";
-      client = randomRestTestHarness();
-      TestSolrConfigHandler.runConfigCommand(client, "/config", payload);
-
-      TestSolrConfigHandler.testForResponseElement(client,
-          null,
-          "/config/overlay",
-          null,
-          Arrays.asList("overlay", "requestHandler", "/runtime", "class"),
-          "org.apache.solr.core.RuntimeLibReqHandler", 120);
-
-      @SuppressWarnings({"rawtypes"})
-      Map result = TestSolrConfigHandler.testForResponseElement(client,
-          null,
-          "/runtime",
-          null,
-          Arrays.asList("class"),
-          "org.apache.solr.core.RuntimeLibReqHandler", 120);
-      compareValues(result, MemClassLoader.class.getName(), asList("loader"));
-    } finally {
-      pair.first().stop();
-
-    }
-
-
-  }
-}
-
diff --git a/solr/core/src/test/org/apache/solr/core/TestDynamicURP.java b/solr/core/src/test/org/apache/solr/core/TestDynamicURP.java
deleted file mode 100644
index ac37e28..0000000
--- a/solr/core/src/test/org/apache/solr/core/TestDynamicURP.java
+++ /dev/null
@@ -1,111 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package org.apache.solr.core;
-
-import static java.util.Collections.singletonMap;
-import static org.apache.solr.client.solrj.SolrRequest.METHOD.POST;
-import static org.apache.solr.core.TestDynamicLoading.getFileContent;
-
-import java.nio.ByteBuffer;
-import java.nio.charset.StandardCharsets;
-import java.util.Arrays;
-
-import org.apache.solr.client.solrj.SolrQuery;
-import org.apache.solr.client.solrj.request.CollectionAdminRequest;
-import org.apache.solr.client.solrj.request.UpdateRequest;
-import org.apache.solr.client.solrj.request.V2Request;
-import org.apache.solr.client.solrj.response.QueryResponse;
-import org.apache.solr.cloud.SolrCloudTestCase;
-import org.apache.solr.common.MapWriter;
-import org.apache.solr.common.SolrInputDocument;
-import org.apache.solr.common.cloud.SolrZkClient;
-import org.apache.solr.common.cloud.ZkStateReader;
-import org.apache.solr.handler.TestBlobHandler;
-import org.junit.BeforeClass;
-import org.junit.Test;
-
-public class TestDynamicURP extends SolrCloudTestCase {
-
-
-  private static final String COLLECTION = "testUrpColl";
-
-  @BeforeClass
-  public static void setupCluster() throws Exception {
-    System.setProperty("enable.runtime.lib", "true");
-    configureCluster(3)
-        .addConfig("conf", configset("cloud-minimal"))
-        .configure();
-    SolrZkClient zkClient = cluster.getSolrClient().getZkStateReader().getZkClient();
-    String path = ZkStateReader.CONFIGS_ZKNODE + "/conf/solrconfig.xml";
-    byte[] data = zkClient.getData(path, null, null, true);
-
-    String solrconfigStr = new String(data, StandardCharsets.UTF_8);
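-    // splice a default update chain into solrconfig.xml that uses the runtime-loaded runtimecode.TestURP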
-    zkClient.setData(path, solrconfigStr.replace("</config>",
-        "<updateRequestProcessorChain name=\"test_urp\" processor=\"testURP\" default=\"true\">\n" +
-        "    <processor class=\"solr.RunUpdateProcessorFactory\"/>\n" +
-        "  </updateRequestProcessorChain>\n" +
-        "\n" +
-        "  <updateProcessor class=\"runtimecode.TestURP\" name=\"testURP\" runtimeLib=\"true\"></updateProcessor>\n" +
-        "</config>").getBytes(StandardCharsets.UTF_8), true );
-
-
-    CollectionAdminRequest.createCollection(COLLECTION, "conf", 3, 1).process(cluster.getSolrClient());
-    waitForState("Timed out waiting for " + COLLECTION + " shards to become active", COLLECTION, clusterShape(3, 3));
-  }
-
-
-
-  @Test
-  public void testUrp() throws Exception {
-
-    ByteBuffer jar = getFileContent("runtimecode/runtimeurp.jar.bin");
-
-    String blobName = "urptest";
-    TestBlobHandler.postAndCheck(cluster.getSolrClient(), cluster.getRandomJetty(random()).getBaseUrl().toString(),
-        blobName, jar, 1);
-
-    new V2Request.Builder("/c/" + COLLECTION + "/config")
-        .withPayload(singletonMap("add-runtimelib", (MapWriter) ew1 -> ew1
-            .put("name", blobName)
-            .put("version", "1")))
-        .withMethod(POST)
-        .build()
-        .process(cluster.getSolrClient());
-    TestSolrConfigHandler.testForResponseElement(null,
-        cluster.getRandomJetty(random()).getBaseUrl().toString(),
-        "/"+COLLECTION+"/config/overlay",
-        cluster.getSolrClient(),
-        Arrays.asList("overlay", "runtimeLib", blobName, "version")
-        ,"1",10);
-
-    SolrInputDocument doc = new SolrInputDocument();
-    doc.addField("id", "123");
-    doc.addField("name_s", "Test URP");
-    new UpdateRequest()
-        .add(doc)
-        .commit(cluster.getSolrClient(), COLLECTION);
-    QueryResponse result = cluster.getSolrClient().query(COLLECTION, new SolrQuery("id:123"));
-    assertEquals(1, result.getResults().getNumFound());
-    Object time_s = result.getResults().get(0).getFirstValue("time_s");
-    assertNotNull(time_s);
-
-
-
-  }
-
-}
diff --git a/solr/core/src/test/org/apache/solr/core/TestSolrConfigHandler.java b/solr/core/src/test/org/apache/solr/core/TestSolrConfigHandler.java
index 6aa48a5..d37a544 100644
--- a/solr/core/src/test/org/apache/solr/core/TestSolrConfigHandler.java
+++ b/solr/core/src/test/org/apache/solr/core/TestSolrConfigHandler.java
@@ -16,10 +16,9 @@
  */
 package org.apache.solr.core;
 
 import java.io.File;
+import java.io.FileInputStream;
+import java.io.FileOutputStream;
 import java.io.IOException;
 import java.io.StringReader;
 import java.lang.invoke.MethodHandles;
+import java.nio.ByteBuffer;
 import java.util.HashMap;
 import java.util.List;
 import java.util.Map;
@@ -27,6 +26,8 @@
 import java.util.SortedMap;
 import java.util.TreeMap;
 import java.util.concurrent.TimeUnit;
+import java.util.zip.ZipEntry;
+import java.util.zip.ZipOutputStream;
 
 import com.google.common.collect.ImmutableList;
 import org.apache.commons.io.FileUtils;
@@ -45,6 +46,7 @@
 import org.apache.solr.util.RESTfulServerProvider;
 import org.apache.solr.util.RestTestBase;
 import org.apache.solr.util.RestTestHarness;
+import org.apache.solr.util.SimplePostTool;
 import org.eclipse.jetty.servlet.ServletHolder;
 import org.junit.After;
 import org.junit.Before;
@@ -67,8 +69,53 @@
   private static final String collection = "collection1";
   private static final String confDir = collection + "/conf";
 
+  public static ByteBuffer getFileContent(String f) throws IOException {
+    return getFileContent(f, true);
+  }
 
-  @Before
+  /**
+   * @param loadFromClassPath if true, look for the file in the classpath;
+   *        otherwise load it from an absolute filesystem path.
+   */
+  public static ByteBuffer getFileContent(String f, boolean loadFromClassPath) throws IOException {
+    File file = loadFromClassPath ? getFile(f) : new File(f);
+    try (FileInputStream fis = new FileInputStream(file)) {
+      // read the file fully; fis.available() is not guaranteed to report the whole length
+      return ByteBuffer.wrap(fis.readAllBytes());
+    }
+  }
+
+  public static ByteBuffer persistZip(String loc,
+                                      @SuppressWarnings({"rawtypes"}) Class... classes) throws IOException {
+    ByteBuffer jar = generateZip(classes);
+    try (FileOutputStream fos = new FileOutputStream(loc)) {
+      fos.write(jar.array(), 0, jar.limit());
+      fos.flush();
+    }
+    return jar;
+  }
+
+  public static ByteBuffer generateZip(@SuppressWarnings({"rawtypes"}) Class... classes) throws IOException {
+    SimplePostTool.BAOS bos = new SimplePostTool.BAOS();
+    try (ZipOutputStream zipOut = new ZipOutputStream(bos)) {
+      zipOut.setLevel(ZipOutputStream.DEFLATED);
+      for (@SuppressWarnings({"rawtypes"}) Class c : classes) {
+        String path = c.getName().replace('.', '/').concat(".class");
+        ZipEntry entry = new ZipEntry(path);
+        ByteBuffer b = SimplePostTool.inputStreamToByteArray(c.getClassLoader().getResourceAsStream(path));
+        zipOut.putNextEntry(entry);
+        zipOut.write(b.array(), 0, b.limit());
+        zipOut.closeEntry();
+      }
+    }
+    return bos.getByteBuffer();
+  }
+
+  @Before
   public void before() throws Exception {
     tmpSolrHome = createTempDir().toFile();
     tmpConfDir = new File(tmpSolrHome, confDir);
@@ -532,7 +579,7 @@
 
     for (String component : new String[] {
         "requesthandler", "searchcomponent", "initparams", "queryresponsewriter", "queryparser",
-        "valuesourceparser", "transformer", "updateprocessor", "queryconverter", "listener", "runtimelib"}) {
+        "valuesourceparser", "transformer", "updateprocessor", "queryconverter", "listener"}) {
       for (String operation : new String[] { "add", "update" }) {
         payload = "{ " + operation + "-" + component + ": { param1: value1 } }";
         runConfigCommandExpectFailure(restTestHarness, "/config", payload, "'name' is a required field");
diff --git a/solr/core/src/test/org/apache/solr/core/snapshots/TestSolrCoreSnapshots.java b/solr/core/src/test/org/apache/solr/core/snapshots/TestSolrCoreSnapshots.java
index 23fed68..cd81958 100644
--- a/solr/core/src/test/org/apache/solr/core/snapshots/TestSolrCoreSnapshots.java
+++ b/solr/core/src/test/org/apache/solr/core/snapshots/TestSolrCoreSnapshots.java
@@ -105,7 +105,7 @@
 
     try (
         SolrClient adminClient = getHttpSolrClient(cluster.getJettySolrRunners().get(0).getBaseUrl().toString());
-        SolrClient masterClient = getHttpSolrClient(replica.getCoreUrl())) {
+        SolrClient leaderClient = getHttpSolrClient(replica.getCoreUrl())) {
 
       SnapshotMetaData metaData = createSnapshot(adminClient, coreName, commitName);
       // Create another snapshot referring to the same index commit to verify the
@@ -116,8 +116,8 @@
       assertEquals (metaData.getGenerationNumber(), duplicateCommit.getGenerationNumber());
 
       // Delete all documents
-      masterClient.deleteByQuery("*:*");
-      masterClient.commit();
+      leaderClient.deleteByQuery("*:*");
+      leaderClient.commit();
       BackupRestoreUtils.verifyDocs(0, cluster.getSolrClient(), collectionName);
 
       // Verify that the index directory contains at least 2 index commits - one referred by the snapshots
@@ -192,7 +192,7 @@
 
     try (
         SolrClient adminClient = getHttpSolrClient(cluster.getJettySolrRunners().get(0).getBaseUrl().toString());
-        SolrClient masterClient = getHttpSolrClient(replica.getCoreUrl())) {
+        SolrClient leaderClient = getHttpSolrClient(replica.getCoreUrl())) {
 
       SnapshotMetaData metaData = createSnapshot(adminClient, coreName, commitName);
 
@@ -203,7 +203,7 @@
           //Delete a few docs
           int numDeletes = TestUtil.nextInt(random(), 1, nDocs);
           for(int i=0; i<numDeletes; i++) {
-            masterClient.deleteByQuery("id:" + i);
+            leaderClient.deleteByQuery("id:" + i);
           }
           //Add a few more
           int moreAdds = TestUtil.nextInt(random(), 1, 100);
@@ -211,9 +211,9 @@
             SolrInputDocument doc = new SolrInputDocument();
             doc.addField("id", i + nDocs);
             doc.addField("name", "name = " + (i + nDocs));
-            masterClient.add(doc);
+            leaderClient.add(doc);
           }
-          masterClient.commit();
+          leaderClient.commit();
         }
       }
 
@@ -227,7 +227,7 @@
       }
 
       // Optimize the index.
-      masterClient.optimize(true, true, 1);
+      leaderClient.optimize(true, true, 1);
 
       // After invoking optimize command, verify that the index directory contains multiple commits (including the one we snapshotted earlier).
       {
@@ -248,13 +248,13 @@
           SolrInputDocument doc = new SolrInputDocument();
           doc.addField("id", i + nDocs);
           doc.addField("name", "name = " + (i + nDocs));
-          masterClient.add(doc);
+          leaderClient.add(doc);
         }
-        masterClient.commit();
+        leaderClient.commit();
       }
 
       // Optimize the index.
-      masterClient.optimize(true, true, 1);
+      leaderClient.optimize(true, true, 1);
 
       // Verify that the index directory contains only 1 index commit (which is not the same as the snapshotted commit).
       Collection<IndexCommit> commits = listCommits(metaData.getIndexDirPath());
diff --git a/solr/core/src/test/org/apache/solr/filestore/TestDistribPackageStore.java b/solr/core/src/test/org/apache/solr/filestore/TestDistribPackageStore.java
index f881156..f6930d0 100644
--- a/solr/core/src/test/org/apache/solr/filestore/TestDistribPackageStore.java
+++ b/solr/core/src/test/org/apache/solr/filestore/TestDistribPackageStore.java
@@ -53,7 +53,7 @@
 import java.util.function.Predicate;
 
 import static org.apache.solr.common.util.Utils.JAVABINCONSUMER;
-import static org.apache.solr.core.TestDynamicLoading.getFileContent;
+import static org.apache.solr.core.TestSolrConfigHandler.getFileContent;
 import static org.hamcrest.CoreMatchers.containsString;
 
 @LogLevel("org.apache.solr.filestore.PackageStoreAPI=DEBUG;org.apache.solr.filestore.DistribPackageStore=DEBUG")
diff --git a/solr/core/src/test/org/apache/solr/handler/BackupRestoreUtils.java b/solr/core/src/test/org/apache/solr/handler/BackupRestoreUtils.java
index 74add18..7617dc5 100644
--- a/solr/core/src/test/org/apache/solr/handler/BackupRestoreUtils.java
+++ b/solr/core/src/test/org/apache/solr/handler/BackupRestoreUtils.java
@@ -41,8 +41,8 @@
 
   private static final Logger log = LoggerFactory.getLogger(MethodHandles.lookup().lookupClass());
 
-  public static int indexDocs(SolrClient masterClient, String collectionName, long docsSeed) throws IOException, SolrServerException {
-    masterClient.deleteByQuery(collectionName, "*:*");
+  public static int indexDocs(SolrClient leaderClient, String collectionName, long docsSeed) throws IOException, SolrServerException {
+    leaderClient.deleteByQuery(collectionName, "*:*");
 
     Random random = new Random(docsSeed);// use a constant seed for the whole test run so that we can easily re-index.
     int nDocs = TestUtil.nextInt(random, 1, 100);
@@ -55,15 +55,15 @@
       doc.addField("name", "name = " + i);
       docs.add(doc);
     }
-    masterClient.add(collectionName, docs);
-    masterClient.commit(collectionName);
+    leaderClient.add(collectionName, docs);
+    leaderClient.commit(collectionName);
     return nDocs;
   }
 
-  public static void verifyDocs(int nDocs, SolrClient masterClient, String collectionName) throws SolrServerException, IOException {
+  public static void verifyDocs(int nDocs, SolrClient leaderClient, String collectionName) throws SolrServerException, IOException {
     ModifiableSolrParams queryParams = new ModifiableSolrParams();
     queryParams.set("q", "*:*");
-    QueryResponse response = masterClient.query(collectionName, queryParams);
+    QueryResponse response = leaderClient.query(collectionName, queryParams);
 
     assertEquals(0, response.getStatus());
     assertEquals(nDocs, response.getResults().getNumFound());
@@ -82,13 +82,13 @@
       builder.append("=");
       builder.append(p.getValue());
     }
-    String masterUrl = builder.toString();
-    executeHttpRequest(masterUrl);
+    String leaderUrl = builder.toString();
+    executeHttpRequest(leaderUrl);
   }
 
   public static void runReplicationHandlerCommand(String baseUrl, String coreName, String action, String repoName, String backupName) throws IOException {
-    String masterUrl = baseUrl + "/" + coreName + ReplicationHandler.PATH + "?command=" + action + "&repository="+repoName+"&name="+backupName;
-    executeHttpRequest(masterUrl);
+    String leaderUrl = baseUrl + "/" + coreName + ReplicationHandler.PATH + "?command=" + action + "&repository="+repoName+"&name="+backupName;
+    executeHttpRequest(leaderUrl);
   }
 
   static void executeHttpRequest(String requestUrl) throws IOException {
diff --git a/solr/core/src/test/org/apache/solr/handler/TestHdfsBackupRestoreCore.java b/solr/core/src/test/org/apache/solr/handler/TestHdfsBackupRestoreCore.java
index 010cc8b..3347c02 100644
--- a/solr/core/src/test/org/apache/solr/handler/TestHdfsBackupRestoreCore.java
+++ b/solr/core/src/test/org/apache/solr/handler/TestHdfsBackupRestoreCore.java
@@ -186,13 +186,13 @@
     boolean testViaReplicationHandler = random().nextBoolean();
     String baseUrl = cluster.getJettySolrRunners().get(0).getBaseUrl().toString();
 
-    try (HttpSolrClient masterClient = getHttpSolrClient(replicaBaseUrl)) {
+    try (HttpSolrClient leaderClient = getHttpSolrClient(replicaBaseUrl)) {
       // Create a backup.
       if (testViaReplicationHandler) {
         log.info("Running Backup via replication handler");
         BackupRestoreUtils.runReplicationHandlerCommand(baseUrl, coreName, ReplicationHandler.CMD_BACKUP, "hdfs", backupName);
         final BackupStatusChecker backupStatus
-          = new BackupStatusChecker(masterClient, "/" + coreName + "/replication");
+          = new BackupStatusChecker(leaderClient, "/" + coreName + "/replication");
         backupStatus.waitForBackupSuccess(backupName, 30);
       } else {
         log.info("Running Backup via core admin api");
@@ -209,9 +209,9 @@
           //Delete a few docs
           int numDeletes = TestUtil.nextInt(random(), 1, nDocs);
           for(int i=0; i<numDeletes; i++) {
-            masterClient.deleteByQuery(collectionName, "id:" + i);
+            leaderClient.deleteByQuery(collectionName, "id:" + i);
           }
-          masterClient.commit(collectionName);
+          leaderClient.commit(collectionName);
 
           //Add a few more
           int moreAdds = TestUtil.nextInt(random(), 1, 100);
@@ -219,11 +219,11 @@
             SolrInputDocument doc = new SolrInputDocument();
             doc.addField("id", i + nDocs);
             doc.addField("name", "name = " + (i + nDocs));
-            masterClient.add(collectionName, doc);
+            leaderClient.add(collectionName, doc);
           }
           //Purposely not calling commit once in a while. There can be some docs which are not committed
           if (usually()) {
-            masterClient.commit(collectionName);
+            leaderClient.commit(collectionName);
           }
         }
         // Snapshooter prefixes "snapshot." to the backup name.
@@ -242,7 +242,7 @@
           BackupRestoreUtils.runCoreAdminCommand(replicaBaseUrl, coreName, CoreAdminAction.RESTORECORE.toString(), params);
         }
         //See if restore was successful by checking if all the docs are present again
-        BackupRestoreUtils.verifyDocs(nDocs, masterClient, coreName);
+        BackupRestoreUtils.verifyDocs(nDocs, leaderClient, coreName);
 
         // Verify the permissions for the backup folder.
         FileStatus status = fs.getFileStatus(new org.apache.hadoop.fs.Path("/backup/snapshot."+backupName));
diff --git a/solr/core/src/test/org/apache/solr/handler/TestReplicationHandler.java b/solr/core/src/test/org/apache/solr/handler/TestReplicationHandler.java
index a194bcf..c538551 100644
--- a/solr/core/src/test/org/apache/solr/handler/TestReplicationHandler.java
+++ b/solr/core/src/test/org/apache/solr/handler/TestReplicationHandler.java
@@ -109,9 +109,9 @@
       + File.separator + "collection1" + File.separator + "conf"
       + File.separator;
 
-  JettySolrRunner masterJetty, slaveJetty, repeaterJetty;
-  HttpSolrClient masterClient, slaveClient, repeaterClient;
-  SolrInstance master = null, slave = null, repeater = null;
+  JettySolrRunner leaderJetty, followerJetty, repeaterJetty;
+  HttpSolrClient leaderClient, followerClient, repeaterClient;
+  SolrInstance leader = null, follower = null, repeater = null;
 
   static String context = "/solr";
 
@@ -119,8 +119,12 @@
   // index from previous test method
   static int nDocs = 500;
   
+  /* For testing backward compatibility, remove for 10.x */
+  private static boolean useLegacyParams = false; 
+  
   @BeforeClass
   public static void beforeClass() {
+    useLegacyParams = rarely();
 
   }
   
@@ -130,25 +134,25 @@
 //    System.setProperty("solr.directoryFactory", "solr.StandardDirectoryFactory");
     // For manual testing only
     // useFactory(null); // force an FS factory.
-    master = new SolrInstance(createTempDir("solr-instance").toFile(), "master", null);
-    master.setUp();
-    masterJetty = createAndStartJetty(master);
-    masterClient = createNewSolrClient(masterJetty.getLocalPort());
+    leader = new SolrInstance(createTempDir("solr-instance").toFile(), "leader", null);
+    leader.setUp();
+    leaderJetty = createAndStartJetty(leader);
+    leaderClient = createNewSolrClient(leaderJetty.getLocalPort());
 
-    slave = new SolrInstance(createTempDir("solr-instance").toFile(), "slave", masterJetty.getLocalPort());
-    slave.setUp();
-    slaveJetty = createAndStartJetty(slave);
-    slaveClient = createNewSolrClient(slaveJetty.getLocalPort());
+    follower = new SolrInstance(createTempDir("solr-instance").toFile(), "follower", leaderJetty.getLocalPort());
+    follower.setUp();
+    followerJetty = createAndStartJetty(follower);
+    followerClient = createNewSolrClient(followerJetty.getLocalPort());
     
     System.setProperty("solr.indexfetcher.sotimeout2", "45000");
   }
 
   public void clearIndexWithReplication() throws Exception {
-    if (numFound(query("*:*", masterClient)) != 0) {
-      masterClient.deleteByQuery("*:*");
-      masterClient.commit();
+    if (numFound(query("*:*", leaderClient)) != 0) {
+      leaderClient.deleteByQuery("*:*");
+      leaderClient.commit();
       // wait for replication to sync & verify
-      assertEquals(0, numFound(rQuery(0, "*:*", slaveClient)));
+      assertEquals(0, numFound(rQuery(0, "*:*", followerClient)));
     }
   }
 
@@ -156,21 +160,21 @@
   @After
   public void tearDown() throws Exception {
     super.tearDown();
-    if (null != masterJetty) {
-      masterJetty.stop();
-      masterJetty = null;
+    if (null != leaderJetty) {
+      leaderJetty.stop();
+      leaderJetty = null;
     }
-    if (null != slaveJetty) {
-      slaveJetty.stop();
-      slaveJetty = null;
+    if (null != followerJetty) {
+      followerJetty.stop();
+      followerJetty = null;
     }
-    if (null != masterClient) {
-      masterClient.close();
-      masterClient = null;
+    if (null != leaderClient) {
+      leaderClient.close();
+      leaderClient = null;
     }
-    if (null != slaveClient) {
-      slaveClient.close();
-      slaveClient = null;
+    if (null != followerClient) {
+      followerClient.close();
+      followerClient = null;
     }
     System.clearProperty("solr.indexfetcher.sotimeout");
   }
@@ -300,67 +304,67 @@
 
   @Test
   public void doTestDetails() throws Exception {
-    slaveJetty.stop();
+    followerJetty.stop();
     
-    slave.setTestPort(masterJetty.getLocalPort());
-    slave.copyConfigFile(CONF_DIR + "solrconfig-slave.xml", "solrconfig.xml");
-    slaveJetty = createAndStartJetty(slave);
+    follower.setTestPort(leaderJetty.getLocalPort());
+    follower.copyConfigFile(CONF_DIR + "solrconfig-follower.xml", "solrconfig.xml");
+    followerJetty = createAndStartJetty(follower);
     
-    slaveClient.close();
-    masterClient.close();
-    masterClient = createNewSolrClient(masterJetty.getLocalPort());
-    slaveClient = createNewSolrClient(slaveJetty.getLocalPort());
+    followerClient.close();
+    leaderClient.close();
+    leaderClient = createNewSolrClient(leaderJetty.getLocalPort());
+    followerClient = createNewSolrClient(followerJetty.getLocalPort());
     
     clearIndexWithReplication();
     { 
-      NamedList<Object> details = getDetails(masterClient);
+      NamedList<Object> details = getDetails(leaderClient);
       
-      assertEquals("master isMaster?", 
-                   "true", details.get("isMaster"));
-      assertEquals("master isSlave?", 
-                   "false", details.get("isSlave"));
-      assertNotNull("master has master section", 
-                    details.get("master"));
+      assertEquals("leader isLeader?",
+                   "true", details.get("isLeader"));
+      assertEquals("leader isFollower?",
+                   "false", details.get("isFollower"));
+      assertNotNull("leader has leader section",
+                    details.get("leader"));
     }
 
-    // check details on the slave a couple of times before & after fetching
+    // check details on the follower a couple of times before & after fetching
     for (int i = 0; i < 3; i++) {
-      NamedList<Object> details = getDetails(slaveClient);
+      NamedList<Object> details = getDetails(followerClient);
       assertNotNull(i + ": " + details);
-      assertNotNull(i + ": " + details.toString(), details.get("slave"));
+      assertNotNull(i + ": " + details.toString(), details.get("follower"));
 
       if (i > 0) {
-        rQuery(i, "*:*", slaveClient);
+        rQuery(i, "*:*", followerClient);
         @SuppressWarnings({"rawtypes"})
-        List replicatedAtCount = (List) ((NamedList) details.get("slave")).get("indexReplicatedAtList");
+        List replicatedAtCount = (List) ((NamedList) details.get("follower")).get("indexReplicatedAtList");
         int tries = 0;
         while ((replicatedAtCount == null || replicatedAtCount.size() < i) && tries++ < 5) {
           Thread.sleep(1000);
-          details = getDetails(slaveClient);
-          replicatedAtCount = (List) ((NamedList) details.get("slave")).get("indexReplicatedAtList");
+          details = getDetails(followerClient);
+          replicatedAtCount = (List) ((NamedList) details.get("follower")).get("indexReplicatedAtList");
         }
         
-        assertNotNull("Expected to see that the slave has replicated" + i + ": " + details.toString(), replicatedAtCount);
+        assertNotNull("Expected to see that the follower has replicated" + i + ": " + details.toString(), replicatedAtCount);
         
         // we can have more replications than we added docs because a replication can legally fail and try 
         // again (sometimes we cannot merge into a live index and have to try again)
         assertTrue("i:" + i + " replicationCount:" + replicatedAtCount.size(), replicatedAtCount.size() >= i); 
       }
 
-      assertEquals(i + ": " + "slave isMaster?", "false", details.get("isMaster"));
-      assertEquals(i + ": " + "slave isSlave?", "true", details.get("isSlave"));
-      assertNotNull(i + ": " + "slave has slave section", details.get("slave"));
+      assertEquals(i + ": " + "follower isLeader?", "false", details.get("isLeader"));
+      assertEquals(i + ": " + "follower isFollower?", "true", details.get("isFollower"));
+      assertNotNull(i + ": " + "follower has follower section", details.get("follower"));
       // SOLR-2677: assert not false negatives
-      Object timesFailed = ((NamedList)details.get("slave")).get(IndexFetcher.TIMES_FAILED);
+      Object timesFailed = ((NamedList)details.get("follower")).get(IndexFetcher.TIMES_FAILED);
       // SOLR-7134: we can have a fail because some mock index files have no checksum, will
       // always be downloaded, and may not be able to be moved into the existing index
-      assertTrue(i + ": " + "slave has fetch error count: " + (String)timesFailed, timesFailed == null || ((String) timesFailed).equals("1"));
+      assertTrue(i + ": " + "follower has fetch error count: " + (String)timesFailed, timesFailed == null || ((String) timesFailed).equals("1"));
 
       if (3 != i) {
         // index & fetch
-        index(masterClient, "id", i, "name", "name = " + i);
-        masterClient.commit();
-        pullFromTo(masterJetty, slaveJetty);
+        index(leaderClient, "id", i, "name", "name = " + i);
+        leaderClient.commit();
+        pullFromTo(leaderJetty, followerJetty);
       }
     }
 
@@ -368,7 +372,7 @@
     JettySolrRunner repeaterJetty = null;
     SolrClient repeaterClient = null;
     try {
-      repeater = new SolrInstance(createTempDir("solr-instance").toFile(), "repeater", masterJetty.getLocalPort());
+      repeater = new SolrInstance(createTempDir("solr-instance").toFile(), "repeater", leaderJetty.getLocalPort());
       repeater.setUp();
       repeaterJetty = createAndStartJetty(repeater);
       repeaterClient = createNewSolrClient(repeaterJetty.getLocalPort());
@@ -376,14 +380,14 @@
       
       NamedList<Object> details = getDetails(repeaterClient);
       
-      assertEquals("repeater isMaster?", 
-                   "true", details.get("isMaster"));
-      assertEquals("repeater isSlave?", 
-                   "true", details.get("isSlave"));
-      assertNotNull("repeater has master section", 
-                    details.get("master"));
-      assertNotNull("repeater has slave section", 
-                    details.get("slave"));
+      assertEquals("repeater isLeader?",
+                   "true", details.get("isLeader"));
+      assertEquals("repeater isFollower?",
+                   "true", details.get("isFollower"));
+      assertNotNull("repeater has leader section",
+                    details.get("leader"));
+      assertNotNull("repeater has follower section",
+                    details.get("follower"));
 
     } finally {
       try { 
@@ -392,104 +396,135 @@
       if (repeaterClient != null) repeaterClient.close();
     }
   }
+  
+  @Test
+  public void testLegacyConfiguration() throws Exception {
+    SolrInstance solrInstance = null;
+    JettySolrRunner instanceJetty = null;
+    SolrClient client = null;
+    try {
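+      // An instance running the legacy replication config should report both the leader and follower roles.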
+      solrInstance = new SolrInstance(createTempDir("solr-instance").toFile(), "replication-legacy", leaderJetty.getLocalPort());
+      solrInstance.setUp();
+      instanceJetty = createAndStartJetty(solrInstance);
+      client = createNewSolrClient(instanceJetty.getLocalPort());
+
+      NamedList<Object> details = getDetails(client);
+      
+      assertEquals("repeater isLeader?",
+                   "true", details.get("isLeader"));
+      assertEquals("repeater isFollower?",
+                   "true", details.get("isFollower"));
+      assertNotNull("repeater has leader section",
+                    details.get("leader"));
+      assertNotNull("repeater has follower section",
+                    details.get("follower"));
+
+    } finally {
+      if (instanceJetty != null) {
+        instanceJetty.stop();
+      }
+      if (client != null) client.close();
+    }
+  }
 
 
   /**
    * Verify that empty commits and/or commits with openSearcher=false
-   * on the master do not cause subsequent replication problems on the slave 
+   * on the leader do not cause subsequent replication problems on the follower
    */
   public void testEmptyCommits() throws Exception {
     clearIndexWithReplication();
     
-    // add a doc to master and commit
-    index(masterClient, "id", "1", "name", "empty1");
-    emptyUpdate(masterClient, "commit", "true");
+    // add a doc to leader and commit
+    index(leaderClient, "id", "1", "name", "empty1");
+    emptyUpdate(leaderClient, "commit", "true");
     // force replication
-    pullFromMasterToSlave();
-    // verify doc is on slave
-    rQuery(1, "name:empty1", slaveClient);
-    assertVersions(masterClient, slaveClient);
+    pullFromLeaderToFollower();
+    // verify doc is on follower
+    rQuery(1, "name:empty1", followerClient);
+    assertVersions(leaderClient, followerClient);
 
-    // do a completely empty commit on master and force replication
-    emptyUpdate(masterClient, "commit", "true");
-    pullFromMasterToSlave();
+    // do a completely empty commit on leader and force replication
+    emptyUpdate(leaderClient, "commit", "true");
+    pullFromLeaderToFollower();
 
-    // add another doc and verify slave gets it
-    index(masterClient, "id", "2", "name", "empty2");
-    emptyUpdate(masterClient, "commit", "true");
+    // add another doc and verify follower gets it
+    index(leaderClient, "id", "2", "name", "empty2");
+    emptyUpdate(leaderClient, "commit", "true");
     // force replication
-    pullFromMasterToSlave();
+    pullFromLeaderToFollower();
 
-    rQuery(1, "name:empty2", slaveClient);
-    assertVersions(masterClient, slaveClient);
+    rQuery(1, "name:empty2", followerClient);
+    assertVersions(leaderClient, followerClient);
 
-    // add a third doc but don't open a new searcher on master
-    index(masterClient, "id", "3", "name", "empty3");
-    emptyUpdate(masterClient, "commit", "true", "openSearcher", "false");
-    pullFromMasterToSlave();
+    // add a third doc but don't open a new searcher on leader
+    index(leaderClient, "id", "3", "name", "empty3");
+    emptyUpdate(leaderClient, "commit", "true", "openSearcher", "false");
+    pullFromLeaderToFollower();
     
-    // verify slave can search the doc, but master doesn't
-    rQuery(0, "name:empty3", masterClient);
-    rQuery(1, "name:empty3", slaveClient);
+    // verify follower can search the doc, but leader doesn't
+    rQuery(0, "name:empty3", leaderClient);
+    rQuery(1, "name:empty3", followerClient);
 
-    // final doc with hard commit, slave and master both showing all docs
-    index(masterClient, "id", "4", "name", "empty4");
-    emptyUpdate(masterClient, "commit", "true");
-    pullFromMasterToSlave();
+    // final doc with hard commit, follower and leader both showing all docs
+    index(leaderClient, "id", "4", "name", "empty4");
+    emptyUpdate(leaderClient, "commit", "true");
+    pullFromLeaderToFollower();
 
     String q = "name:(empty1 empty2 empty3 empty4)";
-    rQuery(4, q, masterClient);
-    rQuery(4, q, slaveClient);
-    assertVersions(masterClient, slaveClient);
+    rQuery(4, q, leaderClient);
+    rQuery(4, q, followerClient);
+    assertVersions(leaderClient, followerClient);
 
   }
 
   @Test
-  public void doTestReplicateAfterWrite2Slave() throws Exception {
+  public void doTestReplicateAfterWrite2Follower() throws Exception {
     clearIndexWithReplication();
     nDocs--;
     for (int i = 0; i < nDocs; i++) {
-      index(masterClient, "id", i, "name", "name = " + i);
+      index(leaderClient, "id", i, "name", "name = " + i);
     }
 
-    invokeReplicationCommand(masterJetty.getLocalPort(), "disableReplication");
-    invokeReplicationCommand(slaveJetty.getLocalPort(), "disablepoll");
+    invokeReplicationCommand(leaderJetty.getLocalPort(), "disableReplication");
+    invokeReplicationCommand(followerJetty.getLocalPort(), "disablepoll");
     
-    masterClient.commit();
+    leaderClient.commit();
 
-    assertEquals(nDocs, numFound(rQuery(nDocs, "*:*", masterClient)));
+    assertEquals(nDocs, numFound(rQuery(nDocs, "*:*", leaderClient)));
 
-    // Make sure that both the index version and index generation on the slave is
-    // higher than that of the master, just to make the test harder.
+    // Make sure that both the index version and index generation on the follower is
+    // higher than that of the leader, just to make the test harder.
 
-    index(slaveClient, "id", 551, "name", "name = " + 551);
-    slaveClient.commit(true, true);
-    index(slaveClient, "id", 552, "name", "name = " + 552);
-    slaveClient.commit(true, true);
-    index(slaveClient, "id", 553, "name", "name = " + 553);
-    slaveClient.commit(true, true);
-    index(slaveClient, "id", 554, "name", "name = " + 554);
-    slaveClient.commit(true, true);
-    index(slaveClient, "id", 555, "name", "name = " + 555);
-    slaveClient.commit(true, true);
+    index(followerClient, "id", 551, "name", "name = " + 551);
+    followerClient.commit(true, true);
+    index(followerClient, "id", 552, "name", "name = " + 552);
+    followerClient.commit(true, true);
+    index(followerClient, "id", 553, "name", "name = " + 553);
+    followerClient.commit(true, true);
+    index(followerClient, "id", 554, "name", "name = " + 554);
+    followerClient.commit(true, true);
+    index(followerClient, "id", 555, "name", "name = " + 555);
+    followerClient.commit(true, true);
 
-    //this doc is added to slave so it should show an item w/ that result
-    assertEquals(1, numFound(rQuery(1, "id:555", slaveClient)));
+    //this doc is added to follower so it should show an item w/ that result
+    assertEquals(1, numFound(rQuery(1, "id:555", followerClient)));
 
     //Let's fetch the index rather than rely on the polling.
-    invokeReplicationCommand(masterJetty.getLocalPort(), "enablereplication");
-    invokeReplicationCommand(slaveJetty.getLocalPort(), "fetchindex");
+    invokeReplicationCommand(leaderJetty.getLocalPort(), "enablereplication");
+    invokeReplicationCommand(followerJetty.getLocalPort(), "fetchindex");
 
     /*
-    //the slave should have done a full copy of the index so the doc with id:555 should not be there in the slave now
-    slaveQueryRsp = rQuery(0, "id:555", slaveClient);
-    slaveQueryResult = (SolrDocumentList) slaveQueryRsp.get("response");
-    assertEquals(0, slaveQueryResult.getNumFound());
+    //the follower should have done a full copy of the index so the doc with id:555 should not be there in the follower now
+    followerQueryRsp = rQuery(0, "id:555", followerClient);
+    followerQueryResult = (SolrDocumentList) followerQueryRsp.get("response");
+    assertEquals(0, followerQueryResult.getNumFound());
 
-    // make sure we replicated the correct index from the master
-    slaveQueryRsp = rQuery(nDocs, "*:*", slaveClient);
-    slaveQueryResult = (SolrDocumentList) slaveQueryRsp.get("response");
-    assertEquals(nDocs, slaveQueryResult.getNumFound());
+    // make sure we replicated the correct index from the leader
+    followerQueryRsp = rQuery(nDocs, "*:*", followerClient);
+    followerQueryResult = (SolrDocumentList) followerQueryRsp.get("response");
+    assertEquals(nDocs, followerQueryResult.getNumFound());
     
     */
   }
@@ -498,8 +533,8 @@
   //jetty servers.
   static void invokeReplicationCommand(int pJettyPort, String pCommand) throws IOException
   {
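+    // Issues a plain HTTP GET against the core's replication handler; the response body is ignored.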
-    String masterUrl = buildUrl(pJettyPort) + "/" + DEFAULT_TEST_CORENAME + ReplicationHandler.PATH+"?command=" + pCommand;
-    URL u = new URL(masterUrl);
+    String leaderUrl = buildUrl(pJettyPort) + "/" + DEFAULT_TEST_CORENAME + ReplicationHandler.PATH+"?command=" + pCommand;
+    URL u = new URL(leaderUrl);
     InputStream stream = u.openStream();
     stream.close();
   }
@@ -507,80 +542,80 @@
   @Test
   public void doTestIndexAndConfigReplication() throws Exception {
 
-    TestInjection.delayBeforeSlaveCommitRefresh = random().nextInt(10);
+    TestInjection.delayBeforeFollowerCommitRefresh = random().nextInt(10);
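+    // Randomize the injected delay before the follower's commit refresh to vary replication timing between runs.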
 
     clearIndexWithReplication();
 
     nDocs--;
     for (int i = 0; i < nDocs; i++)
-      index(masterClient, "id", i, "name", "name = " + i);
+      index(leaderClient, "id", i, "name", "name = " + i);
 
-    masterClient.commit();
+    leaderClient.commit();
 
     @SuppressWarnings({"rawtypes"})
-    NamedList masterQueryRsp = rQuery(nDocs, "*:*", masterClient);
-    SolrDocumentList masterQueryResult = (SolrDocumentList) masterQueryRsp.get("response");
-    assertEquals(nDocs, numFound(masterQueryRsp));
+    NamedList leaderQueryRsp = rQuery(nDocs, "*:*", leaderClient);
+    SolrDocumentList leaderQueryResult = (SolrDocumentList) leaderQueryRsp.get("response");
+    assertEquals(nDocs, numFound(leaderQueryRsp));
 
-    //get docs from slave and check if number is equal to master
+    //get docs from follower and check if number is equal to leader
     @SuppressWarnings({"rawtypes"})
-    NamedList slaveQueryRsp = rQuery(nDocs, "*:*", slaveClient);
-    SolrDocumentList slaveQueryResult = (SolrDocumentList) slaveQueryRsp.get("response");
-    assertEquals(nDocs, numFound(slaveQueryRsp));
+    NamedList followerQueryRsp = rQuery(nDocs, "*:*", followerClient);
+    SolrDocumentList followerQueryResult = (SolrDocumentList) followerQueryRsp.get("response");
+    assertEquals(nDocs, numFound(followerQueryRsp));
 
     //compare results
-    String cmp = BaseDistributedSearchTestCase.compare(masterQueryResult, slaveQueryResult, 0, null);
+    String cmp = BaseDistributedSearchTestCase.compare(leaderQueryResult, followerQueryResult, 0, null);
     assertEquals(null, cmp);
     
-    assertVersions(masterClient, slaveClient);
+    assertVersions(leaderClient, followerClient);
 
     //start config files replication test
-    masterClient.deleteByQuery("*:*");
-    masterClient.commit();
+    leaderClient.deleteByQuery("*:*");
+    leaderClient.commit();
 
-    //change the schema on master
-    master.copyConfigFile(CONF_DIR + "schema-replication2.xml", "schema.xml");
+    //change the schema on leader
+    leader.copyConfigFile(CONF_DIR + "schema-replication2.xml", "schema.xml");
 
-    masterJetty.stop();
+    leaderJetty.stop();
 
-    masterJetty = createAndStartJetty(master);
-    masterClient.close();
-    masterClient = createNewSolrClient(masterJetty.getLocalPort());
+    leaderJetty = createAndStartJetty(leader);
+    leaderClient.close();
+    leaderClient = createNewSolrClient(leaderJetty.getLocalPort());
 
-    slave.setTestPort(masterJetty.getLocalPort());
-    slave.copyConfigFile(slave.getSolrConfigFile(), "solrconfig.xml");
+    follower.setTestPort(leaderJetty.getLocalPort());
+    follower.copyConfigFile(follower.getSolrConfigFile(), "solrconfig.xml");
 
-    slaveJetty.stop();
+    followerJetty.stop();
 
     // setup an xslt dir to force subdir file replication
-    File masterXsltDir = new File(master.getConfDir() + File.separator + "xslt");
-    File masterXsl = new File(masterXsltDir, "dummy.xsl");
-    assertTrue("could not make dir " + masterXsltDir, masterXsltDir.mkdirs());
-    assertTrue(masterXsl.createNewFile());
+    File leaderXsltDir = new File(leader.getConfDir() + File.separator + "xslt");
+    File leaderXsl = new File(leaderXsltDir, "dummy.xsl");
+    assertTrue("could not make dir " + leaderXsltDir, leaderXsltDir.mkdirs());
+    assertTrue(leaderXsl.createNewFile());
 
-    File slaveXsltDir = new File(slave.getConfDir() + File.separator + "xslt");
-    File slaveXsl = new File(slaveXsltDir, "dummy.xsl");
-    assertFalse(slaveXsltDir.exists());
+    File followerXsltDir = new File(follower.getConfDir() + File.separator + "xslt");
+    File followerXsl = new File(followerXsltDir, "dummy.xsl");
+    assertFalse(followerXsltDir.exists());
 
-    slaveJetty = createAndStartJetty(slave);
-    slaveClient.close();
-    slaveClient = createNewSolrClient(slaveJetty.getLocalPort());
-    //add a doc with new field and commit on master to trigger index fetch from slave.
-    index(masterClient, "id", "2000", "name", "name = " + 2000, "newname", "newname = " + 2000);
-    masterClient.commit();
+    followerJetty = createAndStartJetty(follower);
+    followerClient.close();
+    followerClient = createNewSolrClient(followerJetty.getLocalPort());
+    //add a doc with new field and commit on leader to trigger an index fetch by the follower.
+    index(leaderClient, "id", "2000", "name", "name = " + 2000, "newname", "newname = " + 2000);
+    leaderClient.commit();
 
-    assertEquals(1, numFound( rQuery(1, "*:*", masterClient)));
+    assertEquals(1, numFound( rQuery(1, "*:*", leaderClient)));
     
-    slaveQueryRsp = rQuery(1, "*:*", slaveClient);
-    assertVersions(masterClient, slaveClient);
-    SolrDocument d = ((SolrDocumentList) slaveQueryRsp.get("response")).get(0);
+    followerQueryRsp = rQuery(1, "*:*", followerClient);
+    assertVersions(leaderClient, followerClient);
+    SolrDocument d = ((SolrDocumentList) followerQueryRsp.get("response")).get(0);
     assertEquals("newname = 2000", (String) d.getFieldValue("newname"));
 
-    assertTrue(slaveXsltDir.isDirectory());
-    assertTrue(slaveXsl.exists());
+    assertTrue(followerXsltDir.isDirectory());
+    assertTrue(followerXsl.exists());
     
-    checkForSingleIndex(masterJetty);
-    checkForSingleIndex(slaveJetty, true);
+    checkForSingleIndex(leaderJetty);
+    checkForSingleIndex(followerJetty, true);
   }
 
   @Test
@@ -588,136 +623,136 @@
     clearIndexWithReplication();
 
     // Test:
-    // setup master/slave.
-    // stop polling on slave, add a doc to master and verify slave hasn't picked it.
+    // setup leader/follower.
+    // stop polling on follower, add a doc to leader and verify the follower hasn't picked it up.
     nDocs--;
     for (int i = 0; i < nDocs; i++)
-      index(masterClient, "id", i, "name", "name = " + i);
+      index(leaderClient, "id", i, "name", "name = " + i);
 
-    masterClient.commit();
+    leaderClient.commit();
 
     @SuppressWarnings({"rawtypes"})
-    NamedList masterQueryRsp = rQuery(nDocs, "*:*", masterClient);
-    SolrDocumentList masterQueryResult = (SolrDocumentList) masterQueryRsp.get("response");
-    assertEquals(nDocs, numFound(masterQueryRsp));
+    NamedList leaderQueryRsp = rQuery(nDocs, "*:*", leaderClient);
+    SolrDocumentList leaderQueryResult = (SolrDocumentList) leaderQueryRsp.get("response");
+    assertEquals(nDocs, numFound(leaderQueryRsp));
 
-    //get docs from slave and check if number is equal to master
+    //get docs from follower and check if number is equal to leader
     @SuppressWarnings({"rawtypes"})
-    NamedList slaveQueryRsp = rQuery(nDocs, "*:*", slaveClient);
-    SolrDocumentList slaveQueryResult = (SolrDocumentList) slaveQueryRsp.get("response");
-    assertEquals(nDocs, numFound(slaveQueryRsp));
+    NamedList followerQueryRsp = rQuery(nDocs, "*:*", followerClient);
+    SolrDocumentList followerQueryResult = (SolrDocumentList) followerQueryRsp.get("response");
+    assertEquals(nDocs, numFound(followerQueryRsp));
 
     //compare results
-    String cmp = BaseDistributedSearchTestCase.compare(masterQueryResult, slaveQueryResult, 0, null);
+    String cmp = BaseDistributedSearchTestCase.compare(leaderQueryResult, followerQueryResult, 0, null);
     assertEquals(null, cmp);
 
     // start stop polling test
-    invokeReplicationCommand(slaveJetty.getLocalPort(), "disablepoll");
+    invokeReplicationCommand(followerJetty.getLocalPort(), "disablepoll");
     
-    index(masterClient, "id", 501, "name", "name = " + 501);
-    masterClient.commit();
+    index(leaderClient, "id", 501, "name", "name = " + 501);
+    leaderClient.commit();
 
-    //get docs from master and check if number is equal to master
-    assertEquals(nDocs+1, numFound(rQuery(nDocs+1, "*:*", masterClient)));
+    //get docs from leader and verify the new doc is counted
+    assertEquals(nDocs+1, numFound(rQuery(nDocs+1, "*:*", leaderClient)));
     
     // NOTE: this test is weird; we want to verify it DOESN'T replicate...
     // for now, add a sleep for this, but the logic is weird.
     Thread.sleep(3000);
     
-    //get docs from slave and check if number is not equal to master; polling is disabled
-    assertEquals(nDocs, numFound(rQuery(nDocs, "*:*", slaveClient)));
+    //get docs from follower and check if number is not equal to leader; polling is disabled
+    assertEquals(nDocs, numFound(rQuery(nDocs, "*:*", followerClient)));
 
     // re-enable replication
-    invokeReplicationCommand(slaveJetty.getLocalPort(), "enablepoll");
+    invokeReplicationCommand(followerJetty.getLocalPort(), "enablepoll");
 
-    assertEquals(nDocs+1, numFound(rQuery(nDocs+1, "*:*", slaveClient)));
+    assertEquals(nDocs+1, numFound(rQuery(nDocs+1, "*:*", followerClient)));
   }
 
   /**
-   * We assert that if master is down for more than poll interval,
-   * the slave doesn't re-fetch the whole index from master again if
+   * We assert that if leader is down for more than poll interval,
+   * the follower doesn't re-fetch the whole index from leader again if
    * the index hasn't changed. See SOLR-9036
    */
   @Test
-  public void doTestIndexFetchOnMasterRestart() throws Exception  {
+  public void doTestIndexFetchOnLeaderRestart() throws Exception  {
     useFactory(null);
     try {
       clearIndexWithReplication();
-      // change solrconfig having 'replicateAfter startup' option on master
-      master.copyConfigFile(CONF_DIR + "solrconfig-master2.xml",
+      // change solrconfig having 'replicateAfter startup' option on leader
+      leader.copyConfigFile(CONF_DIR + "solrconfig-leader2.xml",
           "solrconfig.xml");
 
-      masterJetty.stop();
-      masterJetty.start();
+      leaderJetty.stop();
+      leaderJetty.start();
 
-      // close and re-create master client because its connection pool has stale connections
-      masterClient.close();
-      masterClient = createNewSolrClient(masterJetty.getLocalPort());
+      // close and re-create leader client because its connection pool has stale connections
+      leaderClient.close();
+      leaderClient = createNewSolrClient(leaderJetty.getLocalPort());
 
       nDocs--;
       for (int i = 0; i < nDocs; i++)
-        index(masterClient, "id", i, "name", "name = " + i);
+        index(leaderClient, "id", i, "name", "name = " + i);
 
-      masterClient.commit();
+      leaderClient.commit();
 
       @SuppressWarnings({"rawtypes"})
-      NamedList masterQueryRsp = rQuery(nDocs, "*:*", masterClient);
-      SolrDocumentList masterQueryResult = (SolrDocumentList) masterQueryRsp.get("response");
-      assertEquals(nDocs, numFound(masterQueryRsp));
+      NamedList leaderQueryRsp = rQuery(nDocs, "*:*", leaderClient);
+      SolrDocumentList leaderQueryResult = (SolrDocumentList) leaderQueryRsp.get("response");
+      assertEquals(nDocs, numFound(leaderQueryRsp));
 
-      //get docs from slave and check if number is equal to master
+      //get docs from follower and check if number is equal to leader
       @SuppressWarnings({"rawtypes"})
-      NamedList slaveQueryRsp = rQuery(nDocs, "*:*", slaveClient);
-      SolrDocumentList slaveQueryResult = (SolrDocumentList) slaveQueryRsp.get("response");
-      assertEquals(nDocs, numFound(slaveQueryRsp));
+      NamedList followerQueryRsp = rQuery(nDocs, "*:*", followerClient);
+      SolrDocumentList followerQueryResult = (SolrDocumentList) followerQueryRsp.get("response");
+      assertEquals(nDocs, numFound(followerQueryRsp));
 
       //compare results
-      String cmp = BaseDistributedSearchTestCase.compare(masterQueryResult, slaveQueryResult, 0, null);
+      String cmp = BaseDistributedSearchTestCase.compare(leaderQueryResult, followerQueryResult, 0, null);
       assertEquals(null, cmp);
 
-      String timesReplicatedString = getSlaveDetails("timesIndexReplicated");
+      String timesReplicatedString = getFollowerDetails("timesIndexReplicated");
       String timesFailed;
       Integer previousTimesFailed = null;
       if (timesReplicatedString == null) {
         timesFailed = "0";
       } else {
         int timesReplicated = Integer.parseInt(timesReplicatedString);
-        timesFailed = getSlaveDetails("timesFailed");
+        timesFailed = getFollowerDetails("timesFailed");
         if (null == timesFailed) {
           timesFailed = "0";
         }
 
         previousTimesFailed = Integer.parseInt(timesFailed);
-        // Sometimes replication will fail because master's core is still loading; make sure there was one success
+        // Sometimes replication will fail because leader's core is still loading; make sure there was one success
         assertEquals(1, timesReplicated - previousTimesFailed);
 
       }
 
-      masterJetty.stop();
+      leaderJetty.stop();
 
       final TimeOut waitForLeaderToShutdown = new TimeOut(300, TimeUnit.SECONDS, TimeSource.NANO_TIME);
       waitForLeaderToShutdown.waitFor
         ("Gave up after waiting an obscene amount of time for leader to shut down",
-         () -> masterJetty.isStopped() );
+         () -> leaderJetty.isStopped() );
         
       for(int retries=0; ;retries++) { 
 
         Thread.yield(); // might not be necessary at all
-        // poll interval on slave is 1 second, so we just sleep for a few seconds
+        // poll interval on follower is 1 second, so we just sleep for a few seconds
         Thread.sleep(2000);
         
-        NamedList<Object> slaveDetails=null;
+        NamedList<Object> followerDetails=null;
         try {
-          slaveDetails = getSlaveDetails();
-          int failed = Integer.parseInt(getStringOrNull(slaveDetails,"timesFailed"));
+          followerDetails = getFollowerDetails();
+          int failed = Integer.parseInt(getStringOrNull(followerDetails,"timesFailed"));
           if (previousTimesFailed != null) {
             assertTrue(failed > previousTimesFailed);
           }
-          assertEquals(1, Integer.parseInt(getStringOrNull(slaveDetails,"timesIndexReplicated")) - failed);
+          assertEquals(1, Integer.parseInt(getStringOrNull(followerDetails,"timesIndexReplicated")) - failed);
           break;
         } catch (NumberFormatException | AssertionError notYet) {
           if (log.isInfoEnabled()) {
-            log.info("{}th attempt failure on {} details are {}", retries + 1, notYet, slaveDetails); // logOk
+            log.info("attempt {} failed with {}; follower details: {}", retries + 1, notYet, followerDetails); // logOk
           }
           if (retries>9) {
             log.error("giving up: ", notYet);
@@ -726,22 +761,22 @@
         }
       }
       
-      masterJetty.start();
+      leaderJetty.start();
 
-      // poll interval on slave is 1 second, so we just sleep for a few seconds
+      // poll interval on follower is 1 second, so we just sleep for a few seconds
       Thread.sleep(2000);
-      //get docs from slave and assert that they are still the same as before
-      slaveQueryRsp = rQuery(nDocs, "*:*", slaveClient);
-      slaveQueryResult = (SolrDocumentList) slaveQueryRsp.get("response");
-      assertEquals(nDocs, numFound(slaveQueryRsp));
+      //get docs from follower and assert that they are still the same as before
+      followerQueryRsp = rQuery(nDocs, "*:*", followerClient);
+      followerQueryResult = (SolrDocumentList) followerQueryRsp.get("response");
+      assertEquals(nDocs, numFound(followerQueryRsp));
 
     } finally {
       resetFactory();
     }
   }
 
-  private String getSlaveDetails(String keyName) throws SolrServerException, IOException {
-    NamedList<Object> details = getSlaveDetails();
+  private String getFollowerDetails(String keyName) throws SolrServerException, IOException {
+    NamedList<Object> details = getFollowerDetails();
     return getStringOrNull(details, keyName);
   }
 
@@ -750,149 +785,159 @@
     return o != null ? o.toString() : null;
   }
 
-  private NamedList<Object> getSlaveDetails() throws SolrServerException, IOException {
+  private NamedList<Object> getFollowerDetails() throws SolrServerException, IOException {
     ModifiableSolrParams params = new ModifiableSolrParams();
     params.set(CommonParams.QT, "/replication");
     params.set("command", "details");
-    QueryResponse response = slaveClient.query(params);
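+    // Request follower-side details; "slave" is the pre-rename spelling kept for backward compatibility.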
+    if (useLegacyParams) {
+      params.set("slave", "true");
+    } else {
+      params.set("follower", "true");
+    }
+    QueryResponse response = followerClient.query(params);
 
-    // details/slave/timesIndexReplicated
+    // details/follower/timesIndexReplicated
     @SuppressWarnings({"unchecked"})
     NamedList<Object> details = (NamedList<Object>) response.getResponse().get("details");
     @SuppressWarnings({"unchecked"})
-    NamedList<Object> slave = (NamedList<Object>) details.get("slave");
-    return slave;
+    NamedList<Object> follower = (NamedList<Object>) details.get("follower");
+    return follower;
   }
 
   @Test
-  public void doTestIndexFetchWithMasterUrl() throws Exception {
-    //change solrconfig on slave
+  public void doTestIndexFetchWithLeaderUrl() throws Exception {
+    //change solrconfig on follower
     //this has no entry for pollinginterval
-    slave.setTestPort(masterJetty.getLocalPort());
-    slave.copyConfigFile(CONF_DIR + "solrconfig-slave1.xml", "solrconfig.xml");
-    slaveJetty.stop();
-    slaveJetty = createAndStartJetty(slave);
-    slaveClient.close();
-    slaveClient = createNewSolrClient(slaveJetty.getLocalPort());
+    follower.setTestPort(leaderJetty.getLocalPort());
+    follower.copyConfigFile(CONF_DIR + "solrconfig-follower1.xml", "solrconfig.xml");
+    followerJetty.stop();
+    followerJetty = createAndStartJetty(follower);
+    followerClient.close();
+    followerClient = createNewSolrClient(followerJetty.getLocalPort());
 
-    masterClient.deleteByQuery("*:*");
-    slaveClient.deleteByQuery("*:*");
-    slaveClient.commit();
+    leaderClient.deleteByQuery("*:*");
+    followerClient.deleteByQuery("*:*");
+    followerClient.commit();
     nDocs--;
     for (int i = 0; i < nDocs; i++)
-      index(masterClient, "id", i, "name", "name = " + i);
+      index(leaderClient, "id", i, "name", "name = " + i);
 
     // make sure prepareCommit doesn't mess up commit  (SOLR-3938)
     
     // todo: make SolrJ easier to pass arbitrary params to
     // TODO: precommit WILL screw with the rest of this test
 
-    masterClient.commit();
+    leaderClient.commit();
 
     @SuppressWarnings({"rawtypes"})
-    NamedList masterQueryRsp = rQuery(nDocs, "*:*", masterClient);
-    SolrDocumentList masterQueryResult = (SolrDocumentList) masterQueryRsp.get("response");
-    assertEquals(nDocs, masterQueryResult.getNumFound());
+    NamedList leaderQueryRsp = rQuery(nDocs, "*:*", leaderClient);
+    SolrDocumentList leaderQueryResult = (SolrDocumentList) leaderQueryRsp.get("response");
+    assertEquals(nDocs, leaderQueryResult.getNumFound());
+    
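+    // The fetchindex command accepts either the new "leaderUrl" or the legacy "masterUrl" parameter name.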
+    String urlKey = "leaderUrl";
+    if (useLegacyParams) {
+      urlKey = "masterUrl";
+    }
 
     // index fetch
-    String masterUrl = buildUrl(slaveJetty.getLocalPort()) + "/" + DEFAULT_TEST_CORENAME + ReplicationHandler.PATH+"?command=fetchindex&masterUrl=";
-    masterUrl += buildUrl(masterJetty.getLocalPort()) + "/" + DEFAULT_TEST_CORENAME + ReplicationHandler.PATH;
-    URL url = new URL(masterUrl);
+    String leaderUrl = buildUrl(followerJetty.getLocalPort()) + "/" + DEFAULT_TEST_CORENAME + ReplicationHandler.PATH+"?command=fetchindex&" + urlKey + "=";
+    leaderUrl += buildUrl(leaderJetty.getLocalPort()) + "/" + DEFAULT_TEST_CORENAME + ReplicationHandler.PATH;
+    URL url = new URL(leaderUrl);
     InputStream stream = url.openStream();
     stream.close();
     
-    //get docs from slave and check if number is equal to master
+    //get docs from follower and check if number is equal to leader
     @SuppressWarnings({"rawtypes"})
-    NamedList slaveQueryRsp = rQuery(nDocs, "*:*", slaveClient);
-    SolrDocumentList slaveQueryResult = (SolrDocumentList) slaveQueryRsp.get("response");
-    assertEquals(nDocs, slaveQueryResult.getNumFound());
+    NamedList followerQueryRsp = rQuery(nDocs, "*:*", followerClient);
+    SolrDocumentList followerQueryResult = (SolrDocumentList) followerQueryRsp.get("response");
+    assertEquals(nDocs, followerQueryResult.getNumFound());
     //compare results
-    String cmp = BaseDistributedSearchTestCase.compare(masterQueryResult, slaveQueryResult, 0, null);
+    String cmp = BaseDistributedSearchTestCase.compare(leaderQueryResult, followerQueryResult, 0, null);
     assertEquals(null, cmp);
 
-    // index fetch from the slave to the master
+    // index fetch from the follower to the leader
     
     for (int i = nDocs; i < nDocs + 3; i++)
-      index(slaveClient, "id", i, "name", "name = " + i);
+      index(followerClient, "id", i, "name", "name = " + i);
 
-    slaveClient.commit();
+    followerClient.commit();
     
-    pullFromSlaveToMaster();
-    rQuery(nDocs + 3, "*:*", masterClient);
+    pullFromFollowerToLeader();
+    rQuery(nDocs + 3, "*:*", leaderClient);
     
-    //get docs from slave and check if number is equal to master
-    slaveQueryRsp = rQuery(nDocs + 3, "*:*", slaveClient);
-    slaveQueryResult = (SolrDocumentList) slaveQueryRsp.get("response");
-    assertEquals(nDocs + 3, slaveQueryResult.getNumFound());
+    //get docs from follower and check if number is equal to leader
+    followerQueryRsp = rQuery(nDocs + 3, "*:*", followerClient);
+    followerQueryResult = (SolrDocumentList) followerQueryRsp.get("response");
+    assertEquals(nDocs + 3, followerQueryResult.getNumFound());
     //compare results
-    masterQueryRsp = rQuery(nDocs + 3, "*:*", masterClient);
-    masterQueryResult = (SolrDocumentList) masterQueryRsp.get("response");
-    cmp = BaseDistributedSearchTestCase.compare(masterQueryResult, slaveQueryResult, 0, null);
+    leaderQueryRsp = rQuery(nDocs + 3, "*:*", leaderClient);
+    leaderQueryResult = (SolrDocumentList) leaderQueryRsp.get("response");
+    cmp = BaseDistributedSearchTestCase.compare(leaderQueryResult, followerQueryResult, 0, null);
     assertEquals(null, cmp);
 
-    assertVersions(masterClient, slaveClient);
+    assertVersions(leaderClient, followerClient);
     
-    pullFromSlaveToMaster();
+    pullFromFollowerToLeader();
     
-    //get docs from slave and check if number is equal to master
-    slaveQueryRsp = rQuery(nDocs + 3, "*:*", slaveClient);
-    slaveQueryResult = (SolrDocumentList) slaveQueryRsp.get("response");
-    assertEquals(nDocs + 3, slaveQueryResult.getNumFound());
+    //get docs from follower and check if number is equal to leader
+    followerQueryRsp = rQuery(nDocs + 3, "*:*", followerClient);
+    followerQueryResult = (SolrDocumentList) followerQueryRsp.get("response");
+    assertEquals(nDocs + 3, followerQueryResult.getNumFound());
     //compare results
-    masterQueryRsp = rQuery(nDocs + 3, "*:*", masterClient);
-    masterQueryResult = (SolrDocumentList) masterQueryRsp.get("response");
-    cmp = BaseDistributedSearchTestCase.compare(masterQueryResult, slaveQueryResult, 0, null);
+    leaderQueryRsp = rQuery(nDocs + 3, "*:*", leaderClient);
+    leaderQueryResult = (SolrDocumentList) leaderQueryRsp.get("response");
+    cmp = BaseDistributedSearchTestCase.compare(leaderQueryResult, followerQueryResult, 0, null);
     assertEquals(null, cmp);
     
-    assertVersions(masterClient, slaveClient);
+    assertVersions(leaderClient, followerClient);
     
     // now force a new index directory
     for (int i = nDocs + 3; i < nDocs + 7; i++)
-      index(masterClient, "id", i, "name", "name = " + i);
+      index(leaderClient, "id", i, "name", "name = " + i);
     
-    masterClient.commit();
+    leaderClient.commit();
     
-    pullFromSlaveToMaster();
-    rQuery((int) slaveQueryResult.getNumFound(), "*:*", masterClient);
+    pullFromFollowerToLeader();
+    rQuery((int) followerQueryResult.getNumFound(), "*:*", leaderClient);
     
-    //get docs from slave and check if number is equal to master
-    slaveQueryRsp = rQuery(nDocs + 3, "*:*", slaveClient);
-    slaveQueryResult = (SolrDocumentList) slaveQueryRsp.get("response");
-    assertEquals(nDocs + 3, slaveQueryResult.getNumFound());
+    //get docs from follower and check if number is equal to leader
+    followerQueryRsp = rQuery(nDocs + 3, "*:*", followerClient);
+    followerQueryResult = (SolrDocumentList) followerQueryRsp.get("response");
+    assertEquals(nDocs + 3, followerQueryResult.getNumFound());
     //compare results
-    masterQueryRsp = rQuery(nDocs + 3, "*:*", masterClient);
-    masterQueryResult = (SolrDocumentList) masterQueryRsp.get("response");
-    cmp = BaseDistributedSearchTestCase.compare(masterQueryResult, slaveQueryResult, 0, null);
+    leaderQueryRsp = rQuery(nDocs + 3, "*:*", leaderClient);
+    leaderQueryResult = (SolrDocumentList) leaderQueryRsp.get("response");
+    cmp = BaseDistributedSearchTestCase.compare(leaderQueryResult, followerQueryResult, 0, null);
     assertEquals(null, cmp);
     
-    assertVersions(masterClient, slaveClient);
-    pullFromSlaveToMaster();
+    assertVersions(leaderClient, followerClient);
+    pullFromFollowerToLeader();
     
-    //get docs from slave and check if number is equal to master
-    slaveQueryRsp = rQuery(nDocs + 3, "*:*", slaveClient);
-    slaveQueryResult = (SolrDocumentList) slaveQueryRsp.get("response");
-    assertEquals(nDocs + 3, slaveQueryResult.getNumFound());
+    //get docs from follower and check if number is equal to leader
+    followerQueryRsp = rQuery(nDocs + 3, "*:*", followerClient);
+    followerQueryResult = (SolrDocumentList) followerQueryRsp.get("response");
+    assertEquals(nDocs + 3, followerQueryResult.getNumFound());
     //compare results
-    masterQueryRsp = rQuery(nDocs + 3, "*:*", masterClient);
-    masterQueryResult = (SolrDocumentList) masterQueryRsp.get("response");
-    cmp = BaseDistributedSearchTestCase.compare(masterQueryResult, slaveQueryResult, 0, null);
+    leaderQueryRsp = rQuery(nDocs + 3, "*:*", leaderClient);
+    leaderQueryResult = (SolrDocumentList) leaderQueryRsp.get("response");
+    cmp = BaseDistributedSearchTestCase.compare(leaderQueryResult, followerQueryResult, 0, null);
     assertEquals(null, cmp);
     
-    assertVersions(masterClient, slaveClient);
+    assertVersions(leaderClient, followerClient);
     
-    NamedList<Object> details = getDetails(masterClient);
+    NamedList<Object> details = getDetails(leaderClient);
    
-    details = getDetails(slaveClient);
+    details = getDetails(followerClient);
     
-    checkForSingleIndex(masterJetty);
-    checkForSingleIndex(slaveJetty);
+    checkForSingleIndex(leaderJetty);
+    checkForSingleIndex(followerJetty);
   }
   
   
   @Test
   //commented 20-Sep-2018  @BadApple(bugUrl="https://issues.apache.org/jira/browse/SOLR-12028") // added 17-Aug-2018
   public void doTestStressReplication() throws Exception {
-    // change solrconfig on slave
+    // change solrconfig on follower
     // this has no entry for pollinginterval
     
     // get us a straight standard fs dir rather than mock*dir
@@ -901,30 +946,30 @@
     if (useStraightStandardDirectory) {
       useFactory(null);
     }
-    final String SLAVE_SCHEMA_1 = "schema-replication1.xml";
-    final String SLAVE_SCHEMA_2 = "schema-replication2.xml";
-    String slaveSchema = SLAVE_SCHEMA_1;
+    final String FOLLOWER_SCHEMA_1 = "schema-replication1.xml";
+    final String FOLLOWER_SCHEMA_2 = "schema-replication2.xml";
+    String followerSchema = FOLLOWER_SCHEMA_1;
 
     try {
 
-      slave.setTestPort(masterJetty.getLocalPort());
-      slave.copyConfigFile(CONF_DIR +"solrconfig-slave1.xml", "solrconfig.xml");
-      slave.copyConfigFile(CONF_DIR +slaveSchema, "schema.xml");
-      slaveJetty.stop();
-      slaveJetty = createAndStartJetty(slave);
-      slaveClient.close();
-      slaveClient = createNewSolrClient(slaveJetty.getLocalPort());
+      follower.setTestPort(leaderJetty.getLocalPort());
+      follower.copyConfigFile(CONF_DIR +"solrconfig-follower1.xml", "solrconfig.xml");
+      follower.copyConfigFile(CONF_DIR +followerSchema, "schema.xml");
+      followerJetty.stop();
+      followerJetty = createAndStartJetty(follower);
+      followerClient.close();
+      followerClient = createNewSolrClient(followerJetty.getLocalPort());
 
-      master.copyConfigFile(CONF_DIR + "solrconfig-master3.xml",
+      leader.copyConfigFile(CONF_DIR + "solrconfig-leader3.xml",
           "solrconfig.xml");
-      masterJetty.stop();
-      masterJetty = createAndStartJetty(master);
-      masterClient.close();
-      masterClient = createNewSolrClient(masterJetty.getLocalPort());
+      leaderJetty.stop();
+      leaderJetty = createAndStartJetty(leader);
+      leaderClient.close();
+      leaderClient = createNewSolrClient(leaderJetty.getLocalPort());
       
-      masterClient.deleteByQuery("*:*");
-      slaveClient.deleteByQuery("*:*");
-      slaveClient.commit();
+      leaderClient.deleteByQuery("*:*");
+      followerClient.deleteByQuery("*:*");
+      followerClient.commit();
       
       int maxDocs = TEST_NIGHTLY ? 1000 : 75;
       int rounds = TEST_NIGHTLY ? 45 : 3;
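+      // Nightly runs stress far more docs and rounds; the defaults keep local runs fast.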
@@ -936,57 +981,57 @@
         if (confCoreReload) {
           // toggle the schema file used
 
-          slaveSchema = slaveSchema.equals(SLAVE_SCHEMA_1) ? 
-            SLAVE_SCHEMA_2 : SLAVE_SCHEMA_1;
-          master.copyConfigFile(CONF_DIR + slaveSchema, "schema.xml");
+          followerSchema = followerSchema.equals(FOLLOWER_SCHEMA_1) ?
+            FOLLOWER_SCHEMA_2 : FOLLOWER_SCHEMA_1;
+          leader.copyConfigFile(CONF_DIR + followerSchema, "schema.xml");
         }
         
         int docs = random().nextInt(maxDocs) + 1;
         for (int i = 0; i < docs; i++) {
-          index(masterClient, "id", id++, "name", "name = " + i);
+          index(leaderClient, "id", id++, "name", "name = " + i);
         }
         
         totalDocs += docs;
-        masterClient.commit();
+        leaderClient.commit();
         
         @SuppressWarnings({"rawtypes"})
-        NamedList masterQueryRsp = rQuery(totalDocs, "*:*", masterClient);
-        SolrDocumentList masterQueryResult = (SolrDocumentList) masterQueryRsp
+        NamedList leaderQueryRsp = rQuery(totalDocs, "*:*", leaderClient);
+        SolrDocumentList leaderQueryResult = (SolrDocumentList) leaderQueryRsp
             .get("response");
-        assertEquals(totalDocs, masterQueryResult.getNumFound());
+        assertEquals(totalDocs, leaderQueryResult.getNumFound());
         
         // index fetch
-        Date slaveCoreStart = watchCoreStartAt(slaveClient, 30*1000, null);
-        pullFromMasterToSlave();
+        Date followerCoreStart = watchCoreStartAt(followerClient, 30*1000, null);
+        pullFromLeaderToFollower();
         if (confCoreReload) {
-          watchCoreStartAt(slaveClient, 30*1000, slaveCoreStart);
+          watchCoreStartAt(followerClient, 30*1000, followerCoreStart);
         }
 
-        // get docs from slave and check if number is equal to master
+        // get docs from follower and check if number is equal to leader
         @SuppressWarnings({"rawtypes"})
-        NamedList slaveQueryRsp = rQuery(totalDocs, "*:*", slaveClient);
-        SolrDocumentList slaveQueryResult = (SolrDocumentList) slaveQueryRsp
+        NamedList followerQueryRsp = rQuery(totalDocs, "*:*", followerClient);
+        SolrDocumentList followerQueryResult = (SolrDocumentList) followerQueryRsp
             .get("response");
-        assertEquals(totalDocs, slaveQueryResult.getNumFound());
+        assertEquals(totalDocs, followerQueryResult.getNumFound());
         // compare results
-        String cmp = BaseDistributedSearchTestCase.compare(masterQueryResult,
-            slaveQueryResult, 0, null);
+        String cmp = BaseDistributedSearchTestCase.compare(leaderQueryResult,
+            followerQueryResult, 0, null);
         assertEquals(null, cmp);
         
-        assertVersions(masterClient, slaveClient);
+        assertVersions(leaderClient, followerClient);
         
-        checkForSingleIndex(masterJetty);
+        checkForSingleIndex(leaderJetty);
         
         if (!Constants.WINDOWS) {
-          checkForSingleIndex(slaveJetty);
+          checkForSingleIndex(followerJetty);
         }
         
         if (random().nextBoolean()) {
-          // move the slave ahead
+          // move the follower ahead
           for (int i = 0; i < 3; i++) {
-            index(slaveClient, "id", id++, "name", "name = " + i);
+            index(followerClient, "id", id++, "name", "name = " + i);
           }
-          slaveClient.commit();
+          followerClient.commit();
         }
         
       }
@@ -1050,23 +1095,23 @@
     return list.length;
   }
 
-  private void pullFromMasterToSlave() throws MalformedURLException,
+  private void pullFromLeaderToFollower() throws MalformedURLException,
       IOException {
-    pullFromTo(masterJetty, slaveJetty);
+    pullFromTo(leaderJetty, followerJetty);
   }
   
   @Test
   public void doTestRepeater() throws Exception {
     // no polling
-    slave.setTestPort(masterJetty.getLocalPort());
-    slave.copyConfigFile(CONF_DIR + "solrconfig-slave1.xml", "solrconfig.xml");
-    slaveJetty.stop();
-    slaveJetty = createAndStartJetty(slave);
-    slaveClient.close();
-    slaveClient = createNewSolrClient(slaveJetty.getLocalPort());
+    follower.setTestPort(leaderJetty.getLocalPort());
+    follower.copyConfigFile(CONF_DIR + "solrconfig-follower1.xml", "solrconfig.xml");
+    followerJetty.stop();
+    followerJetty = createAndStartJetty(follower);
+    followerClient.close();
+    followerClient = createNewSolrClient(followerJetty.getLocalPort());
 
     try {
-      repeater = new SolrInstance(createTempDir("solr-instance").toFile(), "repeater", masterJetty.getLocalPort());
+      repeater = new SolrInstance(createTempDir("solr-instance").toFile(), "repeater", leaderJetty.getLocalPort());
       repeater.setUp();
       repeater.copyConfigFile(CONF_DIR + "solrconfig-repeater.xml",
           "solrconfig.xml");
@@ -1077,45 +1122,45 @@
       repeaterClient = createNewSolrClient(repeaterJetty.getLocalPort());
       
       for (int i = 0; i < 3; i++)
-        index(masterClient, "id", i, "name", "name = " + i);
+        index(leaderClient, "id", i, "name", "name = " + i);
 
-      masterClient.commit();
+      leaderClient.commit();
       
-      pullFromTo(masterJetty, repeaterJetty);
+      pullFromTo(leaderJetty, repeaterJetty);
       
       rQuery(3, "*:*", repeaterClient);
       
-      pullFromTo(repeaterJetty, slaveJetty);
+      pullFromTo(repeaterJetty, followerJetty);
       
-      rQuery(3, "*:*", slaveClient);
+      rQuery(3, "*:*", followerClient);
       
-      assertVersions(masterClient, repeaterClient);
-      assertVersions(repeaterClient, slaveClient);
+      assertVersions(leaderClient, repeaterClient);
+      assertVersions(repeaterClient, followerClient);
       
       for (int i = 0; i < 4; i++)
         index(repeaterClient, "id", i, "name", "name = " + i);
       repeaterClient.commit();
       
-      pullFromTo(masterJetty, repeaterJetty);
+      pullFromTo(leaderJetty, repeaterJetty);
       
       rQuery(3, "*:*", repeaterClient);
       
-      pullFromTo(repeaterJetty, slaveJetty);
+      pullFromTo(repeaterJetty, followerJetty);
       
-      rQuery(3, "*:*", slaveClient);
+      rQuery(3, "*:*", followerClient);
       
       for (int i = 3; i < 6; i++)
-        index(masterClient, "id", i, "name", "name = " + i);
+        index(leaderClient, "id", i, "name", "name = " + i);
       
-      masterClient.commit();
+      leaderClient.commit();
       
-      pullFromTo(masterJetty, repeaterJetty);
+      pullFromTo(leaderJetty, repeaterJetty);
       
       rQuery(6, "*:*", repeaterClient);
       
-      pullFromTo(repeaterJetty, slaveJetty);
+      pullFromTo(repeaterJetty, followerJetty);
       
-      rQuery(6, "*:*", slaveClient);
+      rQuery(6, "*:*", followerClient);
 
     } finally {
       if (repeater != null) {
@@ -1164,82 +1209,82 @@
     ArrayList<NamedList<Object>> commits;
     details = getDetails(client);
     commits = (ArrayList<NamedList<Object>>) details.get("commits");
-    Long maxVersionSlave= 0L;
+    Long maxVersionFollower= 0L;
     for(NamedList<Object> commit : commits) {
       Long version = (Long) commit.get("indexVersion");
-      maxVersionSlave = Math.max(version, maxVersionSlave);
+      maxVersionFollower = Math.max(version, maxVersionFollower);
     }
-    return maxVersionSlave;
+    return maxVersionFollower;
   }
 
-  private void pullFromSlaveToMaster() throws MalformedURLException,
+  private void pullFromFollowerToLeader() throws MalformedURLException,
       IOException {
-    pullFromTo(slaveJetty, masterJetty);
+    pullFromTo(followerJetty, leaderJetty);
   }
   
   private void pullFromTo(JettySolrRunner from, JettySolrRunner to) throws IOException {
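+    // Triggers command=fetchindex on "to", pointing leaderUrl at "from"; wait=true asks the handler to block until the fetch completes.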
-    String masterUrl;
+    String leaderUrl;
     URL url;
     InputStream stream;
-    masterUrl = buildUrl(to.getLocalPort())
+    leaderUrl = buildUrl(to.getLocalPort())
         + "/" + DEFAULT_TEST_CORENAME
-        + ReplicationHandler.PATH+"?wait=true&command=fetchindex&masterUrl="
+        + ReplicationHandler.PATH+"?wait=true&command=fetchindex&leaderUrl="
         + buildUrl(from.getLocalPort())
         + "/" + DEFAULT_TEST_CORENAME + ReplicationHandler.PATH;
-    url = new URL(masterUrl);
+    url = new URL(leaderUrl);
     stream = url.openStream();
     stream.close();
   }
 
   @Test
   public void doTestReplicateAfterStartup() throws Exception {
-    //stop slave
-    slaveJetty.stop();
+    //stop follower
+    followerJetty.stop();
 
     nDocs--;
-    masterClient.deleteByQuery("*:*");
+    leaderClient.deleteByQuery("*:*");
 
-    masterClient.commit();
+    leaderClient.commit();
 
 
 
-    //change solrconfig having 'replicateAfter startup' option on master
-    master.copyConfigFile(CONF_DIR + "solrconfig-master2.xml",
+    //change solrconfig having 'replicateAfter startup' option on leader
+    leader.copyConfigFile(CONF_DIR + "solrconfig-leader2.xml",
                           "solrconfig.xml");
 
-    masterJetty.stop();
+    leaderJetty.stop();
 
-    masterJetty = createAndStartJetty(master);
-    masterClient.close();
-    masterClient = createNewSolrClient(masterJetty.getLocalPort());
+    leaderJetty = createAndStartJetty(leader);
+    leaderClient.close();
+    leaderClient = createNewSolrClient(leaderJetty.getLocalPort());
     
     for (int i = 0; i < nDocs; i++)
-      index(masterClient, "id", i, "name", "name = " + i);
+      index(leaderClient, "id", i, "name", "name = " + i);
 
-    masterClient.commit();
+    leaderClient.commit();
     
     @SuppressWarnings({"rawtypes"})
-    NamedList masterQueryRsp = rQuery(nDocs, "*:*", masterClient);
-    SolrDocumentList masterQueryResult = (SolrDocumentList) masterQueryRsp.get("response");
-    assertEquals(nDocs, masterQueryResult.getNumFound());
+    NamedList leaderQueryRsp = rQuery(nDocs, "*:*", leaderClient);
+    SolrDocumentList leaderQueryResult = (SolrDocumentList) leaderQueryRsp.get("response");
+    assertEquals(nDocs, leaderQueryResult.getNumFound());
     
 
-    slave.setTestPort(masterJetty.getLocalPort());
-    slave.copyConfigFile(slave.getSolrConfigFile(), "solrconfig.xml");
+    follower.setTestPort(leaderJetty.getLocalPort());
+    follower.copyConfigFile(follower.getSolrConfigFile(), "solrconfig.xml");
 
-    //start slave
-    slaveJetty = createAndStartJetty(slave);
-    slaveClient.close();
-    slaveClient = createNewSolrClient(slaveJetty.getLocalPort());
+    //start follower
+    followerJetty = createAndStartJetty(follower);
+    followerClient.close();
+    followerClient = createNewSolrClient(followerJetty.getLocalPort());
 
-    //get docs from slave and check if number is equal to master
+    //get docs from follower and check if number is equal to leader
     @SuppressWarnings({"rawtypes"})
-    NamedList slaveQueryRsp = rQuery(nDocs, "*:*", slaveClient);
-    SolrDocumentList slaveQueryResult = (SolrDocumentList) slaveQueryRsp.get("response");
-    assertEquals(nDocs, slaveQueryResult.getNumFound());
+    NamedList followerQueryRsp = rQuery(nDocs, "*:*", followerClient);
+    SolrDocumentList followerQueryResult = (SolrDocumentList) followerQueryRsp.get("response");
+    assertEquals(nDocs, followerQueryResult.getNumFound());
 
     //compare results
-    String cmp = BaseDistributedSearchTestCase.compare(masterQueryResult, slaveQueryResult, 0, null);
+    String cmp = BaseDistributedSearchTestCase.compare(leaderQueryResult, followerQueryResult, 0, null);
     assertEquals(null, cmp);
 
   }
@@ -1249,61 +1294,61 @@
     useFactory(null);
     try {
       
-      // stop slave
-      slaveJetty.stop();
+      // stop follower
+      followerJetty.stop();
       
       nDocs--;
-      masterClient.deleteByQuery("*:*");
+      leaderClient.deleteByQuery("*:*");
       
-      masterClient.commit();
+      leaderClient.commit();
       
-      // change solrconfig having 'replicateAfter startup' option on master
-      master.copyConfigFile(CONF_DIR + "solrconfig-master2.xml",
+      // switch to a solrconfig that has the 'replicateAfter startup' option on the leader
+      leader.copyConfigFile(CONF_DIR + "solrconfig-leader2.xml",
           "solrconfig.xml");
       
-      masterJetty.stop();
+      leaderJetty.stop();
       
-      masterJetty = createAndStartJetty(master);
-      masterClient.close();
-      masterClient = createNewSolrClient(masterJetty.getLocalPort());
+      leaderJetty = createAndStartJetty(leader);
+      leaderClient.close();
+      leaderClient = createNewSolrClient(leaderJetty.getLocalPort());
       
       for (int i = 0; i < nDocs; i++)
-        index(masterClient, "id", i, "name", "name = " + i);
+        index(leaderClient, "id", i, "name", "name = " + i);
       
-      masterClient.commit();
+      leaderClient.commit();
       
-      // now we restart to test what happens with no activity before the slave
+      // now we restart to test what happens with no activity before the follower
       // tries to
       // replicate
-      masterJetty.stop();
-      masterJetty.start();
+      leaderJetty.stop();
+      leaderJetty.start();
       
-      // masterClient = createNewSolrClient(masterJetty.getLocalPort());
+      // leaderClient = createNewSolrClient(leaderJetty.getLocalPort());
       
       @SuppressWarnings({"rawtypes"})
-      NamedList masterQueryRsp = rQuery(nDocs, "*:*", masterClient);
-      SolrDocumentList masterQueryResult = (SolrDocumentList) masterQueryRsp
+      NamedList leaderQueryRsp = rQuery(nDocs, "*:*", leaderClient);
+      SolrDocumentList leaderQueryResult = (SolrDocumentList) leaderQueryRsp
           .get("response");
-      assertEquals(nDocs, masterQueryResult.getNumFound());
+      assertEquals(nDocs, leaderQueryResult.getNumFound());
       
-      slave.setTestPort(masterJetty.getLocalPort());
-      slave.copyConfigFile(slave.getSolrConfigFile(), "solrconfig.xml");
+      follower.setTestPort(leaderJetty.getLocalPort());
+      follower.copyConfigFile(follower.getSolrConfigFile(), "solrconfig.xml");
       
-      // start slave
-      slaveJetty = createAndStartJetty(slave);
-      slaveClient.close();
-      slaveClient = createNewSolrClient(slaveJetty.getLocalPort());
+      // start follower
+      followerJetty = createAndStartJetty(follower);
+      followerClient.close();
+      followerClient = createNewSolrClient(followerJetty.getLocalPort());
       
-      // get docs from slave and check if number is equal to master
+      // get docs from follower and check if number is equal to leader
       @SuppressWarnings({"rawtypes"})
-      NamedList slaveQueryRsp = rQuery(nDocs, "*:*", slaveClient);
-      SolrDocumentList slaveQueryResult = (SolrDocumentList) slaveQueryRsp
+      NamedList followerQueryRsp = rQuery(nDocs, "*:*", followerClient);
+      SolrDocumentList followerQueryResult = (SolrDocumentList) followerQueryRsp
           .get("response");
-      assertEquals(nDocs, slaveQueryResult.getNumFound());
+      assertEquals(nDocs, followerQueryResult.getNumFound());
       
       // compare results
-      String cmp = BaseDistributedSearchTestCase.compare(masterQueryResult,
-          slaveQueryResult, 0, null);
+      String cmp = BaseDistributedSearchTestCase.compare(leaderQueryResult,
+          followerQueryResult, 0, null);
       assertEquals(null, cmp);
       
     } finally {
@@ -1315,69 +1360,69 @@
   public void doTestReplicateAfterCoreReload() throws Exception {
     int docs = TEST_NIGHTLY ? 200000 : 10;
     
-    //stop slave
-    slaveJetty.stop();
+    //stop follower
+    followerJetty.stop();
 
 
-    //change solrconfig having 'replicateAfter startup' option on master
-    master.copyConfigFile(CONF_DIR + "solrconfig-master3.xml",
+    //switch to a solrconfig that has the 'replicateAfter startup' option on the leader
+    leader.copyConfigFile(CONF_DIR + "solrconfig-leader3.xml",
                           "solrconfig.xml");
 
-    masterJetty.stop();
+    leaderJetty.stop();
 
-    masterJetty = createAndStartJetty(master);
-    masterClient.close();
-    masterClient = createNewSolrClient(masterJetty.getLocalPort());
+    leaderJetty = createAndStartJetty(leader);
+    leaderClient.close();
+    leaderClient = createNewSolrClient(leaderJetty.getLocalPort());
 
-    masterClient.deleteByQuery("*:*");
+    leaderClient.deleteByQuery("*:*");
     for (int i = 0; i < docs; i++)
-      index(masterClient, "id", i, "name", "name = " + i);
+      index(leaderClient, "id", i, "name", "name = " + i);
 
-    masterClient.commit();
+    leaderClient.commit();
 
     @SuppressWarnings({"rawtypes"})
-    NamedList masterQueryRsp = rQuery(docs, "*:*", masterClient);
-    SolrDocumentList masterQueryResult = (SolrDocumentList) masterQueryRsp.get("response");
-    assertEquals(docs, masterQueryResult.getNumFound());
+    NamedList leaderQueryRsp = rQuery(docs, "*:*", leaderClient);
+    SolrDocumentList leaderQueryResult = (SolrDocumentList) leaderQueryRsp.get("response");
+    assertEquals(docs, leaderQueryResult.getNumFound());
     
-    slave.setTestPort(masterJetty.getLocalPort());
-    slave.copyConfigFile(slave.getSolrConfigFile(), "solrconfig.xml");
+    follower.setTestPort(leaderJetty.getLocalPort());
+    follower.copyConfigFile(follower.getSolrConfigFile(), "solrconfig.xml");
 
-    //start slave
-    slaveJetty = createAndStartJetty(slave);
-    slaveClient.close();
-    slaveClient = createNewSolrClient(slaveJetty.getLocalPort());
+    //start follower
+    followerJetty = createAndStartJetty(follower);
+    followerClient.close();
+    followerClient = createNewSolrClient(followerJetty.getLocalPort());
     
-    //get docs from slave and check if number is equal to master
+    //get docs from follower and check if number is equal to leader
     @SuppressWarnings({"rawtypes"})
-    NamedList slaveQueryRsp = rQuery(docs, "*:*", slaveClient);
-    SolrDocumentList slaveQueryResult = (SolrDocumentList) slaveQueryRsp.get("response");
-    assertEquals(docs, slaveQueryResult.getNumFound());
+    NamedList followerQueryRsp = rQuery(docs, "*:*", followerClient);
+    SolrDocumentList followerQueryResult = (SolrDocumentList) followerQueryRsp.get("response");
+    assertEquals(docs, followerQueryResult.getNumFound());
     
     //compare results
-    String cmp = BaseDistributedSearchTestCase.compare(masterQueryResult, slaveQueryResult, 0, null);
+    String cmp = BaseDistributedSearchTestCase.compare(leaderQueryResult, followerQueryResult, 0, null);
     assertEquals(null, cmp);
     
-    Object version = getIndexVersion(masterClient).get("indexversion");
+    Object version = getIndexVersion(leaderClient).get("indexversion");
     
-    reloadCore(masterClient, "collection1");
+    reloadCore(leaderClient, "collection1");
     
-    assertEquals(version, getIndexVersion(masterClient).get("indexversion"));
+    assertEquals(version, getIndexVersion(leaderClient).get("indexversion"));
     
-    index(masterClient, "id", docs + 10, "name", "name = 1");
-    index(masterClient, "id", docs + 20, "name", "name = 2");
+    index(leaderClient, "id", docs + 10, "name", "name = 1");
+    index(leaderClient, "id", docs + 20, "name", "name = 2");
 
-    masterClient.commit();
+    leaderClient.commit();
     
     @SuppressWarnings({"rawtypes"})
-    NamedList resp =  rQuery(docs + 2, "*:*", masterClient);
-    masterQueryResult = (SolrDocumentList) resp.get("response");
-    assertEquals(docs + 2, masterQueryResult.getNumFound());
+    NamedList resp = rQuery(docs + 2, "*:*", leaderClient);
+    leaderQueryResult = (SolrDocumentList) resp.get("response");
+    assertEquals(docs + 2, leaderQueryResult.getNumFound());
     
-    //get docs from slave and check if number is equal to master
-    slaveQueryRsp = rQuery(docs + 2, "*:*", slaveClient);
-    slaveQueryResult = (SolrDocumentList) slaveQueryRsp.get("response");
-    assertEquals(docs + 2, slaveQueryResult.getNumFound());
+    //get docs from follower and check if number is equal to leader
+    followerQueryRsp = rQuery(docs + 2, "*:*", followerClient);
+    followerQueryResult = (SolrDocumentList) followerQueryRsp.get("response");
+    assertEquals(docs + 2, followerQueryResult.getNumFound());
     
   }
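
The indexversion assertions above presumably go through the replication handler's indexversion command, which reports the latest replicable commit point as "indexversion" and "generation" entries. A hedged SolrJ sketch of that lookup, assuming a client already pointed at the core:

```java
// Sketch: read the replicable index version via the replication handler.
// The SolrClient wiring is assumed; the test's own helper is getIndexVersion().
import org.apache.solr.client.solrj.SolrClient;
import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.response.QueryResponse;

public class IndexVersionSketch {
  static long readIndexVersion(SolrClient client) throws Exception {
    SolrQuery q = new SolrQuery();
    q.add("qt", "/replication")
        .add("command", "indexversion");
    QueryResponse rsp = client.query(q);
    // The handler returns "indexversion" and "generation" entries.
    return (Long) rsp.getResponse().get("indexversion");
  }
}
```
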
 
@@ -1388,117 +1433,117 @@
 
     nDocs--;
     for (int i = 0; i < nDocs; i++)
-      index(masterClient, "id", i, "name", "name = " + i);
+      index(leaderClient, "id", i, "name", "name = " + i);
 
-    masterClient.commit();
+    leaderClient.commit();
 
     @SuppressWarnings({"rawtypes"})
-    NamedList masterQueryRsp = rQuery(nDocs, "*:*", masterClient);
-    SolrDocumentList masterQueryResult = (SolrDocumentList) masterQueryRsp.get("response");
-    assertEquals(nDocs, masterQueryResult.getNumFound());
+    NamedList leaderQueryRsp = rQuery(nDocs, "*:*", leaderClient);
+    SolrDocumentList leaderQueryResult = (SolrDocumentList) leaderQueryRsp.get("response");
+    assertEquals(nDocs, leaderQueryResult.getNumFound());
 
-    //get docs from slave and check if number is equal to master
+    //get docs from follower and check if number is equal to leader
     @SuppressWarnings({"rawtypes"})
-    NamedList slaveQueryRsp = rQuery(nDocs, "*:*", slaveClient);
-    SolrDocumentList slaveQueryResult = (SolrDocumentList) slaveQueryRsp.get("response");
+    NamedList followerQueryRsp = rQuery(nDocs, "*:*", followerClient);
+    SolrDocumentList followerQueryResult = (SolrDocumentList) followerQueryRsp.get("response");
 
-    assertEquals(nDocs, slaveQueryResult.getNumFound());
+    assertEquals(nDocs, followerQueryResult.getNumFound());
 
     //compare results
-    String cmp = BaseDistributedSearchTestCase.compare(masterQueryResult, slaveQueryResult, 0, null);
+    String cmp = BaseDistributedSearchTestCase.compare(leaderQueryResult, followerQueryResult, 0, null);
     assertEquals(null, cmp);
 
     //start config files replication test
-    //clear master index
-    masterClient.deleteByQuery("*:*");
-    masterClient.commit();
-    rQuery(0, "*:*", masterClient); // sanity check w/retry
+    //clear leader index
+    leaderClient.deleteByQuery("*:*");
+    leaderClient.commit();
+    rQuery(0, "*:*", leaderClient); // sanity check w/retry
 
-    //change solrconfig on master
-    master.copyConfigFile(CONF_DIR + "solrconfig-master1.xml", 
+    //change solrconfig on leader
+    leader.copyConfigFile(CONF_DIR + "solrconfig-leader1.xml",
                           "solrconfig.xml");
 
-    //change schema on master
-    master.copyConfigFile(CONF_DIR + "schema-replication2.xml", 
+    //change schema on leader
+    leader.copyConfigFile(CONF_DIR + "schema-replication2.xml",
                           "schema.xml");
 
     //keep a copy of the new schema
-    master.copyConfigFile(CONF_DIR + "schema-replication2.xml", 
+    leader.copyConfigFile(CONF_DIR + "schema-replication2.xml",
                           "schema-replication2.xml");
 
-    masterJetty.stop();
+    leaderJetty.stop();
 
-    masterJetty = createAndStartJetty(master);
-    masterClient.close();
-    masterClient = createNewSolrClient(masterJetty.getLocalPort());
+    leaderJetty = createAndStartJetty(leader);
+    leaderClient.close();
+    leaderClient = createNewSolrClient(leaderJetty.getLocalPort());
 
-    slave.setTestPort(masterJetty.getLocalPort());
-    slave.copyConfigFile(slave.getSolrConfigFile(), "solrconfig.xml");
+    follower.setTestPort(leaderJetty.getLocalPort());
+    follower.copyConfigFile(follower.getSolrConfigFile(), "solrconfig.xml");
 
-    slaveJetty.stop();
-    slaveJetty = createAndStartJetty(slave);
-    slaveClient.close();
-    slaveClient = createNewSolrClient(slaveJetty.getLocalPort());
+    followerJetty.stop();
+    followerJetty = createAndStartJetty(follower);
+    followerClient.close();
+    followerClient = createNewSolrClient(followerJetty.getLocalPort());
 
-    slaveClient.deleteByQuery("*:*");
-    slaveClient.commit();
-    rQuery(0, "*:*", slaveClient); // sanity check w/retry
+    followerClient.deleteByQuery("*:*");
+    followerClient.commit();
+    rQuery(0, "*:*", followerClient); // sanity check w/retry
     
-    // record collection1's start time on slave
-    final Date slaveStartTime = watchCoreStartAt(slaveClient, 30*1000, null);
+    // record collection1's start time on follower
+    final Date followerStartTime = watchCoreStartAt(followerClient, 30*1000, null);
 
-    //add a doc with new field and commit on master to trigger index fetch from slave.
-    index(masterClient, "id", "2000", "name", "name = " + 2000, "newname", "n2000");
-    masterClient.commit();
-    rQuery(1, "newname:n2000", masterClient);  // sanity check
+    //add a doc with a new field and commit on the leader to trigger an index fetch by the follower.
+    index(leaderClient, "id", "2000", "name", "name = " + 2000, "newname", "n2000");
+    leaderClient.commit();
+    rQuery(1, "newname:n2000", leaderClient);  // sanity check
 
-    // wait for slave to reload core by watching updated startTime
-    watchCoreStartAt(slaveClient, 30*1000, slaveStartTime);
+    // wait for follower to reload core by watching updated startTime
+    watchCoreStartAt(followerClient, 30*1000, followerStartTime);
 
     @SuppressWarnings({"rawtypes"})
-    NamedList masterQueryRsp2 = rQuery(1, "id:2000", masterClient);
-    SolrDocumentList masterQueryResult2 = (SolrDocumentList) masterQueryRsp2.get("response");
-    assertEquals(1, masterQueryResult2.getNumFound());
+    NamedList leaderQueryRsp2 = rQuery(1, "id:2000", leaderClient);
+    SolrDocumentList leaderQueryResult2 = (SolrDocumentList) leaderQueryRsp2.get("response");
+    assertEquals(1, leaderQueryResult2.getNumFound());
 
     @SuppressWarnings({"rawtypes"})
-    NamedList slaveQueryRsp2 = rQuery(1, "id:2000", slaveClient);
-    SolrDocumentList slaveQueryResult2 = (SolrDocumentList) slaveQueryRsp2.get("response");
-    assertEquals(1, slaveQueryResult2.getNumFound());
+    NamedList followerQueryRsp2 = rQuery(1, "id:2000", followerClient);
+    SolrDocumentList followerQueryResult2 = (SolrDocumentList) followerQueryRsp2.get("response");
+    assertEquals(1, followerQueryResult2.getNumFound());
     
-    checkForSingleIndex(masterJetty);
-    checkForSingleIndex(slaveJetty, true);
+    checkForSingleIndex(leaderJetty);
+    checkForSingleIndex(followerJetty, true);
   }
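
The watchCoreStartAt calls detect the follower's core reload by waiting for the core's reported start time to move past a recorded baseline. A simplified sketch of that polling idea using the stock SolrJ core-admin API; the hard-coded core name and timeout handling are assumptions, not the test helper's actual code:

```java
// Sketch of the start-time polling behind watchCoreStartAt, assuming a core
// named "collection1"; simplified relative to the real helper.
import java.util.Date;
import org.apache.solr.client.solrj.SolrClient;
import org.apache.solr.client.solrj.request.CoreAdminRequest;
import org.apache.solr.client.solrj.response.CoreAdminResponse;

public class CoreReloadWatcher {
  static Date waitForRestart(SolrClient client, Date baseline, long timeoutMs)
      throws Exception {
    long deadline = System.currentTimeMillis() + timeoutMs;
    while (System.currentTimeMillis() < deadline) {
      CoreAdminResponse status = CoreAdminRequest.getStatus("collection1", client);
      Date started = status.getStartTime("collection1");
      if (baseline == null || (started != null && started.after(baseline))) {
        return started; // core has (re)started since the baseline
      }
      Thread.sleep(200);
    }
    throw new AssertionError("core did not restart within " + timeoutMs + "ms");
  }
}
```
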
 
   @Test
   public void testRateLimitedReplication() throws Exception {
 
     //clean index
-    masterClient.deleteByQuery("*:*");
-    slaveClient.deleteByQuery("*:*");
-    masterClient.commit();
-    slaveClient.commit();
+    leaderClient.deleteByQuery("*:*");
+    followerClient.deleteByQuery("*:*");
+    leaderClient.commit();
+    followerClient.commit();
 
-    masterJetty.stop();
-    slaveJetty.stop();
+    leaderJetty.stop();
+    followerJetty.stop();
 
-    //Start master with the new solrconfig
-    master.copyConfigFile(CONF_DIR + "solrconfig-master-throttled.xml", "solrconfig.xml");
+    //Start leader with the new solrconfig
+    leader.copyConfigFile(CONF_DIR + "solrconfig-leader-throttled.xml", "solrconfig.xml");
     useFactory(null);
-    masterJetty = createAndStartJetty(master);
-    masterClient.close();
-    masterClient = createNewSolrClient(masterJetty.getLocalPort());
+    leaderJetty = createAndStartJetty(leader);
+    leaderClient.close();
+    leaderClient = createNewSolrClient(leaderJetty.getLocalPort());
 
     //index docs
     final int totalDocs = TestUtil.nextInt(random(), 17, 53);
     for (int i = 0; i < totalDocs; i++)
-      index(masterClient, "id", i, "name", TestUtil.randomSimpleString(random(), 1000 , 5000));
+      index(leaderClient, "id", i, "name", TestUtil.randomSimpleString(random(), 1000, 5000));
 
-    masterClient.commit();
+    leaderClient.commit();
 
     //Check Index Size
-    String dataDir = master.getDataDir();
-    masterClient.close();
-    masterJetty.stop();
+    String dataDir = leader.getDataDir();
+    leaderClient.close();
+    leaderJetty.stop();
 
     Directory dir = FSDirectory.open(Paths.get(dataDir).resolve("index"));
     String[] files = dir.listAll();
@@ -1511,29 +1556,29 @@
 
     //Start again and replicate the data
     useFactory(null);
-    masterJetty = createAndStartJetty(master);
-    masterClient = createNewSolrClient(masterJetty.getLocalPort());
+    leaderJetty = createAndStartJetty(leader);
+    leaderClient = createNewSolrClient(leaderJetty.getLocalPort());
 
-    //start slave
-    slave.setTestPort(masterJetty.getLocalPort());
-    slave.copyConfigFile(CONF_DIR + "solrconfig-slave1.xml", "solrconfig.xml");
-    slaveJetty = createAndStartJetty(slave);
-    slaveClient.close();
-    slaveClient = createNewSolrClient(slaveJetty.getLocalPort());
+    //start follower
+    follower.setTestPort(leaderJetty.getLocalPort());
+    follower.copyConfigFile(CONF_DIR + "solrconfig-follower1.xml", "solrconfig.xml");
+    followerJetty = createAndStartJetty(follower);
+    followerClient.close();
+    followerClient = createNewSolrClient(followerJetty.getLocalPort());
 
     long startTime = System.nanoTime();
 
-    pullFromMasterToSlave();
+    pullFromLeaderToFollower();
 
-    //Add a few more docs in the master. Just to make sure that we are replicating the correct index point
+    //Add a few more docs to the leader, just to make sure we replicate the correct index point
     //These extra docs should not get replicated
-    new Thread(new AddExtraDocs(masterClient, totalDocs)).start();
+    new Thread(new AddExtraDocs(leaderClient, totalDocs)).start();
 
     //Wait and make sure that it actually replicated correctly.
     @SuppressWarnings({"rawtypes"})
-    NamedList slaveQueryRsp = rQuery(totalDocs, "*:*", slaveClient);
-    SolrDocumentList slaveQueryResult = (SolrDocumentList) slaveQueryRsp.get("response");
-    assertEquals(totalDocs, slaveQueryResult.getNumFound());
+    NamedList followerQueryRsp = rQuery(totalDocs, "*:*", followerClient);
+    SolrDocumentList followerQueryResult = (SolrDocumentList) followerQueryRsp.get("response");
+    assertEquals(totalDocs, followerQueryResult.getNumFound());
 
     long timeTaken = System.nanoTime() - startTime;
 
@@ -1554,7 +1599,7 @@
     for (String param : params) {
       for (String filename : illegalFilenames) {
         expectThrows(Exception.class, () ->
-            invokeReplicationCommand(masterJetty.getLocalPort(), "filecontent&" + param + "=" + filename));
+            invokeReplicationCommand(leaderJetty.getLocalPort(), "filecontent&" + param + "=" + filename));
       }
     }
   }
@@ -1566,7 +1611,7 @@
         .add("wt", "json")
         .add("command", "filelist")
         .add("generation", "-2"); // A 'generation' value not matching any commit point should cause error.
-    QueryResponse response = slaveClient.query(q);
+    QueryResponse response = followerClient.query(q);
     NamedList<Object> resp = response.getResponse();
     assertNotNull(resp);
     assertEquals("ERROR", resp.get("status"));
@@ -1575,15 +1620,15 @@
 
   @Test
   public void testFetchIndexShouldReportErrorsWhenTheyOccur() throws Exception  {
-    int masterPort = masterJetty.getLocalPort();
-    masterJetty.stop();
+    int leaderPort = leaderJetty.getLocalPort();
+    leaderJetty.stop();
     SolrQuery q = new SolrQuery();
     q.add("qt", "/replication")
         .add("wt", "json")
         .add("wait", "true")
         .add("command", "fetchindex")
-        .add("masterUrl", buildUrl(masterPort));
-    QueryResponse response = slaveClient.query(q);
+        .add("leaderUrl", buildUrl(leaderPort));
+    QueryResponse response = followerClient.query(q);
     NamedList<Object> resp = response.getResponse();
     assertNotNull(resp);
     assertEquals("Fetch index with wait=true should have returned an error response", "ERROR", resp.get("status"));
@@ -1595,7 +1640,7 @@
     q.add("qt", "/replication")
         .add("wt", "json");
     SolrException thrown = expectThrows(SolrException.class, () -> {
-      slaveClient.query(q);
+      followerClient.query(q);
     });
     assertEquals(SolrException.ErrorCode.BAD_REQUEST.code, thrown.code());
     assertThat(thrown.getMessage(), containsString("Missing required parameter: command"));
@@ -1608,7 +1653,7 @@
         .add("wt", "json")
         .add("command", "deletebackup");
     SolrException thrown = expectThrows(SolrException.class, () -> {
-      slaveClient.query(q);
+      followerClient.query(q);
     });
     assertEquals(SolrException.ErrorCode.BAD_REQUEST.code, thrown.code());
     assertThat(thrown.getMessage(), containsString("Missing required parameter: name"));
@@ -1617,9 +1662,9 @@
   @Test
   public void testEmptyBackups() throws Exception {
     final File backupDir = createTempDir().toFile();
-    final BackupStatusChecker backupStatus = new BackupStatusChecker(masterClient);
+    final BackupStatusChecker backupStatus = new BackupStatusChecker(leaderClient);
 
-    masterJetty.getCoreContainer().getAllowPaths().add(backupDir.toPath());
+    leaderJetty.getCoreContainer().getAllowPaths().add(backupDir.toPath());
 
     { // initial request w/o any committed docs
       final String backupName = "empty_backup1";
@@ -1629,7 +1674,7 @@
                 "location", backupDir.getAbsolutePath(),
                 "name", backupName));
       final TimeOut timeout = new TimeOut(30, TimeUnit.SECONDS, TimeSource.NANO_TIME);
-      final SimpleSolrResponse rsp = req.process(masterClient);
+      final SimpleSolrResponse rsp = req.process(leaderClient);
 
       final String dirName = backupStatus.waitForBackupSuccess(backupName, timeout);
       assertEquals("Did not get expected dir name for backup, did API change?",
@@ -1638,7 +1683,7 @@
                  new File(backupDir, dirName).exists());
     }
     
-    index(masterClient, "id", "1", "name", "foo");
+    index(leaderClient, "id", "1", "name", "foo");
     
     { // second backup w/uncommited doc
       final String backupName = "empty_backup2";
@@ -1648,7 +1693,7 @@
                 "location", backupDir.getAbsolutePath(),
                 "name", backupName));
       final TimeOut timeout = new TimeOut(30, TimeUnit.SECONDS, TimeSource.NANO_TIME);
-      final SimpleSolrResponse rsp = req.process(masterClient);
+      final SimpleSolrResponse rsp = req.process(leaderClient);
       
       final String dirName = backupStatus.waitForBackupSuccess(backupName, timeout);
       assertEquals("Did not get expected dir name for backup, did API change?",
@@ -1667,13 +1712,38 @@
     }
   }
   
+  public void testGetBoolWithBackwardCompatibility() {
+    assertTrue(ReplicationHandler.getBoolWithBackwardCompatibility(params(), "foo", "bar", true));
+    assertFalse(ReplicationHandler.getBoolWithBackwardCompatibility(params(), "foo", "bar", false));
+    assertTrue(ReplicationHandler.getBoolWithBackwardCompatibility(params("foo", "true"), "foo", "bar", false));
+    assertTrue(ReplicationHandler.getBoolWithBackwardCompatibility(params("bar", "true"), "foo", "bar", false));
+    assertTrue(ReplicationHandler.getBoolWithBackwardCompatibility(params("foo", "true", "bar", "false"), "foo", "bar", false));
+  }
+  
+  public void testGetObjectWithBackwardCompatibility() {
+    assertEquals("aaa", ReplicationHandler.getObjectWithBackwardCompatibility(params(), "foo", "bar", "aaa"));
+    assertEquals("bbb", ReplicationHandler.getObjectWithBackwardCompatibility(params("foo", "bbb"), "foo", "bar", "aaa"));
+    assertEquals("bbb", ReplicationHandler.getObjectWithBackwardCompatibility(params("bar", "bbb"), "foo", "bar", "aaa"));
+    assertEquals("bbb", ReplicationHandler.getObjectWithBackwardCompatibility(params("foo", "bbb", "bar", "aaa"), "foo", "bar", "aaa"));
+    assertNull(ReplicationHandler.getObjectWithBackwardCompatibility(params(), "foo", "bar", null));
+  }
+  
+  public void testGetObjectWithBackwardCompatibilityFromNL() {
+    NamedList<Object> nl = new NamedList<>();
+    assertNull(ReplicationHandler.getObjectWithBackwardCompatibility(nl, "foo", "bar"));
+    nl.add("bar", "bbb");
+    assertEquals("bbb", ReplicationHandler.getObjectWithBackwardCompatibility(nl, "foo", "bar"));
+    nl.add("foo", "aaa");
+    assertEquals("aaa", ReplicationHandler.getObjectWithBackwardCompatibility(nl, "foo", "bar"));
+  }
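+
+  // The three tests above pin down ReplicationHandler helpers that accept both
+  // the new and the deprecated parameter/response names, preferring the new one.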
+  
   
   private class AddExtraDocs implements Runnable {
 
-    SolrClient masterClient;
+    SolrClient leaderClient;
     int startId;
-    public AddExtraDocs(SolrClient masterClient, int startId) {
-      this.masterClient = masterClient;
+    public AddExtraDocs(SolrClient leaderClient, int startId) {
+      this.leaderClient = leaderClient;
       this.startId = startId;
     }
 
@@ -1682,13 +1752,13 @@
       final int totalDocs = TestUtil.nextInt(random(), 1, 10);
       for (int i = 0; i < totalDocs; i++) {
         try {
-          index(masterClient, "id", i + startId, "name", TestUtil.randomSimpleString(random(), 1000 , 5000));
+          index(leaderClient, "id", i + startId, "name", TestUtil.randomSimpleString(random(), 1000, 5000));
         } catch (Exception e) {
           //Do nothing. Wasn't able to add doc.
         }
       }
       try {
-        masterClient.commit();
+        leaderClient.commit();
       } catch (Exception e) {
         //Do nothing. No extra doc got committed.
       }
diff --git a/solr/core/src/test/org/apache/solr/handler/TestReplicationHandlerBackup.java b/solr/core/src/test/org/apache/solr/handler/TestReplicationHandlerBackup.java
index 420c7c8..cab0f76 100644
--- a/solr/core/src/test/org/apache/solr/handler/TestReplicationHandlerBackup.java
+++ b/solr/core/src/test/org/apache/solr/handler/TestReplicationHandlerBackup.java
@@ -55,9 +55,9 @@
 @SolrTestCaseJ4.SuppressSSL     // It is currently unknown why SSL does not work with this test
 public class TestReplicationHandlerBackup extends SolrJettyTestBase {
 
-  JettySolrRunner masterJetty;
-  TestReplicationHandler.SolrInstance master = null;
-  SolrClient masterClient;
+  JettySolrRunner leaderJetty;
+  TestReplicationHandler.SolrInstance leader = null;
+  SolrClient leaderClient;
   
   private static final String CONF_DIR = "solr" + File.separator + "collection1" + File.separator + "conf"
       + File.separator;
@@ -95,19 +95,19 @@
   @Before
   public void setUp() throws Exception {
     super.setUp();
-    String configFile = "solrconfig-master1.xml";
+    String configFile = "solrconfig-leader1.xml";
 
     if(random().nextBoolean()) {
-      configFile = "solrconfig-master1-keepOneBackup.xml";
+      configFile = "solrconfig-leader1-keepOneBackup.xml";
       addNumberToKeepInRequest = false;
       backupKeepParamName = ReplicationHandler.NUMBER_BACKUPS_TO_KEEP_INIT_PARAM;
     }
-    master = new TestReplicationHandler.SolrInstance(createTempDir("solr-instance").toFile(), "master", null);
-    master.setUp();
-    master.copyConfigFile(CONF_DIR + configFile, "solrconfig.xml");
+    leader = new TestReplicationHandler.SolrInstance(createTempDir("solr-instance").toFile(), "leader", null);
+    leader.setUp();
+    leader.copyConfigFile(CONF_DIR + configFile, "solrconfig.xml");
 
-    masterJetty = createAndStartJetty(master);
-    masterClient = createNewSolrClient(masterJetty.getLocalPort());
+    leaderJetty = createAndStartJetty(leader);
+    leaderClient = createNewSolrClient(leaderJetty.getLocalPort());
     docsSeed = random().nextLong();
   }
 
@@ -115,21 +115,21 @@
   @After
   public void tearDown() throws Exception {
     super.tearDown();
-    if (null != masterClient) {
-      masterClient.close();
-      masterClient  = null;
+    if (null != leaderClient) {
+      leaderClient.close();
+      leaderClient = null;
     }
-    if (null != masterJetty) {
-      masterJetty.stop();
-      masterJetty = null;
+    if (null != leaderJetty) {
+      leaderJetty.stop();
+      leaderJetty = null;
     }
-    master = null;
+    leader = null;
   }
 
   @Test
   public void testBackupOnCommit() throws Exception {
     final BackupStatusChecker backupStatus
-      = new BackupStatusChecker(masterClient, "/" + DEFAULT_TEST_CORENAME + "/replication");
+      = new BackupStatusChecker(leaderClient, "/" + DEFAULT_TEST_CORENAME + "/replication");
 
     final String lastBackupDir = backupStatus.checkBackupSuccess();
     // sanity check no backups yet
@@ -137,11 +137,11 @@
                lastBackupDir);
     
     //Index
-    int nDocs = BackupRestoreUtils.indexDocs(masterClient, DEFAULT_TEST_COLLECTION_NAME, docsSeed);
+    int nDocs = BackupRestoreUtils.indexDocs(leaderClient, DEFAULT_TEST_COLLECTION_NAME, docsSeed);
     
     final String newBackupDir = backupStatus.waitForDifferentBackupDir(lastBackupDir, 30);
     //Validate
-    verify(Paths.get(master.getDataDir(), newBackupDir), nDocs);
+    verify(Paths.get(leader.getDataDir(), newBackupDir), nDocs);
   }
 
   private void verify(Path backup, int nDocs) throws IOException {
@@ -158,7 +158,7 @@
   @Test
   public void doTestBackup() throws Exception {
     final BackupStatusChecker backupStatus
-      = new BackupStatusChecker(masterClient, "/" + DEFAULT_TEST_CORENAME + "/replication");
+      = new BackupStatusChecker(leaderClient, "/" + DEFAULT_TEST_CORENAME + "/replication");
 
     String lastBackupDir = backupStatus.checkBackupSuccess();
     assertNull("Already have a successful backup",
@@ -167,10 +167,10 @@
     final Path[] snapDir = new Path[5]; //One extra for the backup on commit
     //First snapshot location
     
-    int nDocs = BackupRestoreUtils.indexDocs(masterClient, DEFAULT_TEST_COLLECTION_NAME, docsSeed);
+    int nDocs = BackupRestoreUtils.indexDocs(leaderClient, DEFAULT_TEST_COLLECTION_NAME, docsSeed);
 
     lastBackupDir = backupStatus.waitForDifferentBackupDir(lastBackupDir, 30);
-    snapDir[0] = Paths.get(master.getDataDir(), lastBackupDir);
+    snapDir[0] = Paths.get(leader.getDataDir(), lastBackupDir);
 
     final boolean namedBackup = random().nextBoolean();
 
@@ -182,17 +182,17 @@
       final String backupName = TestUtil.randomSimpleString(random(), 1, 20) + "_" + i;
       if (!namedBackup) {
         if (addNumberToKeepInRequest) {
-          runBackupCommand(masterJetty, ReplicationHandler.CMD_BACKUP, "&" + backupKeepParamName + "=2");
+          runBackupCommand(leaderJetty, ReplicationHandler.CMD_BACKUP, "&" + backupKeepParamName + "=2");
         } else {
-          runBackupCommand(masterJetty, ReplicationHandler.CMD_BACKUP, "");
+          runBackupCommand(leaderJetty, ReplicationHandler.CMD_BACKUP, "");
         }
         lastBackupDir = backupStatus.waitForDifferentBackupDir(lastBackupDir, 30);
       } else {
-        runBackupCommand(masterJetty, ReplicationHandler.CMD_BACKUP, "&name=" +  backupName);
+        runBackupCommand(leaderJetty, ReplicationHandler.CMD_BACKUP, "&name=" +  backupName);
         lastBackupDir = backupStatus.waitForBackupSuccess(backupName, 30);
         backupNames[i] = backupName;
       }
-      snapDir[i+1] = Paths.get(master.getDataDir(), lastBackupDir);
+      snapDir[i+1] = Paths.get(leader.getDataDir(), lastBackupDir);
       verify(snapDir[i+1], nDocs);
     }
 
@@ -205,7 +205,7 @@
       // Only the last two should still exist.
       final List<String> remainingBackups = new ArrayList<>();
       
-      try (DirectoryStream<Path> stream = Files.newDirectoryStream(Paths.get(master.getDataDir()), "snapshot*")) {
+      try (DirectoryStream<Path> stream = Files.newDirectoryStream(Paths.get(leader.getDataDir()), "snapshot*")) {
         Iterator<Path> iter = stream.iterator();
         while (iter.hasNext()) {
           remainingBackups.add(iter.next().getFileName().toString());
@@ -235,12 +235,12 @@
 
   private void testDeleteNamedBackup(String backupNames[]) throws Exception {
     final BackupStatusChecker backupStatus
-      = new BackupStatusChecker(masterClient, "/" + DEFAULT_TEST_CORENAME + "/replication");
+      = new BackupStatusChecker(leaderClient, "/" + DEFAULT_TEST_CORENAME + "/replication");
     for (int i = 0; i < 2; i++) {
-      final Path p = Paths.get(master.getDataDir(), "snapshot." + backupNames[i]);
+      final Path p = Paths.get(leader.getDataDir(), "snapshot." + backupNames[i]);
       assertTrue("WTF: Backup doesn't exist: " + p.toString(),
                  Files.exists(p));
-      runBackupCommand(masterJetty, ReplicationHandler.CMD_DELETE_BACKUP, "&name=" +backupNames[i]);
+      runBackupCommand(leaderJetty, ReplicationHandler.CMD_DELETE_BACKUP, "&name=" + backupNames[i]);
       backupStatus.waitForBackupDeletionSuccess(backupNames[i], 30);
       assertFalse("backup still exists after deletion: " + p.toString(),
                   Files.exists(p));
@@ -248,12 +248,12 @@
     
   }
 
-  public static void runBackupCommand(JettySolrRunner masterJetty, String cmd, String params) throws IOException {
-    String masterUrl = buildUrl(masterJetty.getLocalPort(), context) + "/" + DEFAULT_TEST_CORENAME
+  public static void runBackupCommand(JettySolrRunner leaderJetty, String cmd, String params) throws IOException {
+    String leaderUrl = buildUrl(leaderJetty.getLocalPort(), context) + "/" + DEFAULT_TEST_CORENAME
         + ReplicationHandler.PATH+"?wt=xml&command=" + cmd + params;
     InputStream stream = null;
     try {
-      URL url = new URL(masterUrl);
+      URL url = new URL(leaderUrl);
       stream = url.openStream();
       stream.close();
     } finally {
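
runBackupCommand only fires the HTTP command; completion has to be observed separately, which these tests delegate to BackupStatusChecker. A rough sketch of the fire-then-poll flow, under assumed URL, core name, and response contents (the real checker parses the details response properly instead of substring matching):

```java
// Sketch of the backup fire-then-poll flow; the base URL, backup name, and the
// strings matched in the details response are assumptions for illustration.
import java.io.InputStream;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class BackupSketch {
  static String get(String url) throws Exception {
    try (InputStream in = new URL(url).openStream()) {
      return new String(in.readAllBytes(), StandardCharsets.UTF_8);
    }
  }

  public static void main(String[] args) throws Exception {
    String base = "http://localhost:8983/solr/collection1/replication"; // assumed
    get(base + "?wt=json&command=backup&name=nightly");        // fire the backup
    for (int i = 0; i < 30; i++) {                             // then poll details
      String details = get(base + "?wt=json&command=details");
      if (details.contains("snapshot.nightly")) {              // crude success check
        return;
      }
      Thread.sleep(1000);
    }
    throw new IllegalStateException("backup did not complete in time");
  }
}
```
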
diff --git a/solr/core/src/test/org/apache/solr/handler/TestReplicationHandlerDiskOverFlow.java b/solr/core/src/test/org/apache/solr/handler/TestReplicationHandlerDiskOverFlow.java
index 6018181b..b3254bf 100644
--- a/solr/core/src/test/org/apache/solr/handler/TestReplicationHandlerDiskOverFlow.java
+++ b/solr/core/src/test/org/apache/solr/handler/TestReplicationHandlerDiskOverFlow.java
@@ -59,9 +59,9 @@
   Function<String, Long> originalDiskSpaceprovider = null;
   BooleanSupplier originalTestWait = null;
   
-  JettySolrRunner masterJetty, slaveJetty;
-  SolrClient masterClient, slaveClient;
-  TestReplicationHandler.SolrInstance master = null, slave = null;
+  JettySolrRunner leaderJetty, followerJetty;
+  SolrClient leaderClient, followerClient;
+  TestReplicationHandler.SolrInstance leader = null, follower = null;
 
   static String context = "/solr";
 
@@ -74,15 +74,15 @@
     System.setProperty("solr.directoryFactory", "solr.StandardDirectoryFactory");
     String factory = random().nextInt(100) < 75 ? "solr.NRTCachingDirectoryFactory" : "solr.StandardDirectoryFactory"; // test the default most of the time
     System.setProperty("solr.directoryFactory", factory);
-    master = new TestReplicationHandler.SolrInstance(createTempDir("solr-instance").toFile(), "master", null);
-    master.setUp();
-    masterJetty = createAndStartJetty(master);
-    masterClient = createNewSolrClient(masterJetty.getLocalPort());
+    leader = new TestReplicationHandler.SolrInstance(createTempDir("solr-instance").toFile(), "leader", null);
+    leader.setUp();
+    leaderJetty = createAndStartJetty(leader);
+    leaderClient = createNewSolrClient(leaderJetty.getLocalPort());
 
-    slave = new TestReplicationHandler.SolrInstance(createTempDir("solr-instance").toFile(), "slave", masterJetty.getLocalPort());
-    slave.setUp();
-    slaveJetty = createAndStartJetty(slave);
-    slaveClient = createNewSolrClient(slaveJetty.getLocalPort());
+    follower = new TestReplicationHandler.SolrInstance(createTempDir("solr-instance").toFile(), "follower", leaderJetty.getLocalPort());
+    follower.setUp();
+    followerJetty = createAndStartJetty(follower);
+    followerClient = createNewSolrClient(followerJetty.getLocalPort());
 
     System.setProperty("solr.indexfetcher.sotimeout2", "45000");
   }
@@ -91,22 +91,22 @@
   @After
   public void tearDown() throws Exception {
     super.tearDown();
-    if (null != masterJetty) {
-      masterJetty.stop();
-      masterJetty = null;
+    if (null != leaderJetty) {
+      leaderJetty.stop();
+      leaderJetty = null;
     }
-    if (null != slaveJetty) {
-      slaveJetty.stop();
-       slaveJetty = null;
+    if (null != followerJetty) {
+      followerJetty.stop();
+      followerJetty = null;
     }
-    master = slave = null;
-    if (null != masterClient) {
-      masterClient.close();
-      masterClient = null;
+    leader = follower = null;
+    if (null != leaderClient) {
+      leaderClient.close();
+      leaderClient = null;
     }
-    if (null != slaveClient) {
-      slaveClient.close();
-      slaveClient = null;
+    if (null != followerClient) {
+      followerClient.close();
+      followerClient = null;
     }
     System.clearProperty("solr.indexfetcher.sotimeout");
     
@@ -116,18 +116,18 @@
 
   @Test
   public void testDiskOverFlow() throws Exception {
-    invokeReplicationCommand(slaveJetty.getLocalPort(), "disablepoll");
+    invokeReplicationCommand(followerJetty.getLocalPort(), "disablepoll");
     //index docs
-    log.info("Indexing to MASTER");
-    int docsInMaster = 1000;
-    long szMaster = indexDocs(masterClient, docsInMaster, 0);
-    log.info("Indexing to SLAVE");
-    long szSlave = indexDocs(slaveClient, 1200, 1000);
+    log.info("Indexing to LEADER");
+    int docsInLeader = 1000;
+    long szLeader = indexDocs(leaderClient, docsInLeader, 0);
+    log.info("Indexing to FOLLOWER");
+    long szFollower = indexDocs(followerClient, 1200, 1000);
 
     IndexFetcher.usableDiskSpaceProvider = new Function<String, Long>() {
       @Override
       public Long apply(String s) {
-        return szMaster;
+        return szLeader;
       }
     };
 
@@ -161,7 +161,7 @@
             assertNotNull("why is query thread still looping if barrier has already been cleared?",
                           barrier);
             try {
-              QueryResponse rsp = slaveClient.query(new SolrQuery()
+              QueryResponse rsp = followerClient.query(new SolrQuery()
                                                     .setQuery("*:*")
                                                     .setRows(0));
               Thread.sleep(200);
@@ -187,7 +187,7 @@
         }
       }).start();
 
-    QueryResponse response = slaveClient.query(new SolrQuery()
+    QueryResponse response = followerClient.query(new SolrQuery()
                                                .add("qt", "/replication")
                                                .add("command", CMD_FETCH_INDEX)
                                                .add("wait", "true")
@@ -198,18 +198,18 @@
     assertEquals("threads encountered failures (see logs for when)",
                  Collections.emptyList(), threadFailures);
 
-    response = slaveClient.query(new SolrQuery().setQuery("*:*").setRows(0));
-    assertEquals("docs in slave", docsInMaster, response.getResults().getNumFound());
+    response = followerClient.query(new SolrQuery().setQuery("*:*").setRows(0));
+    assertEquals("docs in follower", docsInLeader, response.getResults().getNumFound());
 
-    response = slaveClient.query(new SolrQuery()
+    response = followerClient.query(new SolrQuery()
         .add("qt", "/replication")
         .add("command", ReplicationHandler.CMD_DETAILS)
     );
     if (log.isInfoEnabled()) {
       log.info("DETAILS {}", Utils.writeJson(response, new StringWriter(), true).toString());
     }
-    assertEquals("slave's clearedLocalIndexFirst (from rep details)",
-                 "true", response._getStr("details/slave/clearedLocalIndexFirst", null));
+    assertEquals("follower's clearedLocalIndexFirst (from rep details)",
+                 "true", response._getStr("details/follower/clearedLocalIndexFirst", null));
   }
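
The clearedLocalIndexFirst assertion reflects the fetcher-side behavior this test provokes: when the follower's usable disk space cannot hold the incoming index alongside the existing one, the local index is deleted before downloading. A deliberately simplified sketch of that decision; the real IndexFetcher accounting is more involved than this:

```java
// Simplified sketch of the "clear local index first" decision the test asserts
// on; sizes and thresholds are illustrative, not IndexFetcher's actual logic.
public class DiskOverflowSketch {
  static boolean shouldClearLocalIndexFirst(long usableDiskBytes,
                                            long localIndexBytes,
                                            long incomingIndexBytes) {
    // Keeping both copies during the download needs room for the incoming
    // index on top of the existing one; if that does not fit but the incoming
    // index alone would, delete the local index before fetching.
    boolean bothFit = usableDiskBytes >= incomingIndexBytes;
    boolean aloneFits = usableDiskBytes + localIndexBytes >= incomingIndexBytes;
    return !bothFit && aloneFits;
  }
}
```
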
 
   @SuppressWarnings({"unchecked", "rawtypes"})
diff --git a/solr/core/src/test/org/apache/solr/handler/TestRestoreCore.java b/solr/core/src/test/org/apache/solr/handler/TestRestoreCore.java
index 6b60b19..261a775 100644
--- a/solr/core/src/test/org/apache/solr/handler/TestRestoreCore.java
+++ b/solr/core/src/test/org/apache/solr/handler/TestRestoreCore.java
@@ -47,9 +47,9 @@
 @SolrTestCaseJ4.SuppressSSL     // It is currently unknown why SSL does not work with this test
 public class TestRestoreCore extends SolrJettyTestBase {
 
-  JettySolrRunner masterJetty;
-  TestReplicationHandler.SolrInstance master = null;
-  SolrClient masterClient;
+  JettySolrRunner leaderJetty;
+  TestReplicationHandler.SolrInstance leader = null;
+  SolrClient leaderClient;
 
   private static final String CONF_DIR = "solr" + File.separator + DEFAULT_TEST_CORENAME + File.separator + "conf"
       + File.separator;
@@ -83,14 +83,14 @@
   @Before
   public void setUp() throws Exception {
     super.setUp();
-    String configFile = "solrconfig-master.xml";
+    String configFile = "solrconfig-leader.xml";
 
-    master = new TestReplicationHandler.SolrInstance(createTempDir("solr-instance").toFile(), "master", null);
-    master.setUp();
-    master.copyConfigFile(CONF_DIR + configFile, "solrconfig.xml");
+    leader = new TestReplicationHandler.SolrInstance(createTempDir("solr-instance").toFile(), "leader", null);
+    leader.setUp();
+    leader.copyConfigFile(CONF_DIR + configFile, "solrconfig.xml");
 
-    masterJetty = createAndStartJetty(master);
-    masterClient = createNewSolrClient(masterJetty.getLocalPort());
+    leaderJetty = createAndStartJetty(leader);
+    leaderClient = createNewSolrClient(leaderJetty.getLocalPort());
     docsSeed = random().nextLong();
   }
 
@@ -98,34 +98,34 @@
   @After
   public void tearDown() throws Exception {
     super.tearDown();
-    if (null != masterClient) {
-      masterClient.close();
-      masterClient  = null;
+    if (null != leaderClient) {
+      leaderClient.close();
+      leaderClient = null;
     }
-    if (null != masterJetty) {
-      masterJetty.stop();
-      masterJetty = null;
+    if (null != leaderJetty) {
+      leaderJetty.stop();
+      leaderJetty = null;
     }
-    master = null;
+    leader = null;
   }
 
   @Test
   public void testSimpleRestore() throws Exception {
 
-    int nDocs = usually() ? BackupRestoreUtils.indexDocs(masterClient, "collection1", docsSeed) : 0;
+    int nDocs = usually() ? BackupRestoreUtils.indexDocs(leaderClient, "collection1", docsSeed) : 0;
 
     final BackupStatusChecker backupStatus
-      = new BackupStatusChecker(masterClient, "/" + DEFAULT_TEST_CORENAME + "/replication");
+      = new BackupStatusChecker(leaderClient, "/" + DEFAULT_TEST_CORENAME + "/replication");
     final String oldBackupDir = backupStatus.checkBackupSuccess();
     String snapshotName = null;
     String location;
     String params = "";
-    String baseUrl = masterJetty.getBaseUrl().toString();
+    String baseUrl = leaderJetty.getBaseUrl().toString();
 
     //Use the default backup location or an externally provided location.
     if (random().nextBoolean()) {
       location = createTempDir().toFile().getAbsolutePath();
-      masterJetty.getCoreContainer().getAllowPaths().add(Path.of(location)); // Allow core to be created outside SOLR_HOME
+      leaderJetty.getCoreContainer().getAllowPaths().add(Path.of(location)); // Allow core to be created outside SOLR_HOME
       params += "&location=" + URLEncoder.encode(location, "UTF-8");
     }
 
@@ -135,7 +135,7 @@
       params += "&name=" + snapshotName;
     }
 
-    TestReplicationHandlerBackup.runBackupCommand(masterJetty, ReplicationHandler.CMD_BACKUP, params);
+    TestReplicationHandlerBackup.runBackupCommand(leaderJetty, ReplicationHandler.CMD_BACKUP, params);
 
     if (null == snapshotName) {
       backupStatus.waitForDifferentBackupDir(oldBackupDir, 30);
@@ -152,9 +152,9 @@
         //Delete a few docs
         int numDeletes = TestUtil.nextInt(random(), 1, nDocs);
         for(int i=0; i<numDeletes; i++) {
-          masterClient.deleteByQuery(DEFAULT_TEST_CORENAME, "id:" + i);
+          leaderClient.deleteByQuery(DEFAULT_TEST_CORENAME, "id:" + i);
         }
-        masterClient.commit(DEFAULT_TEST_CORENAME);
+        leaderClient.commit(DEFAULT_TEST_CORENAME);
 
         //Add a few more
         int moreAdds = TestUtil.nextInt(random(), 1, 100);
@@ -162,22 +162,22 @@
           SolrInputDocument doc = new SolrInputDocument();
           doc.addField("id", i + nDocs);
           doc.addField("name", "name = " + (i + nDocs));
-          masterClient.add(DEFAULT_TEST_CORENAME, doc);
+          leaderClient.add(DEFAULT_TEST_CORENAME, doc);
         }
        //Purposely skip the commit once in a while, so some docs remain uncommitted
         if (usually()) {
-          masterClient.commit(DEFAULT_TEST_CORENAME);
+          leaderClient.commit(DEFAULT_TEST_CORENAME);
         }
       }
 
-      TestReplicationHandlerBackup.runBackupCommand(masterJetty, ReplicationHandler.CMD_RESTORE, params);
+      TestReplicationHandlerBackup.runBackupCommand(leaderJetty, ReplicationHandler.CMD_RESTORE, params);
 
       while (!fetchRestoreStatus(baseUrl, DEFAULT_TEST_CORENAME)) {
         Thread.sleep(1000);
       }
 
       //See if restore was successful by checking if all the docs are present again
-      BackupRestoreUtils.verifyDocs(nDocs, masterClient, DEFAULT_TEST_CORENAME);
+      BackupRestoreUtils.verifyDocs(nDocs, leaderClient, DEFAULT_TEST_CORENAME);
     }
 
   }
@@ -185,7 +185,7 @@
   public void testBackupFailsMissingAllowPaths() throws Exception {
     final String params = "&location=" + URLEncoder.encode(createTempDir().toFile().getAbsolutePath(), "UTF-8");
     Throwable t = expectThrows(IOException.class, () -> {
-      TestReplicationHandlerBackup.runBackupCommand(masterJetty, ReplicationHandler.CMD_BACKUP, params);
+      TestReplicationHandlerBackup.runBackupCommand(leaderJetty, ReplicationHandler.CMD_BACKUP, params);
     });
     // The backup command will fail since the tmp dir is outside allowPaths
     assertTrue(t.getMessage().contains("Server returned HTTP response code: 400"));
@@ -193,18 +193,18 @@
 
   @Test
   public void testFailedRestore() throws Exception {
-    int nDocs = BackupRestoreUtils.indexDocs(masterClient, "collection1", docsSeed);
+    int nDocs = BackupRestoreUtils.indexDocs(leaderClient, "collection1", docsSeed);
 
     String location = createTempDir().toFile().getAbsolutePath();
-    masterJetty.getCoreContainer().getAllowPaths().add(Path.of(location));
+    leaderJetty.getCoreContainer().getAllowPaths().add(Path.of(location));
     String snapshotName = TestUtil.randomSimpleString(random(), 1, 5);
     String params = "&name=" + snapshotName + "&location=" + URLEncoder.encode(location, "UTF-8");
-    String baseUrl = masterJetty.getBaseUrl().toString();
+    String baseUrl = leaderJetty.getBaseUrl().toString();
 
-    TestReplicationHandlerBackup.runBackupCommand(masterJetty, ReplicationHandler.CMD_BACKUP, params);
+    TestReplicationHandlerBackup.runBackupCommand(leaderJetty, ReplicationHandler.CMD_BACKUP, params);
 
     final BackupStatusChecker backupStatus
-      = new BackupStatusChecker(masterClient, "/" + DEFAULT_TEST_CORENAME + "/replication");
+      = new BackupStatusChecker(leaderClient, "/" + DEFAULT_TEST_CORENAME + "/replication");
     final String backupDirName = backupStatus.waitForBackupSuccess(snapshotName, 30);
 
     //Remove the segments_n file so that the backup index is corrupted.
@@ -216,7 +216,7 @@
       Files.delete(segmentFileName);
     }
 
-    TestReplicationHandlerBackup.runBackupCommand(masterJetty, ReplicationHandler.CMD_RESTORE, params);
+    TestReplicationHandlerBackup.runBackupCommand(leaderJetty, ReplicationHandler.CMD_RESTORE, params);
 
     expectThrows(AssertionError.class, () -> {
         for (int i = 0; i < 10; i++) {
@@ -227,22 +227,22 @@
         // if we never got an assertion let expectThrows complain
       });
 
-    BackupRestoreUtils.verifyDocs(nDocs, masterClient, DEFAULT_TEST_CORENAME);
+    BackupRestoreUtils.verifyDocs(nDocs, leaderClient, DEFAULT_TEST_CORENAME);
 
     //make sure we can write to the index again
-    nDocs = BackupRestoreUtils.indexDocs(masterClient, "collection1", docsSeed);
-    BackupRestoreUtils.verifyDocs(nDocs, masterClient, DEFAULT_TEST_CORENAME);
+    nDocs = BackupRestoreUtils.indexDocs(leaderClient, "collection1", docsSeed);
+    BackupRestoreUtils.verifyDocs(nDocs, leaderClient, DEFAULT_TEST_CORENAME);
 
   }
 
   public static boolean fetchRestoreStatus (String baseUrl, String coreName) throws IOException {
-    String masterUrl = baseUrl + "/" + coreName +
+    String leaderUrl = baseUrl + "/" + coreName +
         ReplicationHandler.PATH + "?wt=xml&command=" + ReplicationHandler.CMD_RESTORE_STATUS;
     final Pattern pException = Pattern.compile("<str name=\"exception\">(.*?)</str>");
 
     InputStream stream = null;
     try {
-      URL url = new URL(masterUrl);
+      URL url = new URL(leaderUrl);
       stream = url.openStream();
       String response = IOUtils.toString(stream, "UTF-8");
       Matcher matcher = pException.matcher(response);
diff --git a/solr/core/src/test/org/apache/solr/handler/admin/ZookeeperStatusHandlerTest.java b/solr/core/src/test/org/apache/solr/handler/admin/ZookeeperStatusHandlerTest.java
index cf2dd74..2414581 100644
--- a/solr/core/src/test/org/apache/solr/handler/admin/ZookeeperStatusHandlerTest.java
+++ b/solr/core/src/test/org/apache/solr/handler/admin/ZookeeperStatusHandlerTest.java
@@ -211,4 +211,39 @@
         "  \"status\":\"red\"}";
     assertEquals(expected, JSONUtil.toJSON(mockStatus));
   }
+
+  @Test
+  public void testZkWithPrometheusSolr14752() {
+    assumeWorkingMockito();
+    ZookeeperStatusHandler zkStatusHandler = mock(ZookeeperStatusHandler.class);
+    when(zkStatusHandler.getZkRawResponse("zoo1:2181", "ruok")).thenReturn(Arrays.asList("imok"));
+    when(zkStatusHandler.getZkRawResponse("zoo1:2181", "mntr")).thenReturn(
+        Arrays.asList("zk_version\t3.6.1--104dcb3e3fb464b30c5186d229e00af9f332524b, built on 04/21/2020 15:01 GMT",
+            "zk_avg_latency\t0.24",
+            "zk_server_state\tleader",
+            "zk_synced_followers\t0.0"));
+    when(zkStatusHandler.getZkRawResponse("zoo1:2181", "conf")).thenReturn(
+        Arrays.asList("clientPort=2181"));
+    when(zkStatusHandler.getZkStatus(anyString(), any())).thenCallRealMethod();
+    when(zkStatusHandler.monitorZookeeper(anyString())).thenCallRealMethod();
+    when(zkStatusHandler.validateZkRawResponse(ArgumentMatchers.any(), any(), any())).thenAnswer(Answers.CALLS_REAL_METHODS);
+
+    // Verify that parsing status strings containing floating-point values no longer triggers a NumberFormatException; the floats are still displayed as-is in the UI
+    Map<String, Object> mockStatus = zkStatusHandler.getZkStatus("zoo1:2181", ZkDynamicConfig.fromZkConnectString("zoo1:2181"));
+    String expected = "{\n" +
+            "  \"mode\":\"ensemble\",\n" +
+            "  \"dynamicReconfig\":true,\n" +
+            "  \"ensembleSize\":1,\n" +
+            "  \"details\":[{\n" +
+            "      \"zk_synced_followers\":\"0.0\",\n" +
+            "      \"zk_version\":\"3.6.1--104dcb3e3fb464b30c5186d229e00af9f332524b, built on 04/21/2020 15:01 GMT\",\n" +
+            "      \"zk_avg_latency\":\"0.24\",\n" +
+            "      \"host\":\"zoo1:2181\",\n" +
+            "      \"clientPort\":\"2181\",\n" +
+            "      \"ok\":true,\n" +
+            "      \"zk_server_state\":\"leader\"}],\n" +
+            "  \"zkHost\":\"zoo1:2181\",\n" +
+            "  \"status\":\"green\"}";
+    assertEquals(expected, JSONUtil.toJSON(mockStatus));
+  }
 }
\ No newline at end of file
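
The new test above pins the SOLR-14752 behavior: mntr values such as zk_synced_followers may be floats, so the status handler must not parse them as integers. A hedged sketch of float-tolerant mntr parsing in that spirit (illustrative only; not the handler's code):

```java
// Sketch: parse "mntr" output without assuming integer values, so a line like
// "zk_synced_followers\t0.0" never hits Long.parseLong().
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class MntrParseSketch {
  static Map<String, String> parse(List<String> mntrLines) {
    Map<String, String> out = new LinkedHashMap<>();
    for (String line : mntrLines) {
      int tab = line.indexOf('\t');
      if (tab <= 0) continue;                 // skip malformed lines
      // Keep every value as a string; callers that need numbers can convert
      // defensively instead of failing on floats.
      out.put(line.substring(0, tab), line.substring(tab + 1));
    }
    return out;
  }
}
```
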
diff --git a/solr/core/src/test/org/apache/solr/handler/export/TestExportWriter.java b/solr/core/src/test/org/apache/solr/handler/export/TestExportWriter.java
index a044d5b..db27ee1 100644
--- a/solr/core/src/test/org/apache/solr/handler/export/TestExportWriter.java
+++ b/solr/core/src/test/org/apache/solr/handler/export/TestExportWriter.java
@@ -717,10 +717,8 @@
       for (int j = 0; j < BATCH_SIZE; j++) {
         docs[j] = new SolrInputDocument(
             "id", String.valueOf(i * BATCH_SIZE + j),
-            "batch_i_p", String.valueOf(i),
-            "random_i_p", String.valueOf(random().nextInt(BATCH_SIZE)),
             "sortabledv", TestUtil.randomSimpleString(random(), 2, 3),
-            "sortabledv_udvas", String.valueOf(random().nextInt(100)),
+            "sortabledv_udvas", String.valueOf((i + j) % 101),
             "small_i_p", String.valueOf((i + j) % 37)
             );
       }
@@ -746,8 +744,8 @@
     Map<String, Object> rspMap = mapper.readValue(rsp, HashMap.class);
     List<Map<String, Object>> docs = (List<Map<String, Object>>) Utils.getObjectByPath(rspMap, false, "/response/docs");
     assertNotNull("missing document results: " + rspMap, docs);
-    assertEquals("wrong number of unique docs", 100, docs.size());
-    for (int i = 0; i < 99; i++) {
+    assertEquals("wrong number of unique docs", 101, docs.size());
+    for (int i = 0; i < 100; i++) {
       boolean found = false;
       String si = String.valueOf(i);
       for (int j = 0; j < docs.size(); j++) {
@@ -763,14 +761,14 @@
     rspMap = mapper.readValue(rsp, HashMap.class);
     docs = (List<Map<String, Object>>) Utils.getObjectByPath(rspMap, false, "/response/docs");
     assertNotNull("missing document results: " + rspMap, docs);
-    assertEquals("wrong number of unique docs", 100, docs.size());
+    assertEquals("wrong number of unique docs", 101, docs.size());
     for (Map<String, Object> doc : docs) {
       assertNotNull("missing sum: " + doc, doc.get("sum(small_i_p)"));
-      assertEquals(18000.0, ((Number)doc.get("sum(small_i_p)")).doubleValue(), 2500.0);
+      assertEquals(18000.0, ((Number)doc.get("sum(small_i_p)")).doubleValue(), 1000.0);
       assertNotNull("missing avg: " + doc, doc.get("avg(small_i_p)"));
-      assertEquals(18.0, ((Number)doc.get("avg(small_i_p)")).doubleValue(), 2.5);
+      assertEquals(18.0, ((Number)doc.get("avg(small_i_p)")).doubleValue(), 1.0);
       assertNotNull("missing count: " + doc, doc.get("count(*)"));
-      assertEquals(1000.0, ((Number)doc.get("count(*)")).doubleValue(), 200.0);
+      assertEquals(1000.0, ((Number)doc.get("count(*)")).doubleValue(), 100.0);
     }
     // try invalid field types
     req = req("q", "*:*", "qt", "/export", "fl", "id,sortabledv,small_i_p", "sort", "sortabledv asc", "expr", "unique(input(),over=\"sortabledv\")");
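
Replacing the random() field values with (i + j) % 101 and (i + j) % 37 is what allows the tightened deltas above: the 37 possible small_i_p values 0..36 average exactly 18, and docs spread nearly evenly over the 101 sortabledv_udvas keys. A quick standalone check of that arithmetic, with hypothetical loop bounds (the test's actual batch counts are not shown in this hunk):

```java
// Quick check of the tightened assertions; numBatches/batchSize are assumed,
// since the test's NUM_BATCHES and BATCH_SIZE constants are outside this hunk.
public class ExportStatsSketch {
  public static void main(String[] args) {
    int numBatches = 101, batchSize = 1000;    // assumed sizes
    long[] count = new long[101];
    long[] sum = new long[101];
    for (int i = 0; i < numBatches; i++) {
      for (int j = 0; j < batchSize; j++) {
        int key = (i + j) % 101;               // like sortabledv_udvas
        int val = (i + j) % 37;                // like small_i_p
        count[key]++;
        sum[key] += val;
      }
    }
    // Values 0..36 average exactly 18, so per-group avg ~ 18, per-group
    // sum ~ 18 * count, and counts split nearly evenly across the 101 keys.
    System.out.printf("group 0: count=%d sum=%d avg=%.2f%n",
        count[0], sum[0], (double) sum[0] / count[0]);
  }
}
```
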
diff --git a/solr/core/src/test/org/apache/solr/pkg/TestPackages.java b/solr/core/src/test/org/apache/solr/pkg/TestPackages.java
index 2257cf3..c4bf29e 100644
--- a/solr/core/src/test/org/apache/solr/pkg/TestPackages.java
+++ b/solr/core/src/test/org/apache/solr/pkg/TestPackages.java
@@ -63,12 +63,13 @@
 import org.apache.zookeeper.data.Stat;
 import org.junit.After;
 import org.junit.Before;
+import org.junit.Ignore;
 import org.junit.Test;
 
 import static org.apache.solr.common.cloud.ZkStateReader.SOLR_PKGS_PATH;
 import static org.apache.solr.common.params.CommonParams.JAVABIN;
 import static org.apache.solr.common.params.CommonParams.WT;
-import static org.apache.solr.core.TestDynamicLoading.getFileContent;
+import static org.apache.solr.core.TestSolrConfigHandler.getFileContent;
 import static org.apache.solr.filestore.TestDistribPackageStore.readFile;
 import static org.apache.solr.filestore.TestDistribPackageStore.uploadKey;
 import static org.apache.solr.filestore.TestDistribPackageStore.checkAllNodesForFile;
@@ -639,6 +640,7 @@
   }
 
   @SuppressWarnings("rawtypes")
+  @Ignore("SOLR-14750")
   public void testSchemaPlugins() throws Exception {
     String COLLECTION_NAME = "testSchemaLoadingColl";
     System.setProperty("managed.schema.mutable", "true");
@@ -721,6 +723,11 @@
         public SolrResponse createResponse(SolrClient client) {
           return new SolrResponseBase();
         }
+
+        @Override
+        public String getRequestType() {
+          return SolrRequestType.UNSPECIFIED.toString();
+        }
       });
       verifySchemaComponent(cluster.getSolrClient(), COLLECTION_NAME, "/schema/fieldtypes/myNewTextFieldWithAnalyzerClass",
               Utils.makeMap(":fieldType:analyzer:charFilters[0]:_packageinfo_:version" ,"1.0",
diff --git a/solr/core/src/test/org/apache/solr/search/TestExtendedDismaxParser.java b/solr/core/src/test/org/apache/solr/search/TestExtendedDismaxParser.java
index 8b93ae7..f94f421 100644
--- a/solr/core/src/test/org/apache/solr/search/TestExtendedDismaxParser.java
+++ b/solr/core/src/test/org/apache/solr/search/TestExtendedDismaxParser.java
@@ -18,6 +18,7 @@
 
 import java.util.Arrays;
 import java.util.HashSet;
+import java.util.List;
 import java.util.Map;
 import java.util.Random;
 import java.util.Set;
@@ -39,6 +40,7 @@
 import org.apache.solr.common.util.Utils;
 import org.apache.solr.request.SolrQueryRequest;
 import org.apache.solr.util.SolrPluginUtils;
+import org.junit.Assert;
 import org.junit.BeforeClass;
 import org.junit.Test;
 
@@ -964,6 +966,62 @@
     );
   }
 
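+  // Verifies that edismax preserves terms containing literal newline characters (quoted terms, ranges, and local-params)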
+  @Test
+  public void testWhitespaceCharacters() throws Exception {
+    assertU(adoc("id", "whitespaceChars",
+        "cat_s", "foo\nfoo"));
+    assertU(commit());
+
+    assertQ(req("q", "(\"foo\nfoo\")",
+            "qf", "cat_s",
+            "defType", "edismax")
+        , "*[count(//doc)=1]");
+
+    assertQ(req("q", "cat_s:[\"foo\nfoo\" TO \"foo\nfoo\"]",
+            "qf", "name",
+            "defType", "edismax")
+        , "*[count(//doc)=1]");
+
+    assertQ(req("q", "cat_s:[ \"foo\nfoo\" TO \"foo\nfoo\"]",
+            "qf", "name",
+            "defType", "edismax")
+        , "*[count(//doc)=1]");
+
+    assertQ(req("q", "{!edismax qf=cat_s v='[\"foo\nfoo\" TO \"foo\nfoo\"]'}")
+        , "*[count(//doc)=1]");
+
+    assertQ(req("q", "{!edismax qf=cat_s v='[ \"foo\nfoo\" TO \"foo\nfoo\"]'}")
+        , "*[count(//doc)=1]");
+  }
+
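+  // Verifies that a term containing a double quote matches whether the quote is escaped or not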
+  @Test
+  public void testDoubleQuoteCharacters() throws Exception {
+    assertU(adoc("id", "doubleQuote",
+        "cat_s", "foo\"foo"));
+    assertU(commit());
+
+    assertQ(req("q", "cat_s:[\"foo\\\"foo\" TO \"foo\\\"foo\"]",
+            "qf", "name",
+            "defType", "edismax")
+        , "*[count(//doc)=1]");
+
+    assertQ(req("q", "cat_s:\"foo\\\"foo\"",
+            "qf", "name",
+            "defType", "edismax")
+        , "*[count(//doc)=1]");
+
+    assertQ(req("q", "cat_s:foo\\\"foo",
+            "qf", "name",
+            "defType", "edismax")
+        , "*[count(//doc)=1]");
+
+    assertQ(req("q", "cat_s:foo\"foo",
+            "qf", "name",
+            "defType", "edismax")
+        , "*[count(//doc)=1]");
+  }
+
   /**
    * verify that all reserved characters are properly escaped when being set in
    * {@link org.apache.solr.search.ExtendedDismaxQParser.Clause#val}.
@@ -1011,8 +1069,92 @@
         "*[count(//doc)=3]");    
     
   }
-  
-  /**
+
+
+  /**
+   * Repeating some of the test cases above as direct calls to splitIntoClauses
+   */
+  @Test
+  public void testSplitIntoClauses() throws Exception {
+    String query = "(\"foo\nfoo\")";
+    SolrQueryRequest request = req("q", query,
+        "qf", "cat_s",
+        "defType", "edismax");
+    ExtendedDismaxQParser parser = new ExtendedDismaxQParser(query, null, request.getParams(), request);
+    List<ExtendedDismaxQParser.Clause> clauses = parser.splitIntoClauses(query, false);
+    Assert.assertEquals(3, clauses.size());
+    assertClause(clauses.get(0), "\\(", false, true);
+    assertClause(clauses.get(1), "foo\nfoo", true, false);
+    assertClause(clauses.get(2), "\\)", false, true);
+
+    query = "cat_s:[\"foo\nfoo\" TO \"foo\nfoo\"]";
+    request = req("q", query,
+        "qf", "cat_s",
+        "defType", "edismax");
+    parser = new ExtendedDismaxQParser(query, null, request.getParams(), request);
+    clauses = parser.splitIntoClauses(query, false);
+    Assert.assertEquals(5, clauses.size());
+    assertClause(clauses.get(0), "\\[", false, true, "cat_s");
+    assertClause(clauses.get(1), "foo\nfoo", true, false);
+    assertClause(clauses.get(2), "TO", true, false);
+    assertClause(clauses.get(3), "foo\nfoo", true, false);
+    assertClause(clauses.get(4), "\\]", false, true);
+
+    query = "cat_s:[ \"foo\nfoo\" TO \"foo\nfoo\"]";
+    request = req("q", query,
+        "qf", "cat_s",
+        "defType", "edismax");
+    parser = new ExtendedDismaxQParser(query, null, request.getParams(), request);
+    clauses = parser.splitIntoClauses(query, false);
+    Assert.assertEquals(5, clauses.size());
+    assertClause(clauses.get(0), "\\[", true, true, "cat_s");
+    assertClause(clauses.get(1), "foo\nfoo", true, false);
+    assertClause(clauses.get(2), "TO", true, false);
+    assertClause(clauses.get(3), "foo\nfoo", true, false);
+    assertClause(clauses.get(4), "\\]", false, true);
+
+    String allReservedCharacters = "!():^[]{}~*?\"+-\\|&/";
+    // the backslash needs to be manually escaped (the query parser treats a raw backslash as escaping the
+    // subsequent character)
+    query = allReservedCharacters.replace("\\", "\\\\");
+
+    request = req("q", query,
+        "qf", "name",
+        "mm", "100%",
+        "defType", "edismax");
+
+    parser = new ExtendedDismaxQParser(query, null, request.getParams(), request);
+    clauses = parser.splitIntoClauses(query, false);
+    Assert.assertEquals(1, clauses.size());
+    assertClause(clauses.get(0), "\\!\\(\\)\\:\\^\\[\\]\\{\\}\\~\\*\\?\\\"\\+\\-\\\\\\|\\&\\/", false, true);
+
+    query = "foo/";
+    request = req("q", query,
+        "qf", "name",
+        "mm", "100%",
+        "defType", "edismax");
+
+    parser = new ExtendedDismaxQParser(query, null, request.getParams(), request);
+    clauses = parser.splitIntoClauses(query, false);
+    Assert.assertEquals(1, clauses.size());
+    assertClause(clauses.get(0), "foo\\/", false, true);
+  }
+
+  private static void assertClause(ExtendedDismaxQParser.Clause clause, String value, boolean hasWhitespace,
+                                   boolean hasSpecialSyntax, String field) {
+    Assert.assertEquals(value, clause.val);
+    Assert.assertEquals(hasWhitespace, clause.hasWhitespace);
+    Assert.assertEquals(hasSpecialSyntax, clause.hasSpecialSyntax);
+    Assert.assertEquals(field, clause.field);
+  }
+
+  private static void assertClause(ExtendedDismaxQParser.Clause clause, String value, boolean hasWhitespace,
+                                   boolean hasSpecialSyntax) {
+    assertClause(clause, value, hasWhitespace, hasSpecialSyntax, null);
+  }
+
+  /**
    * SOLR-3589: Edismax parser does not honor mm parameter if analyzer splits a token
    */
   public void testCJK() throws Exception {
diff --git a/solr/core/src/test/org/apache/solr/search/TestRealTimeGet.java b/solr/core/src/test/org/apache/solr/search/TestRealTimeGet.java
index b0f684e..6e1b300 100644
--- a/solr/core/src/test/org/apache/solr/search/TestRealTimeGet.java
+++ b/solr/core/src/test/org/apache/solr/search/TestRealTimeGet.java
@@ -42,7 +42,6 @@
 
   @BeforeClass
   public static void beforeClass() throws Exception {
-    randomizeUpdateLogImpl();
     initCore("solrconfig-tlog.xml","schema_latest.xml");
   }
 
diff --git a/solr/core/src/test/org/apache/solr/search/TestRecovery.java b/solr/core/src/test/org/apache/solr/search/TestRecovery.java
index 4e7e12e..6eac90a 100644
--- a/solr/core/src/test/org/apache/solr/search/TestRecovery.java
+++ b/solr/core/src/test/org/apache/solr/search/TestRecovery.java
@@ -76,7 +76,6 @@
   public void beforeTest() throws Exception {
     savedFactory = System.getProperty("solr.DirectoryFactory");
     System.setProperty("solr.directoryFactory", "org.apache.solr.core.MockFSDirectoryFactory");
-    randomizeUpdateLogImpl();
     initCore("solrconfig-tlog.xml","schema15.xml");
     
     // validate that the schema was not changed to an unexpected state
diff --git a/solr/core/src/test/org/apache/solr/search/TestStressRecovery.java b/solr/core/src/test/org/apache/solr/search/TestStressRecovery.java
index 7e20c79..45676a0 100644
--- a/solr/core/src/test/org/apache/solr/search/TestStressRecovery.java
+++ b/solr/core/src/test/org/apache/solr/search/TestStressRecovery.java
@@ -51,7 +51,6 @@
 
   @Before
   public void beforeClass() throws Exception {
-    randomizeUpdateLogImpl();
     initCore("solrconfig-tlog.xml","schema15.xml");
   }
   
diff --git a/solr/core/src/test/org/apache/solr/search/facet/TestCloudJSONFacetJoinDomain.java b/solr/core/src/test/org/apache/solr/search/facet/TestCloudJSONFacetJoinDomain.java
index 4c87746..a88ed95 100644
--- a/solr/core/src/test/org/apache/solr/search/facet/TestCloudJSONFacetJoinDomain.java
+++ b/solr/core/src/test/org/apache/solr/search/facet/TestCloudJSONFacetJoinDomain.java
@@ -208,14 +208,48 @@
       SolrException e = expectThrows(SolrException.class, () -> {
           final SolrParams req = params("q", "*:*", "json.facet",
                                         "{ x : { type:terms, field:x_s, domain: { join:"+join+" } } }");
-          @SuppressWarnings({"rawtypes"})
-          final NamedList trash = getRandClient(random()).request(new QueryRequest(req));
+          getRandClient(random()).request(new QueryRequest(req));
         });
       assertEquals(join + " -> " + e, SolrException.ErrorCode.BAD_REQUEST.code, e.code());
       assertTrue(join + " -> " + e, e.getMessage().contains("'join' domain change"));
     }
   }
 
+  public void testJoinMethodSyntax() throws Exception {
+    // 'method' value that doesn't exist at all
+    {
+      final String joinJson = "{from:foo, to:bar, method:invalidValue}";
+      SolrException e = expectThrows(SolrException.class, () -> {
+        final SolrParams req = params("q", "*:*", "json.facet",
+            "{ x : { type:terms, field:x_s, domain: { join:"+joinJson+" } } }");
+        getRandClient(random()).request(new QueryRequest(req));
+      });
+      assertEquals(joinJson + " -> " + e, SolrException.ErrorCode.BAD_REQUEST.code, e.code());
+      assertTrue(joinJson + " -> " + e, e.getMessage().contains("join method 'invalidValue' not supported"));
+    }
+
+    // 'method' value that exists on joins generally but isn't supported for join domain transforms
+    {
+      final String joinJson = "{from:foo, to:bar, method:crossCollection}";
+      SolrException e = expectThrows(SolrException.class, () -> {
+        final SolrParams req = params("q", "*:*", "json.facet",
+            "{ x : { type:terms, field:x_s, domain: { join:"+joinJson+" } } }");
+        getRandClient(random()).request(new QueryRequest(req));
+      });
+      assertEquals(joinJson + " -> " + e, SolrException.ErrorCode.BAD_REQUEST.code, e.code());
+      assertTrue(joinJson + " -> " + e, e.getMessage().contains("Join method crossCollection not supported"));
+    }
+
+    // Valid, supported method value
+    {
+      final String joinJson = "{from:" +strfield(1)+ ", to:"+strfield(1)+", method:index}";
+        final SolrParams req = params("q", "*:*", "json.facet", "{ x : { type:terms, field:x_s, domain: { join:"+joinJson+" } } }");
+        getRandClient(random()).request(new QueryRequest(req));
+        // For the purposes of this test, we're not interested in the response so much as that Solr will accept a valid 'method' value
+    }
+  }
+
   public void testSanityCheckDomainMethods() throws Exception {
     { 
       final JoinDomain empty = new JoinDomain(null, null, null);
diff --git a/solr/core/src/test/org/apache/solr/search/similarities/TestBooleanSimilarityFactory.java b/solr/core/src/test/org/apache/solr/search/similarities/TestBooleanSimilarityFactory.java
new file mode 100644
index 0000000..23e7d11
--- /dev/null
+++ b/solr/core/src/test/org/apache/solr/search/similarities/TestBooleanSimilarityFactory.java
@@ -0,0 +1,39 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.solr.search.similarities;
+
+import org.apache.lucene.search.similarities.BooleanSimilarity;
+import org.junit.BeforeClass;
+
+/**
+ * Tests {@link BooleanSimilarityFactory} when specified on a per-fieldtype basis.
+ * @see SchemaSimilarityFactory
+ */
+public class TestBooleanSimilarityFactory extends BaseSimilarityTestCase {
+
+  @BeforeClass
+  public static void beforeClass() throws Exception {
+    initCore("solrconfig-basic.xml","schema-booleansimilarity.xml");
+  }
+  
+  /** Boolean w/ default parameters */
+  public void testDefaults() throws Exception {
+    BooleanSimilarity sim = getSimilarity("text", BooleanSimilarity.class);
+    assertEquals(BooleanSimilarity.class, sim.getClass());
+  }
+
+}
diff --git a/solr/core/src/test/org/apache/solr/security/BasicAuthStandaloneTest.java b/solr/core/src/test/org/apache/solr/security/BasicAuthStandaloneTest.java
index 53dc18f..406ce58 100644
--- a/solr/core/src/test/org/apache/solr/security/BasicAuthStandaloneTest.java
+++ b/solr/core/src/test/org/apache/solr/security/BasicAuthStandaloneTest.java
@@ -184,7 +184,7 @@
     Path dataDir;
     
     /**
-     * if masterPort is null, this instance is a master -- otherwise this instance is a slave, and assumes the master is
+     * if leaderPort is null, this instance is a leader -- otherwise this instance is a follower, and assumes the leader is
      * on localhost at the specified port.
      */
     public SolrInstance(String name, Integer port) {
diff --git a/solr/core/src/test/org/apache/solr/servlet/TestRequestRateLimiter.java b/solr/core/src/test/org/apache/solr/servlet/TestRequestRateLimiter.java
new file mode 100644
index 0000000..f8023a6
--- /dev/null
+++ b/solr/core/src/test/org/apache/solr/servlet/TestRequestRateLimiter.java
@@ -0,0 +1,235 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.solr.servlet;
+
+import javax.servlet.FilterConfig;
+import java.util.ArrayList;
+import java.util.List;
+import java.util.concurrent.Callable;
+import java.util.concurrent.ExecutorService;
+import java.util.concurrent.Future;
+import java.util.concurrent.atomic.AtomicInteger;
+
+import org.apache.solr.client.solrj.SolrQuery;
+import org.apache.solr.client.solrj.SolrRequest;
+import org.apache.solr.client.solrj.impl.CloudSolrClient;
+import org.apache.solr.client.solrj.request.CollectionAdminRequest;
+import org.apache.solr.client.solrj.response.QueryResponse;
+import org.apache.solr.cloud.SolrCloudTestCase;
+import org.apache.solr.common.SolrInputDocument;
+import org.apache.solr.common.util.ExecutorUtil;
+import org.junit.BeforeClass;
+import org.junit.Test;
+
+import static org.apache.solr.servlet.RateLimitManager.DEFAULT_SLOT_ACQUISITION_TIMEOUT_MS;
+import static org.hamcrest.CoreMatchers.containsString;
+
+public class TestRequestRateLimiter extends SolrCloudTestCase {
+  private final static String FIRST_COLLECTION = "c1";
+  private final static String SECOND_COLLECTION = "c2";
+
+  @BeforeClass
+  public static void setupCluster() throws Exception {
+    configureCluster(1).addConfig(FIRST_COLLECTION, configset("cloud-minimal")).configure();
+  }
+
+  @Test
+  public void testConcurrentQueries() throws Exception {
+    CloudSolrClient client = cluster.getSolrClient();
+    client.setDefaultCollection(FIRST_COLLECTION);
+
+    CollectionAdminRequest.createCollection(FIRST_COLLECTION, 1, 1).process(client);
+    cluster.waitForActiveCollection(FIRST_COLLECTION, 1, 1);
+
+    SolrDispatchFilter solrDispatchFilter = cluster.getJettySolrRunner(0).getSolrDispatchFilter();
+
+    RequestRateLimiter.RateLimiterConfig rateLimiterConfig = new RequestRateLimiter.RateLimiterConfig(SolrRequest.SolrRequestType.QUERY,
+        true, 1, DEFAULT_SLOT_ACQUISITION_TIMEOUT_MS, 5 /* allowedRequests */, true /* isSlotBorrowing */);
+    // We are fine with a null FilterConfig here since we ensure that MockBuilder never invokes its parent
+    RateLimitManager.Builder builder = new MockBuilder(null /* dummy FilterConfig */, new MockRequestRateLimiter(rateLimiterConfig, 5));
+    RateLimitManager rateLimitManager = builder.build();
+
+    solrDispatchFilter.replaceRateLimitManager(rateLimitManager);
+
+    int numDocs = TEST_NIGHTLY ? 10000 : 100;
+
+    processTest(client, numDocs, 350 /* number of queries */);
+
+    MockRequestRateLimiter mockQueryRateLimiter = (MockRequestRateLimiter) rateLimitManager.getRequestRateLimiter(SolrRequest.SolrRequestType.QUERY);
+
+    assertEquals(350, mockQueryRateLimiter.incomingRequestCount.get());
+
+    assertTrue(mockQueryRateLimiter.acceptedNewRequestCount.get() > 0);
+    assertTrue((mockQueryRateLimiter.acceptedNewRequestCount.get() == mockQueryRateLimiter.incomingRequestCount.get()
+        || mockQueryRateLimiter.rejectedRequestCount.get() > 0));
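+    // each incoming request is either accepted or rejected, so the two counters must account for the total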
+    assertEquals(mockQueryRateLimiter.incomingRequestCount.get(),
+        mockQueryRateLimiter.acceptedNewRequestCount.get() + mockQueryRateLimiter.rejectedRequestCount.get());
+  }
+
+  @Nightly
+  public void testSlotBorrowing() throws Exception {
+    CloudSolrClient client = cluster.getSolrClient();
+    client.setDefaultCollection(SECOND_COLLECTION);
+
+    CollectionAdminRequest.createCollection(SECOND_COLLECTION, 1, 1).process(client);
+    cluster.waitForActiveCollection(SECOND_COLLECTION, 1, 1);
+
+    SolrDispatchFilter solrDispatchFilter = cluster.getJettySolrRunner(0).getSolrDispatchFilter();
+
+    RequestRateLimiter.RateLimiterConfig queryRateLimiterConfig = new RequestRateLimiter.RateLimiterConfig(SolrRequest.SolrRequestType.QUERY,
+        true, 1, DEFAULT_SLOT_ACQUISITION_TIMEOUT_MS, 5 /* allowedRequests */, true /* isSlotBorrowing */);
+    RequestRateLimiter.RateLimiterConfig indexRateLimiterConfig = new RequestRateLimiter.RateLimiterConfig(SolrRequest.SolrRequestType.UPDATE,
+        true, 1, DEFAULT_SLOT_ACQUISITION_TIMEOUT_MS, 5 /* allowedRequests */, true /* isSlotBorrowing */);
+    // We are fine with a null FilterConfig here since we ensure that MockBuilder never invokes its parent
+    RateLimitManager.Builder builder = new MockBuilder(null /*dummy FilterConfig */, new MockRequestRateLimiter(queryRateLimiterConfig, 5), new MockRequestRateLimiter(indexRateLimiterConfig, 5));
+    RateLimitManager rateLimitManager = builder.build();
+
+    solrDispatchFilter.replaceRateLimitManager(rateLimitManager);
+
+    int numDocs = 10000;
+
+    processTest(client, numDocs, 400 /* Number of queries */);
+
+    MockRequestRateLimiter mockIndexRateLimiter = (MockRequestRateLimiter) rateLimitManager.getRequestRateLimiter(SolrRequest.SolrRequestType.UPDATE);
+
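+    // the workload above is query-only, so queries should exhaust their own slots and borrow from the idle update limiter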
+    assertTrue("Incoming slots borrowed count did not match. Expected > 0  incoming " + mockIndexRateLimiter.borrowedSlotCount.get(),
+        mockIndexRateLimiter.borrowedSlotCount.get() > 0);
+  }
+
+  private void processTest(CloudSolrClient client, int numDocuments, int numQueries) throws Exception {
+
+    for (int i = 0; i < numDocuments; i++) {
+      SolrInputDocument doc = new SolrInputDocument();
+
+      doc.setField("id", i);
+      doc.setField("text", "foo");
+      client.add(doc);
+    }
+
+    client.commit();
+
+    ExecutorService executor = ExecutorUtil.newMDCAwareCachedThreadPool("threadpool");
+    List<Callable<Boolean>> callableList = new ArrayList<>();
+    List<Future<Boolean>> futures;
+
+    try {
+      for (int i = 0; i < numQueries; i++) {
+        callableList.add(() -> {
+          try {
+            QueryResponse response = client.query(new SolrQuery("*:*"));
+
+            assertEquals(numDocuments, response.getResults().getNumFound());
+          } catch (Exception e) {
+            throw new RuntimeException(e.getMessage());
+          }
+
+          return true;
+        });
+      }
+
+      futures = executor.invokeAll(callableList);
+
+      for (Future<?> future : futures) {
+        try {
+          assertTrue(future.get() != null);
+        } catch (Exception e) {
+          assertThat(e.getMessage(), containsString("non ok status: 429, message:Too Many Requests"));
+        }
+      }
+    } finally {
+      executor.shutdown();
+    }
+  }
+
+  private static class MockRequestRateLimiter extends RequestRateLimiter {
+    final AtomicInteger incomingRequestCount;
+    final AtomicInteger acceptedNewRequestCount;
+    final AtomicInteger rejectedRequestCount;
+    final AtomicInteger borrowedSlotCount;
+
+    private final int maxCount;
+
+    public MockRequestRateLimiter(RateLimiterConfig config, final int maxCount) {
+      super(config);
+
+      this.incomingRequestCount = new AtomicInteger(0);
+      this.acceptedNewRequestCount = new AtomicInteger(0);
+      this.rejectedRequestCount = new AtomicInteger(0);
+      this.borrowedSlotCount = new AtomicInteger(0);
+      this.maxCount = maxCount;
+    }
+
+    @Override
+    public SlotMetadata handleRequest() throws InterruptedException {
+      incomingRequestCount.getAndIncrement();
+
+      SlotMetadata response = super.handleRequest();
+
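+      // a non-null slot means the request was admitted; null means it was rejected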
+      if (response != null) {
+        acceptedNewRequestCount.getAndIncrement();
+      } else {
+        rejectedRequestCount.getAndIncrement();
+      }
+
+      return response;
+    }
+
+    @Override
+    public SlotMetadata allowSlotBorrowing() throws InterruptedException {
+      SlotMetadata result = super.allowSlotBorrowing();
+
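+      // treat a releasable slot as evidence that a slot was borrowed from another pool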
+      if (result.isReleasable()) {
+        borrowedSlotCount.incrementAndGet();
+      }
+
+      return result;
+    }
+  }
+
+  private static class MockBuilder extends RateLimitManager.Builder {
+    private final RequestRateLimiter queryRequestRateLimiter;
+    private final RequestRateLimiter indexRequestRateLimiter;
+
+    public MockBuilder(FilterConfig config, RequestRateLimiter queryRequestRateLimiter) {
+      super(config);
+
+      this.queryRequestRateLimiter = queryRequestRateLimiter;
+      this.indexRequestRateLimiter = null;
+    }
+
+    public MockBuilder(FilterConfig config, RequestRateLimiter queryRequestRateLimiter, RequestRateLimiter indexRequestRateLimiter) {
+      super(config);
+
+      this.queryRequestRateLimiter = queryRequestRateLimiter;
+      this.indexRequestRateLimiter = indexRequestRateLimiter;
+    }
+
+    @Override
+    public RateLimitManager build() {
+      RateLimitManager rateLimitManager = new RateLimitManager();
+
+      rateLimitManager.registerRequestRateLimiter(queryRequestRateLimiter, SolrRequest.SolrRequestType.QUERY);
+
+      if (indexRequestRateLimiter != null) {
+        rateLimitManager.registerRequestRateLimiter(indexRequestRateLimiter, SolrRequest.SolrRequestType.UPDATE);
+      }
+
+      return rateLimitManager;
+    }
+  }
+}
diff --git a/solr/core/src/test/org/apache/solr/update/CdcrUpdateLogTest.java b/solr/core/src/test/org/apache/solr/update/CdcrUpdateLogTest.java
deleted file mode 100644
index c1a9731..0000000
--- a/solr/core/src/test/org/apache/solr/update/CdcrUpdateLogTest.java
+++ /dev/null
@@ -1,783 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.solr.update;
-
-import java.io.File;
-import java.io.IOException;
-import java.nio.file.Files;
-import java.util.ArrayDeque;
-import java.util.Deque;
-import java.util.LinkedList;
-import java.util.List;
-import java.util.Map;
-import java.util.concurrent.Semaphore;
-import java.util.concurrent.TimeUnit;
-
-import org.apache.lucene.util.LuceneTestCase.Nightly;
-import org.apache.solr.SolrTestCaseJ4;
-import org.apache.solr.common.SolrInputDocument;
-import org.apache.solr.request.SolrQueryRequest;
-import org.apache.solr.util.TestInjection;
-import org.junit.AfterClass;
-import org.junit.BeforeClass;
-import org.junit.Test;
-
-import static org.apache.solr.common.util.Utils.fromJSONString;
-
-@Nightly
-public class CdcrUpdateLogTest extends SolrTestCaseJ4 {
-
-  private static int timeout = 60;  // acquire timeout in seconds.  change this to a huge number when debugging to prevent threads from advancing.
-
-  // TODO: fix this test to not require FSDirectory
-  static String savedFactory;
-
-  @BeforeClass
-  public static void beforeClass() throws Exception {
-    savedFactory = System.getProperty("solr.DirectoryFactory");
-    System.setProperty("solr.directoryFactory", "org.apache.solr.core.MockFSDirectoryFactory");
-    initCore("solrconfig-cdcrupdatelog.xml", "schema15.xml");
-  }
-
-  @AfterClass
-  public static void afterClass() {
-    if (savedFactory == null) {
-      System.clearProperty("solr.directoryFactory");
-    } else {
-      System.setProperty("solr.directoryFactory", savedFactory);
-    }
-  }
-
-  private void clearCore() throws IOException {
-    clearIndex();
-    assertU(commit());
-
-    UpdateLog ulog = h.getCore().getUpdateHandler().getUpdateLog();
-    File logDir = new File(h.getCore().getUpdateHandler().getUpdateLog().getLogDir());
-
-    h.close();
-
-    String[] files = ulog.getLogList(logDir);
-    for (String file : files) {
-
-      File toDelete = new File(logDir, file);
-      Files.delete(toDelete.toPath()); // Should we really error out here?
-    }
-
-    assertEquals(0, ulog.getLogList(logDir).length);
-
-    createCore();
-  }
-
-  private void deleteByQuery(String q) throws Exception {
-    deleteByQueryAndGetVersion(q, null);
-  }
-
-  private void addDocs(int nDocs, int start, LinkedList<Long> versions) throws Exception {
-    for (int i = 0; i < nDocs; i++) {
-      versions.addFirst(addAndGetVersion(sdoc("id", Integer.toString(start + i)), null));
-    }
-  }
-
-  private static Long getVer(SolrQueryRequest req) throws Exception {
-    @SuppressWarnings({"rawtypes"})
-    Map rsp = (Map) fromJSONString(JQ(req));
-    @SuppressWarnings({"rawtypes"})
-    Map doc = null;
-    if (rsp.containsKey("doc")) {
-      doc = (Map) rsp.get("doc");
-    } else if (rsp.containsKey("docs")) {
-      @SuppressWarnings({"rawtypes"})
-      List lst = (List) rsp.get("docs");
-      if (lst.size() > 0) {
-        doc = (Map) lst.get(0);
-      }
-    } else if (rsp.containsKey("response")) {
-      @SuppressWarnings({"rawtypes"})
-      Map responseMap = (Map) rsp.get("response");
-      @SuppressWarnings({"rawtypes"})
-      List lst = (List) responseMap.get("docs");
-      if (lst.size() > 0) {
-        doc = (Map) lst.get(0);
-      }
-    }
-
-    if (doc == null) return null;
-
-    return (Long) doc.get("_version_");
-  }
-
-  @Test
-  public void testLogReaderNext() throws Exception {
-    this.clearCore();
-
-    int start = 0;
-
-    UpdateLog ulog = h.getCore().getUpdateHandler().getUpdateLog();
-    CdcrUpdateLog.CdcrLogReader reader = ((CdcrUpdateLog) ulog).newLogReader(); // test reader on empty updates log
-
-    LinkedList<Long> versions = new LinkedList<>();
-    addDocs(10, start, versions);
-    start += 10;
-    assertU(commit());
-
-    addDocs(11, start, versions);
-    start += 11;
-    assertU(commit());
-
-    for (int i = 0; i < 10; i++) { // 10 adds
-      assertNotNull(reader.next());
-    }
-    Object o = reader.next();
-    assertNotNull(o);
-
-    @SuppressWarnings({"rawtypes"})
-    List entry = (List) o;
-    int opAndFlags = (Integer) entry.get(0);
-    assertEquals(UpdateLog.COMMIT, opAndFlags & UpdateLog.OPERATION_MASK);
-
-    for (int i = 0; i < 11; i++) { // 11 adds
-      assertNotNull(reader.next());
-    }
-    o = reader.next();
-    assertNotNull(o);
-
-    entry = (List) o;
-    opAndFlags = (Integer) entry.get(0);
-    assertEquals(UpdateLog.COMMIT, opAndFlags & UpdateLog.OPERATION_MASK);
-
-    assertNull(reader.next());
-
-    // add a new tlog after having exhausted the reader
-
-    addDocs(10, start, versions);
-    start += 10;
-    assertU(commit());
-
-    // the reader should pick up the new tlog
-
-    for (int i = 0; i < 11; i++) { // 10 adds + 1 commit
-      assertNotNull(reader.next());
-    }
-    assertNull(reader.next());
-  }
-
-  /**
-   * Check the seek method of the log reader.
-   */
-  @Test
-  public void testLogReaderSeek() throws Exception {
-    this.clearCore();
-
-    int start = 0;
-
-    UpdateLog ulog = h.getCore().getUpdateHandler().getUpdateLog();
-    CdcrUpdateLog.CdcrLogReader reader1 = ((CdcrUpdateLog) ulog).newLogReader();
-    CdcrUpdateLog.CdcrLogReader reader2 = ((CdcrUpdateLog) ulog).newLogReader();
-    CdcrUpdateLog.CdcrLogReader reader3 = ((CdcrUpdateLog) ulog).newLogReader();
-
-    LinkedList<Long> versions = new LinkedList<>();
-    addDocs(10, start, versions);
-    start += 10;
-    assertU(commit());
-
-    addDocs(11, start, versions);
-    start += 11;
-    assertU(commit());
-
-    addDocs(10, start, versions);
-    start += 10;
-    assertU(commit());
-
-    // Test case where target version is equal to startVersion of tlog file
-    long targetVersion = getVer(req("q", "id:10"));
-
-    assertTrue(reader1.seek(targetVersion));
-    Object o = reader1.next();
-    assertNotNull(o);
-    @SuppressWarnings({"rawtypes"})
-    List entry = (List) o;
-    long version = (Long) entry.get(1);
-
-    assertEquals(targetVersion, version);
-
-    assertNotNull(reader1.next());
-
-    // test case where target version is greater than the startVersion of tlog file
-    targetVersion = getVer(req("q", "id:26"));
-
-    assertTrue(reader2.seek(targetVersion));
-    o = reader2.next();
-    assertNotNull(o);
-    entry = (List) o;
-    version = (Long) entry.get(1);
-
-    assertEquals(targetVersion, version);
-
-    assertNotNull(reader2.next());
-
-    // test case where target version is less than the startVersion of the oldest tlog file
-    targetVersion = getVer(req("q", "id:0")) - 1;
-
-    assertFalse(reader3.seek(targetVersion));
-  }
-
-  /**
-   * Check that the log reader is able to read the new tlog
-   * and pick up new entries as they appear.
-   */
-  @Test
-  public void testLogReaderNextOnNewTLog() throws Exception {
-    this.clearCore();
-
-    int start = 0;
-
-    UpdateLog ulog = h.getCore().getUpdateHandler().getUpdateLog();
-    CdcrUpdateLog.CdcrLogReader reader = ((CdcrUpdateLog) ulog).newLogReader();
-
-    LinkedList<Long> versions = new LinkedList<>();
-    addDocs(10, start, versions);
-    start += 10;
-    assertU(commit());
-
-    addDocs(11, start, versions);
-    start += 11;
-
-    for (int i = 0; i < 22; i++) { // 21 adds + 1 commit
-      assertNotNull(reader.next());
-    }
-
-    // we should have reached the end of the new tlog
-    assertNull(reader.next());
-
-    addDocs(5, start, versions);
-    start += 5;
-
-    // the reader should now pick up the new updates
-
-    for (int i = 0; i < 5; i++) { // 5 adds
-      assertNotNull(reader.next());
-    }
-
-    assertNull(reader.next());
-  }
-
-  @Test
-  public void testRemoveOldLogs() throws Exception {
-    this.clearCore();
-
-    UpdateLog ulog = h.getCore().getUpdateHandler().getUpdateLog();
-    File logDir = new File(h.getCore().getUpdateHandler().getUpdateLog().getLogDir());
-
-    int start = 0;
-    int maxReq = 50;
-
-    LinkedList<Long> versions = new LinkedList<>();
-    addDocs(10, start, versions);
-    start += 10;
-    assertJQ(req("qt", "/get", "getVersions", "" + maxReq), "/versions==" + versions.subList(0, Math.min(maxReq, start)));
-    assertU(commit());
-    assertJQ(req("qt", "/get", "getVersions", "" + maxReq), "/versions==" + versions.subList(0, Math.min(maxReq, start)));
-
-    addDocs(10, start, versions);
-    start += 10;
-    assertJQ(req("qt", "/get", "getVersions", "" + maxReq), "/versions==" + versions.subList(0, Math.min(maxReq, start)));
-    assertU(commit());
-    assertJQ(req("qt", "/get", "getVersions", "" + maxReq), "/versions==" + versions.subList(0, Math.min(maxReq, start)));
-
-    assertEquals(2, ulog.getLogList(logDir).length);
-
-    // Get a cdcr log reader to initialise a log pointer
-    CdcrUpdateLog.CdcrLogReader reader = ((CdcrUpdateLog) ulog).newLogReader();
-
-    addDocs(105, start, versions);
-    start += 105;
-    assertJQ(req("qt", "/get", "getVersions", "" + maxReq), "/versions==" + versions.subList(0, Math.min(maxReq, start)));
-    assertU(commit());
-    assertJQ(req("qt", "/get", "getVersions", "" + maxReq), "/versions==" + versions.subList(0, Math.min(maxReq, start)));
-
-    // the previous two tlogs should not be removed
-    assertEquals(3, ulog.getLogList(logDir).length);
-
-    // move the pointer past the first tlog
-    for (int i = 0; i <= 11; i++) { // 10 adds + 1 commit
-      assertNotNull(reader.next());
-    }
-
-    addDocs(10, start, versions);
-    start += 10;
-    assertJQ(req("qt", "/get", "getVersions", "" + maxReq), "/versions==" + versions.subList(0, Math.min(maxReq, start)));
-    assertU(commit());
-    assertJQ(req("qt", "/get", "getVersions", "" + maxReq), "/versions==" + versions.subList(0, Math.min(maxReq, start)));
-
-    // the first tlog should be removed
-    assertEquals(3, ulog.getLogList(logDir).length);
-
-    h.close();
-    createCore();
-
-    ulog = h.getCore().getUpdateHandler().getUpdateLog();
-
-    addDocs(105, start, versions);
-    start += 105;
-    assertJQ(req("qt", "/get", "getVersions", "" + maxReq), "/versions==" + versions.subList(0, Math.min(maxReq, start)));
-    assertU(commit());
-    assertJQ(req("qt", "/get", "getVersions", "" + maxReq), "/versions==" + versions.subList(0, Math.min(maxReq, start)));
-
-    // previous tlogs should be gone now
-    assertEquals(1, ulog.getLogList(logDir).length);
-  }
-
-  /**
-   * Check that the removal of old logs is taking into consideration
-   * multiple log pointers. Check also that the removal takes into consideration the
-   * numRecordsToKeep limit, even if the log pointers are ahead.
-   */
-  @Test
-  public void testRemoveOldLogsMultiplePointers() throws Exception {
-    this.clearCore();
-
-    UpdateLog ulog = h.getCore().getUpdateHandler().getUpdateLog();
-    File logDir = new File(h.getCore().getUpdateHandler().getUpdateLog().getLogDir());
-    CdcrUpdateLog.CdcrLogReader reader1 = ((CdcrUpdateLog) ulog).newLogReader();
-    CdcrUpdateLog.CdcrLogReader reader2 = ((CdcrUpdateLog) ulog).newLogReader();
-
-    int start = 0;
-
-    LinkedList<Long> versions = new LinkedList<>();
-    addDocs(10, start, versions);
-    start += 10;
-    assertU(commit());
-
-    addDocs(10, start, versions);
-    start += 10;
-    assertU(commit());
-
-    addDocs(105, start, versions);
-    start += 105;
-    assertU(commit());
-
-    // the previous two tlogs should not be removed
-    assertEquals(3, ulog.getLogList(logDir).length);
-
-    // move the first pointer past the first tlog
-    for (int i = 0; i <= 11; i++) { // 10 adds + 1 commit
-      assertNotNull(reader1.next());
-    }
-
-    addDocs(10, start, versions);
-    start += 10;
-    assertU(commit());
-
-    // the first tlog should not be removed
-    assertEquals(4, ulog.getLogList(logDir).length);
-
-    // move the second pointer past the first tlog
-    for (int i = 0; i <= 11; i++) { // 10 adds + 1 commit
-      assertNotNull(reader2.next());
-    }
-
-    addDocs(10, start, versions);
-    start += 10;
-    assertU(commit());
-
-    // the first tlog should be removed
-    assertEquals(4, ulog.getLogList(logDir).length);
-
-    // exhaust the readers
-    while (reader1.next() != null) {
-    }
-    while (reader2.next() != null) {
-    }
-
-    // the readers should point to the new tlog
-    // now add enough documents to trigger the numRecordsToKeep limit
-
-    addDocs(80, start, versions);
-    start += 80;
-    assertU(commit());
-
-    // the update log should keep the last 3 tlogs, which add up to 100 records
-    assertEquals(3, ulog.getLogList(logDir).length);
-  }
-
-  /**
-   * Check that the output stream of an uncapped tlog is correctly reopen
-   * and that the commit is written during recovery.
-   */
-  @Test
-  public void testClosingOutputStreamAfterLogReplay() throws Exception {
-    this.clearCore();
-    try {
-      TestInjection.skipIndexWriterCommitOnClose = true;
-      final Semaphore logReplay = new Semaphore(0);
-      final Semaphore logReplayFinish = new Semaphore(0);
-
-      UpdateLog.testing_logReplayHook = () -> {
-        try {
-          assertTrue(logReplay.tryAcquire(timeout, TimeUnit.SECONDS));
-        } catch (Exception e) {
-          throw new RuntimeException(e);
-        }
-      };
-
-      UpdateLog.testing_logReplayFinishHook = () -> logReplayFinish.release();
-
-      Deque<Long> versions = new ArrayDeque<>();
-      versions.addFirst(addAndGetVersion(sdoc("id", "A11"), null));
-      versions.addFirst(addAndGetVersion(sdoc("id", "A12"), null));
-      versions.addFirst(addAndGetVersion(sdoc("id", "A13"), null));
-
-      assertJQ(req("q", "*:*"), "/response/numFound==0");
-
-      assertJQ(req("qt", "/get", "getVersions", "" + versions.size()), "/versions==" + versions);
-
-      h.close();
-      createCore();
-      // Solr should kick this off now
-      // h.getCore().getUpdateHandler().getUpdateLog().recoverFromLog();
-
-      // verify that previous close didn't do a commit
-      // recovery should be blocked by our hook
-      assertJQ(req("q", "*:*"), "/response/numFound==0");
-
-      // unblock recovery
-      logReplay.release(1000);
-
-      // wait until recovery has finished
-      assertTrue(logReplayFinish.tryAcquire(timeout, TimeUnit.SECONDS));
-
-      assertJQ(req("q", "*:*"), "/response/numFound==3");
-
-      // The transaction log should have written a commit and close its output stream
-      UpdateLog ulog = h.getCore().getUpdateHandler().getUpdateLog();
-      assertEquals(0, ulog.logs.peekLast().refcount.get());
-      assertNull(ulog.logs.peekLast().channel);
-
-    ulog.logs.peekLast().incref(); // reopen the output stream to check if it ends with a commit
-      assertTrue(ulog.logs.peekLast().endsWithCommit());
-      ulog.logs.peekLast().decref();
-    } finally {
-      TestInjection.skipIndexWriterCommitOnClose = false; // reset
-      UpdateLog.testing_logReplayHook = null;
-      UpdateLog.testing_logReplayFinishHook = null;
-    }
-  }
-
-  /**
-   * Check the buffering of the old tlogs
-   */
-  @Test
-  public void testBuffering() throws Exception {
-    this.clearCore();
-
-    CdcrUpdateLog ulog = (CdcrUpdateLog) h.getCore().getUpdateHandler().getUpdateLog();
-    File logDir = new File(h.getCore().getUpdateHandler().getUpdateLog().getLogDir());
-
-    int start = 0;
-
-    LinkedList<Long> versions = new LinkedList<>();
-    addDocs(10, start, versions);
-    start += 10;
-    assertU(commit());
-
-    addDocs(10, start, versions);
-    start += 10;
-    assertU(commit());
-
-    addDocs(105, start, versions);
-    start += 105;
-    assertU(commit());
-
-    // the first two tlogs should have been removed
-    assertEquals(1, ulog.getLogList(logDir).length);
-
-    // enable buffer
-    ulog.enableBuffer();
-
-    addDocs(10, start, versions);
-    start += 10;
-    assertU(commit());
-
-    addDocs(10, start, versions);
-    start += 10;
-    assertU(commit());
-
-    addDocs(105, start, versions);
-    start += 105;
-    assertU(commit());
-
-    // no tlog should have been removed
-    assertEquals(4, ulog.getLogList(logDir).length);
-
-    // disable buffer
-    ulog.disableBuffer();
-
-    addDocs(10, start, versions);
-    start += 10;
-    assertU(commit());
-
-    // old tlogs should have been removed
-    assertEquals(2, ulog.getLogList(logDir).length);
-  }
-
-
-  @Test
-  public void testSubReader() throws Exception {
-    this.clearCore();
-
-    CdcrUpdateLog ulog = (CdcrUpdateLog) h.getCore().getUpdateHandler().getUpdateLog();
-    File logDir = new File(h.getCore().getUpdateHandler().getUpdateLog().getLogDir());
-    CdcrUpdateLog.CdcrLogReader reader = ulog.newLogReader();
-
-    int start = 0;
-
-    LinkedList<Long> versions = new LinkedList<>();
-    addDocs(10, start, versions);
-    start += 10;
-    assertU(commit());
-
-    addDocs(10, start, versions);
-    start += 10;
-    assertU(commit());
-
-    assertEquals(2, ulog.getLogList(logDir).length);
-
-    // start to read the first tlog
-    for (int i = 0; i < 10; i++) {
-      assertNotNull(reader.next());
-    }
-
-    // instantiate a sub reader, and finish to read the first tlog (commit operation), plus start to read the
-    // second tlog (first five adds)
-    CdcrUpdateLog.CdcrLogReader subReader = reader.getSubReader();
-    for (int i = 0; i < 6; i++) {
-      assertNotNull(subReader.next());
-    }
-
-    // Five adds + one commit
-    assertEquals(6, subReader.getNumberOfRemainingRecords());
-
-    // Generate a new tlog
-    addDocs(105, start, versions);
-    start += 105;
-    assertU(commit());
-
-    // Even if the subreader is past the first tlog, the first tlog should not have been removed
-    // since the parent reader is still pointing to it
-    assertEquals(3, ulog.getLogList(logDir).length);
-
-    // fast forward the parent reader with the subreader
-    reader.forwardSeek(subReader);
-    subReader.close();
-
-    // After the fast forward, the parent reader should be positioned on doc 15
-    @SuppressWarnings({"rawtypes"})
-    List o = (List) reader.next();
-    assertNotNull(o);
-    assertTrue("Expected SolrInputDocument but got" + o.toString() ,o.get(3) instanceof SolrInputDocument);
-    assertEquals("15", ((SolrInputDocument) o.get(3)).getFieldValue("id"));
-
-    // Finish to read the second tlog, and start to read the third one
-    for (int i = 0; i < 6; i++) {
-      assertNotNull(reader.next());
-    }
-
-    assertEquals(105, reader.getNumberOfRemainingRecords());
-
-    // Generate a new tlog to activate tlog cleaning
-    addDocs(10, start, versions);
-    start += 10;
-    assertU(commit());
-
-    // If the parent reader was correctly fast forwarded, it should be on the third tlog, and the first two should
-    // have been removed.
-    assertEquals(2, ulog.getLogList(logDir).length);
-  }
-
-  /**
-   * Check that the reader is correctly reset to its last position
-   */
-  @Test
-  public void testResetToLastPosition() throws Exception {
-    this.clearCore();
-
-    CdcrUpdateLog ulog = (CdcrUpdateLog) h.getCore().getUpdateHandler().getUpdateLog();
-    File logDir = new File(h.getCore().getUpdateHandler().getUpdateLog().getLogDir());
-    CdcrUpdateLog.CdcrLogReader reader = ulog.newLogReader();
-
-    int start = 0;
-
-    LinkedList<Long> versions = new LinkedList<>();
-    addDocs(10, start, versions);
-    start += 10;
-    assertU(commit());
-
-    addDocs(10, start, versions);
-    start += 10;
-    assertU(commit());
-
-    assertEquals(2, ulog.getLogList(logDir).length);
-
-    for (int i = 0; i < 22; i++) {
-      Object o = reader.next();
-      assertNotNull(o);
-      // reset to last position
-      reader.resetToLastPosition();
-      // we should read the same update operation, i.e., same version number
-      assertEquals(((List) o).get(1), ((List) reader.next()).get(1));
-    }
-    assertNull(reader.next());
-  }
-
-  /**
-   * Check that the reader is correctly reset to its last position
-   */
-  @Test
-  public void testGetNumberOfRemainingRecords() throws Exception {
-    try {
-      TestInjection.skipIndexWriterCommitOnClose = true;
-      final Semaphore logReplayFinish = new Semaphore(0);
-      UpdateLog.testing_logReplayFinishHook = () -> logReplayFinish.release();
-
-      this.clearCore();
-
-      int start = 0;
-
-      LinkedList<Long> versions = new LinkedList<>();
-      addDocs(10, start, versions);
-      start += 10;
-      assertU(commit());
-
-      addDocs(10, start, versions);
-      start += 10;
-
-      h.close();
-      logReplayFinish.drainPermits();
-      createCore();
-
-      // At this stage, we have re-opened a capped tlog, and an uncapped tlog.
-      // check that the number of remaining records is correctly computed in these two cases
-
-      CdcrUpdateLog ulog = (CdcrUpdateLog) h.getCore().getUpdateHandler().getUpdateLog();
-      CdcrUpdateLog.CdcrLogReader reader = ulog.newLogReader();
-
-      // wait for the replay to finish
-      assertTrue(logReplayFinish.tryAcquire(timeout, TimeUnit.SECONDS));
-
-      // 20 records + 2 commits
-      assertEquals(22, reader.getNumberOfRemainingRecords());
-
-      for (int i = 0; i < 22; i++) {
-        Object o = reader.next();
-        assertNotNull(o);
-        assertEquals(22 - (i + 1), reader.getNumberOfRemainingRecords());
-      }
-      assertNull(reader.next());
-      assertEquals(0, reader.getNumberOfRemainingRecords());
-
-      // It should pick up the new tlog files
-      addDocs(10, start, versions);
-      assertEquals(10, reader.getNumberOfRemainingRecords());
-    } finally {
-      TestInjection.skipIndexWriterCommitOnClose = false; // reset
-      UpdateLog.testing_logReplayFinishHook = null;
-    }
-  }
-
-  /**
-   * Check that the initialisation of the log reader is picking up the tlog file that is currently being
-   * written.
-   */
-  @Test
-  public void testLogReaderInitOnNewTlog() throws Exception {
-    this.clearCore();
-
-    int start = 0;
-
-    // Start to index some documents to instantiate the new tlog
-    LinkedList<Long> versions = new LinkedList<>();
-    addDocs(10, start, versions);
-    start += 10;
-
-    // Create the reader after the instantiation of the new tlog
-    UpdateLog ulog = h.getCore().getUpdateHandler().getUpdateLog();
-    CdcrUpdateLog.CdcrLogReader reader = ((CdcrUpdateLog) ulog).newLogReader();
-
-    // Continue to index documents and commits
-    addDocs(11, start, versions);
-    start += 11;
-    assertU(commit());
-
-    // check that the log reader was initialised with the new tlog
-    for (int i = 0; i < 22; i++) { // 21 adds + 1 commit
-      assertNotNull(reader.next());
-    }
-
-    // we should have reached the end of the new tlog
-    assertNull(reader.next());
-  }
-
-  /**
-   * Check that the absolute version number is used for the update log index and for the last entry read
-   */
-  @Test
-  public void testAbsoluteLastVersion() throws Exception {
-    this.clearCore();
-
-    CdcrUpdateLog ulog = (CdcrUpdateLog) h.getCore().getUpdateHandler().getUpdateLog();
-    File logDir = new File(h.getCore().getUpdateHandler().getUpdateLog().getLogDir());
-    CdcrUpdateLog.CdcrLogReader reader = ulog.newLogReader();
-
-    int start = 0;
-
-    LinkedList<Long> versions = new LinkedList<>();
-    addDocs(10, start, versions);
-    start += 10;
-    deleteByQuery("*:*");
-    assertU(commit());
-
-    deleteByQuery("*:*");
-    addDocs(10, start, versions);
-    start += 10;
-    assertU(commit());
-
-    assertEquals(2, ulog.getLogList(logDir).length);
-
-    for (long version : ulog.getStartingVersions()) {
-      assertTrue(version > 0);
-    }
-
-    for (int i = 0; i < 10; i++) {
-      reader.next();
-    }
-
-    // first delete
-    Object o = reader.next();
-    assertTrue((Long) ((List) o).get(1) < 0);
-    assertTrue(reader.getLastVersion() > 0);
-
-    reader.next(); // commit
-
-    // second delete
-    o = reader.next();
-    assertTrue((Long) ((List) o).get(1) < 0);
-    assertTrue(reader.getLastVersion() > 0);
-  }
-
-}
-
diff --git a/solr/core/src/test/org/apache/solr/update/DirectUpdateHandlerTest.java b/solr/core/src/test/org/apache/solr/update/DirectUpdateHandlerTest.java
index b3d3a5c..a839f5d 100644
--- a/solr/core/src/test/org/apache/solr/update/DirectUpdateHandlerTest.java
+++ b/solr/core/src/test/org/apache/solr/update/DirectUpdateHandlerTest.java
@@ -27,6 +27,7 @@
 import com.codahale.metrics.Meter;
 import com.codahale.metrics.Metric;
 import org.apache.lucene.index.DirectoryReader;
+import org.apache.lucene.index.IndexFileNames;
 import org.apache.lucene.store.Directory;
 import org.apache.solr.SolrTestCaseJ4;
 import org.apache.solr.common.params.CommonParams;
@@ -38,6 +39,7 @@
 import org.apache.solr.request.LocalSolrQueryRequest;
 import org.apache.solr.request.SolrQueryRequest;
 import org.apache.solr.search.SolrIndexSearcher;
+import org.apache.solr.util.LogLevel;
 import org.junit.AfterClass;
 import org.junit.Before;
 import org.junit.BeforeClass;
@@ -47,10 +49,7 @@
 
 import static org.apache.solr.common.params.CommonParams.VERSION_FIELD;
 
-/**
- * 
- *
- */
+@LogLevel("org.apache.solr.update=INFO")
 public class DirectUpdateHandlerTest extends SolrTestCaseJ4 {
 
   private static final Logger log = LoggerFactory.getLogger(MethodHandles.lookup().lookupClass());
@@ -392,7 +391,10 @@
     }
     assertU(adoc("id", "1"));
 
-    int nFiles = d.listAll().length;
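+    // before prepareCommit there should be no pending_segments_N file; remember the current segments_N for later comparison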
+    assertFalse(Arrays.stream(d.listAll()).anyMatch(s -> s.startsWith(IndexFileNames.PENDING_SEGMENTS)));
+    String beforeSegmentsFile =
+        Arrays.stream(d.listAll()).filter(s -> s.startsWith(IndexFileNames.SEGMENTS)).findAny().get();
+
     if (log.isInfoEnabled()) {
       log.info("FILES before prepareCommit={}", Arrays.asList(d.listAll()));
     }
@@ -402,8 +404,10 @@
     if (log.isInfoEnabled()) {
       log.info("FILES after prepareCommit={}", Arrays.asList(d.listAll()));
     }
-    assertTrue( d.listAll().length > nFiles);  // make sure new index files were actually written
-    
+    assertTrue(Arrays.stream(d.listAll()).anyMatch(s -> s.startsWith(IndexFileNames.PENDING_SEGMENTS)));
+    assertEquals(beforeSegmentsFile,
+        Arrays.stream(d.listAll()).filter(s -> s.startsWith(IndexFileNames.SEGMENTS)).findAny().get());
+
     assertJQ(req("q", "id:1")
         , "/response/numFound==0"
     );
diff --git a/solr/core/src/test/org/apache/solr/update/SolrIndexConfigTest.java b/solr/core/src/test/org/apache/solr/update/SolrIndexConfigTest.java
index 5ae02af..4f43b10 100644
--- a/solr/core/src/test/org/apache/solr/update/SolrIndexConfigTest.java
+++ b/solr/core/src/test/org/apache/solr/update/SolrIndexConfigTest.java
@@ -36,6 +36,7 @@
 import org.apache.solr.index.SortingMergePolicy;
 import org.apache.solr.schema.IndexSchema;
 import org.apache.solr.schema.IndexSchemaFactory;
+import org.junit.After;
 import org.junit.BeforeClass;
 import org.junit.Test;
 
@@ -58,6 +59,12 @@
     initCore(solrConfigFileName,schemaFileName);
   }
   
+  @After
+  public void tearDown() throws Exception {
+    System.clearProperty("solr.tests.maxCommitMergeWait");
+    super.tearDown();
+  }
+  
   private final Path instanceDir = TEST_PATH().resolve("collection1");
 
   @Test
@@ -177,6 +184,8 @@
     ++mSizeExpected; assertTrue(m.get("maxBufferedDocs") instanceof Integer);
 
     ++mSizeExpected; assertTrue(m.get("ramBufferSizeMB") instanceof Double);
+    
+    ++mSizeExpected; assertTrue(m.get("maxCommitMergeWaitTime") instanceof Integer);
 
     ++mSizeExpected; assertTrue(m.get("ramPerThreadHardLimitMB") instanceof Integer);
 
@@ -208,4 +217,14 @@
 
     assertEquals(mSizeExpected, m.size());
   }
+  
+  public void testMaxCommitMergeWaitTime() throws Exception {
+    SolrConfig sc = new SolrConfig(TEST_PATH().resolve("collection1"), "solrconfig-test-misc.xml");
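+    // -1 means "not configured": Solr falls back to Lucene's IndexWriterConfig default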
+    assertEquals(-1, sc.indexConfig.maxCommitMergeWaitMillis);
+    assertEquals(IndexWriterConfig.DEFAULT_MAX_FULL_FLUSH_MERGE_WAIT_MILLIS, sc.indexConfig.toIndexWriterConfig(h.getCore()).getMaxFullFlushMergeWaitMillis());
+    System.setProperty("solr.tests.maxCommitMergeWaitTime", "10");
+    sc = new SolrConfig(TEST_PATH().resolve("collection1"), "solrconfig-test-misc.xml");
+    assertEquals(10, sc.indexConfig.maxCommitMergeWaitMillis);
+    assertEquals(10, sc.indexConfig.toIndexWriterConfig(h.getCore()).getMaxFullFlushMergeWaitMillis());
+  }
 }
diff --git a/solr/core/src/test/org/apache/solr/update/TestInPlaceUpdatesDistrib.java b/solr/core/src/test/org/apache/solr/update/TestInPlaceUpdatesDistrib.java
index cd3f874..2214f88 100644
--- a/solr/core/src/test/org/apache/solr/update/TestInPlaceUpdatesDistrib.java
+++ b/solr/core/src/test/org/apache/solr/update/TestInPlaceUpdatesDistrib.java
@@ -85,8 +85,6 @@
     // asserting inplace updates happen by checking the internal [docid]
     systemSetPropertySolrTestsMergePolicyFactory(NoMergePolicyFactory.class.getName());
 
-    randomizeUpdateLogImpl();
-
     initCore(configString, schemaString);
     
     // sanity check that autocommits are disabled
diff --git a/solr/core/src/test/org/apache/solr/update/processor/RuntimeUrp.java b/solr/core/src/test/org/apache/solr/update/processor/RuntimeUrp.java
deleted file mode 100644
index 6cee3d9..0000000
--- a/solr/core/src/test/org/apache/solr/update/processor/RuntimeUrp.java
+++ /dev/null
@@ -1,40 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.solr.update.processor;
-
-import java.util.ArrayList;
-import java.util.List;
-
-import org.apache.solr.common.util.StrUtils;
-import org.apache.solr.request.SolrQueryRequest;
-import org.apache.solr.response.SolrQueryResponse;
-import org.apache.solr.update.AddUpdateCommand;
-
-public class RuntimeUrp extends SimpleUpdateProcessorFactory {
-  @Override
-  protected void process(AddUpdateCommand cmd, SolrQueryRequest req, SolrQueryResponse rsp) {
-    UpdateRequestProcessorChain processorChain = req.getCore().getUpdateProcessorChain(req.getParams());
-    List<String>  names = new ArrayList<>();
-    for (UpdateRequestProcessorFactory p : processorChain.getProcessors()) {
-      if (p instanceof UpdateRequestProcessorChain.LazyUpdateProcessorFactoryHolder.LazyUpdateRequestProcessorFactory) {
-        p = ((UpdateRequestProcessorChain.LazyUpdateProcessorFactoryHolder.LazyUpdateRequestProcessorFactory) p).getDelegate();
-      }
-      names.add(p.getClass().getSimpleName());
-    }
-    cmd.solrDoc.addField("processors_s", StrUtils.join(names,'>'));
-  }
-}
diff --git a/solr/core/src/test/org/apache/solr/update/processor/TestNamedUpdateProcessors.java b/solr/core/src/test/org/apache/solr/update/processor/TestNamedUpdateProcessors.java
deleted file mode 100644
index 45bb41c..0000000
--- a/solr/core/src/test/org/apache/solr/update/processor/TestNamedUpdateProcessors.java
+++ /dev/null
@@ -1,163 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.solr.update.processor;
-
-import java.io.FileInputStream;
-import java.io.FileOutputStream;
-import java.io.IOException;
-import java.nio.ByteBuffer;
-import java.util.Arrays;
-import java.util.Collection;
-import java.util.zip.ZipEntry;
-import java.util.zip.ZipOutputStream;
-
-import org.apache.solr.client.solrj.SolrQuery;
-import org.apache.solr.client.solrj.impl.HttpSolrClient;
-import org.apache.solr.client.solrj.request.UpdateRequest;
-import org.apache.solr.client.solrj.response.QueryResponse;
-import org.apache.solr.cloud.AbstractFullDistribZkTestBase;
-import org.apache.solr.common.SolrDocument;
-import org.apache.solr.common.SolrInputDocument;
-import org.apache.solr.common.util.StrUtils;
-import org.apache.solr.core.TestDynamicLoading;
-import org.apache.solr.core.TestSolrConfigHandler;
-import org.apache.solr.handler.TestBlobHandler;
-import org.apache.solr.util.RestTestHarness;
-import org.apache.solr.util.SimplePostTool;
-import org.junit.Test;
-
-public class TestNamedUpdateProcessors extends AbstractFullDistribZkTestBase {
-
-  @Test
-  //17-Aug-2018 commented @BadApple(bugUrl="https://issues.apache.org/jira/browse/SOLR-12028") // added 20-Jul-2018
-  public void test() throws Exception {
-    System.setProperty("enable.runtime.lib", "true");
-    setupRestTestHarnesses();
-
-    String blobName = "colltest";
-
-    HttpSolrClient randomClient = (HttpSolrClient) clients.get(random().nextInt(clients.size()));
-    String baseURL = randomClient.getBaseURL();
-
-    final String solrClientUrl = baseURL.substring(0, baseURL.lastIndexOf('/'));
-    TestBlobHandler.createSystemCollection(getHttpSolrClient(solrClientUrl, randomClient.getHttpClient()));
-    waitForRecoveriesToFinish(".system", true);
-
-    TestBlobHandler.postAndCheck(cloudClient, baseURL.substring(0, baseURL.lastIndexOf('/')), blobName, TestDynamicLoading.generateZip(RuntimeUrp.class), 1);
-
-    String payload = "{\n" +
-        "'add-runtimelib' : { 'name' : 'colltest' ,'version':1}\n" +
-        "}";
-    RestTestHarness client = randomRestTestHarness();
-    TestSolrConfigHandler.runConfigCommand(client, "/config", payload);
-    TestSolrConfigHandler.testForResponseElement(client,
-        null,
-        "/config/overlay",
-        null,
-        Arrays.asList("overlay", "runtimeLib", blobName, "version"),
-        1l, 10);
-
-    payload = "{\n" +
-        "'create-updateprocessor' : { 'name' : 'firstFld', 'class': 'solr.FirstFieldValueUpdateProcessorFactory', 'fieldName':'test_s'}, \n" +
-        "'create-updateprocessor' : { 'name' : 'test', 'class': 'org.apache.solr.update.processor.RuntimeUrp', 'runtimeLib':true }, \n" +
-        "'create-updateprocessor' : { 'name' : 'maxFld', 'class': 'solr.MaxFieldValueUpdateProcessorFactory', 'fieldName':'mul_s'} \n" +
-        "}";
-
-    client = randomRestTestHarness();
-    TestSolrConfigHandler.runConfigCommand(client, "/config", payload);
-    forAllRestTestHarnesses(restTestHarness -> {
-      try {
-        TestSolrConfigHandler.testForResponseElement(restTestHarness,
-            null,
-            "/config/overlay",
-            null,
-            Arrays.asList("overlay", "updateProcessor", "firstFld", "fieldName"),
-            "test_s", 10);
-      } catch (Exception ex) {
-        fail("Caught exception: "+ex);
-      }
-    });
-
-    SolrInputDocument doc = new SolrInputDocument();
-    doc.addField("id", "123");
-    doc.addField("test_s", Arrays.asList("one", "two"));
-    doc.addField("mul_s", Arrays.asList("aaa", "bbb"));
-    randomClient.add(doc);
-    randomClient.commit(true, true);
-    QueryResponse result = randomClient.query(new SolrQuery("id:123"));
-    assertEquals(2, ((Collection) result.getResults().get(0).getFieldValues("test_s")).size());
-    assertEquals(2, ((Collection) result.getResults().get(0).getFieldValues("mul_s")).size());
-    doc = new SolrInputDocument();
-    doc.addField("id", "456");
-    doc.addField("test_s", Arrays.asList("three", "four"));
-    doc.addField("mul_s", Arrays.asList("aaa", "bbb"));
-    UpdateRequest ur = new UpdateRequest();
-    ur.add(doc).setParam("processor", "firstFld,maxFld,test");
-    randomClient.request(ur);
-    randomClient.commit(true, true);
-    result = randomClient.query(new SolrQuery("id:456"));
-    SolrDocument d = result.getResults().get(0);
-    assertEquals(1, d.getFieldValues("test_s").size());
-    assertEquals(1, d.getFieldValues("mul_s").size());
-    assertEquals("three", d.getFieldValues("test_s").iterator().next());
-    assertEquals("bbb", d.getFieldValues("mul_s").iterator().next());
-    String processors = (String) d.getFirstValue("processors_s");
-    assertNotNull(processors);
-    assertEquals(StrUtils.splitSmart(processors, '>'),
-        Arrays.asList("FirstFieldValueUpdateProcessorFactory", "MaxFieldValueUpdateProcessorFactory", "RuntimeUrp", "LogUpdateProcessorFactory", "DistributedUpdateProcessorFactory", "RunUpdateProcessorFactory"));
-
-
-  }
-
-  public static ByteBuffer getFileContent(String f) throws IOException {
-    ByteBuffer jar;
-    try (FileInputStream fis = new FileInputStream(getFile(f))) {
-      byte[] buf = new byte[fis.available()];
-      fis.read(buf);
-      jar = ByteBuffer.wrap(buf);
-    }
-    return jar;
-  }
-
-  public static ByteBuffer persistZip(String loc,
-                                      @SuppressWarnings({"rawtypes"})Class... classes) throws IOException {
-    ByteBuffer jar = generateZip(classes);
-    try (FileOutputStream fos = new FileOutputStream(loc)) {
-      fos.write(jar.array(), 0, jar.limit());
-      fos.flush();
-    }
-    return jar;
-  }
-
-
-  public static ByteBuffer generateZip(@SuppressWarnings({"rawtypes"})Class... classes) throws IOException {
-    SimplePostTool.BAOS bos = new SimplePostTool.BAOS();
-    try (ZipOutputStream zipOut = new ZipOutputStream(bos)) {
-      zipOut.setLevel(ZipOutputStream.DEFLATED);
-      for (@SuppressWarnings({"rawtypes"})Class c : classes) {
-        String path = c.getName().replace('.', '/').concat(".class");
-        ZipEntry entry = new ZipEntry(path);
-        ByteBuffer b = SimplePostTool.inputStreamToByteArray(c.getClassLoader().getResourceAsStream(path));
-        zipOut.putNextEntry(entry);
-        zipOut.write(b.array(), 0, b.limit());
-        zipOut.closeEntry();
-      }
-    }
-    return bos.getByteBuffer();
-  }
-
-}
diff --git a/solr/core/src/test/org/apache/solr/util/TestCircuitBreaker.java b/solr/core/src/test/org/apache/solr/util/TestCircuitBreaker.java
index f41667f..9b1075c 100644
--- a/solr/core/src/test/org/apache/solr/util/TestCircuitBreaker.java
+++ b/solr/core/src/test/org/apache/solr/util/TestCircuitBreaker.java
@@ -18,8 +18,11 @@
 package org.apache.solr.util;
 
 import java.lang.invoke.MethodHandles;
+import java.util.ArrayList;
 import java.util.HashMap;
+import java.util.List;
 import java.util.concurrent.ExecutorService;
+import java.util.concurrent.Future;
 import java.util.concurrent.TimeUnit;
 import java.util.concurrent.atomic.AtomicInteger;
 
@@ -29,9 +32,11 @@
 import org.apache.solr.common.params.CommonParams;
 import org.apache.solr.common.util.ExecutorUtil;
 import org.apache.solr.common.util.SolrNamedThreadFactory;
-import org.apache.solr.core.SolrConfig;
+import org.apache.solr.core.PluginInfo;
 import org.apache.solr.search.QueryParsing;
+import org.apache.solr.util.circuitbreaker.CPUCircuitBreaker;
 import org.apache.solr.util.circuitbreaker.CircuitBreaker;
+import org.apache.solr.util.circuitbreaker.CircuitBreakerManager;
 import org.apache.solr.util.circuitbreaker.MemoryCircuitBreaker;
 import org.junit.After;
 import org.junit.BeforeClass;
@@ -41,6 +46,9 @@
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
+import static org.hamcrest.CoreMatchers.containsString;
+
+@SuppressWarnings({"rawtypes"})
 public class TestCircuitBreaker extends SolrTestCaseJ4 {
   private static final Logger log = LoggerFactory.getLogger(MethodHandles.lookup().lookupClass());
   private final static int NUM_DOCS = 20;
@@ -84,7 +92,13 @@
     args.put(QueryParsing.DEFTYPE, CircuitBreaker.NAME);
     args.put(CommonParams.FL, "id");
 
-    CircuitBreaker circuitBreaker = new MockCircuitBreaker(h.getCore().getSolrConfig());
+    removeAllExistingCircuitBreakers();
+
+    PluginInfo pluginInfo = h.getCore().getSolrConfig().getPluginInfo(CircuitBreakerManager.class.getName());
+
+    CircuitBreaker.CircuitBreakerConfig circuitBreakerConfig = CircuitBreakerManager.buildCBConfig(pluginInfo);
+
+    CircuitBreaker circuitBreaker = new MockCircuitBreaker(circuitBreakerConfig);
 
     h.getCore().getCircuitBreakerManager().register(circuitBreaker);
 
@@ -99,7 +113,12 @@
     args.put(QueryParsing.DEFTYPE, CircuitBreaker.NAME);
     args.put(CommonParams.FL, "id");
 
-    CircuitBreaker circuitBreaker = new FakeMemoryPressureCircuitBreaker(h.getCore().getSolrConfig());
+    removeAllExistingCircuitBreakers();
+
+    PluginInfo pluginInfo = h.getCore().getSolrConfig().getPluginInfo(CircuitBreakerManager.class.getName());
+
+    CircuitBreaker.CircuitBreakerConfig circuitBreakerConfig = CircuitBreakerManager.buildCBConfig(pluginInfo);
+    CircuitBreaker circuitBreaker = new FakeMemoryPressureCircuitBreaker(circuitBreakerConfig);
 
     h.getCore().getCircuitBreakerManager().register(circuitBreaker);
 
@@ -119,33 +138,45 @@
     AtomicInteger failureCount = new AtomicInteger();
 
     try {
-      CircuitBreaker circuitBreaker = new BuildingUpMemoryPressureCircuitBreaker(h.getCore().getSolrConfig());
+      removeAllExistingCircuitBreakers();
+
+      PluginInfo pluginInfo = h.getCore().getSolrConfig().getPluginInfo(CircuitBreakerManager.class.getName());
+
+      CircuitBreaker.CircuitBreakerConfig circuitBreakerConfig = CircuitBreakerManager.buildCBConfig(pluginInfo);
+      CircuitBreaker circuitBreaker = new BuildingUpMemoryPressureCircuitBreaker(circuitBreakerConfig);
 
       h.getCore().getCircuitBreakerManager().register(circuitBreaker);
 
+      List<Future<?>> futures = new ArrayList<>();
+
       for (int i = 0; i < 5; i++) {
-        executor.submit(() -> {
+        Future<?> future = executor.submit(() -> {
           try {
             h.query(req("name:\"john smith\""));
           } catch (SolrException e) {
-            if (!e.getMessage().startsWith("Circuit Breakers tripped")) {
-              if (log.isInfoEnabled()) {
-                String logMessage = "Expected error message for testBuildingMemoryPressure was not received. Error message " + e.getMessage();
-                log.info(logMessage);
-              }
-              throw new RuntimeException("Expected error message was not received. Error message " + e.getMessage());
-            }
+            assertThat(e.getMessage(), containsString("Circuit Breakers tripped"));
             failureCount.incrementAndGet();
           } catch (Exception e) {
             throw new RuntimeException(e.getMessage());
           }
         });
+
+        futures.add(future);
+      }
+
+      for (Future<?> future : futures) {
+        try {
+          future.get();
+        } catch (Exception e) {
+          throw new RuntimeException(e.getMessage());
+        }
       }
 
       executor.shutdown();
       try {
         executor.awaitTermination(Long.MAX_VALUE, TimeUnit.NANOSECONDS);
       } catch (InterruptedException e) {
+        Thread.currentThread().interrupt();
         throw new RuntimeException(e.getMessage());
       }
 
@@ -157,6 +188,62 @@
     }
   }
 
+  public void testFakeCPUCircuitBreaker() {
+    AtomicInteger failureCount = new AtomicInteger();
+
+    ExecutorService executor = ExecutorUtil.newMDCAwareCachedThreadPool(
+        new SolrNamedThreadFactory("TestCircuitBreaker"));
+    try {
+      removeAllExistingCircuitBreakers();
+
+      PluginInfo pluginInfo = h.getCore().getSolrConfig().getPluginInfo(CircuitBreakerManager.class.getName());
+
+      CircuitBreaker.CircuitBreakerConfig circuitBreakerConfig = CircuitBreakerManager.buildCBConfig(pluginInfo);
+      CircuitBreaker circuitBreaker = new FakeCPUCircuitBreaker(circuitBreakerConfig);
+
+      h.getCore().getCircuitBreakerManager().register(circuitBreaker);
+
+      List<Future<?>> futures = new ArrayList<>();
+
+      for (int i = 0; i < 5; i++) {
+        Future<?> future = executor.submit(() -> {
+          try {
+            h.query(req("name:\"john smith\""));
+          } catch (SolrException e) {
+            assertThat(e.getMessage(), containsString("Circuit Breakers tripped"));
+            failureCount.incrementAndGet();
+          } catch (Exception e) {
+            throw new RuntimeException(e.getMessage());
+          }
+        });
+
+        futures.add(future);
+      }
+
+      for (Future<?> future : futures) {
+        try {
+          future.get();
+        } catch (Exception e) {
+          throw new RuntimeException(e.getMessage());
+        }
+      }
+
+      executor.shutdown();
+      try {
+        executor.awaitTermination(Long.MAX_VALUE, TimeUnit.NANOSECONDS);
+      } catch (InterruptedException e) {
+        Thread.currentThread().interrupt();
+        throw new RuntimeException(e.getMessage());
+      }
+
+      assertEquals("Number of failed queries is not correct",5, failureCount.get());
+    } finally {
+      if (!executor.isShutdown()) {
+        executor.shutdown();
+      }
+    }
+  }
+
   public void testResponseWithCBTiming() {
     assertQ(req("q", "*:*", CommonParams.DEBUG_QUERY, "true"),
         "//str[@name='rawquerystring']='*:*'",
@@ -179,10 +266,16 @@
     );
   }
 
-  private class MockCircuitBreaker extends CircuitBreaker {
+  private void removeAllExistingCircuitBreakers() {
+    List<CircuitBreaker> registeredCircuitBreakers = h.getCore().getCircuitBreakerManager().getRegisteredCircuitBreakers();
 
-    public MockCircuitBreaker(SolrConfig solrConfig) {
-      super(solrConfig);
+    registeredCircuitBreakers.clear();
+  }
+
+  private static class MockCircuitBreaker extends MemoryCircuitBreaker {
+
+    public MockCircuitBreaker(CircuitBreakerConfig config) {
+      super(config);
     }
 
     @Override
@@ -197,10 +290,10 @@
     }
   }
 
-  private class FakeMemoryPressureCircuitBreaker extends MemoryCircuitBreaker {
+  private static class FakeMemoryPressureCircuitBreaker extends MemoryCircuitBreaker {
 
-    public FakeMemoryPressureCircuitBreaker(SolrConfig solrConfig) {
-      super(solrConfig);
+    public FakeMemoryPressureCircuitBreaker(CircuitBreakerConfig config) {
+      super(config);
     }
 
     @Override
@@ -210,11 +303,11 @@
     }
   }
 
-  private class BuildingUpMemoryPressureCircuitBreaker extends MemoryCircuitBreaker {
+  private static class BuildingUpMemoryPressureCircuitBreaker extends MemoryCircuitBreaker {
     private AtomicInteger count;
 
-    public BuildingUpMemoryPressureCircuitBreaker(SolrConfig solrConfig) {
-      super(solrConfig);
+    public BuildingUpMemoryPressureCircuitBreaker(CircuitBreakerConfig config) {
+      super(config);
 
       this.count = new AtomicInteger(0);
     }
@@ -240,4 +333,15 @@
      return Long.MIN_VALUE; // Value guaranteed not to trip the circuit breaker
     }
   }
+
+  private static class FakeCPUCircuitBreaker extends CPUCircuitBreaker {
+    public FakeCPUCircuitBreaker(CircuitBreakerConfig config) {
+      super(config);
+    }
+
+    @Override
+    protected double calculateLiveCPUUsage() {
+      return 92; // Return a value large enough to trigger the circuit breaker
+    }
+  }
 }
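
The rewritten circuit-breaker tests above all share one registration pattern: derive a `CircuitBreakerConfig` from the manager's `PluginInfo` rather than handing `SolrConfig` to the breaker constructor. A condensed sketch using only calls that appear in this diff (the wrapper class is illustrative, and `core` stands in for `h.getCore()`):

```java
import org.apache.solr.core.PluginInfo;
import org.apache.solr.core.SolrCore;
import org.apache.solr.util.circuitbreaker.CircuitBreaker;
import org.apache.solr.util.circuitbreaker.CircuitBreakerManager;
import org.apache.solr.util.circuitbreaker.MemoryCircuitBreaker;

class CircuitBreakerRegistrationSketch {
  // Build a breaker config from the CircuitBreakerManager plugin section of
  // solrconfig.xml, then register a memory breaker with the core's manager.
  static void registerMemoryBreaker(SolrCore core) {
    PluginInfo pluginInfo = core.getSolrConfig().getPluginInfo(CircuitBreakerManager.class.getName());
    CircuitBreaker.CircuitBreakerConfig config = CircuitBreakerManager.buildCBConfig(pluginInfo);
    core.getCircuitBreakerManager().register(new MemoryCircuitBreaker(config));
  }
}
```

The other behavioral change worth noting: the tests now collect the `Future`s returned by `executor.submit(...)` and call `get()` on each, so an exception or failed assertion inside a worker thread propagates to the test thread instead of being silently dropped.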
diff --git a/solr/example/README.md b/solr/example/README.md
index 4ee9248..1ba2d62 100644
--- a/solr/example/README.md
+++ b/solr/example/README.md
@@ -25,15 +25,14 @@
   bin/solr -e <EXAMPLE> where <EXAMPLE> is one of:
   
     cloud        : SolrCloud example
-    dih          : Data Import Handler (rdbms, mail, atom, tika)
     schemaless   : Schema-less example (schema is inferred from data during indexing)
     techproducts : Kitchen sink example providing comprehensive examples of Solr features
 ```
 
-For instance, if you want to run the Solr Data Import Handler example, do:
+For instance, if you want to run the SolrCloud example, do:
 
 ```
-  bin/solr -e dih
+  bin/solr -e cloud
 ```
 
 To see all the options available when starting Solr:
@@ -80,8 +79,8 @@
 this directory for loading "contrib" plugins via relative paths.  
 
 If you make a copy of this example server and wish to use the 
-ExtractingRequestHandler (SolrCell), DataImportHandler (DIH), the 
-clustering component, or any other modules in "contrib", you will need to 
+ExtractingRequestHandler (SolrCell), the clustering component, 
+or any other modules in "contrib", you will need to
 copy the required jars or update the paths to those jars in your 
 solrconfig.xml.
 
diff --git a/solr/example/build.gradle b/solr/example/build.gradle
index 3b6b3d1..b4f3ae9 100644
--- a/solr/example/build.gradle
+++ b/solr/example/build.gradle
@@ -24,13 +24,10 @@
 configurations {
   packaging
   postJar
-  dih
 }
 
 dependencies {
   postJar project(path: ":solr:core", configuration: "postJar")
-  dih 'org.hsqldb:hsqldb'
-  dih 'org.apache.derby:derby'
 }
 
 ext {
@@ -39,7 +36,6 @@
 
 task assemblePackaging(type: Sync) {
   from(projectDir, {
-    include "example-DIH/**"
     include "exampledocs/**"
     include "files/**"
     include "films/**"
@@ -51,10 +47,6 @@
     into "exampledocs/"
   })
 
-  from(configurations.dih, {
-    into "example-DIH/solr/db/lib"
-  })
-
   into packagingDir
 }
 
diff --git a/solr/example/example-DIH/.gitignore b/solr/example/example-DIH/.gitignore
deleted file mode 100644
index 6d9594a..0000000
--- a/solr/example/example-DIH/.gitignore
+++ /dev/null
@@ -1 +0,0 @@
-/logs/
diff --git a/solr/example/example-DIH/README.md b/solr/example/example-DIH/README.md
deleted file mode 100644
index ab98905..0000000
--- a/solr/example/example-DIH/README.md
+++ /dev/null
@@ -1,55 +0,0 @@
-<!--
- Licensed to the Apache Software Foundation (ASF) under one or more
- contributor license agreements.  See the NOTICE file distributed with
- this work for additional information regarding copyright ownership.
- The ASF licenses this file to You under the Apache License, Version 2.0
- (the "License"); you may not use this file except in compliance with
- the License.  You may obtain a copy of the License at
-
-     http://www.apache.org/licenses/LICENSE-2.0
-
- Unless required by applicable law or agreed to in writing, software
- distributed under the License is distributed on an "AS IS" BASIS,
- WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- See the License for the specific language governing permissions and
- limitations under the License.
--->
-
-Solr DataImportHandler example configuration
---------------------------------------------
-
-NOTE: The DataImportHandler is deprecated as of v8.6. See SOLR-14066 for more details.
-
-To run this multi-core example, use the "-e" option of the bin/solr script:
-
-```
-> bin/solr -e dih
-```
-
-When Solr is started connect to:
-
-  http://localhost:8983/solr/
-
-* To import data from the hsqldb database, connect to:
-
-  http://localhost:8983/solr/db/dataimport?command=full-import
-
-* To import data from an ATOM feed, connect to:
-
-  http://localhost:8983/solr/atom/dataimport?command=full-import
-
-* To import data from your IMAP server:
-
-  1. Edit the example-DIH/solr/mail/conf/mail-data-config.xml and add details about username, password, IMAP server
-  2. Connect to http://localhost:8983/solr/mail/dataimport?command=full-import
-
-* To copy data from db Solr core, connect to:
-
-  http://localhost:8983/solr/solr/dataimport?command=full-import
-
-* To index a full text document using Tika integration:
-
-  http://localhost:8983/solr/tika/dataimport?command=full-import
-
-Check also the Solr Reference Guide for detailed usage guide:
-https://lucene.apache.org/solr/guide/uploading-structured-data-store-data-with-the-data-import-handler.html
diff --git a/solr/example/example-DIH/build.xml b/solr/example/example-DIH/build.xml
deleted file mode 100644
index 77abc2d..0000000
--- a/solr/example/example-DIH/build.xml
+++ /dev/null
@@ -1,42 +0,0 @@
-<?xml version="1.0"?>
-<!--
-    Licensed to the Apache Software Foundation (ASF) under one or more
-    contributor license agreements.  See the NOTICE file distributed with
-    this work for additional information regarding copyright ownership.
-    The ASF licenses this file to You under the Apache License, Version 2.0
-    the "License"); you may not use this file except in compliance with
-    the License.  You may obtain a copy of the License at
-
-        http://www.apache.org/licenses/LICENSE-2.0
-
-    Unless required by applicable law or agreed to in writing, software
-    distributed under the License is distributed on an "AS IS" BASIS,
-    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-    See the License for the specific language governing permissions and
-    limitations under the License.
- -->
-<project name="solr-example-DIH" default="resolve">
-  <description>Solr Example-DIH</description>
-
-  <property name="ivy.retrieve.pattern" value="${example}/example-DIH/solr/db/lib/[artifact]-[revision].[ext]"/>
-
-  <import file="../../common-build.xml"/>
-
-  <!-- example tests are currently elsewhere -->
-  <target name="test"/>
-  <target name="test-nocompile"/>
-
-  <!-- this module has no javadocs -->
-  <target name="javadocs"/>
-
-  <!-- this module has no jar either -->
-  <target name="jar-core"/>
-
-  <!-- nothing to compile -->
-  <target name="compile-core"/>
-  <target name="compile-test"/>
-
-  <!-- nothing to cover -->
-  <target name="pitest"/>
-
-</project>
diff --git a/solr/example/example-DIH/hsqldb/.gitignore b/solr/example/example-DIH/hsqldb/.gitignore
deleted file mode 100644
index e75d109..0000000
--- a/solr/example/example-DIH/hsqldb/.gitignore
+++ /dev/null
@@ -1,5 +0,0 @@
-/ex.tmp/
-ex.log
-ex.lck
-ex.properties
-
diff --git a/solr/example/example-DIH/hsqldb/ex.script b/solr/example/example-DIH/hsqldb/ex.script
deleted file mode 100644
index b78f6cf..0000000
--- a/solr/example/example-DIH/hsqldb/ex.script
+++ /dev/null
@@ -1,165 +0,0 @@
-SET DATABASE UNIQUE NAME HSQLDB5E727295B6
-SET DATABASE GC 0
-SET DATABASE DEFAULT RESULT MEMORY ROWS 0
-SET DATABASE EVENT LOG LEVEL 0
-SET DATABASE TRANSACTION CONTROL LOCKS
-SET DATABASE DEFAULT ISOLATION LEVEL READ COMMITTED
-SET DATABASE TRANSACTION ROLLBACK ON CONFLICT TRUE
-SET DATABASE TEXT TABLE DEFAULTS ''
-SET DATABASE SQL NAMES FALSE
-SET DATABASE SQL REFERENCES FALSE
-SET DATABASE SQL SIZE TRUE
-SET DATABASE SQL TYPES FALSE
-SET DATABASE SQL TDC DELETE TRUE
-SET DATABASE SQL TDC UPDATE TRUE
-SET DATABASE SQL CONCAT NULLS TRUE
-SET DATABASE SQL UNIQUE NULLS TRUE
-SET DATABASE SQL CONVERT TRUNCATE TRUE
-SET DATABASE SQL AVG SCALE 0
-SET DATABASE SQL DOUBLE NAN TRUE
-SET FILES WRITE DELAY 500 MILLIS
-SET FILES BACKUP INCREMENT TRUE
-SET FILES CACHE SIZE 10000
-SET FILES CACHE ROWS 50000
-SET FILES SCALE 32
-SET FILES LOB SCALE 32
-SET FILES DEFRAG 0
-SET FILES NIO TRUE
-SET FILES NIO SIZE 256
-SET FILES LOG TRUE
-SET FILES LOG SIZE 50
-CREATE USER SA PASSWORD DIGEST 'd41d8cd98f00b204e9800998ecf8427e'
-ALTER USER SA SET LOCAL TRUE
-CREATE SCHEMA PUBLIC AUTHORIZATION DBA
-SET SCHEMA PUBLIC
-CREATE MEMORY TABLE PUBLIC.ITEM(ID VARCHAR(100),NAME VARCHAR(1024),MANU VARCHAR(50),WEIGHT DOUBLE,PRICE DOUBLE,POPULARITY INTEGER,INCLUDES VARCHAR(200),LAST_MODIFIED TIMESTAMP)
-CREATE MEMORY TABLE PUBLIC.FEATURE(ITEM_ID VARCHAR(100),DESCRIPTION VARCHAR(1024),LAST_MODIFIED TIMESTAMP)
-CREATE MEMORY TABLE PUBLIC.CATEGORY(ID INTEGER,DESCRIPTION VARCHAR(30),LAST_MODIFIED TIMESTAMP)
-CREATE MEMORY TABLE PUBLIC.ITEM_CATEGORY(ITEM_ID VARCHAR(100),CATEGORY_ID INTEGER,LAST_MODIFIED TIMESTAMP)
-ALTER SEQUENCE SYSTEM_LOBS.LOB_ID RESTART WITH 1
-SET DATABASE DEFAULT INITIAL SCHEMA PUBLIC
-GRANT USAGE ON DOMAIN INFORMATION_SCHEMA.SQL_IDENTIFIER TO PUBLIC
-GRANT USAGE ON DOMAIN INFORMATION_SCHEMA.YES_OR_NO TO PUBLIC
-GRANT USAGE ON DOMAIN INFORMATION_SCHEMA.TIME_STAMP TO PUBLIC
-GRANT USAGE ON DOMAIN INFORMATION_SCHEMA.CARDINAL_NUMBER TO PUBLIC
-GRANT USAGE ON DOMAIN INFORMATION_SCHEMA.CHARACTER_DATA TO PUBLIC
-GRANT DBA TO SA
-SET SCHEMA SYSTEM_LOBS
-INSERT INTO BLOCKS VALUES(0,2147483647,0)
-SET SCHEMA PUBLIC
-INSERT INTO ITEM VALUES('6H500F0','Maxtor DiamondMax 11 - hard drive - 500 GB - SATA-300','Maxtor Corp.',0.0E0,350.0E0,6,'null','2017-09-01 12:34:56.000000')
-INSERT INTO ITEM VALUES('F8V7067-APL-KIT','Belkin Mobile Power Cord for iPod w/ Dock','Belkin',4.0E0,19.95E0,1,'null','2017-09-01 12:34:56.000000')
-INSERT INTO ITEM VALUES('IW-02','iPod & iPod Mini USB 2.0 Cable','Belkin',2.0E0,11.5E0,1,'null','2017-09-01 12:34:56.000000')
-INSERT INTO ITEM VALUES('MA147LL/A','Apple 60 GB iPod with Video Playback Black','Apple Computer Inc.',5.5E0,399.0E0,10,'earbud headphones, USB cable','2017-09-01 12:34:56.000000')
-INSERT INTO ITEM VALUES('TWINX2048-3200PRO','CORSAIR  XMS 2GB (2 x 1GB) 184-Pin DDR SDRAM Unbuffered DDR 400 (PC 3200) Dual Channel Kit System Memory - Retail','Corsair Microsystems Inc.',0.0E0,185.0E0,5,'null','2017-09-01 12:34:56.000000')
-INSERT INTO ITEM VALUES('VS1GB400C3','CORSAIR ValueSelect 1GB 184-Pin DDR SDRAM Unbuffered DDR 400 (PC 3200) System Memory - Retail','Corsair Microsystems Inc.',0.0E0,74.99E0,7,'null','2017-09-01 12:34:56.000000')
-INSERT INTO ITEM VALUES('VDBDB1A16','A-DATA V-Series 1GB 184-Pin DDR SDRAM Unbuffered DDR 400 (PC 3200) System Memory - OEM','A-DATA Technology Inc.',0.0E0,0.0E0,5,'null','2017-09-01 12:34:56.000000')
-INSERT INTO ITEM VALUES('3007WFP','Dell Widescreen UltraSharp 3007WFP','Dell, Inc.',401.6E0,2199.0E0,6,'USB cable','2017-09-01 12:34:56.000000')
-INSERT INTO ITEM VALUES('VA902B','ViewSonic VA902B - flat panel display - TFT - 19"','ViewSonic Corp.',190.4E0,279.95E0,6,'null','2017-09-01 12:34:56.000000')
-INSERT INTO ITEM VALUES('0579B002','Canon PIXMA MP500 All-In-One Photo Printer','Canon Inc.',352.0E0,179.99E0,6,'null','2017-09-01 12:34:56.000000')
-INSERT INTO ITEM VALUES('9885A004','Canon PowerShot SD500','Canon Inc.',6.4E0,329.95E0,7,'32MB SD card, USB cable, AV cable, battery','2017-09-01 12:34:56.000000')
-INSERT INTO ITEM VALUES('SOLR1000','Solr, the Enterprise Search Server','Apache Software Foundation',0.0E0,0.0E0,10,'null','2017-09-01 12:34:56.000000')
-INSERT INTO ITEM VALUES('UTF8TEST','Test with some UTF-8 encoded characters','Apache Software Foundation',0.0E0,0.0E0,0,'null','2017-09-01 12:34:56.000000')
-INSERT INTO ITEM VALUES('EN7800GTX/2DHTV/256M','ASUS Extreme N7800GTX/2DHTV (256 MB)','ASUS Computer Inc.',16.0E0,479.95E0,7,'null','2017-09-01 12:34:56.000000')
-INSERT INTO ITEM VALUES('100-435805','ATI Radeon X1900 XTX 512 MB PCIE Video Card','ATI Technologies',48.0E0,649.99E0,7,'null','2017-09-01 12:34:56.000000')
-INSERT INTO ITEM VALUES('SP2514N','Samsung SpinPoint P120 SP2514N - hard drive - 250 GB - ATA-133','Samsung Electronics Co. Ltd.',0.0E0,92.0E0,6,'null','2008-03-12 13:30:00.000000')
-INSERT INTO FEATURE VALUES('SP2514N','7200RPM, 8MB cache, IDE Ultra ATA-133','2017-09-01 12:34:56.000000')
-INSERT INTO FEATURE VALUES('SP2514N','NoiseGuard, SilentSeek technology, Fluid Dynamic Bearing (FDB) motor','2017-09-01 12:34:56.000000')
-INSERT INTO FEATURE VALUES('6H500F0','SATA 3.0Gb/s, NCQ','2017-09-01 12:34:56.000000')
-INSERT INTO FEATURE VALUES('6H500F0','8.5ms seek','2017-09-01 12:34:56.000000')
-INSERT INTO FEATURE VALUES('6H500F0','16MB cache','2017-09-01 12:34:56.000000')
-INSERT INTO FEATURE VALUES('F8V7067-APL-KIT','car power adapter, white','2017-09-01 12:34:56.000000')
-INSERT INTO FEATURE VALUES('IW-02','car power adapter for iPod, white','2017-09-01 12:34:56.000000')
-INSERT INTO FEATURE VALUES('MA147LL/A','iTunes, Podcasts, Audiobooks','2017-09-01 12:34:56.000000')
-INSERT INTO FEATURE VALUES('MA147LL/A','Stores up to 15,000 songs, 25,000 photos, or 150 hours of video','2017-09-01 12:34:56.000000')
-INSERT INTO FEATURE VALUES('MA147LL/A','2.5-inch, 320x240 color TFT LCD display with LED backlight','2017-09-01 12:34:56.000000')
-INSERT INTO FEATURE VALUES('MA147LL/A','Up to 20 hours of battery life','2017-09-01 12:34:56.000000')
-INSERT INTO FEATURE VALUES('MA147LL/A','Plays AAC, MP3, WAV, AIFF, Audible, Apple Lossless, H.264 video','2017-09-01 12:34:56.000000')
-INSERT INTO FEATURE VALUES('MA147LL/A','Notes, Calendar, Phone book, Hold button, Date display, Photo wallet, Built-in games, JPEG photo playback, Upgradeable firmware, USB 2.0 compatibility, Playback speed control, Rechargeable capability, Battery level indication','2017-09-01 12:34:56.000000')
-INSERT INTO FEATURE VALUES('TWINX2048-3200PRO','CAS latency 2,\u00092-3-3-6 timing, 2.75v, unbuffered, heat-spreader','2017-09-01 12:34:56.000000')
-INSERT INTO FEATURE VALUES('VDBDB1A16','CAS latency 3,\u0009 2.7v','2017-09-01 12:34:56.000000')
-INSERT INTO FEATURE VALUES('3007WFP','30" TFT active matrix LCD, 2560 x 1600, .25mm dot pitch, 700:1 contrast','2017-09-01 12:34:56.000000')
-INSERT INTO FEATURE VALUES('VA902B','19" TFT active matrix LCD, 8ms response time, 1280 x 1024 native resolution','2017-09-01 12:34:56.000000')
-INSERT INTO FEATURE VALUES('0579B002','Multifunction ink-jet color photo printer','2017-09-01 12:34:56.000000')
-INSERT INTO FEATURE VALUES('0579B002','Flatbed scanner, optical scan resolution of 1,200 x 2,400 dpi','2017-09-01 12:34:56.000000')
-INSERT INTO FEATURE VALUES('0579B002','2.5" color LCD preview screen','2017-09-01 12:34:56.000000')
-INSERT INTO FEATURE VALUES('0579B002','Duplex Copying','2017-09-01 12:34:56.000000')
-INSERT INTO FEATURE VALUES('0579B002','Printing speed up to 29ppm black, 19ppm color','2017-09-01 12:34:56.000000')
-INSERT INTO FEATURE VALUES('0579B002','Hi-Speed USB','2017-09-01 12:34:56.000000')
-INSERT INTO FEATURE VALUES('0579B002','memory card: CompactFlash, Micro Drive, SmartMedia, Memory Stick, Memory Stick Pro, SD Card, and MultiMediaCard','2017-09-01 12:34:56.000000')
-INSERT INTO FEATURE VALUES('9885A004','3x zoop, 7.1 megapixel Digital ELPH','2017-09-01 12:34:56.000000')
-INSERT INTO FEATURE VALUES('9885A004','movie clips up to 640x480 @30 fps','2017-09-01 12:34:56.000000')
-INSERT INTO FEATURE VALUES('9885A004','2.0" TFT LCD, 118,000 pixels','2017-09-01 12:34:56.000000')
-INSERT INTO FEATURE VALUES('9885A004','built in flash, red-eye reduction','2017-09-01 12:34:56.000000')
-INSERT INTO FEATURE VALUES('SOLR1000','Advanced Full-Text Search Capabilities using Lucene','2017-09-01 12:34:56.000000')
-INSERT INTO FEATURE VALUES('SOLR1000','Optimizied for High Volume Web Traffic','2017-09-01 12:34:56.000000')
-INSERT INTO FEATURE VALUES('SOLR1000','Standards Based Open Interfaces - XML and HTTP','2017-09-01 12:34:56.000000')
-INSERT INTO FEATURE VALUES('SOLR1000','Comprehensive HTML Administration Interfaces','2017-09-01 12:34:56.000000')
-INSERT INTO FEATURE VALUES('SOLR1000','Scalability - Efficient Replication to other Solr Search Servers','2017-09-01 12:34:56.000000')
-INSERT INTO FEATURE VALUES('SOLR1000','Flexible and Adaptable with XML configuration and Schema','2017-09-01 12:34:56.000000')
-INSERT INTO FEATURE VALUES('SOLR1000','Good unicode support: h\u00e9llo (hello with an accent over the e)','2017-09-01 12:34:56.000000')
-INSERT INTO FEATURE VALUES('UTF8TEST','No accents here','2017-09-01 12:34:56.000000')
-INSERT INTO FEATURE VALUES('UTF8TEST','This is an e acute: \u00e9','2017-09-01 12:34:56.000000')
-INSERT INTO FEATURE VALUES('UTF8TEST','eaiou with circumflexes: \u00ea\u00e2\u00ee\u00f4\u00fb','2017-09-01 12:34:56.000000')
-INSERT INTO FEATURE VALUES('UTF8TEST','This is in Turkish: bu T\u00fcrk\u00e7e','2017-09-01 12:34:56.000000')
-INSERT INTO FEATURE VALUES('UTF8TEST','This is in Korean: \uc774\uac83\uc740 \ud55c\uad6d\uc5b4\uc774\ub2e4.','2017-09-01 12:34:56.000000')
-INSERT INTO FEATURE VALUES('UTF8TEST','This is in Greek: \u0391\u03c5\u03c4\u03cc \u03b5\u03af\u03bd\u03b1\u03b9 \u03c3\u03c4\u03b1 \u03b5\u03bb\u03bb\u03b7\u03bd\u03b9\u03ba\u03ac','2017-09-01 12:34:56.000000')
-INSERT INTO FEATURE VALUES('EN7800GTX/2DHTV/256M','NVIDIA GeForce 7800 GTX GPU/VPU clocked at 486MHz','2017-09-01 12:34:56.000000')
-INSERT INTO FEATURE VALUES('EN7800GTX/2DHTV/256M','256MB GDDR3 Memory clocked at 1.35GHz','2017-09-01 12:34:56.000000')
-INSERT INTO FEATURE VALUES('EN7800GTX/2DHTV/256M','PCI Express x16','2017-09-01 12:34:56.000000')
-INSERT INTO FEATURE VALUES('EN7800GTX/2DHTV/256M','Dual DVI connectors, HDTV out, video input','2017-09-01 12:34:56.000000')
-INSERT INTO FEATURE VALUES('EN7800GTX/2DHTV/256M','OpenGL 2.0, DirectX 9.0','2017-09-01 12:34:56.000000')
-INSERT INTO FEATURE VALUES('100-435805','ATI RADEON X1900 GPU/VPU clocked at 650MHz','2017-09-01 12:34:56.000000')
-INSERT INTO FEATURE VALUES('100-435805','512MB GDDR3 SDRAM clocked at 1.55GHz','2017-09-01 12:34:56.000000')
-INSERT INTO FEATURE VALUES('100-435805','PCI Express x16','2017-09-01 12:34:56.000000')
-INSERT INTO FEATURE VALUES('100-435805','dual DVI, HDTV, svideo, composite out','2017-09-01 12:34:56.000000')
-INSERT INTO FEATURE VALUES('100-435805','OpenGL 2.0, DirectX 9.0','2017-09-01 12:34:56.000000')
-INSERT INTO CATEGORY VALUES(1,'electronics','2017-09-01 12:34:56.000000')
-INSERT INTO CATEGORY VALUES(2,'hard drive','2017-09-01 12:34:56.000000')
-INSERT INTO CATEGORY VALUES(3,'connector','2017-09-01 12:34:56.000000')
-INSERT INTO CATEGORY VALUES(4,'music','2017-09-01 12:34:56.000000')
-INSERT INTO CATEGORY VALUES(5,'memory','2017-09-01 12:34:56.000000')
-INSERT INTO CATEGORY VALUES(6,'monitor','2017-09-01 12:34:56.000000')
-INSERT INTO CATEGORY VALUES(7,'multifunction printer','2017-09-01 12:34:56.000000')
-INSERT INTO CATEGORY VALUES(8,'printer','2017-09-01 12:34:56.000000')
-INSERT INTO CATEGORY VALUES(9,'scanner','2017-09-01 12:34:56.000000')
-INSERT INTO CATEGORY VALUES(10,'copier','2017-09-01 12:34:56.000000')
-INSERT INTO CATEGORY VALUES(11,'camera','2017-09-01 12:34:56.000000')
-INSERT INTO CATEGORY VALUES(12,'software','2017-09-01 12:34:56.000000')
-INSERT INTO CATEGORY VALUES(13,'search','2017-09-01 12:34:56.000000')
-INSERT INTO CATEGORY VALUES(14,'graphics card','2017-09-01 12:34:56.000000')
-INSERT INTO ITEM_CATEGORY VALUES('SP2514N',1,'2017-09-01 12:34:56.000000')
-INSERT INTO ITEM_CATEGORY VALUES('SP2514N',2,'2017-09-01 12:34:56.000000')
-INSERT INTO ITEM_CATEGORY VALUES('6H500F0',1,'2017-09-01 12:34:56.000000')
-INSERT INTO ITEM_CATEGORY VALUES('6H500F0',2,'2017-09-01 12:34:56.000000')
-INSERT INTO ITEM_CATEGORY VALUES('F8V7067-APL-KIT',1,'2017-09-01 12:34:56.000000')
-INSERT INTO ITEM_CATEGORY VALUES('F8V7067-APL-KIT',3,'2017-09-01 12:34:56.000000')
-INSERT INTO ITEM_CATEGORY VALUES('IW-02',1,'2017-09-01 12:34:56.000000')
-INSERT INTO ITEM_CATEGORY VALUES('IW-02',3,'2017-09-01 12:34:56.000000')
-INSERT INTO ITEM_CATEGORY VALUES('MA147LL/A',1,'2017-09-01 12:34:56.000000')
-INSERT INTO ITEM_CATEGORY VALUES('MA147LL/A',4,'2017-09-01 12:34:56.000000')
-INSERT INTO ITEM_CATEGORY VALUES('TWINX2048-3200PRO',1,'2017-09-01 12:34:56.000000')
-INSERT INTO ITEM_CATEGORY VALUES('TWINX2048-3200PRO',5,'2017-09-01 12:34:56.000000')
-INSERT INTO ITEM_CATEGORY VALUES('VS1GB400C3',1,'2017-09-01 12:34:56.000000')
-INSERT INTO ITEM_CATEGORY VALUES('VS1GB400C3',5,'2017-09-01 12:34:56.000000')
-INSERT INTO ITEM_CATEGORY VALUES('VDBDB1A16',1,'2017-09-01 12:34:56.000000')
-INSERT INTO ITEM_CATEGORY VALUES('VDBDB1A16',5,'2017-09-01 12:34:56.000000')
-INSERT INTO ITEM_CATEGORY VALUES('3007WFP',1,'2017-09-01 12:34:56.000000')
-INSERT INTO ITEM_CATEGORY VALUES('3007WFP',6,'2017-09-01 12:34:56.000000')
-INSERT INTO ITEM_CATEGORY VALUES('VA902B',1,'2017-09-01 12:34:56.000000')
-INSERT INTO ITEM_CATEGORY VALUES('VA902B',6,'2017-09-01 12:34:56.000000')
-INSERT INTO ITEM_CATEGORY VALUES('0579B002',1,'2017-09-01 12:34:56.000000')
-INSERT INTO ITEM_CATEGORY VALUES('0579B002',7,'2017-09-01 12:34:56.000000')
-INSERT INTO ITEM_CATEGORY VALUES('0579B002',8,'2017-09-01 12:34:56.000000')
-INSERT INTO ITEM_CATEGORY VALUES('0579B002',9,'2017-09-01 12:34:56.000000')
-INSERT INTO ITEM_CATEGORY VALUES('0579B002',10,'2017-09-01 12:34:56.000000')
-INSERT INTO ITEM_CATEGORY VALUES('9885A004',1,'2017-09-01 12:34:56.000000')
-INSERT INTO ITEM_CATEGORY VALUES('9885A004',11,'2017-09-01 12:34:56.000000')
-INSERT INTO ITEM_CATEGORY VALUES('SOLR1000',12,'2017-09-01 12:34:56.000000')
-INSERT INTO ITEM_CATEGORY VALUES('SOLR1000',13,'2017-09-01 12:34:56.000000')
-INSERT INTO ITEM_CATEGORY VALUES('UTF8TEST',12,'2017-09-01 12:34:56.000000')
-INSERT INTO ITEM_CATEGORY VALUES('UTF8TEST',13,'2017-09-01 12:34:56.000000')
-INSERT INTO ITEM_CATEGORY VALUES('EN7800GTX/2DHTV/256M',1,'2017-09-01 12:34:56.000000')
-INSERT INTO ITEM_CATEGORY VALUES('EN7800GTX/2DHTV/256M',14,'2017-09-01 12:34:56.000000')
-INSERT INTO ITEM_CATEGORY VALUES('100-435805',1,'2017-09-01 12:34:56.000000')
-INSERT INTO ITEM_CATEGORY VALUES('100-435805',14,'2017-09-01 12:34:56.000000')
diff --git a/solr/example/example-DIH/ivy.xml b/solr/example/example-DIH/ivy.xml
deleted file mode 100644
index 3f67a79..0000000
--- a/solr/example/example-DIH/ivy.xml
+++ /dev/null
@@ -1,29 +0,0 @@
-<!--
-   Licensed to the Apache Software Foundation (ASF) under one
-   or more contributor license agreements.  See the NOTICE file
-   distributed with this work for additional information
-   regarding copyright ownership.  The ASF licenses this file
-   to you under the Apache License, Version 2.0 (the
-   "License"); you may not use this file except in compliance
-   with the License.  You may obtain a copy of the License at
-
-     http://www.apache.org/licenses/LICENSE-2.0
-
-   Unless required by applicable law or agreed to in writing,
-   software distributed under the License is distributed on an
-   "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-   KIND, either express or implied.  See the License for the
-   specific language governing permissions and limitations
-   under the License.    
--->
-<ivy-module version="2.0">
-    <info organisation="org.apache.solr" module="example-DIH"/>
-    <configurations defaultconfmapping="compile->master">
-      <conf name="compile" transitive="false"/>
-    </configurations>
-    <dependencies>
-      <dependency org="org.hsqldb" name="hsqldb" rev="${/org.hsqldb/hsqldb}" conf="compile"/>
-      <dependency org="org.apache.derby" name="derby" rev="${/org.apache.derby/derby}" conf="compile"/>
-      <exclude org="*" ext="*" matcher="regexp" type="${ivy.exclude.types}"/> 
-    </dependencies>
-</ivy-module>
diff --git a/solr/example/example-DIH/solr/atom/conf/atom-data-config.xml b/solr/example/example-DIH/solr/atom/conf/atom-data-config.xml
deleted file mode 100644
index b7de812..0000000
--- a/solr/example/example-DIH/solr/atom/conf/atom-data-config.xml
+++ /dev/null
@@ -1,35 +0,0 @@
-<dataConfig>
-  <dataSource type="URLDataSource"/>
-  <document>
-
-    <entity name="stackoverflow"
-            url="https://stackoverflow.com/feeds/tag/solr"
-            processor="XPathEntityProcessor"
-            forEach="/feed|/feed/entry"
-            transformer="HTMLStripTransformer,RegexTransformer">
-
-      <!-- Pick this value up from the feed level and apply to all documents -->
-      <field column="lastchecked_dt" xpath="/feed/updated" commonField="true"/>
-
-      <!-- Keep only the final numeric part of the URL -->
-      <field column="id" xpath="/feed/entry/id" regex=".*/" replaceWith=""/>
-
-      <field column="title"    xpath="/feed/entry/title"/>
-      <field column="author"   xpath="/feed/entry/author/name"/>
-      <field column="category" xpath="/feed/entry/category/@term"/>
-      <field column="link"     xpath="/feed/entry/link[@rel='alternate']/@href"/>
-
-      <!-- Use transformers to convert HTML into plain text.
-        There is also an UpdateRequestProcess to trim remaining spaces.
-      -->
-      <field column="summary" xpath="/feed/entry/summary" stripHTML="true" regex="( |\n)+" replaceWith=" "/>
-
-      <!-- Ignore namespaces when matching XPath -->
-      <field column="rank" xpath="/feed/entry/rank"/>
-
-      <field column="published_dt" xpath="/feed/entry/published"/>
-      <field column="updated_dt" xpath="/feed/entry/updated"/>
-    </entity>
-
-  </document>
-</dataConfig>
diff --git a/solr/example/example-DIH/solr/atom/conf/lang/stopwords_en.txt b/solr/example/example-DIH/solr/atom/conf/lang/stopwords_en.txt
deleted file mode 100644
index 2c164c0..0000000
--- a/solr/example/example-DIH/solr/atom/conf/lang/stopwords_en.txt
+++ /dev/null
@@ -1,54 +0,0 @@
-# Licensed to the Apache Software Foundation (ASF) under one or more
-# contributor license agreements.  See the NOTICE file distributed with
-# this work for additional information regarding copyright ownership.
-# The ASF licenses this file to You under the Apache License, Version 2.0
-# (the "License"); you may not use this file except in compliance with
-# the License.  You may obtain a copy of the License at
-#
-#     http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-# a couple of test stopwords to test that the words are really being
-# configured from this file:
-stopworda
-stopwordb
-
-# Standard english stop words taken from Lucene's StopAnalyzer
-a
-an
-and
-are
-as
-at
-be
-but
-by
-for
-if
-in
-into
-is
-it
-no
-not
-of
-on
-or
-such
-that
-the
-their
-then
-there
-these
-they
-this
-to
-was
-will
-with
diff --git a/solr/example/example-DIH/solr/atom/conf/managed-schema b/solr/example/example-DIH/solr/atom/conf/managed-schema
deleted file mode 100644
index 5376c5b..0000000
--- a/solr/example/example-DIH/solr/atom/conf/managed-schema
+++ /dev/null
@@ -1,106 +0,0 @@
-<?xml version="1.0" encoding="UTF-8" ?>
-<!--
- Licensed to the Apache Software Foundation (ASF) under one or more
- contributor license agreements.  See the NOTICE file distributed with
- this work for additional information regarding copyright ownership.
- The ASF licenses this file to You under the Apache License, Version 2.0
- (the "License"); you may not use this file except in compliance with
- the License.  You may obtain a copy of the License at
-
-     http://www.apache.org/licenses/LICENSE-2.0
-
- Unless required by applicable law or agreed to in writing, software
- distributed under the License is distributed on an "AS IS" BASIS,
- WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- See the License for the specific language governing permissions and
- limitations under the License.
--->
-
-<schema name="example-DIH-atom" version="1.6">
-  <uniqueKey>id</uniqueKey>
-
-  <field name="id" type="string" indexed="true" stored="true" required="true"/>
-  <field name="title" type="text_en_splitting" indexed="true" stored="true"/>
-  <field name="author" type="string" indexed="true" stored="true"/>
-  <field name="category" type="string" indexed="true" stored="true" multiValued="true"/>
-  <field name="link" type="string" indexed="true" stored="true"/>
-  <field name="summary" type="text_en_splitting" indexed="true" stored="true"/>
-  <field name="rank" type="pint" indexed="true" stored="true"/>
-
-  <dynamicField name="*_dt" type="pdate" indexed="true" stored="true"/>
-
-  <!-- Catch-all field, aggregating all "useful to search as text" fields via the copyField instructions -->
-  <field name="text" type="text_en_splitting" indexed="true" stored="false" multiValued="true"/>
-
-  <field name="urls" type="url_only" indexed="true" stored="false"/>
-
-
-  <copyField source="id" dest="text"/>
-  <copyField source="title" dest="text"/>
-  <copyField source="author" dest="text"/>
-  <copyField source="category" dest="text"/>
-  <copyField source="summary" dest="text"/>
-
-  <!-- extract URLs from summary for faceting -->
-  <copyField source="summary" dest="urls"/>
-
-  <fieldType name="string" class="solr.StrField" sortMissingLast="true" docValues="true"/>
-  <fieldType name="pint" class="solr.IntPointField" docValues="true"/>
-  <fieldType name="pdate" class="solr.DatePointField" docValues="true"/>
-
-
-  <!-- A text field with defaults appropriate for English, plus
-   aggressive word-splitting and autophrase features enabled.
-   This field is just like text_en, except it adds
-   WordDelimiterFilter to enable splitting and matching of
-   words on case-change, alpha numeric boundaries, and
-   non-alphanumeric chars.  This means certain compound word
-   cases will work, for example query "wi fi" will match
-   document "WiFi" or "wi-fi".
-  -->
-  <fieldType name="text_en_splitting" class="solr.TextField"
-             positionIncrementGap="100" autoGeneratePhraseQueries="true">
-    <analyzer type="index">
-      <tokenizer name="whitespace"/>
-      <!-- in this example, we will only use synonyms at query time
-      <filter name="synonymGraph" synonyms="index_synonyms.txt" ignoreCase="true" expand="false"/>
-      -->
-      <!-- Case insensitive stop word removal. -->
-      <filter name="stop" ignoreCase="true" words="lang/stopwords_en.txt"/>
-      <filter name="wordDelimiterGraph" generateWordParts="1" generateNumberParts="1"
-              catenateWords="1" catenateNumbers="1" catenateAll="0" splitOnCaseChange="1"/>
-      <filter name="lowercase"/>
-      <filter name="keywordMarker" protected="protwords.txt"/>
-      <filter name="porterStem"/>
-      <filter name="flattenGraph"/>
-    </analyzer>
-    <analyzer type="query">
-      <tokenizer name="whitespace"/>
-      <filter name="synonymGraph" synonyms="synonyms.txt" ignoreCase="true" expand="true"/>
-      <filter name="stop"
-              ignoreCase="true"
-              words="lang/stopwords_en.txt"
-      />
-      <filter name="wordDelimiterGraph" generateWordParts="1" generateNumberParts="1"
-              catenateWords="0" catenateNumbers="0" catenateAll="0" splitOnCaseChange="1"/>
-      <filter name="lowercase"/>
-      <filter name="keywordMarker" protected="protwords.txt"/>
-      <filter name="porterStem"/>
-    </analyzer>
-  </fieldType>
-
-  <!-- Field type that extracts URLs from the text.
-   As the stored representation is not changed, it is only useful for faceting.
-   It is not terribly useful for searching URLs either, as there are too many special symbols.
-  -->
-  <fieldType name="url_only" class="solr.TextField" positionIncrementGap="100">
-    <analyzer type="index">
-      <tokenizer name="UAX29URLEmail" maxTokenLength="255"/>
-      <filter name="type" types="url_types.txt" useWhitelist="true"/>
-    </analyzer>
-    <analyzer type="query">
-      <tokenizer name="keyword"/>
-    </analyzer>
-  </fieldType>
-
-</schema>
diff --git a/solr/example/example-DIH/solr/atom/conf/protwords.txt b/solr/example/example-DIH/solr/atom/conf/protwords.txt
deleted file mode 100644
index 1303e42..0000000
--- a/solr/example/example-DIH/solr/atom/conf/protwords.txt
+++ /dev/null
@@ -1,17 +0,0 @@
-# The ASF licenses this file to You under the Apache License, Version 2.0
-# (the "License"); you may not use this file except in compliance with
-# the License.  You may obtain a copy of the License at
-#
-#     http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-#-----------------------------------------------------------------------
-# Use a protected word file to protect against the stemmer reducing two
-# unrelated words to the same base word.
-
-lucene
diff --git a/solr/example/example-DIH/solr/atom/conf/solrconfig.xml b/solr/example/example-DIH/solr/atom/conf/solrconfig.xml
deleted file mode 100644
index 5431986..0000000
--- a/solr/example/example-DIH/solr/atom/conf/solrconfig.xml
+++ /dev/null
@@ -1,64 +0,0 @@
-<?xml version="1.0" encoding="UTF-8" ?>
-<!--
- Licensed to the Apache Software Foundation (ASF) under one or more
- contributor license agreements.  See the NOTICE file distributed with
- this work for additional information regarding copyright ownership.
- The ASF licenses this file to You under the Apache License, Version 2.0
- (the "License"); you may not use this file except in compliance with
- the License.  You may obtain a copy of the License at
-
-     http://www.apache.org/licenses/LICENSE-2.0
-
- Unless required by applicable law or agreed to in writing, software
- distributed under the License is distributed on an "AS IS" BASIS,
- WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- See the License for the specific language governing permissions and
- limitations under the License.
--->
-
-<!--
- This is a DEMO configuration, highlighting elements
- specifically needed to get this example running
- such as libraries and request handler specifics.
-
- It uses defaults or does not define most of production-level settings
- such as various caches or auto-commit policies.
-
- See Solr Reference Guide and other examples for
- more details on a well configured solrconfig.xml
- https://lucene.apache.org/solr/guide/the-well-configured-solr-instance.html
--->
-<config>
-
-  <!-- Controls what version of Lucene various components of Solr
-    adhere to.  Generally, you want to use the latest version to
-    get all bug fixes and improvements. It is highly recommended
-    that you fully re-index after changing this setting as it can
-    affect both how text is indexed and queried.
-  -->
-  <luceneMatchVersion>9.0.0</luceneMatchVersion>
-
-  <lib dir="${solr.install.dir:../../../..}/dist/" regex="solr-dataimporthandler-.*\.jar"/>
-
-  <requestHandler name="/select" class="solr.SearchHandler">
-    <lst name="defaults">
-      <str name="echoParams">explicit</str>
-      <str name="df">text</str>
-       <!-- Change from JSON to XML format (the default prior to Solr 7.0)
-          <str name="wt">xml</str> 
-         -->
-    </lst>
-  </requestHandler>
-
-  <requestHandler name="/dataimport" class="solr.DataImportHandler">
-    <lst name="defaults">
-      <str name="config">atom-data-config.xml</str>
-      <str name="processor">trim_text</str>
-    </lst>
-  </requestHandler>
-
-  <updateProcessor class="solr.processor.TrimFieldUpdateProcessorFactory" name="trim_text">
-    <str name="typeName">text_en_splitting</str>
-  </updateProcessor>
-
-</config>
diff --git a/solr/example/example-DIH/solr/atom/conf/synonyms.txt b/solr/example/example-DIH/solr/atom/conf/synonyms.txt
deleted file mode 100644
index eab4ee8..0000000
--- a/solr/example/example-DIH/solr/atom/conf/synonyms.txt
+++ /dev/null
@@ -1,29 +0,0 @@
-# The ASF licenses this file to You under the Apache License, Version 2.0
-# (the "License"); you may not use this file except in compliance with
-# the License.  You may obtain a copy of the License at
-#
-#     http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-#-----------------------------------------------------------------------
-#some test synonym mappings unlikely to appear in real input text
-aaafoo => aaabar
-bbbfoo => bbbfoo bbbbar
-cccfoo => cccbar cccbaz
-fooaaa,baraaa,bazaaa
-
-# Some synonym groups specific to this example
-GB,gib,gigabyte,gigabytes
-MB,mib,megabyte,megabytes
-Television, Televisions, TV, TVs
-#notice we use "gib" instead of "GiB" so any WordDelimiterGraphFilter coming
-#after us won't split it into two words.
-
-# Synonym mappings can be used for spelling correction too
-pixima => pixma
-
diff --git a/solr/example/example-DIH/solr/atom/conf/url_types.txt b/solr/example/example-DIH/solr/atom/conf/url_types.txt
deleted file mode 100644
index 808f313..0000000
--- a/solr/example/example-DIH/solr/atom/conf/url_types.txt
+++ /dev/null
@@ -1 +0,0 @@
-<URL>
diff --git a/solr/example/example-DIH/solr/atom/core.properties b/solr/example/example-DIH/solr/atom/core.properties
deleted file mode 100644
index e69de29..0000000
--- a/solr/example/example-DIH/solr/atom/core.properties
+++ /dev/null
diff --git a/solr/example/example-DIH/solr/db/conf/clustering/carrot2/kmeans-attributes.xml b/solr/example/example-DIH/solr/db/conf/clustering/carrot2/kmeans-attributes.xml
deleted file mode 100644
index d802465..0000000
--- a/solr/example/example-DIH/solr/db/conf/clustering/carrot2/kmeans-attributes.xml
+++ /dev/null
@@ -1,19 +0,0 @@
-<!-- 
-  Default configuration for the bisecting k-means clustering algorithm.
-  
-  This file can be loaded (and saved) by Carrot2 Workbench.
-  http://project.carrot2.org/download.html
--->
-<attribute-sets default="attributes">
-    <attribute-set id="attributes">
-      <value-set>
-        <label>attributes</label>
-          <attribute key="MultilingualClustering.defaultLanguage">
-            <value type="org.carrot2.core.LanguageCode" value="ENGLISH"/>
-          </attribute>
-          <attribute key="MultilingualClustering.languageAggregationStrategy">
-            <value type="org.carrot2.text.clustering.MultilingualClustering$LanguageAggregationStrategy" value="FLATTEN_MAJOR_LANGUAGE"/>
-          </attribute>
-      </value-set>
-  </attribute-set>
-</attribute-sets>
diff --git a/solr/example/example-DIH/solr/db/conf/clustering/carrot2/lingo-attributes.xml b/solr/example/example-DIH/solr/db/conf/clustering/carrot2/lingo-attributes.xml
deleted file mode 100644
index 4bf1360..0000000
--- a/solr/example/example-DIH/solr/db/conf/clustering/carrot2/lingo-attributes.xml
+++ /dev/null
@@ -1,24 +0,0 @@
-<!-- 
-  Default configuration for the Lingo clustering algorithm.
-
-  This file can be loaded (and saved) by Carrot2 Workbench.
-  http://project.carrot2.org/download.html
--->
-<attribute-sets default="attributes">
-    <attribute-set id="attributes">
-      <value-set>
-        <label>attributes</label>
-          <!-- 
-          The language to assume for clustered documents.
-          For a list of allowed values, see: 
-          http://download.carrot2.org/stable/manual/#section.attribute.lingo.MultilingualClustering.defaultLanguage
-          -->
-          <attribute key="MultilingualClustering.defaultLanguage">
-            <value type="org.carrot2.core.LanguageCode" value="ENGLISH"/>
-          </attribute>
-          <attribute key="LingoClusteringAlgorithm.desiredClusterCountBase">
-            <value type="java.lang.Integer" value="20"/>
-          </attribute>
-      </value-set>
-  </attribute-set>
-</attribute-sets>
\ No newline at end of file
diff --git a/solr/example/example-DIH/solr/db/conf/clustering/carrot2/stc-attributes.xml b/solr/example/example-DIH/solr/db/conf/clustering/carrot2/stc-attributes.xml
deleted file mode 100644
index c1bf110..0000000
--- a/solr/example/example-DIH/solr/db/conf/clustering/carrot2/stc-attributes.xml
+++ /dev/null
@@ -1,19 +0,0 @@
-<!-- 
-  Default configuration for the STC clustering algorithm.
-
-  This file can be loaded (and saved) by Carrot2 Workbench.
-  http://project.carrot2.org/download.html
--->
-<attribute-sets default="attributes">
-    <attribute-set id="attributes">
-      <value-set>
-        <label>attributes</label>
-          <attribute key="MultilingualClustering.defaultLanguage">
-            <value type="org.carrot2.core.LanguageCode" value="ENGLISH"/>
-          </attribute>
-          <attribute key="MultilingualClustering.languageAggregationStrategy">
-            <value type="org.carrot2.text.clustering.MultilingualClustering$LanguageAggregationStrategy" value="FLATTEN_MAJOR_LANGUAGE"/>
-          </attribute>
-      </value-set>
-  </attribute-set>
-</attribute-sets>
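These Carrot2 attribute XMLs (kmeans, lingo, stc) are loaded by Solr's clustering component. A sketch of the solrconfig.xml registration that would pick them up, assuming the stock engine parameter names (`carrot.algorithm`, `carrot.resourcesDir`):

```xml
<!-- Hypothetical clustering engine registration; one <lst name="engine"> per algorithm.
     carrot.resourcesDir points at the directory holding the attribute XMLs above. -->
<searchComponent name="clustering" class="solr.clustering.ClusteringComponent">
  <lst name="engine">
    <str name="name">lingo</str>
    <str name="carrot.algorithm">org.carrot2.clustering.lingo.LingoClusteringAlgorithm</str>
    <str name="carrot.resourcesDir">clustering/carrot2</str>
  </lst>
</searchComponent>
```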
diff --git a/solr/example/example-DIH/solr/db/conf/currency.xml b/solr/example/example-DIH/solr/db/conf/currency.xml
deleted file mode 100644
index 3a9c58a..0000000
--- a/solr/example/example-DIH/solr/db/conf/currency.xml
+++ /dev/null
@@ -1,67 +0,0 @@
-<?xml version="1.0" ?>
-<!--
- Licensed to the Apache Software Foundation (ASF) under one or more
- contributor license agreements.  See the NOTICE file distributed with
- this work for additional information regarding copyright ownership.
- The ASF licenses this file to You under the Apache License, Version 2.0
- (the "License"); you may not use this file except in compliance with
- the License.  You may obtain a copy of the License at
-
-     http://www.apache.org/licenses/LICENSE-2.0
-
- Unless required by applicable law or agreed to in writing, software
- distributed under the License is distributed on an "AS IS" BASIS,
- WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- See the License for the specific language governing permissions and
- limitations under the License.
--->
-
-<!-- Example exchange rates file for CurrencyField type named "currency" in example schema -->
-
-<currencyConfig version="1.0">
-  <rates>
-    <!-- Updated from http://www.exchangerate.com/ at 2011-09-27 -->
-    <rate from="USD" to="ARS" rate="4.333871" comment="ARGENTINA Peso" />
-    <rate from="USD" to="AUD" rate="1.025768" comment="AUSTRALIA Dollar" />
-    <rate from="USD" to="EUR" rate="0.743676" comment="European Euro" />
-    <rate from="USD" to="BRL" rate="1.881093" comment="BRAZIL Real" />
-    <rate from="USD" to="CAD" rate="1.030815" comment="CANADA Dollar" />
-    <rate from="USD" to="CLP" rate="519.0996" comment="CHILE Peso" />
-    <rate from="USD" to="CNY" rate="6.387310" comment="CHINA Yuan" />
-    <rate from="USD" to="CZK" rate="18.47134" comment="CZECH REP. Koruna" />
-    <rate from="USD" to="DKK" rate="5.515436" comment="DENMARK Krone" />
-    <rate from="USD" to="HKD" rate="7.801922" comment="HONG KONG Dollar" />
-    <rate from="USD" to="HUF" rate="215.6169" comment="HUNGARY Forint" />
-    <rate from="USD" to="ISK" rate="118.1280" comment="ICELAND Krona" />
-    <rate from="USD" to="INR" rate="49.49088" comment="INDIA Rupee" />
-    <rate from="USD" to="XDR" rate="0.641358" comment="INTNL MON. FUND SDR" />
-    <rate from="USD" to="ILS" rate="3.709739" comment="ISRAEL Sheqel" />
-    <rate from="USD" to="JPY" rate="76.32419" comment="JAPAN Yen" />
-    <rate from="USD" to="KRW" rate="1169.173" comment="KOREA (SOUTH) Won" />
-    <rate from="USD" to="KWD" rate="0.275142" comment="KUWAIT Dinar" />
-    <rate from="USD" to="MXN" rate="13.85895" comment="MEXICO Peso" />
-    <rate from="USD" to="NZD" rate="1.285159" comment="NEW ZEALAND Dollar" />
-    <rate from="USD" to="NOK" rate="5.859035" comment="NORWAY Krone" />
-    <rate from="USD" to="PKR" rate="87.57007" comment="PAKISTAN Rupee" />
-    <rate from="USD" to="PEN" rate="2.730683" comment="PERU Sol" />
-    <rate from="USD" to="PHP" rate="43.62039" comment="PHILIPPINES Peso" />
-    <rate from="USD" to="PLN" rate="3.310139" comment="POLAND Zloty" />
-    <rate from="USD" to="RON" rate="3.100932" comment="ROMANIA Leu" />
-    <rate from="USD" to="RUB" rate="32.14663" comment="RUSSIA Ruble" />
-    <rate from="USD" to="SAR" rate="3.750465" comment="SAUDI ARABIA Riyal" />
-    <rate from="USD" to="SGD" rate="1.299352" comment="SINGAPORE Dollar" />
-    <rate from="USD" to="ZAR" rate="8.329761" comment="SOUTH AFRICA Rand" />
-    <rate from="USD" to="SEK" rate="6.883442" comment="SWEDEN Krona" />
-    <rate from="USD" to="CHF" rate="0.906035" comment="SWITZERLAND Franc" />
-    <rate from="USD" to="TWD" rate="30.40283" comment="TAIWAN Dollar" />
-    <rate from="USD" to="THB" rate="30.89487" comment="THAILAND Baht" />
-    <rate from="USD" to="AED" rate="3.672955" comment="U.A.E. Dirham" />
-    <rate from="USD" to="UAH" rate="7.988582" comment="UKRAINE Hryvnia" />
-    <rate from="USD" to="GBP" rate="0.647910" comment="UNITED KINGDOM Pound" />
-    
-    <!-- Cross-rates for some common currencies -->
-    <rate from="EUR" to="GBP" rate="0.869914" />  
-    <rate from="EUR" to="NOK" rate="7.800095" />  
-    <rate from="GBP" to="NOK" rate="8.966508" />  
-  </rates>
-</currencyConfig>
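A sketch of the field type that consumes this rates file; the field type name and `precisionStep` are the conventional example-schema values, not confirmed by this commit:

```xml
<!-- Currency amounts are stored as value+code and converted at query time
     using the <rate> entries in currency.xml. -->
<fieldType name="currency" class="solr.CurrencyField" precisionStep="8"
           defaultCurrency="USD" currencyConfig="currency.xml"/>
```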
diff --git a/solr/example/example-DIH/solr/db/conf/db-data-config.xml b/solr/example/example-DIH/solr/db/conf/db-data-config.xml
deleted file mode 100644
index 4a7dba9..0000000
--- a/solr/example/example-DIH/solr/db/conf/db-data-config.xml
+++ /dev/null
@@ -1,29 +0,0 @@
-<dataConfig>
-    <dataSource driver="org.hsqldb.jdbcDriver" url="jdbc:hsqldb:${solr.install.dir}/example/example-DIH/hsqldb/ex" user="sa" />
-    <document>
-        <entity name="item" query="select * from item"
-                deltaQuery="select id from item where last_modified > '${dataimporter.last_index_time}'">
-            <field column="NAME" name="name" />
-
-            <entity name="feature"  
-                    query="select DESCRIPTION from FEATURE where ITEM_ID='${item.ID}'"
-                    deltaQuery="select ITEM_ID from FEATURE where last_modified > '${dataimporter.last_index_time}'"
-                    parentDeltaQuery="select ID from item where ID=${feature.ITEM_ID}">
-                <field name="features" column="DESCRIPTION" />
-            </entity>
-            
-            <entity name="item_category"
-                    query="select CATEGORY_ID from item_category where ITEM_ID='${item.ID}'"
-                    deltaQuery="select ITEM_ID, CATEGORY_ID from item_category where last_modified > '${dataimporter.last_index_time}'"
-                    parentDeltaQuery="select ID from item where ID=${item_category.ITEM_ID}">
-                <entity name="category"
-                        query="select DESCRIPTION from category where ID = '${item_category.CATEGORY_ID}'"
-                        deltaQuery="select ID from category where last_modified > '${dataimporter.last_index_time}'"
-                        parentDeltaQuery="select ITEM_ID, CATEGORY_ID from item_category where CATEGORY_ID=${category.ID}">
-                    <field column="DESCRIPTION" name="cat" />
-                </entity>
-            </entity>
-        </entity>
-    </document>
-</dataConfig>
-
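In this config, each `deltaQuery` selects the rows changed since `${dataimporter.last_index_time}`, and each `parentDeltaQuery` maps a changed child row back to the parent ids that must be re-indexed. A minimal sketch of the solrconfig.xml handler that loads it (handler name assumed; a delta run is then triggered with `command=delta-import`):

```xml
<!-- Hypothetical DataImportHandler registration pointing at the config above. -->
<requestHandler name="/dataimport" class="org.apache.solr.handler.dataimport.DataImportHandler">
  <lst name="defaults">
    <str name="config">db-data-config.xml</str>
  </lst>
</requestHandler>
```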
diff --git a/solr/example/example-DIH/solr/db/conf/elevate.xml b/solr/example/example-DIH/solr/db/conf/elevate.xml
deleted file mode 100644
index 2c09ebe..0000000
--- a/solr/example/example-DIH/solr/db/conf/elevate.xml
+++ /dev/null
@@ -1,42 +0,0 @@
-<?xml version="1.0" encoding="UTF-8" ?>
-<!--
- Licensed to the Apache Software Foundation (ASF) under one or more
- contributor license agreements.  See the NOTICE file distributed with
- this work for additional information regarding copyright ownership.
- The ASF licenses this file to You under the Apache License, Version 2.0
- (the "License"); you may not use this file except in compliance with
- the License.  You may obtain a copy of the License at
-
-     http://www.apache.org/licenses/LICENSE-2.0
-
- Unless required by applicable law or agreed to in writing, software
- distributed under the License is distributed on an "AS IS" BASIS,
- WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- See the License for the specific language governing permissions and
- limitations under the License.
--->
-
-<!-- If this file is found in the config directory, it will only be
-     loaded once at startup.  If it is found in Solr's data
-     directory, it will be re-loaded every commit.
-
-   See http://wiki.apache.org/solr/QueryElevationComponent for more info
-
--->
-<elevate>
- <!-- Query elevation examples
-  <query text="foo bar">
-    <doc id="1" />
-    <doc id="2" />
-    <doc id="3" />
-  </query>
-
-for use with techproducts example
- 
-  <query text="ipod">
-    <doc id="MA147LL/A" />  put the actual ipod at the top 
-    <doc id="IW-02" exclude="true" /> exclude this cable
-  </query>
--->
-
-</elevate>
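elevate.xml is read by the QueryElevationComponent; a minimal registration sketch, assuming a `string` query field type as in the stock examples:

```xml
<!-- queryFieldType controls how incoming query text is analyzed before it is
     matched against the <query text="..."> entries in elevate.xml. -->
<searchComponent name="elevator" class="solr.QueryElevationComponent">
  <str name="queryFieldType">string</str>
  <str name="config-file">elevate.xml</str>
</searchComponent>
```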
diff --git a/solr/example/example-DIH/solr/db/conf/lang/contractions_ca.txt b/solr/example/example-DIH/solr/db/conf/lang/contractions_ca.txt
deleted file mode 100644
index 307a85f..0000000
--- a/solr/example/example-DIH/solr/db/conf/lang/contractions_ca.txt
+++ /dev/null
@@ -1,8 +0,0 @@
-# Set of Catalan contractions for ElisionFilter
-# TODO: load this as a resource from the analyzer and sync it in build.xml
-d
-l
-m
-n
-s
-t
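The contraction lists (this Catalan one and the fr/ga/it variants below) feed the ElisionFilter, which strips a listed article plus the following apostrophe (d'una → una) before further analysis. A hedged sketch of the consuming filter:

```xml
<!-- Remove elided articles listed in the contractions file before stemming. -->
<filter class="solr.ElisionFilterFactory" articles="lang/contractions_ca.txt" ignoreCase="true"/>
```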
diff --git a/solr/example/example-DIH/solr/db/conf/lang/contractions_fr.txt b/solr/example/example-DIH/solr/db/conf/lang/contractions_fr.txt
deleted file mode 100644
index f1bba51..0000000
--- a/solr/example/example-DIH/solr/db/conf/lang/contractions_fr.txt
+++ /dev/null
@@ -1,15 +0,0 @@
-# Set of French contractions for ElisionFilter
-# TODO: load this as a resource from the analyzer and sync it in build.xml
-l
-m
-t
-qu
-n
-s
-j
-d
-c
-jusqu
-quoiqu
-lorsqu
-puisqu
diff --git a/solr/example/example-DIH/solr/db/conf/lang/contractions_ga.txt b/solr/example/example-DIH/solr/db/conf/lang/contractions_ga.txt
deleted file mode 100644
index 9ebe7fa..0000000
--- a/solr/example/example-DIH/solr/db/conf/lang/contractions_ga.txt
+++ /dev/null
@@ -1,5 +0,0 @@
-# Set of Irish contractions for ElisionFilter
-# TODO: load this as a resource from the analyzer and sync it in build.xml
-d
-m
-b
diff --git a/solr/example/example-DIH/solr/db/conf/lang/contractions_it.txt b/solr/example/example-DIH/solr/db/conf/lang/contractions_it.txt
deleted file mode 100644
index cac0409..0000000
--- a/solr/example/example-DIH/solr/db/conf/lang/contractions_it.txt
+++ /dev/null
@@ -1,23 +0,0 @@
-# Set of Italian contractions for ElisionFilter
-# TODO: load this as a resource from the analyzer and sync it in build.xml
-c
-l 
-all 
-dall 
-dell 
-nell 
-sull 
-coll 
-pell 
-gl 
-agl 
-dagl 
-degl 
-negl 
-sugl 
-un 
-m 
-t 
-s 
-v 
-d
diff --git a/solr/example/example-DIH/solr/db/conf/lang/hyphenations_ga.txt b/solr/example/example-DIH/solr/db/conf/lang/hyphenations_ga.txt
deleted file mode 100644
index 4d2642c..0000000
--- a/solr/example/example-DIH/solr/db/conf/lang/hyphenations_ga.txt
+++ /dev/null
@@ -1,5 +0,0 @@
-# Set of Irish hyphenations for StopFilter
-# TODO: load this as a resource from the analyzer and sync it in build.xml
-h
-n
-t
diff --git a/solr/example/example-DIH/solr/db/conf/lang/stemdict_nl.txt b/solr/example/example-DIH/solr/db/conf/lang/stemdict_nl.txt
deleted file mode 100644
index 4410729..0000000
--- a/solr/example/example-DIH/solr/db/conf/lang/stemdict_nl.txt
+++ /dev/null
@@ -1,6 +0,0 @@
-# Set of overrides for the Dutch stemmer
-# TODO: load this as a resource from the analyzer and sync it in build.xml
-fiets	fiets
-bromfiets	bromfiets
-ei	eier
-kind	kinder
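Entries are tab-separated `term<TAB>kept-form` pairs that shield the listed words from regular stemming. A sketch of the consuming filter, which must sit before the stemmer in the analyzer chain:

```xml
<!-- Terms in the dictionary keep the given form and bypass the Dutch stemmer. -->
<filter class="solr.StemmerOverrideFilterFactory" dictionary="lang/stemdict_nl.txt"/>
```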
diff --git a/solr/example/example-DIH/solr/db/conf/lang/stoptags_ja.txt b/solr/example/example-DIH/solr/db/conf/lang/stoptags_ja.txt
deleted file mode 100644
index 71b7508..0000000
--- a/solr/example/example-DIH/solr/db/conf/lang/stoptags_ja.txt
+++ /dev/null
@@ -1,420 +0,0 @@
-#
-# This file defines a Japanese stoptag set for JapanesePartOfSpeechStopFilter.
-#
-# Any token with a part-of-speech tag that exactly matches one defined in this
-# file is removed from the token stream.
-#
-# Set your own stoptags by uncommenting the lines below.  Note that comments are
-# not allowed on the same line as a stoptag.  See LUCENE-3745 for frequency lists,
-# etc. that can be useful for building your own stoptag set.
-#
-# The entire possible tagset is provided below for convenience.
-#
-#####
-#  noun: unclassified nouns
-#名詞
-#
-#  noun-common: Common nouns or nouns where the sub-classification is undefined
-#名詞-一般
-#
-#  noun-proper: Proper nouns where the sub-classification is undefined 
-#名詞-固有名詞
-#
-#  noun-proper-misc: miscellaneous proper nouns
-#名詞-固有名詞-一般
-#
-#  noun-proper-person: Personal names where the sub-classification is undefined
-#名詞-固有名詞-人名
-#
-#  noun-proper-person-misc: names that cannot be divided into surname and 
-#  given name; foreign names; names where the surname or given name is unknown.
-#  e.g. お市の方
-#名詞-固有名詞-人名-一般
-#
-#  noun-proper-person-surname: Mainly Japanese surnames.
-#  e.g. 山田
-#名詞-固有名詞-人名-姓
-#
-#  noun-proper-person-given_name: Mainly Japanese given names.
-#  e.g. 太郎
-#名詞-固有名詞-人名-名
-#
-#  noun-proper-organization: Names representing organizations.
-#  e.g. 通産省, NHK
-#名詞-固有名詞-組織
-#
-#  noun-proper-place: Place names where the sub-classification is undefined
-#名詞-固有名詞-地域
-#
-#  noun-proper-place-misc: Place names excluding countries.
-#  e.g. アジア, バルセロナ, 京都
-#名詞-固有名詞-地域-一般
-#
-#  noun-proper-place-country: Country names. 
-#  e.g. 日本, オーストラリア
-#名詞-固有名詞-地域-国
-#
-#  noun-pronoun: Pronouns where the sub-classification is undefined
-#名詞-代名詞
-#
-#  noun-pronoun-misc: miscellaneous pronouns: 
-#  e.g. それ, ここ, あいつ, あなた, あちこち, いくつ, どこか, なに, みなさん, みんな, わたくし, われわれ
-#名詞-代名詞-一般
-#
-#  noun-pronoun-contraction: Spoken language contraction made by combining a 
-#  pronoun and the particle 'wa'.
-#  e.g. ありゃ, こりゃ, こりゃあ, そりゃ, そりゃあ 
-#名詞-代名詞-縮約
-#
-#  noun-adverbial: Temporal nouns such as names of days or months that behave 
-#  like adverbs. Nouns that represent amount or ratios and can be used adverbially,
-#  e.g. 金曜, 一月, 午後, 少量
-#名詞-副詞可能
-#
-#  noun-verbal: Nouns that take arguments with case and can appear followed by 
-#  'suru' and related verbs (する, できる, なさる, くださる)
-#  e.g. インプット, 愛着, 悪化, 悪戦苦闘, 一安心, 下取り
-#名詞-サ変接続
-#
-#  noun-adjective-base: The base form of adjectives, words that appear before な ("na")
-#  e.g. 健康, 安易, 駄目, だめ
-#名詞-形容動詞語幹
-#
-#  noun-numeric: Arabic numbers, Chinese numerals, and counters like 何 (回), 数.
-#  e.g. 0, 1, 2, 何, 数, 幾
-#名詞-数
-#
-#  noun-affix: noun affixes where the sub-classification is undefined
-#名詞-非自立
-#
-#  noun-affix-misc: Of adnominalizers, the case-marker の ("no"), and words that 
-#  attach to the base form of inflectional words, words that cannot be classified 
-#  into any of the other categories below. This category includes indefinite nouns.
-#  e.g. あかつき, 暁, かい, 甲斐, 気, きらい, 嫌い, くせ, 癖, こと, 事, ごと, 毎, しだい, 次第, 
-#       順, せい, 所為, ついで, 序で, つもり, 積もり, 点, どころ, の, はず, 筈, はずみ, 弾み, 
-#       拍子, ふう, ふり, 振り, ほう, 方, 旨, もの, 物, 者, ゆえ, 故, ゆえん, 所以, わけ, 訳,
-#       わり, 割り, 割, ん-口語/, もん-口語/
-#名詞-非自立-一般
-#
-#  noun-affix-adverbial: noun affixes that can behave as adverbs.
-#  e.g. あいだ, 間, あげく, 挙げ句, あと, 後, 余り, 以外, 以降, 以後, 以上, 以前, 一方, うえ, 
-#       上, うち, 内, おり, 折り, かぎり, 限り, きり, っきり, 結果, ころ, 頃, さい, 際, 最中, さなか, 
-#       最中, じたい, 自体, たび, 度, ため, 為, つど, 都度, とおり, 通り, とき, 時, ところ, 所, 
-#       とたん, 途端, なか, 中, のち, 後, ばあい, 場合, 日, ぶん, 分, ほか, 他, まえ, 前, まま, 
-#       儘, 侭, みぎり, 矢先
-#名詞-非自立-副詞可能
-#
-#  noun-affix-aux: noun affixes treated as 助動詞 ("auxiliary verb") in school grammars 
-#  with the stem よう(だ) ("you(da)").
-#  e.g.  よう, やう, 様 (よう)
-#名詞-非自立-助動詞語幹
-#  
-#  noun-affix-adjective-base: noun affixes that can connect to the indeclinable
-#  connection form な (aux "da").
-#  e.g. みたい, ふう
-#名詞-非自立-形容動詞語幹
-#
-#  noun-special: special nouns where the sub-classification is undefined.
-#名詞-特殊
-#
-#  noun-special-aux: The そうだ ("souda") stem form that is used for reporting news is
-#  treated as 助動詞 ("auxiliary verb") in school grammars and attaches to the base
-#  form of inflectional words.
-#  e.g. そう
-#名詞-特殊-助動詞語幹
-#
-#  noun-suffix: noun suffixes where the sub-classification is undefined.
-#名詞-接尾
-#
-#  noun-suffix-misc: Of the nouns or stem forms of other parts of speech that connect 
-#  to ガル or タイ and can combine into compound nouns, words that cannot be classified into
-#  any of the other categories below. In general, this category is more inclusive than 
-#  接尾語 ("suffix") and is usually the last element in a compound noun.
-#  e.g. おき, かた, 方, 甲斐 (がい), がかり, ぎみ, 気味, ぐるみ, (~した) さ, 次第, 済 (ず) み,
-#       よう, (でき)っこ, 感, 観, 性, 学, 類, 面, 用
-#名詞-接尾-一般
-#
-#  noun-suffix-person: Suffixes that form nouns and attach to person names more often
-#  than other nouns.
-#  e.g. 君, 様, 著
-#名詞-接尾-人名
-#
-#  noun-suffix-place: Suffixes that form nouns and attach to place names more often 
-#  than other nouns.
-#  e.g. 町, 市, 県
-#名詞-接尾-地域
-#
-#  noun-suffix-verbal: Of the suffixes that attach to nouns and form nouns, those that 
-#  can appear before スル ("suru").
-#  e.g. 化, 視, 分け, 入り, 落ち, 買い
-#名詞-接尾-サ変接続
-#
-#  noun-suffix-aux: The stem form of そうだ (様態) that is used to indicate conditions is
-#  treated as 助動詞 ("auxiliary verb") in school grammars and attaches to the
-#  conjunctive form of inflectional words.
-#  e.g. そう
-#名詞-接尾-助動詞語幹
-#
-#  noun-suffix-adjective-base: Suffixes that attach to other nouns or the conjunctive 
-#  form of inflectional words and appear before the copula だ ("da").
-#  e.g. 的, げ, がち
-#名詞-接尾-形容動詞語幹
-#
-#  noun-suffix-adverbial: Suffixes that attach to other nouns and can behave as adverbs.
-#  e.g. 後 (ご), 以後, 以降, 以前, 前後, 中, 末, 上, 時 (じ)
-#名詞-接尾-副詞可能
-#
-#  noun-suffix-classifier: Suffixes that attach to numbers and form nouns. This category 
-#  is more inclusive than 助数詞 ("classifier") and includes common nouns that attach 
-#  to numbers.
-#  e.g. 個, つ, 本, 冊, パーセント, cm, kg, カ月, か国, 区画, 時間, 時半
-#名詞-接尾-助数詞
-#
-#  noun-suffix-special: Special suffixes that mainly attach to inflecting words.
-#  e.g. (楽し) さ, (考え) 方
-#名詞-接尾-特殊
-#
-#  noun-suffix-conjunctive: Nouns that behave like conjunctions and join two words 
-#  together.
-#  e.g. (日本) 対 (アメリカ), 対 (アメリカ), (3) 対 (5), (女優) 兼 (主婦)
-#名詞-接続詞的
-#
-#  noun-verbal_aux: Nouns that attach to the conjunctive particle て ("te") and are 
-#  semantically verb-like.
-#  e.g. ごらん, ご覧, 御覧, 頂戴
-#名詞-動詞非自立的
-#
-#  noun-quotation: text that cannot be segmented into words, proverbs, Chinese poetry, 
-#  dialects, English, etc. Currently, the only entry for 名詞 引用文字列 ("noun quotation") 
-#  is いわく ("iwaku").
-#名詞-引用文字列
-#
-#  noun-nai_adjective: Words that appear before the auxiliary verb ない ("nai") and
-#  behave like an adjective.
-#  e.g. 申し訳, 仕方, とんでも, 違い
-#名詞-ナイ形容詞語幹
-#
-#####
-#  prefix: unclassified prefixes
-#接頭詞
-#
-#  prefix-nominal: Prefixes that attach to nouns (including adjective stem forms) 
-#  excluding numerical expressions.
-#  e.g. お (水), 某 (氏), 同 (社), 故 (~氏), 高 (品質), お (見事), ご (立派)
-#接頭詞-名詞接続
-#
-#  prefix-verbal: Prefixes that attach to the imperative form of a verb or a verb
-#  in conjunctive form followed by なる/なさる/くださる.
-#  e.g. お (読みなさい), お (座り)
-#接頭詞-動詞接続
-#
-#  prefix-adjectival: Prefixes that attach to adjectives.
-#  e.g. お (寒いですねえ), バカ (でかい)
-#接頭詞-形容詞接続
-#
-#  prefix-numerical: Prefixes that attach to numerical expressions.
-#  e.g. 約, およそ, 毎時
-#接頭詞-数接続
-#
-#####
-#  verb: unclassified verbs
-#動詞
-#
-#  verb-main:
-#動詞-自立
-#
-#  verb-auxiliary:
-#動詞-非自立
-#
-#  verb-suffix:
-#動詞-接尾
-#
-#####
-#  adjective: unclassified adjectives
-#形容詞
-#
-#  adjective-main:
-#形容詞-自立
-#
-#  adjective-auxiliary:
-#形容詞-非自立
-#
-#  adjective-suffix:
-#形容詞-接尾
-#
-#####
-#  adverb: unclassified adverbs
-#副詞
-#
-#  adverb-misc: Words that can be segmented into one unit and where adnominal 
-#  modification is not possible.
-#  e.g. あいかわらず, 多分
-#副詞-一般
-#
-#  adverb-particle_conjunction: Adverbs that can be followed by の, は, に, 
-#  な, する, だ, etc.
-#  e.g. こんなに, そんなに, あんなに, なにか, なんでも
-#副詞-助詞類接続
-#
-#####
-#  adnominal: Words that only have noun-modifying forms.
-#  e.g. この, その, あの, どの, いわゆる, なんらかの, 何らかの, いろんな, こういう, そういう, ああいう, 
-#       どういう, こんな, そんな, あんな, どんな, 大きな, 小さな, おかしな, ほんの, たいした, 
-#       「(, も) さる (ことながら)」, 微々たる, 堂々たる, 単なる, いかなる, 我が」「同じ, 亡き
-#連体詞
-#
-#####
-#  conjunction: Conjunctions that can occur independently.
-#  e.g. が, けれども, そして, じゃあ, それどころか
-接続詞
-#
-#####
-#  particle: unclassified particles.
-助詞
-#
-#  particle-case: case particles where the subclassification is undefined.
-助詞-格助詞
-#
-#  particle-case-misc: Case particles.
-#  e.g. から, が, で, と, に, へ, より, を, の, にて
-助詞-格助詞-一般
-#
-#  particle-case-quote: the "to" that appears after nouns, a person’s speech, 
-#  quotation marks, expressions of decisions from a meeting, reasons, judgements,
-#  conjectures, etc.
-#  e.g. ( だ) と (述べた.), ( である) と (して執行猶予...)
-助詞-格助詞-引用
-#
-#  particle-case-compound: Compounds of particles and verbs that mainly behave 
-#  like case particles.
-#  e.g. という, といった, とかいう, として, とともに, と共に, でもって, にあたって, に当たって, に当って,
-#       にあたり, に当たり, に当り, に当たる, にあたる, において, に於いて,に於て, における, に於ける, 
-#       にかけ, にかけて, にかんし, に関し, にかんして, に関して, にかんする, に関する, に際し, 
-#       に際して, にしたがい, に従い, に従う, にしたがって, に従って, にたいし, に対し, にたいして, 
-#       に対して, にたいする, に対する, について, につき, につけ, につけて, につれ, につれて, にとって,
-#       にとり, にまつわる, によって, に依って, に因って, により, に依り, に因り, による, に依る, に因る, 
-#       にわたって, にわたる, をもって, を以って, を通じ, を通じて, を通して, をめぐって, をめぐり, をめぐる,
-#       って-口語/, ちゅう-関西弁「という」/, (何) ていう (人)-口語/, っていう-口語/, といふ, とかいふ
-助詞-格助詞-連語
-#
-#  particle-conjunctive:
-#  e.g. から, からには, が, けれど, けれども, けど, し, つつ, て, で, と, ところが, どころか, とも, ども, 
-#       ながら, なり, ので, のに, ば, ものの, や ( した), やいなや, (ころん) じゃ(いけない)-口語/, 
-#       (行っ) ちゃ(いけない)-口語/, (言っ) たって (しかたがない)-口語/, (それがなく)ったって (平気)-口語/
-助詞-接続助詞
-#
-#  particle-dependency:
-#  e.g. こそ, さえ, しか, すら, は, も, ぞ
-助詞-係助詞
-#
-#  particle-adverbial:
-#  e.g. がてら, かも, くらい, 位, ぐらい, しも, (学校) じゃ(これが流行っている)-口語/, 
-#       (それ)じゃあ (よくない)-口語/, ずつ, (私) なぞ, など, (私) なり (に), (先生) なんか (大嫌い)-口語/,
-#       (私) なんぞ, (先生) なんて (大嫌い)-口語/, のみ, だけ, (私) だって-口語/, だに, 
-#       (彼)ったら-口語/, (お茶) でも (いかが), 等 (とう), (今後) とも, ばかり, ばっか-口語/, ばっかり-口語/,
-#       ほど, 程, まで, 迄, (誰) も (が)([助詞-格助詞] および [助詞-係助詞] の前に位置する「も」)
-助詞-副助詞
-#
-#  particle-interjective: particles with interjective grammatical roles.
-#  e.g. (松島) や
-助詞-間投助詞
-#
-#  particle-coordinate:
-#  e.g. と, たり, だの, だり, とか, なり, や, やら
-助詞-並立助詞
-#
-#  particle-final:
-#  e.g. かい, かしら, さ, ぜ, (だ)っけ-口語/, (とまってる) で-方言/, な, ナ, なあ-口語/, ぞ, ね, ネ, 
-#       ねぇ-口語/, ねえ-口語/, ねん-方言/, の, のう-口語/, や, よ, ヨ, よぉ-口語/, わ, わい-口語/
-助詞-終助詞
-#
-#  particle-adverbial/conjunctive/final: The particle "ka" when unknown whether it is 
-#  adverbial, conjunctive, or sentence final. For example:
-#       (a) 「A か B か」. Ex:「(国内で運用する) か,(海外で運用する) か (.)」
-#       (b) Inside an adverb phrase. Ex:「(幸いという) か (, 死者はいなかった.)」
-#           「(祈りが届いたせい) か (, 試験に合格した.)」
-#       (c) 「かのように」. Ex:「(何もなかった) か (のように振る舞った.)」
-#  e.g. か
-助詞-副助詞/並立助詞/終助詞
-#
-#  particle-adnominalizer: The "no" that attaches to nouns and modifies 
-#  non-inflectional words.
-助詞-連体化
-#
-#  particle-adnominalizer: The "ni" and "to" that appear following nouns and adverbs 
-#  that are giongo, giseigo, or gitaigo.
-#  e.g. に, と
-助詞-副詞化
-#
-#  particle-special: A particle that does not fit into one of the above classifications. 
-#  This includes particles that are used in Tanka, Haiku, and other poetry.
-#  e.g. かな, けむ, ( しただろう) に, (あんた) にゃ(わからん), (俺) ん (家)
-助詞-特殊
-#
-#####
-#  auxiliary-verb:
-助動詞
-#
-#####
-#  interjection: Greetings and other exclamations.
-#  e.g. おはよう, おはようございます, こんにちは, こんばんは, ありがとう, どうもありがとう, ありがとうございます, 
-#       いただきます, ごちそうさま, さよなら, さようなら, はい, いいえ, ごめん, ごめんなさい
-#感動詞
-#
-#####
-#  symbol: unclassified Symbols.
-記号
-#
-#  symbol-misc: A general symbol not in one of the categories below.
-#  e.g. [○◎@$〒→+]
-記号-一般
-#
-#  symbol-comma: Commas
-#  e.g. [,、]
-記号-読点
-#
-#  symbol-period: Periods and full stops.
-#  e.g. [..。]
-記号-句点
-#
-#  symbol-space: Full-width whitespace.
-記号-空白
-#
-#  symbol-open_bracket:
-#  e.g. [({‘“『【]
-記号-括弧開
-#
-#  symbol-close_bracket:
-#  e.g. [)}’”』」】]
-記号-括弧閉
-#
-#  symbol-alphabetic:
-#記号-アルファベット
-#
-#####
-#  other: unclassified other
-#その他
-#
-#  other-interjection: Words that are hard to classify as noun-suffixes or 
-#  sentence-final particles.
-#  e.g. (だ)ァ
-その他-間投
-#
-#####
-#  filler: Aizuchi that occurs during a conversation or sounds inserted as filler.
-#  e.g. あの, うんと, えと
-フィラー
-#
-#####
-#  non-verbal: non-verbal sound.
-非言語音
-#
-#####
-#  fragment:
-#語断片
-#
-#####
-#  unknown: unknown part of speech.
-#未知語
-#
-##### End of file
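Only the uncommented tags (e.g. 助詞 "particle" and its subtypes above) are actually removed. A minimal sketch of the filter that consumes such a stoptag set:

```xml
<!-- Drop tokens whose Kuromoji part-of-speech tag appears in the stoptags file. -->
<filter class="solr.JapanesePartOfSpeechStopFilterFactory" tags="lang/stoptags_ja.txt"/>
```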
diff --git a/solr/example/example-DIH/solr/db/conf/lang/stopwords_ar.txt b/solr/example/example-DIH/solr/db/conf/lang/stopwords_ar.txt
deleted file mode 100644
index 046829d..0000000
--- a/solr/example/example-DIH/solr/db/conf/lang/stopwords_ar.txt
+++ /dev/null
@@ -1,125 +0,0 @@
-# This file was created by Jacques Savoy and is distributed under the BSD license.
-# See http://members.unine.ch/jacques.savoy/clef/index.html.
-# Also see http://www.opensource.org/licenses/bsd-license.html
-# Cleaned on October 11, 2009 (not normalized, so use before normalization)
-# This means that when modifying this list, you might need to add some 
-# redundant entries, for example containing forms with both أ and ا
-من
-ومن
-منها
-منه
-في
-وفي
-فيها
-فيه


-ثم
-او
-أو

-بها
-به


-اى
-اي
-أي
-أى
-لا
-ولا
-الا
-ألا
-إلا
-لكن
-ما
-وما
-كما
-فما
-عن
-مع
-اذا
-إذا
-ان
-أن
-إن
-انها
-أنها
-إنها
-انه
-أنه
-إنه
-بان
-بأن
-فان
-فأن
-وان
-وأن
-وإن
-التى
-التي
-الذى
-الذي
-الذين
-الى
-الي
-إلى
-إلي
-على
-عليها
-عليه
-اما
-أما
-إما
-ايضا
-أيضا
-كل
-وكل
-لم
-ولم
-لن
-ولن
-هى
-هي
-هو
-وهى
-وهي
-وهو
-فهى
-فهي
-فهو
-انت
-أنت
-لك
-لها
-له
-هذه
-هذا
-تلك
-ذلك
-هناك
-كانت
-كان
-يكون
-تكون
-وكانت
-وكان
-غير
-بعض
-قد
-نحو
-بين
-بينما
-منذ
-ضمن
-حيث
-الان
-الآن
-خلال
-بعد
-قبل
-حتى
-عند
-عندما
-لدى
-جميع
diff --git a/solr/example/example-DIH/solr/db/conf/lang/stopwords_bg.txt b/solr/example/example-DIH/solr/db/conf/lang/stopwords_bg.txt
deleted file mode 100644
index 1ae4ba2..0000000
--- a/solr/example/example-DIH/solr/db/conf/lang/stopwords_bg.txt
+++ /dev/null
@@ -1,193 +0,0 @@
-# This file was created by Jacques Savoy and is distributed under the BSD license.
-# See http://members.unine.ch/jacques.savoy/clef/index.html.
-# Also see http://www.opensource.org/licenses/bsd-license.html

-аз
-ако
-ала
-бе
-без
-беше
-би
-бил
-била
-били
-било
-близо
-бъдат
-бъде
-бяха

-вас
-ваш
-ваша
-вероятно
-вече
-взема
-ви
-вие
-винаги
-все
-всеки
-всички
-всичко
-всяка
-във
-въпреки
-върху

-ги
-главно
-го

-да
-дали
-до
-докато
-докога
-дори
-досега
-доста

-едва
-един
-ето
-за
-зад
-заедно
-заради
-засега
-затова
-защо
-защото

-из
-или
-им
-има
-имат
-иска

-каза
-как
-каква
-какво
-както
-какъв
-като
-кога
-когато
-което
-които
-кой
-който
-колко
-която
-къде
-където
-към
-ли

-ме
-между
-мен
-ми
-мнозина
-мога
-могат
-може
-моля
-момента
-му

-на
-над
-назад
-най
-направи
-напред
-например
-нас
-не
-него
-нея
-ни
-ние
-никой
-нито
-но
-някои
-някой
-няма
-обаче
-около
-освен
-особено
-от
-отгоре
-отново
-още
-пак
-по
-повече
-повечето
-под
-поне
-поради
-после
-почти
-прави
-пред
-преди
-през
-при
-пък
-първо

-са
-само
-се
-сега
-си
-скоро
-след
-сме
-според
-сред
-срещу
-сте
-съм
-със
-също

-тази
-така
-такива
-такъв
-там
-твой
-те
-тези
-ти
-тн
-то
-това
-тогава
-този
-той
-толкова
-точно
-трябва
-тук
-тъй
-тя
-тях

-харесва

-че
-често
-чрез
-ще
-щом

diff --git a/solr/example/example-DIH/solr/db/conf/lang/stopwords_ca.txt b/solr/example/example-DIH/solr/db/conf/lang/stopwords_ca.txt
deleted file mode 100644
index 3da65de..0000000
--- a/solr/example/example-DIH/solr/db/conf/lang/stopwords_ca.txt
+++ /dev/null
@@ -1,220 +0,0 @@
-# Catalan stopwords from http://github.com/vcl/cue.language (Apache 2 Licensed)
-a
-abans
-ací
-ah
-així
-això
-al
-als
-aleshores
-algun
-alguna
-algunes
-alguns
-alhora
-allà
-allí
-allò
-altra
-altre
-altres
-amb
-ambdós
-ambdues
-apa
-aquell
-aquella
-aquelles
-aquells
-aquest
-aquesta
-aquestes
-aquests
-aquí
-baix
-cada
-cadascú
-cadascuna
-cadascunes
-cadascuns
-com
-contra
-d'un
-d'una
-d'unes
-d'uns
-dalt
-de
-del
-dels
-des
-després
-dins
-dintre
-donat
-doncs
-durant
-e
-eh
-el
-els
-em
-en
-encara
-ens
-entre
-érem
-eren
-éreu
-es
-és
-esta
-està
-estàvem
-estaven
-estàveu
-esteu
-et
-etc
-ets
-fins
-fora
-gairebé
-ha
-han
-has
-havia
-he
-hem
-heu
-hi 
-ho
-i
-igual
-iguals
-ja
-l'hi
-la
-les
-li
-li'n
-llavors
-m'he
-ma
-mal
-malgrat
-mateix
-mateixa
-mateixes
-mateixos
-me
-mentre
-més
-meu
-meus
-meva
-meves
-molt
-molta
-moltes
-molts
-mon
-mons
-n'he
-n'hi
-ne
-ni
-no
-nogensmenys
-només
-nosaltres
-nostra
-nostre
-nostres
-o
-oh
-oi
-on
-pas
-pel
-pels
-per
-però
-perquè
-poc 
-poca
-pocs
-poques
-potser
-propi
-qual
-quals
-quan
-quant 
-que
-què
-quelcom
-qui
-quin
-quina
-quines
-quins
-s'ha
-s'han
-sa
-semblant
-semblants
-ses
-seu 
-seus
-seva
-seva
-seves
-si
-sobre
-sobretot
-sóc
-solament
-sols
-son 
-són
-sons 
-sota
-sou
-t'ha
-t'han
-t'he
-ta
-tal
-també
-tampoc
-tan
-tant
-tanta
-tantes
-teu
-teus
-teva
-teves
-ton
-tons
-tot
-tota
-totes
-tots
-un
-una
-unes
-uns
-us
-va
-vaig
-vam
-van
-vas
-veu
-vosaltres
-vostra
-vostre
-vostres
diff --git a/solr/example/example-DIH/solr/db/conf/lang/stopwords_ckb.txt b/solr/example/example-DIH/solr/db/conf/lang/stopwords_ckb.txt
deleted file mode 100644
index 87abf11..0000000
--- a/solr/example/example-DIH/solr/db/conf/lang/stopwords_ckb.txt
+++ /dev/null
@@ -1,136 +0,0 @@
-# Set of Kurdish stopwords
-# Note: these have been normalized with our scheme (e represented with U+06D5, etc)
-# constructed from:
-# * Fig 5 of "Building A Test Collection For Sorani Kurdish" (Esmaili et al)
-# * "Sorani Kurdish: A Reference Grammar with selected readings" (Thackston)
-# * Corpus-based analysis of 77M word Sorani collection: wikipedia, news, blogs, etc
-
-# and

-# which
-کە
-# of

-# made/did
-کرد
-# that/which
-ئەوەی
-# on/head
-سەر
-# two
-دوو
-# also
-هەروەها
-# from/that
-لەو
-# makes/does
-دەکات
-# some
-چەند
-# every
-هەر
-
-# demonstratives
-# that
-ئەو
-# this
-ئەم
-
-# personal pronouns
-# I
-من
-# we
-ئێمە
-# you
-تۆ
-# you
-ئێوە
-# he/she/it
-ئەو
-# they
-ئەوان
-
-# prepositions
-# to/with/by
-بە
-پێ
-# without
-بەبێ
-# along with/while/during
-بەدەم
-# in the opinion of
-بەلای
-# according to
-بەپێی
-# before
-بەرلە
-# in the direction of
-بەرەوی
-# in front of/toward
-بەرەوە
-# before/in the face of
-بەردەم
-# without
-بێ
-# except for
-بێجگە
-# for
-بۆ
-# on/in
-دە
-تێ
-# with
-دەگەڵ
-# after
-دوای
-# except for/aside from
-جگە
-# in/from
-لە
-لێ
-# in front of/before/because of
-لەبەر
-# between/among
-لەبەینی
-# concerning/about
-لەبابەت
-# concerning
-لەبارەی
-# instead of
-لەباتی
-# beside
-لەبن
-# instead of
-لەبرێتی
-# behind
-لەدەم
-# with/together with
-لەگەڵ
-# by
-لەلایەن
-# within
-لەناو
-# between/among
-لەنێو
-# for the sake of
-لەپێناوی
-# with respect to
-لەرەوی
-# by means of/for
-لەرێ
-# for the sake of
-لەرێگا
-# on/on top of/according to
-لەسەر
-# under
-لەژێر
-# between/among
-ناو
-# between/among
-نێوان
-# after
-پاش
-# before
-پێش
-# like
-وەک
diff --git a/solr/example/example-DIH/solr/db/conf/lang/stopwords_cz.txt b/solr/example/example-DIH/solr/db/conf/lang/stopwords_cz.txt
deleted file mode 100644
index 53c6097..0000000
--- a/solr/example/example-DIH/solr/db/conf/lang/stopwords_cz.txt
+++ /dev/null
@@ -1,172 +0,0 @@
-a
-s
-k
-o
-i
-u
-v
-z
-dnes
-cz
-tímto
-budeš
-budem
-byli
-jseš
-můj
-svým
-ta
-tomto
-tohle
-tuto
-tyto
-jej
-zda
-proč
-máte
-tato
-kam
-tohoto
-kdo
-kteří
-mi
-nám
-tom
-tomuto
-mít
-nic
-proto
-kterou
-byla
-toho
-protože
-asi
-ho
-naši
-napište
-re
-což
-tím
-takže
-svých
-její
-svými
-jste
-aj
-tu
-tedy
-teto
-bylo
-kde
-ke
-pravé
-ji
-nad
-nejsou
-či
-pod
-téma
-mezi
-přes
-ty
-pak
-vám
-ani
-když
-však
-neg
-jsem
-tento
-článku
-články
-aby
-jsme
-před
-pta
-jejich
-byl
-ještě
-až
-bez
-také
-pouze
-první
-vaše
-která
-nás
-nový
-tipy
-pokud
-může
-strana
-jeho
-své
-jiné
-zprávy
-nové
-není
-vás
-jen
-podle
-zde
-už
-být
-více
-bude
-již
-než
-který
-by
-které
-co
-nebo
-ten
-tak
-má
-při
-od
-po
-jsou
-jak
-další
-ale
-si
-se
-ve
-to
-jako
-za
-zpět
-ze
-do
-pro
-je
-na
-atd
-atp
-jakmile
-přičemž
-já
-on
-ona
-ono
-oni
-ony
-my
-vy
-jí
-ji
-mě
-mne
-jemu
-tomu
-těm
-těmu
-němu
-němuž
-jehož
-jíž
-jelikož
-jež
-jakož
-načež
diff --git a/solr/example/example-DIH/solr/db/conf/lang/stopwords_da.txt b/solr/example/example-DIH/solr/db/conf/lang/stopwords_da.txt
deleted file mode 100644
index 42e6145..0000000
--- a/solr/example/example-DIH/solr/db/conf/lang/stopwords_da.txt
+++ /dev/null
@@ -1,110 +0,0 @@
- | From svn.tartarus.org/snowball/trunk/website/algorithms/danish/stop.txt
- | This file is distributed under the BSD License.
- | See http://snowball.tartarus.org/license.php
- | Also see http://www.opensource.org/licenses/bsd-license.html
- |  - Encoding was converted to UTF-8.
- |  - This notice was added.
- |
- | NOTE: To use this file with StopFilterFactory, you must specify format="snowball"
-
- | A Danish stop word list. Comments begin with vertical bar. Each stop
- | word is at the start of a line.
-
- | This is a ranked list (commonest to rarest) of stopwords derived from
- | a large text sample.
-
-
-og           | and
-i            | in
-jeg          | I
-det          | that (dem. pronoun)/it (pers. pronoun)
-at           | that (in front of a sentence)/to (with infinitive)
-en           | a/an
-den          | it (pers. pronoun)/that (dem. pronoun)
-til          | to/at/for/until/against/by/of/into, more
-er           | present tense of "to be"
-som          | who, as
-på           | on/upon/in/on/at/to/after/of/with/for, on
-de           | they
-med          | with/by/in, along
-han          | he
-af           | of/by/from/off/for/in/with/on, off
-for          | at/for/to/from/by/of/ago, in front/before, because
-ikke         | not
-der          | who/which, there/those
-var          | past tense of "to be"
-mig          | me/myself
-sig          | oneself/himself/herself/itself/themselves
-men          | but
-et           | a/an/one, one (number), someone/somebody/one
-har          | present tense of "to have"
-om           | round/about/for/in/a, about/around/down, if
-vi           | we
-min          | my
-havde        | past tense of "to have"
-ham          | him
-hun          | she
-nu           | now
-over         | over/above/across/by/beyond/past/on/about, over/past
-da           | then, when/as/since
-fra          | from/off/since, off, since
-du           | you
-ud           | out
-sin          | his/her/its/one's
-dem          | them
-os           | us/ourselves
-op           | up
-man          | you/one
-hans         | his
-hvor         | where
-eller        | or
-hvad         | what
-skal         | must/shall etc.
-selv         | myself/youself/herself/ourselves etc., even
-her          | here
-alle         | all/everyone/everybody etc.
-vil          | will (verb)
-blev         | past tense of "to stay/to remain/to get/to become"
-kunne        | could
-ind          | in
-når          | when
-være         | present tense of "to be"
-dog          | however/yet/after all
-noget        | something
-ville        | would
-jo           | you know/you see (adv), yes
-deres        | their/theirs
-efter        | after/behind/according to/for/by/from, later/afterwards
-ned          | down
-skulle       | should
-denne        | this
-end          | than
-dette        | this
-mit          | my/mine
-også         | also
-under        | under/beneath/below/during, below/underneath
-have         | have
-dig          | you
-anden        | other
-hende        | her
-mine         | my
-alt          | everything
-meget        | much/very, plenty of
-sit          | his, her, its, one's
-sine         | his, her, its, one's
-vor          | our
-mod          | against
-disse        | these
-hvis         | if
-din          | your/yours
-nogle        | some
-hos          | by/at
-blive        | be/become
-mange        | many
-ad           | by/through
-bliver       | present tense of "to be/to become"
-hendes       | her/hers
-været        | be
-thi          | for (conj)
-jer          | you
-sådan        | such, like this/like that
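This list (and the de/es/fi/fr ones below) uses Snowball's vertical-bar comment syntax, so the stop filter must be told the format explicitly, as the header note says. A sketch:

```xml
<!-- format="snowball" makes StopFilterFactory treat "|" as a comment delimiter. -->
<filter class="solr.StopFilterFactory" words="lang/stopwords_da.txt"
        format="snowball" ignoreCase="true"/>
```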
diff --git a/solr/example/example-DIH/solr/db/conf/lang/stopwords_de.txt b/solr/example/example-DIH/solr/db/conf/lang/stopwords_de.txt
deleted file mode 100644
index 86525e7..0000000
--- a/solr/example/example-DIH/solr/db/conf/lang/stopwords_de.txt
+++ /dev/null
@@ -1,294 +0,0 @@
- | From svn.tartarus.org/snowball/trunk/website/algorithms/german/stop.txt
- | This file is distributed under the BSD License.
- | See http://snowball.tartarus.org/license.php
- | Also see http://www.opensource.org/licenses/bsd-license.html
- |  - Encoding was converted to UTF-8.
- |  - This notice was added.
- |
- | NOTE: To use this file with StopFilterFactory, you must specify format="snowball"
-
- | A German stop word list. Comments begin with vertical bar. Each stop
- | word is at the start of a line.
-
- | The number of forms in this list is reduced significantly by passing it
- | through the German stemmer.
-
-
-aber           |  but
-
-alle           |  all
-allem
-allen
-aller
-alles
-
-als            |  than, as
-also           |  so
-am             |  an + dem
-an             |  at
-
-ander          |  other
-andere
-anderem
-anderen
-anderer
-anderes
-anderm
-andern
-anderr
-anders
-
-auch           |  also
-auf            |  on
-aus            |  out of
-bei            |  by
-bin            |  am
-bis            |  until
-bist           |  art
-da             |  there
-damit          |  with it
-dann           |  then
-
-der            |  the
-den
-des
-dem
-die
-das
-
-daß            |  that
-
-derselbe       |  the same
-derselben
-denselben
-desselben
-demselben
-dieselbe
-dieselben
-dasselbe
-
-dazu           |  to that
-
-dein           |  thy
-deine
-deinem
-deinen
-deiner
-deines
-
-denn           |  because
-
-derer          |  of those
-dessen         |  of him
-
-dich           |  thee
-dir            |  to thee
-du             |  thou
-
-dies           |  this
-diese
-diesem
-diesen
-dieser
-dieses
-
-
-doch           |  (several meanings)
-dort           |  (over) there
-
-
-durch          |  through
-
-ein            |  a
-eine
-einem
-einen
-einer
-eines
-
-einig          |  some
-einige
-einigem
-einigen
-einiger
-einiges
-
-einmal         |  once
-
-er             |  he
-ihn            |  him
-ihm            |  to him
-
-es             |  it
-etwas          |  something
-
-euer           |  your
-eure
-eurem
-euren
-eurer
-eures
-
-für            |  for
-gegen          |  towards
-gewesen        |  p.p. of sein
-hab            |  have
-habe           |  have
-haben          |  have
-hat            |  has
-hatte          |  had
-hatten         |  had
-hier           |  here
-hin            |  there
-hinter         |  behind
-
-ich            |  I
-mich           |  me
-mir            |  to me
-
-
-ihr            |  you, to her
-ihre
-ihrem
-ihren
-ihrer
-ihres
-euch           |  to you
-
-im             |  in + dem
-in             |  in
-indem          |  while
-ins            |  in + das
-ist            |  is
-
-jede           |  each, every
-jedem
-jeden
-jeder
-jedes
-
-jene           |  that
-jenem
-jenen
-jener
-jenes
-
-jetzt          |  now
-kann           |  can
-
-kein           |  no
-keine
-keinem
-keinen
-keiner
-keines
-
-können         |  can
-könnte         |  could
-machen         |  do
-man            |  one
-
-manche         |  some, many a
-manchem
-manchen
-mancher
-manches
-
-mein           |  my
-meine
-meinem
-meinen
-meiner
-meines
-
-mit            |  with
-muss           |  must
-musste         |  had to
-nach           |  to(wards)
-nicht          |  not
-nichts         |  nothing
-noch           |  still, yet
-nun            |  now
-nur            |  only
-ob             |  whether
-oder           |  or
-ohne           |  without
-sehr           |  very
-
-sein           |  his
-seine
-seinem
-seinen
-seiner
-seines
-
-selbst         |  self
-sich           |  herself
-
-sie            |  they, she
-ihnen          |  to them
-
-sind           |  are
-so             |  so
-
-solche         |  such
-solchem
-solchen
-solcher
-solches
-
-soll           |  shall
-sollte         |  should
-sondern        |  but
-sonst          |  else
-über           |  over
-um             |  about, around
-und            |  and
-
-uns            |  us
-unse
-unsem
-unsen
-unser
-unses
-
-unter          |  under
-viel           |  much
-vom            |  von + dem
-von            |  from
-vor            |  before
-während        |  while
-war            |  was
-waren          |  were
-warst          |  wast
-was            |  what
-weg            |  away, off
-weil           |  because
-weiter         |  further
-
-welche         |  which
-welchem
-welchen
-welcher
-welches
-
-wenn           |  when
-werde          |  will
-werden         |  will
-wie            |  how
-wieder         |  again
-will           |  want
-wir            |  we
-wird           |  will
-wirst          |  willst
-wo             |  where
-wollen         |  want
-wollte         |  wanted
-würde          |  would
-würden         |  would
-zu             |  to
-zum            |  zu + dem
-zur            |  zu + der
-zwar           |  indeed
-zwischen       |  between
-
diff --git a/solr/example/example-DIH/solr/db/conf/lang/stopwords_el.txt b/solr/example/example-DIH/solr/db/conf/lang/stopwords_el.txt
deleted file mode 100644
index 232681f..0000000
--- a/solr/example/example-DIH/solr/db/conf/lang/stopwords_el.txt
+++ /dev/null
@@ -1,78 +0,0 @@
-# Lucene Greek Stopwords list
-# Note: by default this file is used after GreekLowerCaseFilter,
-# so when modifying this file use 'σ' instead of 'ς' 
-ο

-το
-οι
-τα
-του
-τησ
-των
-τον
-την
-και 
-κι

-ειμαι
-εισαι
-ειναι
-ειμαστε
-ειστε
-στο
-στον
-στη
-στην
-μα
-αλλα
-απο
-για
-προσ
-με
-σε
-ωσ
-παρα
-αντι
-κατα
-μετα
-θα
-να
-δε
-δεν
-μη
-μην
-επι
-ενω
-εαν
-αν
-τοτε
-που
-πωσ
-ποιοσ
-ποια
-ποιο
-ποιοι
-ποιεσ
-ποιων
-ποιουσ
-αυτοσ
-αυτη
-αυτο
-αυτοι
-αυτων
-αυτουσ
-αυτεσ
-αυτα
-εκεινοσ
-εκεινη
-εκεινο
-εκεινοι
-εκεινεσ
-εκεινα
-εκεινων
-εκεινουσ
-οπωσ
-ομωσ
-ισωσ
-οσο
-οτι
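The header note explains the 'σ' forms: lowercasing runs first and folds final sigma, so the list stores the folded spelling. A sketch of that ordering, assuming the standard Greek chain:

```xml
<!-- GreekLowerCaseFilter folds ς to σ before stopword matching, so ignoreCase stays off. -->
<filter class="solr.GreekLowerCaseFilterFactory"/>
<filter class="solr.StopFilterFactory" words="lang/stopwords_el.txt" ignoreCase="false"/>
```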
diff --git a/solr/example/example-DIH/solr/db/conf/lang/stopwords_en.txt b/solr/example/example-DIH/solr/db/conf/lang/stopwords_en.txt
deleted file mode 100644
index 2c164c0..0000000
--- a/solr/example/example-DIH/solr/db/conf/lang/stopwords_en.txt
+++ /dev/null
@@ -1,54 +0,0 @@
-# Licensed to the Apache Software Foundation (ASF) under one or more
-# contributor license agreements.  See the NOTICE file distributed with
-# this work for additional information regarding copyright ownership.
-# The ASF licenses this file to You under the Apache License, Version 2.0
-# (the "License"); you may not use this file except in compliance with
-# the License.  You may obtain a copy of the License at
-#
-#     http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-# a couple of test stopwords to test that the words are really being
-# configured from this file:
-stopworda
-stopwordb
-
-# Standard english stop words taken from Lucene's StopAnalyzer
-a
-an
-and
-are
-as
-at
-be
-but
-by
-for
-if
-in
-into
-is
-it
-no
-not
-of
-on
-or
-such
-that
-the
-their
-then
-there
-these
-they
-this
-to
-was
-will
-with
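The English list uses plain `#` comments, so no `format` attribute is needed — a sketch for contrast with the Snowball-formatted lists:

```xml
<!-- Default (hash-comment) stopword format; one word per line. -->
<filter class="solr.StopFilterFactory" words="lang/stopwords_en.txt" ignoreCase="true"/>
```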
diff --git a/solr/example/example-DIH/solr/db/conf/lang/stopwords_es.txt b/solr/example/example-DIH/solr/db/conf/lang/stopwords_es.txt
deleted file mode 100644
index 487d78c..0000000
--- a/solr/example/example-DIH/solr/db/conf/lang/stopwords_es.txt
+++ /dev/null
@@ -1,356 +0,0 @@
- | From svn.tartarus.org/snowball/trunk/website/algorithms/spanish/stop.txt
- | This file is distributed under the BSD License.
- | See http://snowball.tartarus.org/license.php
- | Also see http://www.opensource.org/licenses/bsd-license.html
- |  - Encoding was converted to UTF-8.
- |  - This notice was added.
- |
- | NOTE: To use this file with StopFilterFactory, you must specify format="snowball"
-
- | A Spanish stop word list. Comments begin with vertical bar. Each stop
- | word is at the start of a line.
-
-
- | The following is a ranked list (commonest to rarest) of stopwords
- | deriving from a large sample of text.
-
- | Extra words have been added at the end.
-
-de             |  from, of
-la             |  the, her
-que            |  who, that
-el             |  the
-en             |  in
-y              |  and
-a              |  to
-los            |  the, them
-del            |  de + el
-se             |  himself, from him etc
-las            |  the, them
-por            |  for, by, etc
-un             |  a
-para           |  for
-con            |  with
-no             |  no
-una            |  a
-su             |  his, her
-al             |  a + el
-  | es         from SER
-lo             |  him
-como           |  how
-más            |  more
-pero           |  pero
-sus            |  su plural
-le             |  to him, her
-ya             |  already
-o              |  or
-  | fue        from SER
-este           |  this
-  | ha         from HABER
-sí             |  himself etc
-porque         |  because
-esta           |  this
-  | son        from SER
-entre          |  between
-  | está     from ESTAR
-cuando         |  when
-muy            |  very
-sin            |  without
-sobre          |  on
-  | ser        from SER
-  | tiene      from TENER
-también        |  also
-me             |  me
-hasta          |  until
-hay            |  there is/are
-donde          |  where
-  | han        from HABER
-quien          |  whom, that
-  | están      from ESTAR
-  | estado     from ESTAR
-desde          |  from
-todo           |  all
-nos            |  us
-durante        |  during
-  | estados    from ESTAR
-todos          |  all
-uno            |  a
-les            |  to them
-ni             |  nor
-contra         |  against
-otros          |  other
-  | fueron     from SER
-ese            |  that
-eso            |  that
-  | había      from HABER
-ante           |  before
-ellos          |  they
-e              |  and (variant of y)
-esto           |  this
-mí             |  me
-antes          |  before
-algunos        |  some
-qué            |  what?
-unos           |  a
-yo             |  I
-otro           |  other
-otras          |  other
-otra           |  other
-él             |  he
-tanto          |  so much, many
-esa            |  that
-estos          |  these
-mucho          |  much, many
-quienes        |  who
-nada           |  nothing
-muchos         |  many
-cual           |  who
-  | sea        from SER
-poco           |  few
-ella           |  she
-estar          |  to be
-  | haber      from HABER
-estas          |  these
-  | estaba     from ESTAR
-  | estamos    from ESTAR
-algunas        |  some
-algo           |  something
-nosotros       |  we
-
-      | other forms
-
-mi             |  me
-mis            |  mi plural
-tú             |  thou
-te             |  thee
-ti             |  thee
-tu             |  thy
-tus            |  tu plural
-ellas          |  they
-nosotras       |  we
-vosotros       |  you
-vosotras       |  you
-os             |  you
-mío            |  mine
-mía            |
-míos           |
-mías           |
-tuyo           |  thine
-tuya           |
-tuyos          |
-tuyas          |
-suyo           |  his, hers, theirs
-suya           |
-suyos          |
-suyas          |
-nuestro        |  ours
-nuestra        |
-nuestros       |
-nuestras       |
-vuestro        |  yours
-vuestra        |
-vuestros       |
-vuestras       |
-esos           |  those
-esas           |  those
-
-               | forms of estar, to be (not including the infinitive):
-estoy
-estás
-está
-estamos
-estáis
-están
-esté
-estés
-estemos
-estéis
-estén
-estaré
-estarás
-estará
-estaremos
-estaréis
-estarán
-estaría
-estarías
-estaríamos
-estaríais
-estarían
-estaba
-estabas
-estábamos
-estabais
-estaban
-estuve
-estuviste
-estuvo
-estuvimos
-estuvisteis
-estuvieron
-estuviera
-estuvieras
-estuviéramos
-estuvierais
-estuvieran
-estuviese
-estuvieses
-estuviésemos
-estuvieseis
-estuviesen
-estando
-estado
-estada
-estados
-estadas
-estad
-
-               | forms of haber, to have (not including the infinitive):
-he
-has
-ha
-hemos
-habéis
-han
-haya
-hayas
-hayamos
-hayáis
-hayan
-habré
-habrás
-habrá
-habremos
-habréis
-habrán
-habría
-habrías
-habríamos
-habríais
-habrían
-había
-habías
-habíamos
-habíais
-habían
-hube
-hubiste
-hubo
-hubimos
-hubisteis
-hubieron
-hubiera
-hubieras
-hubiéramos
-hubierais
-hubieran
-hubiese
-hubieses
-hubiésemos
-hubieseis
-hubiesen
-habiendo
-habido
-habida
-habidos
-habidas
-
-               | forms of ser, to be (not including the infinitive):
-soy
-eres
-es
-somos
-sois
-son
-sea
-seas
-seamos
-seáis
-sean
-seré
-serás
-será
-seremos
-seréis
-serán
-sería
-serías
-seríamos
-seríais
-serían
-era
-eras
-éramos
-erais
-eran
-fui
-fuiste
-fue
-fuimos
-fuisteis
-fueron
-fuera
-fueras
-fuéramos
-fuerais
-fueran
-fuese
-fueses
-fuésemos
-fueseis
-fuesen
-siendo
-sido
-  |  sed also means 'thirst'
-
-               | forms of tener, to have (not including the infinitive):
-tengo
-tienes
-tiene
-tenemos
-tenéis
-tienen
-tenga
-tengas
-tengamos
-tengáis
-tengan
-tendré
-tendrás
-tendrá
-tendremos
-tendréis
-tendrán
-tendría
-tendrías
-tendríamos
-tendríais
-tendrían
-tenía
-tenías
-teníamos
-teníais
-tenían
-tuve
-tuviste
-tuvo
-tuvimos
-tuvisteis
-tuvieron
-tuviera
-tuvieras
-tuviéramos
-tuvierais
-tuvieran
-tuviese
-tuvieses
-tuviésemos
-tuvieseis
-tuviesen
-teniendo
-tenido
-tenida
-tenidos
-tenidas
-tened
-
diff --git a/solr/example/example-DIH/solr/db/conf/lang/stopwords_eu.txt b/solr/example/example-DIH/solr/db/conf/lang/stopwords_eu.txt
deleted file mode 100644
index 25f1db9..0000000
--- a/solr/example/example-DIH/solr/db/conf/lang/stopwords_eu.txt
+++ /dev/null
@@ -1,99 +0,0 @@
-# Example set of Basque stopwords
-al
-anitz
-arabera
-asko
-baina
-bat
-batean
-batek
-bati
-batzuei
-batzuek
-batzuetan
-batzuk
-bera
-beraiek
-berau
-berauek
-bere
-berori
-beroriek
-beste
-bezala
-da
-dago
-dira
-ditu
-du
-dute
-edo
-egin
-ere
-eta
-eurak
-ez
-gainera
-gu
-gutxi
-guzti
-haiei
-haiek
-haietan
-hainbeste
-hala
-han
-handik
-hango
-hara
-hari
-hark
-hartan
-hau
-hauei
-hauek
-hauetan
-hemen
-hemendik
-hemengo
-hi
-hona
-honek
-honela
-honetan
-honi
-hor
-hori
-horiei
-horiek
-horietan
-horko
-horra
-horrek
-horrela
-horretan
-horri
-hortik
-hura
-izan
-ni
-noiz
-nola
-non
-nondik
-nongo
-nor
-nora
-ze
-zein
-zen
-zenbait
-zenbat
-zer
-zergatik
-ziren
-zituen
-zu
-zuek
-zuen
-zuten
diff --git a/solr/example/example-DIH/solr/db/conf/lang/stopwords_fa.txt b/solr/example/example-DIH/solr/db/conf/lang/stopwords_fa.txt
deleted file mode 100644
index 723641c..0000000
--- a/solr/example/example-DIH/solr/db/conf/lang/stopwords_fa.txt
+++ /dev/null
@@ -1,313 +0,0 @@
-# This file was created by Jacques Savoy and is distributed under the BSD license.
-# See http://members.unine.ch/jacques.savoy/clef/index.html.
-# Also see http://www.opensource.org/licenses/bsd-license.html
-# Note: by default this file is used after normalization, so when adding entries
-# to this file, use the arabic 'ي' instead of 'ی'
-انان
-نداشته
-سراسر
-خياه
-ايشان
-وي
-تاكنون
-بيشتري
-دوم
-پس
-ناشي
-وگو
-يا
-داشتند
-سپس
-هنگام
-هرگز
-پنج
-نشان
-امسال
-ديگر
-گروهي
-شدند
-چطور
-ده

-دو
-نخستين
-ولي
-چرا
-چه
-وسط

-كدام
-قابل
-يك
-رفت
-هفت
-همچنين
-در
-هزار
-بله
-بلي
-شايد
-اما
-شناسي
-گرفته
-دهد
-داشته
-دانست
-داشتن
-خواهيم
-ميليارد
-وقتيكه
-امد
-خواهد
-جز
-اورده
-شده
-بلكه
-خدمات
-شدن
-برخي
-نبود
-بسياري
-جلوگيري
-حق
-كردند
-نوعي
-بعري
-نكرده
-نظير
-نبايد
-بوده
-بودن
-داد
-اورد
-هست
-جايي
-شود
-دنبال
-داده
-بايد
-سابق
-هيچ
-همان
-انجا
-كمتر
-كجاست
-گردد
-كسي
-تر
-مردم
-تان
-دادن
-بودند
-سري
-جدا
-ندارند
-مگر
-يكديگر
-دارد
-دهند
-بنابراين
-هنگامي
-سمت
-جا
-انچه
-خود
-دادند
-زياد
-دارند
-اثر
-بدون
-بهترين
-بيشتر
-البته
-به
-براساس
-بيرون
-كرد
-بعضي
-گرفت
-توي
-اي
-ميليون
-او
-جريان
-تول
-بر
-مانند
-برابر
-باشيم
-مدتي
-گويند
-اكنون
-تا
-تنها
-جديد
-چند
-بي
-نشده
-كردن
-كردم
-گويد
-كرده
-كنيم
-نمي
-نزد
-روي
-قصد
-فقط
-بالاي
-ديگران
-اين
-ديروز
-توسط
-سوم
-ايم
-دانند
-سوي
-استفاده
-شما
-كنار
-داريم
-ساخته
-طور
-امده
-رفته
-نخست
-بيست
-نزديك
-طي
-كنيد
-از
-انها
-تمامي
-داشت
-يكي
-طريق
-اش
-چيست
-روب
-نمايد
-گفت
-چندين
-چيزي
-تواند
-ام
-ايا
-با
-ان
-ايد
-ترين
-اينكه
-ديگري
-راه
-هايي
-بروز
-همچنان
-پاعين
-كس
-حدود
-مختلف
-مقابل
-چيز
-گيرد
-ندارد
-ضد
-همچون
-سازي
-شان
-مورد
-باره
-مرسي
-خويش
-برخوردار
-چون
-خارج
-شش
-هنوز
-تحت
-ضمن
-هستيم
-گفته
-فكر
-بسيار
-پيش
-براي
-روزهاي
-انكه
-نخواهد
-بالا
-كل
-وقتي
-كي
-چنين
-كه
-گيري
-نيست
-است
-كجا
-كند
-نيز
-يابد
-بندي
-حتي
-توانند
-عقب
-خواست
-كنند
-بين
-تمام
-همه
-ما
-باشند
-مثل
-شد
-اري
-باشد
-اره
-طبق
-بعد
-اگر
-صورت
-غير
-جاي
-بيش
-ريزي
-اند
-زيرا
-چگونه
-بار
-لطفا
-مي
-درباره
-من
-ديده
-همين
-گذاري
-برداري
-علت
-گذاشته
-هم
-فوق
-نه
-ها
-شوند
-اباد
-همواره
-هر
-اول
-خواهند
-چهار
-نام
-امروز
-مان
-هاي
-قبل
-كنم
-سعي
-تازه
-را
-هستند
-زير
-جلوي
-عنوان
-بود
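Per the header note, this list is matched after normalization, so the normalizer must precede the stop filter. A partial sketch under that assumption (the full example chain also includes Arabic normalization and lowercasing, omitted here):

```xml
<!-- Normalize Persian text first so list entries written with Arabic 'ي' match. -->
<filter class="solr.PersianNormalizationFilterFactory"/>
<filter class="solr.StopFilterFactory" words="lang/stopwords_fa.txt" ignoreCase="true"/>
```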
diff --git a/solr/example/example-DIH/solr/db/conf/lang/stopwords_fi.txt b/solr/example/example-DIH/solr/db/conf/lang/stopwords_fi.txt
deleted file mode 100644
index 4372c9a..0000000
--- a/solr/example/example-DIH/solr/db/conf/lang/stopwords_fi.txt
+++ /dev/null
@@ -1,97 +0,0 @@
- | From svn.tartarus.org/snowball/trunk/website/algorithms/finnish/stop.txt
- | This file is distributed under the BSD License.
- | See http://snowball.tartarus.org/license.php
- | Also see http://www.opensource.org/licenses/bsd-license.html
- |  - Encoding was converted to UTF-8.
- |  - This notice was added.
- |
- | NOTE: To use this file with StopFilterFactory, you must specify format="snowball"
- 
-| forms of BE
-
-olla
-olen
-olet
-on
-olemme
-olette
-ovat
-ole        | negative form
-
-oli
-olisi
-olisit
-olisin
-olisimme
-olisitte
-olisivat
-olit
-olin
-olimme
-olitte
-olivat
-ollut
-olleet
-
-en         | negation
-et
-ei
-emme
-ette
-eivät
-
-|Nom   Gen    Acc    Part   Iness   Elat    Illat  Adess   Ablat   Allat   Ess    Trans
-minä   minun  minut  minua  minussa minusta minuun minulla minulta minulle               | I
-sinä   sinun  sinut  sinua  sinussa sinusta sinuun sinulla sinulta sinulle               | you
-hän    hänen  hänet  häntä  hänessä hänestä häneen hänellä häneltä hänelle               | he she
-me     meidän meidät meitä  meissä  meistä  meihin meillä  meiltä  meille                | we
-te     teidän teidät teitä  teissä  teistä  teihin teillä  teiltä  teille                | you
-he     heidän heidät heitä  heissä  heistä  heihin heillä  heiltä  heille                | they
-
-tämä   tämän         tätä   tässä   tästä   tähän  tallä   tältä   tälle   tänä   täksi  | this
-tuo    tuon          tuotä  tuossa  tuosta  tuohon tuolla  tuolta  tuolle  tuona  tuoksi | that
-se     sen           sitä   siinä   siitä   siihen sillä   siltä   sille   sinä   siksi  | it
-nämä   näiden        näitä  näissä  näistä  näihin näillä  näiltä  näille  näinä  näiksi | these
-nuo    noiden        noita  noissa  noista  noihin noilla  noilta  noille  noina  noiksi | those
-ne     niiden        niitä  niissä  niistä  niihin niillä  niiltä  niille  niinä  niiksi | they
-
-kuka   kenen kenet   ketä   kenessä kenestä keneen kenellä keneltä kenelle kenenä keneksi| who
-ketkä  keiden ketkä  keitä  keissä  keistä  keihin keillä  keiltä  keille  keinä  keiksi | (pl)
-mikä   minkä minkä   mitä   missä   mistä   mihin  millä   miltä   mille   minä   miksi  | which what
-mitkä                                                                                    | (pl)
-
-joka   jonka         jota   jossa   josta   johon  jolla   jolta   jolle   jona   joksi  | who which
-jotka  joiden        joita  joissa  joista  joihin joilla  joilta  joille  joina  joiksi | (pl)
-
-| conjunctions
-
-että   | that
-ja     | and
-jos    | if
-koska  | because
-kuin   | than
-mutta  | but
-niin   | so
-sekä   | and
-sillä  | for
-tai    | or
-vaan   | but
-vai    | or
-vaikka | although
-
-
-| prepositions
-
-kanssa  | with
-mukaan  | according to
-noin    | about
-poikki  | across
-yli     | over, across
-
-| other
-
-kun    | when
-niin   | so
-nyt    | now
-itse   | self
-
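Editor's aside on the NOTE header these Snowball-derived lists carry: without format="snowball", StopFilterFactory would read the "|" comments and multi-word pronoun tables above as literal stopwords. A minimal sketch of how such a list is wired into an analyzer (the field type name and tokenizer choice are illustrative, not taken from this diff):

    <fieldType name="text_fi" class="solr.TextField" positionIncrementGap="100">
      <analyzer>
        <tokenizer class="solr.StandardTokenizerFactory"/>
        <!-- format="snowball" makes the filter parse "|" comments and
             multi-word lines such as the pronoun tables above -->
        <filter class="solr.StopFilterFactory" ignoreCase="true"
                words="lang/stopwords_fi.txt" format="snowball"/>
      </analyzer>
    </fieldType>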
diff --git a/solr/example/example-DIH/solr/db/conf/lang/stopwords_fr.txt b/solr/example/example-DIH/solr/db/conf/lang/stopwords_fr.txt
deleted file mode 100644
index 749abae..0000000
--- a/solr/example/example-DIH/solr/db/conf/lang/stopwords_fr.txt
+++ /dev/null
@@ -1,186 +0,0 @@
- | From svn.tartarus.org/snowball/trunk/website/algorithms/french/stop.txt
- | This file is distributed under the BSD License.
- | See http://snowball.tartarus.org/license.php
- | Also see http://www.opensource.org/licenses/bsd-license.html
- |  - Encoding was converted to UTF-8.
- |  - This notice was added.
- |
- | NOTE: To use this file with StopFilterFactory, you must specify format="snowball"
-
- | A French stop word list. Comments begin with vertical bar. Each stop
- | word is at the start of a line.
-
-au             |  a + le
-aux            |  a + les
-avec           |  with
-ce             |  this
-ces            |  these
-dans           |  with
-de             |  of
-des            |  de + les
-du             |  de + le
-elle           |  she
-en             |  `of them' etc
-et             |  and
-eux            |  them
-il             |  he
-je             |  I
-la             |  the
-le             |  the
-leur           |  their
-lui            |  him
-ma             |  my (fem)
-mais           |  but
-me             |  me
-même           |  same; as in moi-même (myself) etc
-mes            |  me (pl)
-moi            |  me
-mon            |  my (masc)
-ne             |  not
-nos            |  our (pl)
-notre          |  our
-nous           |  we
-on             |  one
-ou             |  where
-par            |  by
-pas            |  not
-pour           |  for
-qu             |  que before vowel
-que            |  that
-qui            |  who
-sa             |  his, her (fem)
-se             |  oneself
-ses            |  his (pl)
-son            |  his, her (masc)
-sur            |  on
-ta             |  thy (fem)
-te             |  thee
-tes            |  thy (pl)
-toi            |  thee
-ton            |  thy (masc)
-tu             |  thou
-un             |  a
-une            |  a
-vos            |  your (pl)
-votre          |  your
-vous           |  you
-
-               |  single letter forms
-
-c              |  c'
-d              |  d'
-j              |  j'
-l              |  l'
-à              |  to, at
-m              |  m'
-n              |  n'
-s              |  s'
-t              |  t'
-y              |  there
-
-               | forms of être (not including the infinitive):
-été
-étée
-étées
-étés
-étant
-suis
-es
-est
-sommes
-êtes
-sont
-serai
-seras
-sera
-serons
-serez
-seront
-serais
-serait
-serions
-seriez
-seraient
-étais
-était
-étions
-étiez
-étaient
-fus
-fut
-fûmes
-fûtes
-furent
-sois
-soit
-soyons
-soyez
-soient
-fusse
-fusses
-fût
-fussions
-fussiez
-fussent
-
-               | forms of avoir (not including the infinitive):
-ayant
-eu
-eue
-eues
-eus
-ai
-as
-avons
-avez
-ont
-aurai
-auras
-aura
-aurons
-aurez
-auront
-aurais
-aurait
-aurions
-auriez
-auraient
-avais
-avait
-avions
-aviez
-avaient
-eut
-eûmes
-eûtes
-eurent
-aie
-aies
-ait
-ayons
-ayez
-aient
-eusse
-eusses
-eût
-eussions
-eussiez
-eussent
-
-               | Later additions (from Jean-Christophe Deschamps)
-ceci           |  this
-cela           |  that
-celà           |  that
-cet            |  this
-cette          |  this
-ici            |  here
-ils            |  they
-les            |  the (pl)
-leurs          |  their (pl)
-quel           |  which
-quels          |  which
-quelle         |  which
-quelles        |  which
-sans           |  without
-soi            |  oneself
-
diff --git a/solr/example/example-DIH/solr/db/conf/lang/stopwords_ga.txt b/solr/example/example-DIH/solr/db/conf/lang/stopwords_ga.txt
deleted file mode 100644
index 9ff88d7..0000000
--- a/solr/example/example-DIH/solr/db/conf/lang/stopwords_ga.txt
+++ /dev/null
@@ -1,110 +0,0 @@
-
-a
-ach
-ag
-agus
-an
-aon
-ar
-arna
-as
-b'
-ba
-beirt
-bhúr
-caoga
-ceathair
-ceathrar
-chomh
-chtó
-chuig
-chun
-cois
-céad
-cúig
-cúigear
-d'
-daichead
-dar
-de
-deich
-deichniúr
-den
-dhá
-do
-don
-dtí
-dá
-dár
-dó
-faoi
-faoin
-faoina
-faoinár
-fara
-fiche
-gach
-gan
-go
-gur
-haon
-hocht
-i
-iad
-idir
-in
-ina
-ins
-inár
-is
-le
-leis
-lena
-lenár
-m'
-mar
-mo
-mé
-na
-nach
-naoi
-naonúr
-ná
-ní
-níor
-nó
-nócha
-ocht
-ochtar
-os
-roimh
-sa
-seacht
-seachtar
-seachtó
-seasca
-seisear
-siad
-sibh
-sinn
-sna
-sé
-sí
-tar
-thar
-thú
-triúr
-trí
-trína
-trínár
-tríocha
-tú
-um
-ár
-é
-éis
-í
-ó
-ón
-óna
-ónár
diff --git a/solr/example/example-DIH/solr/db/conf/lang/stopwords_gl.txt b/solr/example/example-DIH/solr/db/conf/lang/stopwords_gl.txt
deleted file mode 100644
index d8760b1..0000000
--- a/solr/example/example-DIH/solr/db/conf/lang/stopwords_gl.txt
+++ /dev/null
@@ -1,161 +0,0 @@
-# galican stopwords
-a
-aínda
-alí
-aquel
-aquela
-aquelas
-aqueles
-aquilo
-aquí
-ao
-aos
-as
-así
-á
-ben
-cando
-che
-co
-coa
-comigo
-con
-connosco
-contigo
-convosco
-coas
-cos
-cun
-cuns
-cunha
-cunhas
-da
-dalgunha
-dalgunhas
-dalgún
-dalgúns
-das
-de
-del
-dela
-delas
-deles
-desde
-deste
-do
-dos
-dun
-duns
-dunha
-dunhas
-e
-el
-ela
-elas
-eles
-en
-era
-eran
-esa
-esas
-ese
-eses
-esta
-estar
-estaba
-está
-están
-este
-estes
-estiven
-estou
-eu
-é
-facer
-foi
-foron
-fun
-había
-hai
-iso
-isto
-la
-las
-lle
-lles
-lo
-los
-mais
-me
-meu
-meus
-min
-miña
-miñas
-moi
-na
-nas
-neste
-nin
-no
-non
-nos
-nosa
-nosas
-noso
-nosos
-nós
-nun
-nunha
-nuns
-nunhas
-o
-os
-ou
-ó
-ós
-para
-pero
-pode
-pois
-pola
-polas
-polo
-polos
-por
-que
-se
-senón
-ser
-seu
-seus
-sexa
-sido
-sobre
-súa
-súas
-tamén
-tan
-te
-ten
-teñen
-teño
-ter
-teu
-teus
-ti
-tido
-tiña
-tiven
-túa
-túas
-un
-unha
-unhas
-uns
-vos
-vosa
-vosas
-voso
-vosos
-vós
diff --git a/solr/example/example-DIH/solr/db/conf/lang/stopwords_hi.txt b/solr/example/example-DIH/solr/db/conf/lang/stopwords_hi.txt
deleted file mode 100644
index 86286bb..0000000
--- a/solr/example/example-DIH/solr/db/conf/lang/stopwords_hi.txt
+++ /dev/null
@@ -1,235 +0,0 @@
-# Also see http://www.opensource.org/licenses/bsd-license.html
-# See http://members.unine.ch/jacques.savoy/clef/index.html.
-# This file was created by Jacques Savoy and is distributed under the BSD license.
-# Note: by default this file also contains forms normalized by HindiNormalizer 
-# for spelling variation (see section below), such that it can be used whether or 
-# not you enable that feature. When adding additional entries to this list,
-# please add the normalized form as well. 
-अंदर
-अत
-अपना
-अपनी
-अपने
-अभी
-आदि
-आप
-इत्यादि
-इन 
-इनका
-इन्हीं
-इन्हें
-इन्हों
-इस
-इसका
-इसकी
-इसके
-इसमें
-इसी
-इसे
-उन
-उनका
-उनकी
-उनके
-उनको
-उन्हीं
-उन्हें
-उन्हों
-उस
-उसके
-उसी
-उसे
-एक
-एवं
-एस
-ऐसे
-और
-कई
-कर
-करता
-करते
-करना
-करने
-करें
-कहते
-कहा
-का
-काफ़ी
-कि
-कितना
-किन्हें
-किन्हों
-किया
-किर
-किस
-किसी
-किसे
-की
-कुछ
-कुल
-के
-को
-कोई
-कौन
-कौनसा
-गया
-घर
-जब
-जहाँ
-जा
-जितना
-जिन
-जिन्हें
-जिन्हों
-जिस
-जिसे
-जीधर
-जैसा
-जैसे
-जो
-तक
-तब
-तरह
-तिन
-तिन्हें
-तिन्हों
-तिस
-तिसे
-तो
-था
-थी
-थे
-दबारा
-दिया
-दुसरा
-दूसरे
-दो
-द्वारा
-न
-नहीं
-ना
-निहायत
-नीचे
-ने
-पर
-पर  
-पहले
-पूरा
-पे
-फिर
-बनी
-बही
-बहुत
-बाद
-बाला
-बिलकुल
-भी
-भीतर
-मगर
-मानो
-मे
-में
-यदि
-यह
-यहाँ
-यही
-या
-यिह 
-ये
-रखें
-रहा
-रहे
-ऱ्वासा
-लिए
-लिये
-लेकिन
-व
-वर्ग
-वह
-वह 
-वहाँ
-वहीं
-वाले
-वुह 
-वे
-वग़ैरह
-संग
-सकता
-सकते
-सबसे
-सभी
-साथ
-साबुत
-साभ
-सारा
-से
-सो
-ही
-हुआ
-हुई
-हुए
-है
-हैं
-हो
-होता
-होती
-होते
-होना
-होने
-# additional normalized forms of the above
-अपनि
-जेसे
-होति
-सभि
-तिंहों
-इंहों
-दवारा
-इसि
-किंहें
-थि
-उंहों
-ओर
-जिंहें
-वहिं
-अभि
-बनि
-हि
-उंहिं
-उंहें
-हें
-वगेरह
-एसे
-रवासा
-कोन
-निचे
-काफि
-उसि
-पुरा
-भितर
-हे
-बहि
-वहां
-कोइ
-यहां
-जिंहों
-तिंहें
-किसि
-कइ
-यहि
-इंहिं
-जिधर
-इंहें
-अदि
-इतयादि
-हुइ
-कोनसा
-इसकि
-दुसरे
-जहां
-अप
-किंहों
-उनकि
-भि
-वरग
-हुअ
-जेसा
-नहिं
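The header's point about normalized forms is easiest to see in an analyzer chain: because the list above already includes the HindiNormalizer spellings, stopping matches with or without normalization enabled. A hypothetical field type using this file (factory order follows common Solr practice; the names here are assumptions, not taken from this diff):

    <fieldType name="text_hi" class="solr.TextField" positionIncrementGap="100">
      <analyzer>
        <tokenizer class="solr.StandardTokenizerFactory"/>
        <filter class="solr.IndicNormalizationFilterFactory"/>
        <!-- folds spelling variants; the list above already carries the
             normalized forms, so stopping works with or without this filter -->
        <filter class="solr.HindiNormalizationFilterFactory"/>
        <filter class="solr.StopFilterFactory" ignoreCase="true"
                words="lang/stopwords_hi.txt"/>
      </analyzer>
    </fieldType>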
diff --git a/solr/example/example-DIH/solr/db/conf/lang/stopwords_hu.txt b/solr/example/example-DIH/solr/db/conf/lang/stopwords_hu.txt
deleted file mode 100644
index 37526da..0000000
--- a/solr/example/example-DIH/solr/db/conf/lang/stopwords_hu.txt
+++ /dev/null
@@ -1,211 +0,0 @@
- | From svn.tartarus.org/snowball/trunk/website/algorithms/hungarian/stop.txt
- | This file is distributed under the BSD License.
- | See http://snowball.tartarus.org/license.php
- | Also see http://www.opensource.org/licenses/bsd-license.html
- |  - Encoding was converted to UTF-8.
- |  - This notice was added.
- |
- | NOTE: To use this file with StopFilterFactory, you must specify format="snowball"
- 
-| Hungarian stop word list
-| prepared by Anna Tordai
-
-a
-ahogy
-ahol
-aki
-akik
-akkor
-alatt
-által
-általában
-amely
-amelyek
-amelyekben
-amelyeket
-amelyet
-amelynek
-ami
-amit
-amolyan
-amíg
-amikor
-át
-abban
-ahhoz
-annak
-arra
-arról
-az
-azok
-azon
-azt
-azzal
-azért
-aztán
-azután
-azonban
-bár
-be
-belül
-benne
-cikk
-cikkek
-cikkeket
-csak
-de
-e
-eddig
-egész
-egy
-egyes
-egyetlen
-egyéb
-egyik
-egyre
-ekkor
-el
-elég
-ellen
-elő
-először
-előtt
-első
-én
-éppen
-ebben
-ehhez
-emilyen
-ennek
-erre
-ez
-ezt
-ezek
-ezen
-ezzel
-ezért
-és
-fel
-felé
-hanem
-hiszen
-hogy
-hogyan
-igen
-így
-illetve
-ill.
-ill
-ilyen
-ilyenkor
-ison
-ismét
-itt
-jó
-jól
-jobban
-kell
-kellett
-keresztül
-keressünk
-ki
-kívül
-között
-közül
-legalább
-lehet
-lehetett
-legyen
-lenne
-lenni
-lesz
-lett
-maga
-magát
-majd
-majd
-már
-más
-másik
-meg
-még
-mellett
-mert
-mely
-melyek
-mi
-mit
-míg
-miért
-milyen
-mikor
-minden
-mindent
-mindenki
-mindig
-mint
-mintha
-mivel
-most
-nagy
-nagyobb
-nagyon
-ne
-néha
-nekem
-neki
-nem
-néhány
-nélkül
-nincs
-olyan
-ott
-össze
-ő
-ők
-őket
-pedig
-persze
-rá
-s
-saját
-sem
-semmi
-sok
-sokat
-sokkal
-számára
-szemben
-szerint
-szinte
-talán
-tehát
-teljes
-tovább
-továbbá
-több
-úgy
-ugyanis
-új
-újabb
-újra
-után
-utána
-utolsó
-vagy
-vagyis
-valaki
-valami
-valamint
-való
-vagyok
-van
-vannak
-volt
-voltam
-voltak
-voltunk
-vissza
-vele
-viszont
-volna
diff --git a/solr/example/example-DIH/solr/db/conf/lang/stopwords_hy.txt b/solr/example/example-DIH/solr/db/conf/lang/stopwords_hy.txt
deleted file mode 100644
index 60c1c50..0000000
--- a/solr/example/example-DIH/solr/db/conf/lang/stopwords_hy.txt
+++ /dev/null
@@ -1,46 +0,0 @@
-# example set of Armenian stopwords.
-այդ
-այլ
-այն
-այս
-դու
-դուք
-եմ
-են
-ենք
-ես
-եք
-է
-էի
-էին
-էինք
-էիր
-էիք
-էր
-ըստ
-թ
-ի
-ին
-իսկ
-իր
-կամ
-համար
-հետ
-հետո
-մենք
-մեջ
-մի
-ն
-նա
-նաև
-նրա
-նրանք
-որ
-որը
-որոնք
-որպես
-ու
-ում
-պիտի
-վրա
-և
diff --git a/solr/example/example-DIH/solr/db/conf/lang/stopwords_id.txt b/solr/example/example-DIH/solr/db/conf/lang/stopwords_id.txt
deleted file mode 100644
index 4617f83..0000000
--- a/solr/example/example-DIH/solr/db/conf/lang/stopwords_id.txt
+++ /dev/null
@@ -1,359 +0,0 @@
-# from appendix D of: A Study of Stemming Effects on Information
-# Retrieval in Bahasa Indonesia
-ada
-adanya
-adalah
-adapun
-agak
-agaknya
-agar
-akan
-akankah
-akhirnya
-aku
-akulah
-amat
-amatlah
-anda
-andalah
-antar
-diantaranya
-antara
-antaranya
-diantara
-apa
-apaan
-mengapa
-apabila
-apakah
-apalagi
-apatah
-atau
-ataukah
-ataupun
-bagai
-bagaikan
-sebagai
-sebagainya
-bagaimana
-bagaimanapun
-sebagaimana
-bagaimanakah
-bagi
-bahkan
-bahwa
-bahwasanya
-sebaliknya
-banyak
-sebanyak
-beberapa
-seberapa
-begini
-beginian
-beginikah
-beginilah
-sebegini
-begitu
-begitukah
-begitulah
-begitupun
-sebegitu
-belum
-belumlah
-sebelum
-sebelumnya
-sebenarnya
-berapa
-berapakah
-berapalah
-berapapun
-betulkah
-sebetulnya
-biasa
-biasanya
-bila
-bilakah
-bisa
-bisakah
-sebisanya
-boleh
-bolehkah
-bolehlah
-buat
-bukan
-bukankah
-bukanlah
-bukannya
-cuma
-percuma
-dahulu
-dalam
-dan
-dapat
-dari
-daripada
-dekat
-demi
-demikian
-demikianlah
-sedemikian
-dengan
-depan
-di
-dia
-dialah
-dini
-diri
-dirinya
-terdiri
-dong
-dulu
-enggak
-enggaknya
-entah
-entahlah
-terhadap
-terhadapnya
-hal
-hampir
-hanya
-hanyalah
-harus
-haruslah
-harusnya
-seharusnya
-hendak
-hendaklah
-hendaknya
-hingga
-sehingga
-ia
-ialah
-ibarat
-ingin
-inginkah
-inginkan
-ini
-inikah
-inilah
-itu
-itukah
-itulah
-jangan
-jangankan
-janganlah
-jika
-jikalau
-juga
-justru
-kala
-kalau
-kalaulah
-kalaupun
-kalian
-kami
-kamilah
-kamu
-kamulah
-kan
-kapan
-kapankah
-kapanpun
-dikarenakan
-karena
-karenanya
-ke
-kecil
-kemudian
-kenapa
-kepada
-kepadanya
-ketika
-seketika
-khususnya
-kini
-kinilah
-kiranya
-sekiranya
-kita
-kitalah
-kok
-lagi
-lagian
-selagi
-lah
-lain
-lainnya
-melainkan
-selaku
-lalu
-melalui
-terlalu
-lama
-lamanya
-selama
-selama
-selamanya
-lebih
-terlebih
-bermacam
-macam
-semacam
-maka
-makanya
-makin
-malah
-malahan
-mampu
-mampukah
-mana
-manakala
-manalagi
-masih
-masihkah
-semasih
-masing
-mau
-maupun
-semaunya
-memang
-mereka
-merekalah
-meski
-meskipun
-semula
-mungkin
-mungkinkah
-nah
-namun
-nanti
-nantinya
-nyaris
-oleh
-olehnya
-seorang
-seseorang
-pada
-padanya
-padahal
-paling
-sepanjang
-pantas
-sepantasnya
-sepantasnyalah
-para
-pasti
-pastilah
-per
-pernah
-pula
-pun
-merupakan
-rupanya
-serupa
-saat
-saatnya
-sesaat
-saja
-sajalah
-saling
-bersama
-sama
-sesama
-sambil
-sampai
-sana
-sangat
-sangatlah
-saya
-sayalah
-se
-sebab
-sebabnya
-sebuah
-tersebut
-tersebutlah
-sedang
-sedangkan
-sedikit
-sedikitnya
-segala
-segalanya
-segera
-sesegera
-sejak
-sejenak
-sekali
-sekalian
-sekalipun
-sesekali
-sekaligus
-sekarang
-sekarang
-sekitar
-sekitarnya
-sela
-selain
-selalu
-seluruh
-seluruhnya
-semakin
-sementara
-sempat
-semua
-semuanya
-sendiri
-sendirinya
-seolah
-seperti
-sepertinya
-sering
-seringnya
-serta
-siapa
-siapakah
-siapapun
-disini
-disinilah
-sini
-sinilah
-sesuatu
-sesuatunya
-suatu
-sesudah
-sesudahnya
-sudah
-sudahkah
-sudahlah
-supaya
-tadi
-tadinya
-tak
-tanpa
-setelah
-telah
-tentang
-tentu
-tentulah
-tentunya
-tertentu
-seterusnya
-tapi
-tetapi
-setiap
-tiap
-setidaknya
-tidak
-tidakkah
-tidaklah
-toh
-waduh
-wah
-wahai
-sewaktu
-walau
-walaupun
-wong
-yaitu
-yakni
-yang
diff --git a/solr/example/example-DIH/solr/db/conf/lang/stopwords_it.txt b/solr/example/example-DIH/solr/db/conf/lang/stopwords_it.txt
deleted file mode 100644
index 1219cc7..0000000
--- a/solr/example/example-DIH/solr/db/conf/lang/stopwords_it.txt
+++ /dev/null
@@ -1,303 +0,0 @@
- | From svn.tartarus.org/snowball/trunk/website/algorithms/italian/stop.txt
- | This file is distributed under the BSD License.
- | See http://snowball.tartarus.org/license.php
- | Also see http://www.opensource.org/licenses/bsd-license.html
- |  - Encoding was converted to UTF-8.
- |  - This notice was added.
- |
- | NOTE: To use this file with StopFilterFactory, you must specify format="snowball"
-
- | An Italian stop word list. Comments begin with vertical bar. Each stop
- | word is at the start of a line.
-
-ad             |  a (to) before vowel
-al             |  a + il
-allo           |  a + lo
-ai             |  a + i
-agli           |  a + gli
-all            |  a + l'
-agl            |  a + gl'
-alla           |  a + la
-alle           |  a + le
-con            |  with
-col            |  con + il
-coi            |  con + i (forms collo, cogli etc are now very rare)
-da             |  from
-dal            |  da + il
-dallo          |  da + lo
-dai            |  da + i
-dagli          |  da + gli
-dall           |  da + l'
-dagl           |  da + gll'
-dalla          |  da + la
-dalle          |  da + le
-di             |  of
-del            |  di + il
-dello          |  di + lo
-dei            |  di + i
-degli          |  di + gli
-dell           |  di + l'
-degl           |  di + gl'
-della          |  di + la
-delle          |  di + le
-in             |  in
-nel            |  in + el
-nello          |  in + lo
-nei            |  in + i
-negli          |  in + gli
-nell           |  in + l'
-negl           |  in + gl'
-nella          |  in + la
-nelle          |  in + le
-su             |  on
-sul            |  su + il
-sullo          |  su + lo
-sui            |  su + i
-sugli          |  su + gli
-sull           |  su + l'
-sugl           |  su + gl'
-sulla          |  su + la
-sulle          |  su + le
-per            |  through, by
-tra            |  among
-contro         |  against
-io             |  I
-tu             |  thou
-lui            |  he
-lei            |  she
-noi            |  we
-voi            |  you
-loro           |  they
-mio            |  my
-mia            |
-miei           |
-mie            |
-tuo            |
-tua            |
-tuoi           |  thy
-tue            |
-suo            |
-sua            |
-suoi           |  his, her
-sue            |
-nostro         |  our
-nostra         |
-nostri         |
-nostre         |
-vostro         |  your
-vostra         |
-vostri         |
-vostre         |
-mi             |  me
-ti             |  thee
-ci             |  us, there
-vi             |  you, there
-lo             |  him, the
-la             |  her, the
-li             |  them
-le             |  them, the
-gli            |  to him, the
-ne             |  from there etc
-il             |  the
-un             |  a
-uno            |  a
-una            |  a
-ma             |  but
-ed             |  and
-se             |  if
-perché         |  why, because
-anche          |  also
-come           |  how
-dov            |  where (as dov')
-dove           |  where
-che            |  who, that
-chi            |  who
-cui            |  whom
-non            |  not
-più            |  more
-quale          |  who, that
-quanto         |  how much
-quanti         |
-quanta         |
-quante         |
-quello         |  that
-quelli         |
-quella         |
-quelle         |
-questo         |  this
-questi         |
-questa         |
-queste         |
-si             |  yes
-tutto          |  all
-tutti          |  all
-
-               |  single letter forms:
-
-a              |  at
-c              |  as c' for ce or ci
-e              |  and
-i              |  the
-l              |  as l'
-o              |  or
-
-               | forms of avere, to have (not including the infinitive):
-
-ho
-hai
-ha
-abbiamo
-avete
-hanno
-abbia
-abbiate
-abbiano
-avrò
-avrai
-avrà
-avremo
-avrete
-avranno
-avrei
-avresti
-avrebbe
-avremmo
-avreste
-avrebbero
-avevo
-avevi
-aveva
-avevamo
-avevate
-avevano
-ebbi
-avesti
-ebbe
-avemmo
-aveste
-ebbero
-avessi
-avesse
-avessimo
-avessero
-avendo
-avuto
-avuta
-avuti
-avute
-
-               | forms of essere, to be (not including the infinitive):
-sono
-sei
-è
-siamo
-siete
-sia
-siate
-siano
-sarò
-sarai
-sarà
-saremo
-sarete
-saranno
-sarei
-saresti
-sarebbe
-saremmo
-sareste
-sarebbero
-ero
-eri
-era
-eravamo
-eravate
-erano
-fui
-fosti
-fu
-fummo
-foste
-furono
-fossi
-fosse
-fossimo
-fossero
-essendo
-
-               | forms of fare, to do (not including the infinitive, fa, fat-):
-faccio
-fai
-facciamo
-fanno
-faccia
-facciate
-facciano
-farò
-farai
-farà
-faremo
-farete
-faranno
-farei
-faresti
-farebbe
-faremmo
-fareste
-farebbero
-facevo
-facevi
-faceva
-facevamo
-facevate
-facevano
-feci
-facesti
-fece
-facemmo
-faceste
-fecero
-facessi
-facesse
-facessimo
-facessero
-facendo
-
-               | forms of stare, to be (not including the infinitive):
-sto
-stai
-sta
-stiamo
-stanno
-stia
-stiate
-stiano
-starò
-starai
-starà
-staremo
-starete
-staranno
-starei
-staresti
-starebbe
-staremmo
-stareste
-starebbero
-stavo
-stavi
-stava
-stavamo
-stavate
-stavano
-stetti
-stesti
-stette
-stemmo
-steste
-stettero
-stessi
-stesse
-stessimo
-stessero
-stando
diff --git a/solr/example/example-DIH/solr/db/conf/lang/stopwords_ja.txt b/solr/example/example-DIH/solr/db/conf/lang/stopwords_ja.txt
deleted file mode 100644
index d4321be..0000000
--- a/solr/example/example-DIH/solr/db/conf/lang/stopwords_ja.txt
+++ /dev/null
@@ -1,127 +0,0 @@
-#
-# This file defines a stopword set for Japanese.
-#
-# This set is made up of hand-picked frequent terms from segmented Japanese Wikipedia.
-# Punctuation characters and frequent kanji have mostly been left out.  See LUCENE-3745
-# for frequency lists, etc. that can be useful for making your own set (if desired)
-#
-# Note that there is an overlap between these stopwords and the terms stopped when used
-# in combination with the JapanesePartOfSpeechStopFilter.  When editing this file, note
-# that comments are not allowed on the same line as stopwords.
-#
-# Also note that stopping is done in a case-insensitive manner.  Change your StopFilter
-# configuration if you need case-sensitive stopping.  Lastly, note that stopping is done
-# using the same character width as the entries in this file.  Since this StopFilter is
-# normally done after a CJKWidthFilter in your chain, you would usually want your romaji
-# entries to be in half-width and your kana entries to be in full-width.
-#
-の
-に
-は
-を
-た
-が
-で
-て
-と
-し
-れ
-さ
-ある
-いる
-も
-する
-から
-な
-こと
-として
-い
-や
-れる
-など
-なっ
-ない
-この
-ため
-その
-あっ
-よう
-また
-もの
-という
-あり
-まで
-られ
-なる
-へ
-か
-だ
-これ
-によって
-により
-おり
-より
-による
-ず
-なり
-られる
-において
-ば
-なかっ
-なく
-しかし
-について
-せ
-だっ
-その後
-できる
-それ
-う
-ので
-なお
-のみ
-でき
-き
-つ
-における
-および
-いう
-さらに
-でも
-ら
-たり
-その他
-に関する
-たち
-ます
-ん
-なら
-に対して
-特に
-せる
-及び
-これら
-とき
-では
-にて
-ほか
-ながら
-うち
-そして
-とともに
-ただし
-かつて
-それぞれ
-または
-お
-ほど
-ものの
-に対する
-ほとんど
-と共に
-といった
-です
-とも
-ところ
-ここ
-##### End of file
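The width caveat in this file's header is clearest as a chain: stopping compares character widths verbatim, so CJKWidthFilter should run first. A sketch of the usual ordering (the mode and stoptags path are illustrative assumptions):

    <analyzer>
      <tokenizer class="solr.JapaneseTokenizerFactory" mode="search"/>
      <!-- CJKWidthFilter first, so romaji end up half-width and kana
           full-width, matching the entry widths in stopwords_ja.txt -->
      <filter class="solr.CJKWidthFilterFactory"/>
      <filter class="solr.StopFilterFactory" ignoreCase="true"
              words="lang/stopwords_ja.txt"/>
      <filter class="solr.JapanesePartOfSpeechStopFilterFactory"
              tags="lang/stoptags_ja.txt"/>
    </analyzer>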
diff --git a/solr/example/example-DIH/solr/db/conf/lang/stopwords_lv.txt b/solr/example/example-DIH/solr/db/conf/lang/stopwords_lv.txt
deleted file mode 100644
index e21a23c..0000000
--- a/solr/example/example-DIH/solr/db/conf/lang/stopwords_lv.txt
+++ /dev/null
@@ -1,172 +0,0 @@
-# Set of Latvian stopwords from A Stemming Algorithm for Latvian, Karlis Kreslins
-# the original list of over 800 forms was refined: 
-#   pronouns, adverbs, interjections were removed
-# 
-# prepositions
-aiz
-ap
-ar
-apakš
-ārpus
-augšpus
-bez
-caur
-dēļ
-gar
-iekš
-iz
-kopš
-labad
-lejpus
-līdz
-no
-otrpus
-pa
-par
-pār
-pēc
-pie
-pirms
-pret
-priekš
-starp
-šaipus
-uz
-viņpus
-virs
-virspus
-zem
-apakšpus
-# Conjunctions
-un
-bet
-jo
-ja
-ka
-lai
-tomēr
-tikko
-turpretī
-arī
-kaut
-gan
-tādēļ
-tā
-ne
-tikvien
-vien
-kā
-ir
-te
-vai
-kamēr
-# Particles
-ar
-diezin
-droši
-diemžēl
-nebūt
-ik
-it
-taču
-nu
-pat
-tiklab
-iekšpus
-nedz
-tik
-nevis
-turpretim
-jeb
-iekam
-iekām
-iekāms
-kolīdz
-līdzko
-tiklīdz
-jebšu
-tālab
-tāpēc
-nekā
-itin
-jā
-jau
-jel
-nē
-nezin
-tad
-tikai
-vis
-tak
-iekams
-vien
-# modal verbs
-būt  
-biju 
-biji
-bija
-bijām
-bijāt
-esmu
-esi
-esam
-esat 
-būšu     
-būsi
-būs
-būsim
-būsiet
-tikt
-tiku
-tiki
-tika
-tikām
-tikāt
-tieku
-tiec
-tiek
-tiekam
-tiekat
-tikšu
-tiks
-tiksim
-tiksiet
-tapt
-tapi
-tapāt
-topat
-tapšu
-tapsi
-taps
-tapsim
-tapsiet
-kļūt
-kļuvu
-kļuvi
-kļuva
-kļuvām
-kļuvāt
-kļūstu
-kļūsti
-kļūst
-kļūstam
-kļūstat
-kļūšu
-kļūsi
-kļūs
-kļūsim
-kļūsiet
-# verbs
-varēt
-varēju
-varējām
-varēšu
-varēsim
-var
-varēji
-varējāt
-varēsi
-varēsiet
-varat
-varēja
-varēs
diff --git a/solr/example/example-DIH/solr/db/conf/lang/stopwords_nl.txt b/solr/example/example-DIH/solr/db/conf/lang/stopwords_nl.txt
deleted file mode 100644
index 47a2aea..0000000
--- a/solr/example/example-DIH/solr/db/conf/lang/stopwords_nl.txt
+++ /dev/null
@@ -1,119 +0,0 @@
- | From svn.tartarus.org/snowball/trunk/website/algorithms/dutch/stop.txt
- | This file is distributed under the BSD License.
- | See http://snowball.tartarus.org/license.php
- | Also see http://www.opensource.org/licenses/bsd-license.html
- |  - Encoding was converted to UTF-8.
- |  - This notice was added.
- |
- | NOTE: To use this file with StopFilterFactory, you must specify format="snowball"
-
- | A Dutch stop word list. Comments begin with vertical bar. Each stop
- | word is at the start of a line.
-
- | This is a ranked list (commonest to rarest) of stopwords derived from
- | a large sample of Dutch text.
-
- | Dutch stop words frequently exhibit homonym clashes. These are indicated
- | clearly below.
-
-de             |  the
-en             |  and
-van            |  of, from
-ik             |  I, the ego
-te             |  (1) chez, at etc, (2) to, (3) too
-dat            |  that, which
-die            |  that, those, who, which
-in             |  in, inside
-een            |  a, an, one
-hij            |  he
-het            |  the, it
-niet           |  not, nothing, naught
-zijn           |  (1) to be, being, (2) his, one's, its
-is             |  is
-was            |  (1) was, past tense of all persons sing. of 'zijn' (to be) (2) wax, (3) the washing, (4) rise of river
-op             |  on, upon, at, in, up, used up
-aan            |  on, upon, to (as dative)
-met            |  with, by
-als            |  like, such as, when
-voor           |  (1) before, in front of, (2) furrow
-had            |  had, past tense all persons sing. of 'hebben' (have)
-er             |  there
-maar           |  but, only
-om             |  round, about, for etc
-hem            |  him
-dan            |  then
-zou            |  should/would, past tense all persons sing. of 'zullen'
-of             |  or, whether, if
-wat            |  what, something, anything
-mijn           |  possessive and noun 'mine'
-men            |  people, 'one'
-dit            |  this
-zo             |  so, thus, in this way
-door           |  through by
-over           |  over, across
-ze             |  she, her, they, them
-zich           |  oneself
-bij            |  (1) a bee, (2) by, near, at
-ook            |  also, too
-tot            |  till, until
-je             |  you
-mij            |  me
-uit            |  out of, from
-der            |  Old Dutch form of 'van der' still found in surnames
-daar           |  (1) there, (2) because
-haar           |  (1) her, their, them, (2) hair
-naar           |  (1) unpleasant, unwell etc, (2) towards, (3) as
-heb            |  present first person sing. of 'to have'
-hoe            |  how, why
-heeft          |  present third person sing. of 'to have'
-hebben         |  'to have' and various parts thereof
-deze           |  this
-u              |  you
-want           |  (1) for, (2) mitten, (3) rigging
-nog            |  yet, still
-zal            |  'shall', first and third person sing. of verb 'zullen' (will)
-me             |  me
-zij            |  she, they
-nu             |  now
-ge             |  'thou', still used in Belgium and south Netherlands
-geen           |  none
-omdat          |  because
-iets           |  something, somewhat
-worden         |  to become, grow, get
-toch           |  yet, still
-al             |  all, every, each
-waren          |  (1) 'were' (2) to wander, (3) wares, (3)
-veel           |  much, many
-meer           |  (1) more, (2) lake
-doen           |  to do, to make
-toen           |  then, when
-moet           |  noun 'spot/mote' and present form of 'to must'
-ben            |  (1) am, (2) 'are' in interrogative second person singular of 'to be'
-zonder         |  without
-kan            |  noun 'can' and present form of 'to be able'
-hun            |  their, them
-dus            |  so, consequently
-alles          |  all, everything, anything
-onder          |  under, beneath
-ja             |  yes, of course
-eens           |  once, one day
-hier           |  here
-wie            |  who
-werd           |  imperfect third person sing. of 'become'
-altijd         |  always
-doch           |  yet, but etc
-wordt          |  present third person sing. of 'become'
-wezen          |  (1) to be, (2) 'been' as in 'been fishing', (3) orphans
-kunnen         |  to be able
-ons            |  us/our
-zelf           |  self
-tegen          |  against, towards, at
-na             |  after, near
-reeds          |  already
-wil            |  (1) present tense of 'want', (2) 'will', noun, (3) fender
-kon            |  could; past tense of 'to be able'
-niets          |  nothing
-uw             |  your
-iemand         |  somebody
-geweest        |  been; past participle of 'be'
-andere         |  other
diff --git a/solr/example/example-DIH/solr/db/conf/lang/stopwords_no.txt b/solr/example/example-DIH/solr/db/conf/lang/stopwords_no.txt
deleted file mode 100644
index a7a2c28..0000000
--- a/solr/example/example-DIH/solr/db/conf/lang/stopwords_no.txt
+++ /dev/null
@@ -1,194 +0,0 @@
- | From svn.tartarus.org/snowball/trunk/website/algorithms/norwegian/stop.txt
- | This file is distributed under the BSD License.
- | See http://snowball.tartarus.org/license.php
- | Also see http://www.opensource.org/licenses/bsd-license.html
- |  - Encoding was converted to UTF-8.
- |  - This notice was added.
- |
- | NOTE: To use this file with StopFilterFactory, you must specify format="snowball"
-
- | A Norwegian stop word list. Comments begin with vertical bar. Each stop
- | word is at the start of a line.
-
- | This stop word list is for the dominant bokmål dialect. Words unique
- | to nynorsk are marked *.
-
- | Revised by Jan Bruusgaard <Jan.Bruusgaard@ssb.no>, Jan 2005
-
-og             | and
-i              | in
-jeg            | I
-det            | it/this/that
-at             | to (w. inf.)
-en             | a/an
-et             | a/an
-den            | it/this/that
-til            | to
-er             | is/am/are
-som            | who/that
-på             | on
-de             | they / you(formal)
-med            | with
-han            | he
-av             | of
-ikke           | not
-ikkje          | not *
-der            | there
-så             | so
-var            | was/were
-meg            | me
-seg            | you
-men            | but
-ett            | one
-har            | have
-om             | about
-vi             | we
-min            | my
-mitt           | my
-ha             | have
-hadde          | had
-hun            | she
-nå             | now
-over           | over
-da             | when/as
-ved            | by/know
-fra            | from
-du             | you
-ut             | out
-sin            | your
-dem            | them
-oss            | us
-opp            | up
-man            | you/one
-kan            | can
-hans           | his
-hvor           | where
-eller          | or
-hva            | what
-skal           | shall/must
-selv           | self (reflective)
-sjøl           | self (reflective)
-her            | here
-alle           | all
-vil            | will
-bli            | become
-ble            | became
-blei           | became *
-blitt          | have become
-kunne          | could
-inn            | in
-når            | when
-være           | be
-kom            | come
-noen           | some
-noe            | some
-ville          | would
-dere           | you
-som            | who/which/that
-deres          | their/theirs
-kun            | only/just
-ja             | yes
-etter          | after
-ned            | down
-skulle         | should
-denne          | this
-for            | for/because
-deg            | you
-si             | hers/his
-sine           | hers/his
-sitt           | hers/his
-mot            | against
-å              | to
-meget          | much
-hvorfor        | why
-dette          | this
-disse          | these/those
-uten           | without
-hvordan        | how
-ingen          | none
-din            | your
-ditt           | your
-blir           | become
-samme          | same
-hvilken        | which
-hvilke         | which (plural)
-sånn           | such a
-inni           | inside/within
-mellom         | between
-vår            | our
-hver           | each
-hvem           | who
-vors           | us/ours
-hvis           | whose
-både           | both
-bare           | only/just
-enn            | than
-fordi          | as/because
-før            | before
-mange          | many
-også           | also
-slik           | just
-vært           | been
-være           | to be
-båe            | both *
-begge          | both
-siden          | since
-dykk           | your *
-dykkar         | yours *
-dei            | they *
-deira          | them *
-deires         | theirs *
-deim           | them *
-di             | your (fem.) *
-då             | as/when *
-eg             | I *
-ein            | a/an *
-eit            | a/an *
-eitt           | a/an *
-elles          | or *
-honom          | he *
-hjå            | at *
-ho             | she *
-hoe            | she *
-henne          | her
-hennar         | her/hers
-hennes         | hers
-hoss           | how *
-hossen         | how *
-ikkje          | not *
-ingi           | noone *
-inkje          | noone *
-korleis        | how *
-korso          | how *
-kva            | what/which *
-kvar           | where *
-kvarhelst      | where *
-kven           | who/whom *
-kvi            | why *
-kvifor         | why *
-me             | we *
-medan          | while *
-mi             | my *
-mine           | my *
-mykje          | much *
-no             | now *
-nokon          | some (masc./neut.) *
-noka           | some (fem.) *
-nokor          | some *
-noko           | some *
-nokre          | some *
-si             | his/hers *
-sia            | since *
-sidan          | since *
-so             | so *
-somt           | some *
-somme          | some *
-um             | about*
-upp            | up *
-vere           | be *
-vore           | was *
-verte          | become *
-vort           | become *
-varte          | became *
-vart           | became *
-
diff --git a/solr/example/example-DIH/solr/db/conf/lang/stopwords_pt.txt b/solr/example/example-DIH/solr/db/conf/lang/stopwords_pt.txt
deleted file mode 100644
index acfeb01..0000000
--- a/solr/example/example-DIH/solr/db/conf/lang/stopwords_pt.txt
+++ /dev/null
@@ -1,253 +0,0 @@
- | From svn.tartarus.org/snowball/trunk/website/algorithms/portuguese/stop.txt
- | This file is distributed under the BSD License.
- | See http://snowball.tartarus.org/license.php
- | Also see http://www.opensource.org/licenses/bsd-license.html
- |  - Encoding was converted to UTF-8.
- |  - This notice was added.
- |
- | NOTE: To use this file with StopFilterFactory, you must specify format="snowball"
-
- | A Portuguese stop word list. Comments begin with vertical bar. Each stop
- | word is at the start of a line.
-
-
- | The following is a ranked list (commonest to rarest) of stopwords
- | deriving from a large sample of text.
-
- | Extra words have been added at the end.
-
-de             |  of, from
-a              |  the; to, at; her
-o              |  the; him
-que            |  who, that
-e              |  and
-do             |  de + o
-da             |  de + a
-em             |  in
-um             |  a
-para           |  for
-  | é          from SER
-com            |  with
-não            |  not, no
-uma            |  a
-os             |  the; them
-no             |  em + o
-se             |  himself etc
-na             |  em + a
-por            |  for
-mais           |  more
-as             |  the; them
-dos            |  de + os
-como           |  as, like
-mas            |  but
-  | foi        from SER
-ao             |  a + o
-ele            |  he
-das            |  de + as
-  | tem        from TER
-à              |  a + a
-seu            |  his
-sua            |  her
-ou             |  or
-  | ser        from SER
-quando         |  when
-muito          |  much
-  | há         from HAV
-nos            |  em + os; us
-já             |  already, now
-  | está       from EST
-eu             |  I
-também         |  also
-só             |  only, just
-pelo           |  per + o
-pela           |  per + a
-até            |  up to
-isso           |  that
-ela            |  he
-entre          |  between
-  | era        from SER
-depois         |  after
-sem            |  without
-mesmo          |  same
-aos            |  a + os
-  | ter        from TER
-seus           |  his
-quem           |  whom
-nas            |  em + as
-me             |  me
-esse           |  that
-eles           |  they
-  | estão      from EST
-você           |  you
-  | tinha      from TER
-  | foram      from SER
-essa           |  that
-num            |  em + um
-nem            |  nor
-suas           |  her
-meu            |  my
-às             |  a + as
-minha          |  my
-  | têm        from TER
-numa           |  em + uma
-pelos          |  per + os
-elas           |  they
-  | havia      from HAV
-  | seja       from SER
-qual           |  which
-  | será       from SER
-nós            |  we
-  | tenho      from TER
-lhe            |  to him, her
-deles          |  of them
-essas          |  those
-esses          |  those
-pelas          |  per + as
-este           |  this
-  | fosse      from SER
-dele           |  of him
-
- | other words. There are many contractions such as naquele = em+aquele,
- | mo = me+o, but they are rare.
- | Indefinite article plural forms are also rare.
-
-tu             |  thou
-te             |  thee
-vocês          |  you (plural)
-vos            |  you
-lhes           |  to them
-meus           |  my
-minhas
-teu            |  thy
-tua
-teus
-tuas
-nosso          | our
-nossa
-nossos
-nossas
-
-dela           |  of her
-delas          |  of them
-
-esta           |  this
-estes          |  these
-estas          |  these
-aquele         |  that
-aquela         |  that
-aqueles        |  those
-aquelas        |  those
-isto           |  this
-aquilo         |  that
-
-               | forms of estar, to be (not including the infinitive):
-estou
-está
-estamos
-estão
-estive
-esteve
-estivemos
-estiveram
-estava
-estávamos
-estavam
-estivera
-estivéramos
-esteja
-estejamos
-estejam
-estivesse
-estivéssemos
-estivessem
-estiver
-estivermos
-estiverem
-
-               | forms of haver, to have (not including the infinitive):
-hei
-há
-havemos
-hão
-houve
-houvemos
-houveram
-houvera
-houvéramos
-haja
-hajamos
-hajam
-houvesse
-houvéssemos
-houvessem
-houver
-houvermos
-houverem
-houverei
-houverá
-houveremos
-houverão
-houveria
-houveríamos
-houveriam
-
-               | forms of ser, to be (not including the infinitive):
-sou
-somos
-são
-era
-éramos
-eram
-fui
-foi
-fomos
-foram
-fora
-fôramos
-seja
-sejamos
-sejam
-fosse
-fôssemos
-fossem
-for
-formos
-forem
-serei
-será
-seremos
-serão
-seria
-seríamos
-seriam
-
-               | forms of ter, to have (not including the infinitive):
-tenho
-tem
-temos
-tém
-tinha
-tínhamos
-tinham
-tive
-teve
-tivemos
-tiveram
-tivera
-tivéramos
-tenha
-tenhamos
-tenham
-tivesse
-tivéssemos
-tivessem
-tiver
-tivermos
-tiverem
-terei
-terá
-teremos
-terão
-teria
-teríamos
-teriam
diff --git a/solr/example/example-DIH/solr/db/conf/lang/stopwords_ro.txt b/solr/example/example-DIH/solr/db/conf/lang/stopwords_ro.txt
deleted file mode 100644
index 4fdee90..0000000
--- a/solr/example/example-DIH/solr/db/conf/lang/stopwords_ro.txt
+++ /dev/null
@@ -1,233 +0,0 @@
-# This file was created by Jacques Savoy and is distributed under the BSD license.
-# See http://members.unine.ch/jacques.savoy/clef/index.html.
-# Also see http://www.opensource.org/licenses/bsd-license.html
-acea
-aceasta
-această
-aceea
-acei
-aceia
-acel
-acela
-acele
-acelea
-acest
-acesta
-aceste
-acestea
-aceşti
-aceştia
-acolo
-acum
-ai
-aia
-aibă
-aici
-al
-ăla
-ale
-alea
-ălea
-altceva
-altcineva
-am
-ar
-are
-aş
-aşadar
-asemenea
-asta
-ăsta
-astăzi
-astea
-ăstea
-ăştia
-asupra
-aţi
-au
-avea
-avem
-aveţi
-azi
-bine
-bucur
-bună
-ca
-că
-căci
-când
-care
-cărei
-căror
-cărui
-cât
-câte
-câţi
-către
-câtva
-ce
-cel
-ceva
-chiar
-cînd
-cine
-cineva
-cît
-cîte
-cîţi
-cîtva
-contra
-cu
-cum
-cumva
-curând
-curînd
-da
-dă
-dacă
-dar
-datorită
-de
-deci
-deja
-deoarece
-departe
-deşi
-din
-dinaintea
-dintr
-dintre
-drept
-după
-ea
-ei
-el
-ele
-eram
-este
-eşti
-eu
-face
-fără
-fi
-fie
-fiecare
-fii
-fim
-fiţi
-iar
-ieri
-îi
-îl
-îmi
-împotriva
-în 
-înainte
-înaintea
-încât
-încît
-încotro
-între
-întrucât
-întrucît
-îţi
-la
-lângă
-le
-li
-lîngă
-lor
-lui
-mă
-mâine
-mea
-mei
-mele
-mereu
-meu
-mi
-mine
-mult
-multă
-mulţi
-ne
-nicăieri
-nici
-nimeni
-nişte
-noastră
-noastre
-noi
-noştri
-nostru
-nu
-ori
-oricând
-oricare
-oricât
-orice
-oricînd
-oricine
-oricît
-oricum
-oriunde
-până
-pe
-pentru
-peste
-pînă
-poate
-pot
-prea
-prima
-primul
-prin
-printr
-sa
-să
-săi
-sale
-sau
-său
-se
-şi
-sînt
-sîntem
-sînteţi
-spre
-sub
-sunt
-suntem
-sunteţi
-ta
-tăi
-tale
-tău
-te
-ţi
-ţie
-tine
-toată
-toate
-tot
-toţi
-totuşi
-tu
-un
-una
-unde
-undeva
-unei
-unele
-uneori
-unor
-vă
-vi
-voastră
-voastre
-voi
-voştri
-vostru
-vouă
-vreo
-vreun
diff --git a/solr/example/example-DIH/solr/db/conf/lang/stopwords_ru.txt b/solr/example/example-DIH/solr/db/conf/lang/stopwords_ru.txt
deleted file mode 100644
index 5527140..0000000
--- a/solr/example/example-DIH/solr/db/conf/lang/stopwords_ru.txt
+++ /dev/null
@@ -1,243 +0,0 @@
- | From svn.tartarus.org/snowball/trunk/website/algorithms/russian/stop.txt
- | This file is distributed under the BSD License.
- | See http://snowball.tartarus.org/license.php
- | Also see http://www.opensource.org/licenses/bsd-license.html
- |  - Encoding was converted to UTF-8.
- |  - This notice was added.
- |
- | NOTE: To use this file with StopFilterFactory, you must specify format="snowball"
-
- | a russian stop word list. comments begin with vertical bar. each stop
- | word is at the start of a line.
-
- | this is a ranked list (commonest to rarest) of stopwords derived from
- | a large text sample.
-
- | letter `ё' is translated to `е'.
-
-и              | and
-в              | in/into
-во             | alternative form
-не             | not
-что            | what/that
-он             | he
-на             | on/onto
-я              | i
-с              | from
-со             | alternative form
-как            | how
-а              | milder form of `no' (but)
-то             | conjunction and form of `that'
-все            | all
-она            | she
-так            | so, thus
-его            | him
-но             | but
-да             | yes/and
-ты             | thou
-к              | towards, by
-у              | around, chez
-же             | intensifier particle
-вы             | you
-за             | beyond, behind
-бы             | conditional/subj. particle
-по             | up to, along
-только         | only
-ее             | her
-мне            | to me
-было           | it was
-вот            | here is/are, particle
-от             | away from
-меня           | me
-еще            | still, yet, more
-нет            | no, there isnt/arent
-о              | about
-из             | out of
-ему            | to him
-теперь         | now
-когда          | when
-даже           | even
-ну             | so, well
-вдруг          | suddenly
-ли             | interrogative particle
-если           | if
-уже            | already, but homonym of `narrower'
-или            | or
-ни             | neither
-быть           | to be
-был            | he was
-него           | prepositional form of его
-до             | up to
-вас            | you accusative
-нибудь         | indef. suffix preceded by hyphen
-опять          | again
-уж             | already, but homonym of `adder'
-вам            | to you
-сказал         | he said
-ведь           | particle `after all'
-там            | there
-потом          | then
-себя           | oneself
-ничего         | nothing
-ей             | to her
-может          | usually with `быть' as `maybe'
-они            | they
-тут            | here
-где            | where
-есть           | there is/are
-надо           | got to, must
-ней            | prepositional form of  ей
-для            | for
-мы             | we
-тебя           | thee
-их             | them, their
-чем            | than
-была           | she was
-сам            | self
-чтоб           | in order to
-без            | without
-будто          | as if
-человек        | man, person, one
-чего           | genitive form of `what'
-раз            | once
-тоже           | also
-себе           | to oneself
-под            | beneath
-жизнь          | life
-будет          | will be
-ж              | short form of intensifer particle `же'
-тогда          | then
-кто            | who
-этот           | this
-говорил        | was saying
-того           | genitive form of `that'
-потому         | for that reason
-этого          | genitive form of `this'
-какой          | which
-совсем         | altogether
-ним            | prepositional form of `его', `они'
-здесь          | here
-этом           | prepositional form of `этот'
-один           | one
-почти          | almost
-мой            | my
-тем            | instrumental/dative plural of `тот', `то'
-чтобы          | full form of `in order that'
-нее            | her (acc.)
-кажется        | it seems
-сейчас         | now
-были           | they were
-куда           | where to
-зачем          | why
-сказать        | to say
-всех           | all (acc., gen. preposn. plural)
-никогда        | never
-сегодня        | today
-можно          | possible, one can
-при            | by
-наконец        | finally
-два            | two
-об             | alternative form of `о', about
-другой         | another
-хоть           | even
-после          | after
-над            | above
-больше         | more
-тот            | that one (masc.)
-через          | across, in
-эти            | these
-нас            | us
-про            | about
-всего          | in all, only, of all
-них            | prepositional form of `они' (they)
-какая          | which, feminine
-много          | lots
-разве          | interrogative particle
-сказала        | she said
-три            | three
-эту            | this, acc. fem. sing.
-моя            | my, feminine
-впрочем        | moreover, besides
-хорошо         | good
-свою           | ones own, acc. fem. sing.
-этой           | oblique form of `эта', fem. `this'
-перед          | in front of
-иногда         | sometimes
-лучше          | better
-чуть           | a little
-том            | preposn. form of `that one'
-нельзя         | one must not
-такой          | such a one
-им             | to them
-более          | more
-всегда         | always
-конечно        | of course
-всю            | acc. fem. sing of `all'
-между          | between
-
-
-  | b: some paradigms
-  |
-  | personal pronouns
-  |
-  | я  меня  мне  мной  [мною]
-  | ты  тебя  тебе  тобой  [тобою]
-  | он  его  ему  им  [него, нему, ним]
-  | она  ее  эи  ею  [нее, нэи, нею]
-  | оно  его  ему  им  [него, нему, ним]
-  |
-  | мы  нас  нам  нами
-  | вы  вас  вам  вами
-  | они  их  им  ими  [них, ним, ними]
-  |
-  |   себя  себе  собой   [собою]
-  |
-  | demonstrative pronouns: этот (this), тот (that)
-  |
-  | этот  эта  это  эти
-  | этого  эты  это  эти
-  | этого  этой  этого  этих
-  | этому  этой  этому  этим
-  | этим  этой  этим  [этою]  этими
-  | этом  этой  этом  этих
-  |
-  | тот  та  то  те
-  | того  ту  то  те
-  | того  той  того  тех
-  | тому  той  тому  тем
-  | тем  той  тем  [тою]  теми
-  | том  той  том  тех
-  |
-  | determinative pronouns
-  |
-  | (a) весь (all)
-  |
-  | весь  вся  все  все
-  | всего  всю  все  все
-  | всего  всей  всего  всех
-  | всему  всей  всему  всем
-  | всем  всей  всем  [всею]  всеми
-  | всем  всей  всем  всех
-  |
-  | (b) сам (himself etc)
-  |
-  | сам  сама  само  сами
-  | самого саму  само  самих
-  | самого самой самого  самих
-  | самому самой самому  самим
-  | самим  самой  самим  [самою]  самими
-  | самом самой самом  самих
-  |
-  | stems of verbs `to be', `to have', `to do' and modal
-  |
-  | быть  бы  буд  быв  есть  суть
-  | име
-  | дел
-  | мог   мож  мочь
-  | уме
-  | хоч  хот
-  | долж
-  | можн
-  | нужн
-  | нельзя
-
diff --git a/solr/example/example-DIH/solr/db/conf/lang/stopwords_sv.txt b/solr/example/example-DIH/solr/db/conf/lang/stopwords_sv.txt
deleted file mode 100644
index 096f87f..0000000
--- a/solr/example/example-DIH/solr/db/conf/lang/stopwords_sv.txt
+++ /dev/null
@@ -1,133 +0,0 @@
- | From svn.tartarus.org/snowball/trunk/website/algorithms/swedish/stop.txt
- | This file is distributed under the BSD License.
- | See http://snowball.tartarus.org/license.php
- | Also see http://www.opensource.org/licenses/bsd-license.html
- |  - Encoding was converted to UTF-8.
- |  - This notice was added.
- |
- | NOTE: To use this file with StopFilterFactory, you must specify format="snowball"
-
- | A Swedish stop word list. Comments begin with vertical bar. Each stop
- | word is at the start of a line.
-
- | This is a ranked list (commonest to rarest) of stopwords derived from
- | a large text sample.
-
- | Swedish stop words occasionally exhibit homonym clashes. For example
- |  så = so, but also seed. These are indicated clearly below.
-
-och            | and
-det            | it, this/that
-att            | to (with infinitive)
-i              | in, at
-en             | a
-jag            | I
-hon            | she
-som            | who, that
-han            | he
-på             | on
-den            | it, this/that
-med            | with
-var            | where, each
-sig            | him(self) etc
-för            | for
-så             | so (also: seed)
-till           | to
-är             | is
-men            | but
-ett            | a
-om             | if; around, about
-hade           | had
-de             | they, these/those
-av             | of
-icke           | not, no
-mig            | me
-du             | you
-henne          | her
-då             | then, when
-sin            | his
-nu             | now
-har            | have
-inte           | inte någon = no one
-hans           | his
-honom          | him
-skulle         | 'sake'
-hennes         | her
-där            | there
-min            | my
-man            | one (pronoun)
-ej             | nor
-vid            | at, by, on (also: vast)
-kunde          | could
-något          | some etc
-från           | from, off
-ut             | out
-när            | when
-efter          | after, behind
-upp            | up
-vi             | we
-dem            | them
-vara           | be
-vad            | what
-över           | over
-än             | than
-dig            | you
-kan            | can
-sina           | his
-här            | here
-ha             | have
-mot            | towards
-alla           | all
-under          | under (also: wonder)
-någon          | some etc
-eller          | or (else)
-allt           | all
-mycket         | much
-sedan          | since
-ju             | why
-denna          | this/that
-själv          | myself, yourself etc
-detta          | this/that
-åt             | to
-utan           | without
-varit          | was
-hur            | how
-ingen          | no
-mitt           | my
-ni             | you
-bli            | to be, become
-blev           | from bli
-oss            | us
-din            | thy
-dessa          | these/those
-några          | some etc
-deras          | their
-blir           | from bli
-mina           | my
-samma          | (the) same
-vilken         | who, that
-er             | you, your
-sådan          | such a
-vår            | our
-blivit         | from bli
-dess           | its
-inom           | within
-mellan         | between
-sådant         | such a
-varför         | why
-varje          | each
-vilka          | who, that
-ditt           | thy
-vem            | who
-vilket         | who, that
-sitta          | his
-sådana         | such a
-vart           | each
-dina           | thy
-vars           | whose
-vårt           | our
-våra           | our
-ert            | your
-era            | your
-vilkas         | whose
-
diff --git a/solr/example/example-DIH/solr/db/conf/lang/stopwords_th.txt b/solr/example/example-DIH/solr/db/conf/lang/stopwords_th.txt
deleted file mode 100644
index 07f0fab..0000000
--- a/solr/example/example-DIH/solr/db/conf/lang/stopwords_th.txt
+++ /dev/null
@@ -1,119 +0,0 @@
-# Thai stopwords from:
-# "Opinion Detection in Thai Political News Columns
-# Based on Subjectivity Analysis"
-# Khampol Sukhum, Supot Nitsuwat, and Choochart Haruechaiyasak
-ไว้
-ไม่
-ไป
-ได้
-ให้
-ใน
-โดย
-แห่ง
-แล้ว
-และ
-แรก
-แบบ
-แต่
-เอง
-เห็น
-เลย
-เริ่ม
-เรา
-เมื่อ
-เพื่อ
-เพราะ
-เป็นการ
-เป็น
-เปิดเผย
-เปิด
-เนื่องจาก
-เดียวกัน
-เดียว
-เช่น
-เฉพาะ
-เคย
-เข้า
-เขา
-อีก
-อาจ
-อะไร
-ออก
-อย่าง
-อยู่
-อยาก
-หาก
-หลาย
-หลังจาก
-หลัง
-หรือ
-หนึ่ง
-ส่วน
-ส่ง
-สุด
-สําหรับ
-ว่า
-วัน
-ลง
-ร่วม
-ราย
-รับ
-ระหว่าง
-รวม
-ยัง
-มี
-มาก
-มา
-พร้อม
-พบ
-ผ่าน
-ผล
-บาง
-น่า
-นี้
-นํา
-นั้น
-นัก
-นอกจาก
-ทุก
-ที่สุด
-ที่
-ทําให้
-ทํา
-ทาง
-ทั้งนี้
-ทั้ง
-ถ้า
-ถูก
-ถึง
-ต้อง
-ต่างๆ
-ต่าง
-ต่อ
-ตาม
-ตั้งแต่
-ตั้ง
-ด้าน
-ด้วย
-ดัง
-ซึ่ง
-ช่วง
-จึง
-จาก
-จัด
-จะ
-คือ
-ความ
-ครั้ง
-คง
-ขึ้น
-ของ
-ขอ
-ขณะ
-ก่อน
-ก็
-การ
-กับ
-กัน
-กว่า
-กล่าว
diff --git a/solr/example/example-DIH/solr/db/conf/lang/stopwords_tr.txt b/solr/example/example-DIH/solr/db/conf/lang/stopwords_tr.txt
deleted file mode 100644
index 84d9408..0000000
--- a/solr/example/example-DIH/solr/db/conf/lang/stopwords_tr.txt
+++ /dev/null
@@ -1,212 +0,0 @@
-# Turkish stopwords from LUCENE-559
-# merged with the list from "Information Retrieval on Turkish Texts"
-#   (http://www.users.muohio.edu/canf/papers/JASIST2008offPrint.pdf)
-acaba
-altmış
-altı
-ama
-ancak
-arada
-aslında
-ayrıca
-bana
-bazı
-belki
-ben
-benden
-beni
-benim
-beri
-beş
-bile
-bin
-bir
-birçok
-biri
-birkaç
-birkez
-birşey
-birşeyi
-biz
-bize
-bizden
-bizi
-bizim
-böyle
-böylece
-bu
-buna
-bunda
-bundan
-bunlar
-bunları
-bunların
-bunu
-bunun
-burada
-çok
-çünkü
-da
-daha
-dahi
-de
-defa
-değil
-diğer
-diye
-doksan
-dokuz
-dolayı
-dolayısıyla
-dört
-edecek
-eden
-ederek
-edilecek
-ediliyor
-edilmesi
-ediyor
-eğer
-elli
-en
-etmesi
-etti
-ettiği
-ettiğini
-gibi
-göre
-halen
-hangi
-hatta
-hem
-henüz
-hep
-hepsi
-her
-herhangi
-herkesin
-hiç
-hiçbir
-için
-iki
-ile
-ilgili
-ise
-işte
-itibaren
-itibariyle
-kadar
-karşın
-katrilyon
-kendi
-kendilerine
-kendini
-kendisi
-kendisine
-kendisini
-kez
-ki
-kim
-kimden
-kime
-kimi
-kimse
-kırk
-milyar
-milyon
-mu
-mü
-mı
-nasıl
-ne
-neden
-nedenle
-nerde
-nerede
-nereye
-niye
-niçin
-o
-olan
-olarak
-oldu
-olduğu
-olduğunu
-olduklarını
-olmadı
-olmadığı
-olmak
-olması
-olmayan
-olmaz
-olsa
-olsun
-olup
-olur
-olursa
-oluyor
-on
-ona
-ondan
-onlar
-onlardan
-onları
-onların
-onu
-onun
-otuz
-oysa
-öyle
-pek
-rağmen
-sadece
-sanki
-sekiz
-seksen
-sen
-senden
-seni
-senin
-siz
-sizden
-sizi
-sizin
-şey
-şeyden
-şeyi
-şeyler
-şöyle
-şu
-şuna
-şunda
-şundan
-şunları
-şunu
-tarafından
-trilyon
-tüm
-üç
-üzere
-var
-vardı
-ve
-veya
-ya
-yani
-yapacak
-yapılan
-yapılması
-yapıyor
-yapmak
-yaptı
-yaptığı
-yaptığını
-yaptıkları
-yedi
-yerine
-yetmiş
-yine
-yirmi
-yoksa
-yüz
-zaten
diff --git a/solr/example/example-DIH/solr/db/conf/lang/userdict_ja.txt b/solr/example/example-DIH/solr/db/conf/lang/userdict_ja.txt
deleted file mode 100644
index 6f0368e..0000000
--- a/solr/example/example-DIH/solr/db/conf/lang/userdict_ja.txt
+++ /dev/null
@@ -1,29 +0,0 @@
-#
-# This is a sample user dictionary for Kuromoji (JapaneseTokenizer)
-#
-# Add entries to this file in order to override the statistical model in terms
-# of segmentation, readings and part-of-speech tags.  Notice that entries do
-# not have weights since they are always used when found.  This is by-design
-# in order to maximize ease-of-use.
-#
-# Entries are defined using the following CSV format:
-#  <text>,<token 1> ... <token n>,<reading 1> ... <reading n>,<part-of-speech tag>
-#
-# Notice that a single half-width space separates tokens and readings, and
-# that the number tokens and readings must match exactly.
-#
-# Also notice that multiple entries with the same <text> is undefined.
-#
-# Whitespace only lines are ignored.  Comments are not allowed on entry lines.
-#
-
-# Custom segmentation for kanji compounds
-日本経済新聞,日本 経済 新聞,ニホン ケイザイ シンブン,カスタム名詞
-関西国際空港,関西 国際 空港,カンサイ コクサイ クウコウ,カスタム名詞
-
-# Custom segmentation for compound katakana
-トートバッグ,トート バッグ,トート バッグ,かずカナ名詞
-ショルダーバッグ,ショルダー バッグ,ショルダー バッグ,かずカナ名詞
-
-# Custom reading for former sumo wrestler
-朝青龍,朝青龍,アサショウリュウ,カスタム人名
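A user dictionary in the CSV format described above only takes effect once the tokenizer is pointed at it. A minimal sketch (attribute values are illustrative):

    <tokenizer class="solr.JapaneseTokenizerFactory" mode="search"
               userDictionary="lang/userdict_ja.txt"
               userDictionaryEncoding="UTF-8"/>

With this in place, 関西国際空港 segments as 関西 国際 空港 per the entry above, overriding whatever the statistical model would otherwise choose.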
diff --git a/solr/example/example-DIH/solr/db/conf/managed-schema b/solr/example/example-DIH/solr/db/conf/managed-schema
deleted file mode 100644
index 79e3dae..0000000
--- a/solr/example/example-DIH/solr/db/conf/managed-schema
+++ /dev/null
@@ -1,1143 +0,0 @@
-<?xml version="1.0" encoding="UTF-8" ?>
-<!--
- Licensed to the Apache Software Foundation (ASF) under one or more
- contributor license agreements.  See the NOTICE file distributed with
- this work for additional information regarding copyright ownership.
- The ASF licenses this file to You under the Apache License, Version 2.0
- (the "License"); you may not use this file except in compliance with
- the License.  You may obtain a copy of the License at
-
-     http://www.apache.org/licenses/LICENSE-2.0
-
- Unless required by applicable law or agreed to in writing, software
- distributed under the License is distributed on an "AS IS" BASIS,
- WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- See the License for the specific language governing permissions and
- limitations under the License.
--->
-
-<!--  
- This is the Solr schema file. This file should be named "schema.xml" and
- should be in the conf directory under the solr home
- (i.e. ./solr/conf/schema.xml by default) 
- or located where the classloader for the Solr webapp can find it.
-
- This example schema is the recommended starting point for users.
- It should be kept correct and concise, usable out-of-the-box.
-
- For more information on how to customize this file, please see
- http://wiki.apache.org/solr/SchemaXml
-
- PERFORMANCE NOTE: this schema includes many optional features and should not
- be used for benchmarking.  To improve performance one could
-  - set stored="false" for all fields possible (esp large fields) when you
-    only need to search on the field but don't need to return the original
-    value.
-  - set indexed="false" if you don't need to search on the field, but only
-    return the field as a result of searching on other indexed fields.
-  - remove all unneeded copyField statements
-  - for best index size and searching performance, set "indexed" to false
-    for all general text fields, use copyField to copy them to the
-    catchall "text" field, and use that for searching.
-  - For maximum indexing performance, use the ConcurrentUpdateSolrServer
-    java client.
-  - Remember to run the JVM in server mode, and use a higher logging level
-    that avoids logging every request
--->
-
-<schema name="example-DIH-db" version="1.6">
-  <!-- attribute "name" is the name of this schema and is only used for display purposes.
-       version="x.y" is Solr's version number for the schema syntax and 
-       semantics.  It should not normally be changed by applications.
-
-       1.0: multiValued attribute did not exist, all fields are multiValued 
-            by nature
-       1.1: multiValued attribute introduced, false by default 
-       1.2: omitTermFreqAndPositions attribute introduced, true by default 
-            except for text fields.
-       1.3: removed optional field compress feature
-       1.4: autoGeneratePhraseQueries attribute introduced to drive QueryParser
-            behavior when a single string produces multiple tokens.  Defaults 
-            to off for version >= 1.4
-       1.5: omitNorms defaults to true for primitive field types 
-            (int, float, boolean, string...)
-       1.6: useDocValuesAsStored defaults to true.            
-     -->
-
-
-    <!-- Valid attributes for fields:
-     name: mandatory - the name for the field
-     type: mandatory - the name of a field type from the 
-       fieldTypes section
-     indexed: true if this field should be indexed (searchable or sortable)
-     stored: true if this field should be retrievable
-     docValues: true if this field should have doc values. Doc values are
-       useful (required, if you are using *Point fields) for faceting, 
-       grouping, sorting and function queries. Doc values will make the index 
-       faster to load, more NRT-friendly and more memory-efficient. 
-       They however come with some limitations: they are currently only 
-       supported by StrField, UUIDField, all *PointFields, and depending
-       on the field type, they might require the field to be single-valued,
-       be required or have a default value (check the documentation
-       of the field type you're interested in for more information)
-     multiValued: true if this field may contain multiple values per document
-     omitNorms: (expert) set to true to omit the norms associated with
-       this field (this disables length normalization and index-time
-       boosting for the field, and saves some memory).  Only full-text
-       fields or fields that need an index-time boost need norms.
-       Norms are omitted for primitive (non-analyzed) types by default.
-     termVectors: [false] set to true to store the term vector for a
-       given field.
-       When using MoreLikeThis, fields used for similarity should be
-       stored for best performance.
-     termPositions: Store position information with the term vector.  
-       This will increase storage costs.
-     termOffsets: Store offset information with the term vector. This 
-       will increase storage costs.
-     required: The field is required.  Solr will throw an error if a
-       document is added without a value for it.
-     default: a value that should be used if no value is specified
-       when adding a document.
-    -->
-
-   <!-- field names should consist of alphanumeric or underscore characters only and
-      not start with a digit.  This is not currently strictly enforced,
-      but other field names will not have first class support from all components
-      and back compatibility is not guaranteed.  Names with both leading and
-      trailing underscores (e.g. _version_) are reserved.
-   -->
-
-   <!-- If you remove this field, you must _also_ disable the update log in solrconfig.xml
-      or Solr won't start. _version_ and update log are required for SolrCloud
-   --> 
-   <field name="_version_" type="plong" indexed="true" stored="true"/>
-   
-   <!-- points to the root document of a block of nested documents. Required for nested
-      document support, may be removed otherwise
-   -->
-   <field name="_root_" type="string" indexed="true" stored="false"/>
-
-   <!-- Only remove the "id" field if you have a very good reason to. While not strictly
-     required, it is highly recommended. A <uniqueKey> is present in almost all Solr 
-     installations. See the <uniqueKey> declaration below where <uniqueKey> is set to "id".
-   -->   
-   <field name="id" type="string" indexed="true" stored="true" required="true" multiValued="false" /> 
-        
-   <field name="sku" type="text_en_splitting_tight" indexed="true" stored="true" omitNorms="true"/>
-   <field name="name" type="text_general" indexed="true" stored="true"/>
-   <field name="manu" type="text_general" indexed="true" stored="true" omitNorms="true"/>
-   <field name="cat" type="string" indexed="true" stored="true" multiValued="true"/>
-   <field name="features" type="text_general" indexed="true" stored="true" multiValued="true"/>
-   <field name="includes" type="text_general" indexed="true" stored="true" termVectors="true" termPositions="true" termOffsets="true" />
-
-   <field name="weight" type="pfloat" indexed="true" stored="true"/>
-   <field name="price"  type="pfloat" indexed="true" stored="true"/>
-   <field name="popularity" type="pint" indexed="true" stored="true" />
-   <field name="inStock" type="boolean" indexed="true" stored="true" />
-
-   <field name="store" type="location" indexed="true" stored="true"/>
-
-   <!-- Common metadata fields, named specifically to match up with
-     SolrCell metadata when parsing rich documents such as Word, PDF.
-     Some fields are multiValued only because Tika currently may return
-     multiple values for them. Some metadata is parsed from the documents,
-     but there are some which come from the client context:
-       "content_type": From the HTTP headers of incoming stream
-       "resourcename": From SolrCell request param resource.name
-   -->
-   <field name="title" type="text_general" indexed="true" stored="true" multiValued="true"/>
-   <field name="subject" type="text_general" indexed="true" stored="true"/>
-   <field name="description" type="text_general" indexed="true" stored="true"/>
-   <field name="comments" type="text_general" indexed="true" stored="true"/>
-   <field name="author" type="text_general" indexed="true" stored="true"/>
-   <field name="keywords" type="text_general" indexed="true" stored="true"/>
-   <field name="category" type="text_general" indexed="true" stored="true"/>
-   <field name="resourcename" type="text_general" indexed="true" stored="true"/>
-   <field name="url" type="text_general" indexed="true" stored="true"/>
-   <field name="content_type" type="string" indexed="true" stored="true" multiValued="true"/>
-   <field name="last_modified" type="pdate" indexed="true" stored="true"/>
-   <field name="links" type="string" indexed="true" stored="true" multiValued="true"/>
-
-   <!-- Main body of document extracted by SolrCell.
-        NOTE: This field is not indexed by default, since it is also copied to "text"
-        using copyField below. This is to save space. Use this field for returning and
-        highlighting document content. Use the "text" field to search the content. -->
-   <field name="content" type="text_general" indexed="false" stored="true" multiValued="true"/>
-   
-
-   <!-- catchall field, containing all other searchable text fields (implemented
-        via copyField further on in this schema)  -->
-   <field name="text" type="text_general" indexed="true" stored="false" multiValued="true"/>
-
-   <!-- catchall text field that indexes tokens both normally and in reverse for efficient
-        leading wildcard queries. -->
-   <field name="text_rev" type="text_general_rev" indexed="true" stored="false" multiValued="true"/>
-
-   <!-- non-tokenized version of manufacturer to make it easier to sort or group
-        results by manufacturer.  copied from "manu" via copyField -->
-   <field name="manu_exact" type="string" indexed="true" stored="false"/>
-
-   <field name="payloads" type="payloads" indexed="true" stored="true"/>
-
-
-   <!--
-     Some fields such as popularity and manu_exact could be modified to
-     leverage doc values:
-     <field name="popularity" type="pint" indexed="true" stored="true" docValues="true" />
-     <field name="manu_exact" type="string" indexed="false" stored="false" docValues="true" />
-     <field name="cat" type="string" indexed="true" stored="true" docValues="true" multiValued="true"/>
-
-
-     Although it would make indexing slightly slower and the index bigger, it
-     would also make the index faster to load, more memory-efficient and more
-     NRT-friendly.
-     -->
-
-   <!-- Dynamic field definitions allow using convention over configuration
-       for fields via the specification of patterns to match field names.
-       EXAMPLE:  name="*_i" will match any field ending in _i (like myid_i, z_i)
-       RESTRICTION: the glob-like pattern in the name attribute must have
-       a "*" only at the start or the end.  -->
-   
-   <dynamicField name="*_i"  type="pint"    indexed="true"  stored="true"/>
-   <dynamicField name="*_is" type="pint"    indexed="true"  stored="true"  multiValued="true"/>
-   <dynamicField name="*_s"  type="string"  indexed="true"  stored="true" />
-   <dynamicField name="*_s_ns"  type="string"  indexed="true"  stored="false" />
-   <dynamicField name="*_ss" type="string"  indexed="true"  stored="true" multiValued="true"/>
-   <dynamicField name="*_l"  type="plong"   indexed="true"  stored="true"/>
-   <dynamicField name="*_l_ns"  type="plong"   indexed="true"  stored="false"/>
-   <dynamicField name="*_ls" type="plong"   indexed="true"  stored="true"  multiValued="true"/>
-   <dynamicField name="*_t"  type="text_general"    indexed="true"  stored="true"/>
-   <dynamicField name="*_txt" type="text_general"   indexed="true"  stored="true" multiValued="true"/>
-   <dynamicField name="*_en"  type="text_en"    indexed="true"  stored="true" multiValued="true"/>
-   <dynamicField name="*_b"  type="boolean" indexed="true" stored="true"/>
-   <dynamicField name="*_bs" type="boolean" indexed="true" stored="true"  multiValued="true"/>
-   <dynamicField name="*_f"  type="pfloat"  indexed="true"  stored="true"/>
-   <dynamicField name="*_fs" type="pfloat"  indexed="true"  stored="true"  multiValued="true"/>
-   <dynamicField name="*_d"  type="pdouble" indexed="true"  stored="true"/>
-   <dynamicField name="*_ds" type="pdouble" indexed="true"  stored="true"  multiValued="true"/>
-
-   <!-- Type used to index the lat and lon components for the "location" FieldType -->
-   <dynamicField name="*_coordinate"  type="pdouble" indexed="true"  stored="false" />
-
-   <dynamicField name="*_dt"  type="pdate"    indexed="true"  stored="true"/>
-   <dynamicField name="*_dts" type="pdate"    indexed="true"  stored="true" multiValued="true"/>
-   <dynamicField name="*_p"  type="location" indexed="true" stored="true"/>
-
-   <dynamicField name="*_c"   type="currency" indexed="true"  stored="true"/>
-
-   <dynamicField name="ignored_*" type="ignored" multiValued="true"/>
-   <dynamicField name="attr_*" type="text_general" indexed="true" stored="true" multiValued="true"/>
-
-   <dynamicField name="random_*" type="random" />
-
-   <!-- uncomment the following to ignore any fields that don't already match an existing 
-        field name or dynamic field, rather than reporting them as an error. 
-        alternately, change the type="ignored" to some other type e.g. "text" if you want 
-        unknown fields indexed and/or stored by default --> 
-   <!--dynamicField name="*" type="ignored" multiValued="true" /-->
-   
-
-
-
- <!-- Field to use to determine and enforce document uniqueness. 
-      Unless this field is marked with required="false", it will be a required field
-   -->
- <uniqueKey>id</uniqueKey>
-
-  <!-- copyField commands copy one field to another at the time a document
-        is added to the index.  It's used either to index the same field differently,
-        or to add multiple fields to the same field for easier/faster searching.  -->
-
-   <copyField source="cat" dest="text"/>
-   <copyField source="name" dest="text"/>
-   <copyField source="manu" dest="text"/>
-   <copyField source="features" dest="text"/>
-   <copyField source="includes" dest="text"/>
-   <copyField source="manu" dest="manu_exact"/>
-
-   <!-- Copy the price into a currency enabled field (default USD) -->
-   <copyField source="price" dest="price_c"/>
-
-   <!-- Text fields from SolrCell to search by default in our catch-all field -->
-   <copyField source="title" dest="text"/>
-   <copyField source="author" dest="text"/>
-   <copyField source="description" dest="text"/>
-   <copyField source="keywords" dest="text"/>
-   <copyField source="content" dest="text"/>
-   <copyField source="content_type" dest="text"/>
-   <copyField source="resourcename" dest="text"/>
-   <copyField source="url" dest="text"/>
-
-   <!-- Create a string version of author for faceting -->
-   <copyField source="author" dest="author_s"/>
-
-   <!-- Above, multiple source fields are copied to the [text] field.
-    Another way to map multiple source fields to the same
-    destination field is to use the dynamic field syntax.
-    copyField also supports a maxChars setting to limit how many characters are copied.  -->
-
-   <!-- <copyField source="*_t" dest="text" maxChars="3000"/> -->
-
-   <!-- copy name to alphaNameSort, a field designed for sorting by name -->
-   <!-- <copyField source="name" dest="alphaNameSort"/> -->
-
-  
-    <!-- field type definitions. The "name" attribute is
-       just a label to be used by field definitions.  The "class"
-       attribute and any other attributes determine the real
-       behavior of the fieldType.
-         Class names starting with "solr" refer to java classes in a
-       standard package such as org.apache.solr.analysis
-    -->
-
-    <!-- The StrField type is not analyzed, but indexed/stored verbatim. -->
-    <fieldType name="string" class="solr.StrField" sortMissingLast="true" />
-
-    <!-- boolean type: "true" or "false" -->
-    <fieldType name="boolean" class="solr.BoolField" sortMissingLast="true"/>
-
-    <!-- The sortMissingLast and sortMissingFirst attributes are optional and are
-         currently supported on types that are sorted internally as strings
-         and on numeric types.
-         This includes "string", "boolean", "pint", "pfloat", "plong", "pdate", "pdouble".
-       - If sortMissingLast="true", then a sort on this field will cause documents
-         without the field to come after documents with the field,
-         regardless of the requested sort order (asc or desc).
-       - If sortMissingFirst="true", then a sort on this field will cause documents
-         without the field to come before documents with the field,
-         regardless of the requested sort order.
-       - If sortMissingLast="false" and sortMissingFirst="false" (the default),
-         then default Lucene sorting will be used, which places docs without the
-         field first in an ascending sort and last in a descending sort.
-    -->
-
-    <!--
-      Numeric field types that index values using KD-trees.
-      Point fields don't support FieldCache, so they must have docValues="true" if needed for sorting, faceting, functions, etc.
-    -->
-    <fieldType name="pint" class="solr.IntPointField" docValues="true"/>
-    <fieldType name="pfloat" class="solr.FloatPointField" docValues="true"/>
-    <fieldType name="plong" class="solr.LongPointField" docValues="true"/>
-    <fieldType name="pdouble" class="solr.DoublePointField" docValues="true"/>
-    
-    <fieldType name="pints" class="solr.IntPointField" docValues="true" multiValued="true"/>
-    <fieldType name="pfloats" class="solr.FloatPointField" docValues="true" multiValued="true"/>
-    <fieldType name="plongs" class="solr.LongPointField" docValues="true" multiValued="true"/>
-    <fieldType name="pdoubles" class="solr.DoublePointField" docValues="true" multiValued="true"/>
-
-    <!-- The format for this date field is of the form 1995-12-31T23:59:59Z, and
-         is a more restricted form of the canonical representation of dateTime
-         http://www.w3.org/TR/xmlschema-2/#dateTime    
-         The trailing "Z" designates UTC time and is mandatory.
-         Optional fractional seconds are allowed: 1995-12-31T23:59:59.999Z
-         All other components are mandatory.
-
-         Expressions can also be used to denote calculations that should be
-         performed relative to "NOW" to determine the value, i.e. ...
-
-               NOW/HOUR
-                  ... Round to the start of the current hour
-               NOW-1DAY
-                  ... Exactly 1 day prior to now
-               NOW/DAY+6MONTHS+3DAYS
-                  ... 6 months and 3 days in the future from the start of
-                      the current day
-                      
-         Consult the DatePointField javadocs for more information.
-      -->
-    <!-- KD-tree versions of date fields -->
-    <fieldType name="pdate" class="solr.DatePointField" docValues="true"/>
-    <fieldType name="pdates" class="solr.DatePointField" docValues="true" multiValued="true"/>
-    
-    <!-- Binary data type. The data should be sent/retrieved as Base64-encoded Strings. -->
-    <fieldType name="binary" class="solr.BinaryField"/>
-
-    <!-- The "RandomSortField" is not used to store or search any
-         data.  You can declare fields of this type in your schema
-         to generate pseudo-random orderings of your docs for sorting 
-         or function purposes.  The ordering is generated based on the field
-         name and the version of the index. As long as the index version
-         remains unchanged, and the same field name is reused,
-         the ordering of the docs will be consistent.  
-         If you want different pseudo-random orderings of documents,
-         for the same version of the index, use a dynamicField and
-         change the field name in the request.
-     -->
-    <fieldType name="random" class="solr.RandomSortField" indexed="true" />
-
-    <!-- solr.TextField allows the specification of custom text analyzers
-         specified as a tokenizer and a list of token filters. Different
-         analyzers may be specified for indexing and querying.
-
-         The optional positionIncrementGap puts space between multiple fields of
-         this type on the same document, with the purpose of preventing false phrase
-         matching across fields.
-
-         For more info on customizing your analyzer chain, please see
-         http://wiki.apache.org/solr/AnalyzersTokenizersTokenFilters
-     -->
-
-    <!-- One can also specify an existing Analyzer class that has a
-         default constructor via the class attribute on the analyzer element.
-         Example:
-    <fieldType name="text_greek" class="solr.TextField">
-      <analyzer class="org.apache.lucene.analysis.el.GreekAnalyzer"/>
-    </fieldType>
-    -->
-
-    <!-- A text field that only splits on whitespace for exact matching of words -->
-    <fieldType name="text_ws" class="solr.TextField" positionIncrementGap="100">
-      <analyzer>
-        <tokenizer name="whitespace"/>
-      </analyzer>
-    </fieldType>
-
-    <!-- A general text field that has reasonable, generic
-         cross-language defaults: it tokenizes with StandardTokenizer,
-   removes stop words from case-insensitive "stopwords.txt"
-   (empty by default), and down cases.  At query time only, it
-   also applies synonyms. -->
-    <fieldType name="text_general" class="solr.TextField" positionIncrementGap="100">
-      <analyzer type="index">
-        <tokenizer name="standard"/>
-        <filter name="stop" ignoreCase="true" words="stopwords.txt" />
-        <!-- in this example, we will only use synonyms at query time
-        <filter name="synonymGraph" synonyms="index_synonyms.txt" ignoreCase="true" expand="false"/>
-        <filter name="flattenGraph"/>
-        -->
-        <filter name="lowercase"/>
-      </analyzer>
-      <analyzer type="query">
-        <tokenizer name="standard"/>
-        <filter name="stop" ignoreCase="true" words="stopwords.txt" />
-        <filter name="synonymGraph" synonyms="synonyms.txt" ignoreCase="true" expand="true"/>
-        <filter name="lowercase"/>
-      </analyzer>
-    </fieldType>
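-    <!-- Illustrative sketch (not part of the original file): because text_general
-         applies synonyms at query time only, a hypothetical synonyms.txt line such as
-           TV, television
-         lets a query for "TV" also match documents that only contain
-         "television", without reindexing. -->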
-
-    <!-- A text field with defaults appropriate for English: it
-         tokenizes with StandardTokenizer, removes English stop words
-         (lang/stopwords_en.txt), down cases, protects words from protwords.txt, and
-         finally applies Porter's stemming.  The query time analyzer
-         also applies synonyms from synonyms.txt. -->
-    <fieldType name="text_en" class="solr.TextField" positionIncrementGap="100">
-      <analyzer type="index">
-        <tokenizer name="standard"/>
-        <!-- in this example, we will only use synonyms at query time
-        <filter name="synonymGraph" synonyms="index_synonyms.txt" ignoreCase="true" expand="false"/>
-        <filter name="flattenGraph"/>
-        -->
-        <!-- Case insensitive stop word removal.
-        -->
-        <filter name="stop"
-                ignoreCase="true"
-                words="lang/stopwords_en.txt"
-                />
-        <filter name="lowercase"/>
-  <filter name="englishPossessive"/>
-        <filter name="keywordMarker" protected="protwords.txt"/>
-  <!-- Optionally you may want to use this less aggressive stemmer instead of PorterStemFilterFactory:
-        <filter name="englishMinimalStem"/>
-  -->
-        <filter name="porterStem"/>
-      </analyzer>
-      <analyzer type="query">
-        <tokenizer name="standard"/>
-        <filter name="synonymGraph" synonyms="synonyms.txt" ignoreCase="true" expand="true"/>
-        <filter name="stop"
-                ignoreCase="true"
-                words="lang/stopwords_en.txt"
-                />
-        <filter name="lowercase"/>
-  <filter name="englishPossessive"/>
-        <filter name="keywordMarker" protected="protwords.txt"/>
-  <!-- Optionally you may want to use this less aggressive stemmer instead of PorterStemFilterFactory:
-        <filter name="englishMinimalStem"/>
-  -->
-        <filter name="porterStem"/>
-      </analyzer>
-    </fieldType>
-
-    <!-- A text field with defaults appropriate for English, plus
-   aggressive word-splitting and autophrase features enabled.
-   This field is just like text_en, except it adds
-   WordDelimiterGraphFilter to enable splitting and matching of
-   words on case-change, alpha numeric boundaries, and
-   non-alphanumeric chars.  This means certain compound word
-   cases will work, for example query "wi fi" will match
-   document "WiFi" or "wi-fi".
-        -->
-    <fieldType name="text_en_splitting" class="solr.TextField" positionIncrementGap="100" autoGeneratePhraseQueries="true">
-      <analyzer type="index">
-        <tokenizer name="whitespace"/>
-        <!-- in this example, we will only use synonyms at query time
-        <filter name="synonymGraph" synonyms="index_synonyms.txt" ignoreCase="true" expand="false"/>
-        -->
-        <!-- Case insensitive stop word removal.
-        -->
-        <filter name="stop"
-                ignoreCase="true"
-                words="lang/stopwords_en.txt"
-                />
-        <filter name="wordDelimiterGraph" generateWordParts="1" generateNumberParts="1" catenateWords="1" catenateNumbers="1" catenateAll="0" splitOnCaseChange="1"/>
-        <filter name="lowercase"/>
-        <filter name="keywordMarker" protected="protwords.txt"/>
-        <filter name="porterStem"/>
-        <filter name="flattenGraph" />
-      </analyzer>
-      <analyzer type="query">
-        <tokenizer name="whitespace"/>
-        <filter name="synonymGraph" synonyms="synonyms.txt" ignoreCase="true" expand="true"/>
-        <filter name="stop"
-                ignoreCase="true"
-                words="lang/stopwords_en.txt"
-                />
-        <filter name="wordDelimiterGraph" generateWordParts="1" generateNumberParts="1" catenateWords="0" catenateNumbers="0" catenateAll="0" splitOnCaseChange="1"/>
-        <filter name="lowercase"/>
-        <filter name="keywordMarker" protected="protwords.txt"/>
-        <filter name="porterStem"/>
-      </analyzer>
-    </fieldType>
-
-    <!-- Less flexible matching, but fewer false matches.  Probably not ideal for product names,
-         but may be good for SKUs.  Can insert dashes in the wrong place and still match. -->
-    <fieldType name="text_en_splitting_tight" class="solr.TextField" positionIncrementGap="100" autoGeneratePhraseQueries="true">
-      <analyzer type="index">
-        <tokenizer name="whitespace"/>
-        <filter name="synonymGraph" synonyms="synonyms.txt" ignoreCase="true" expand="false"/>
-        <filter name="stop" ignoreCase="true" words="lang/stopwords_en.txt"/>
-        <filter name="wordDelimiterGraph" generateWordParts="0" generateNumberParts="0" catenateWords="1" catenateNumbers="1" catenateAll="0"/>
-        <filter name="lowercase"/>
-        <filter name="keywordMarker" protected="protwords.txt"/>
-        <filter name="englishMinimalStem"/>
-        <!-- this filter can remove any duplicate tokens that appear at the same position - sometimes
-             possible with WordDelimiterGraphFilter in conjunction with stemming. -->
-        <filter name="removeDuplicates"/>
-        <filter name="flattenGraph" />
-      </analyzer>
-      <analyzer type="query">
-        <tokenizer name="whitespace"/>
-        <filter name="synonymGraph" synonyms="synonyms.txt" ignoreCase="true" expand="false"/>
-        <filter name="stop" ignoreCase="true" words="lang/stopwords_en.txt"/>
-        <filter name="wordDelimiterGraph" generateWordParts="0" generateNumberParts="0" catenateWords="1" catenateNumbers="1" catenateAll="0"/>
-        <filter name="lowercase"/>
-        <filter name="keywordMarker" protected="protwords.txt"/>
-        <filter name="englishMinimalStem"/>
-        <!-- this filter can remove any duplicate tokens that appear at the same position - sometimes
-             possible with WordDelimiterGraphFilter in conjunction with stemming. -->
-        <filter name="removeDuplicates"/>
-      </analyzer>
-    </fieldType>
-
-    <!-- Just like text_general except it reverses the characters of
-   each token, to enable more efficient leading wildcard queries. -->
-    <fieldType name="text_general_rev" class="solr.TextField" positionIncrementGap="100">
-      <analyzer type="index">
-        <tokenizer name="standard"/>
-        <filter name="stop" ignoreCase="true" words="stopwords.txt" />
-        <filter name="lowercase"/>
-        <filter name="reversedWildcard" withOriginal="true"
-           maxPosAsterisk="3" maxPosQuestion="2" maxFractionAsterisk="0.33"/>
-      </analyzer>
-      <analyzer type="query">
-        <tokenizer name="standard"/>
-        <filter name="synonymGraph" synonyms="synonyms.txt" ignoreCase="true" expand="true"/>
-        <filter name="stop" ignoreCase="true" words="stopwords.txt" />
-        <filter name="lowercase"/>
-      </analyzer>
-    </fieldType>
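-    <!-- Illustrative sketch (not part of the original file): with reversedWildcard
-         applied at index time, a leading-wildcard query such as
-           text_rev:*phone
-         can be rewritten against the reversed tokens (enohp*), turning an
-         expensive leading wildcard into a cheap prefix match. -->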
-
-    <!-- charFilter + WhitespaceTokenizer  -->
-    <!--
-    <fieldType name="text_char_norm" class="solr.TextField" positionIncrementGap="100" >
-      <analyzer>
-        <charFilter name="mapping" mapping="mapping-ISOLatin1Accent.txt"/>
-        <tokenizer name="whitespace"/>
-      </analyzer>
-    </fieldType>
-    -->
-
-    <!-- This is an example of using the KeywordTokenizer along
-         with various TokenFilterFactories to produce a sortable field
-         that does not include some properties of the source text
-      -->
-    <fieldType name="alphaOnlySort" class="solr.TextField" sortMissingLast="true" omitNorms="true">
-      <analyzer>
-        <!-- KeywordTokenizer does no actual tokenizing, so the entire
-             input string is preserved as a single token
-          -->
-        <tokenizer name="keyword"/>
-        <!-- The LowerCase TokenFilter does what you expect, which can be
-             useful when you want your sorting to be case insensitive
-          -->
-        <filter name="lowercase" />
-        <!-- The TrimFilter removes any leading or trailing whitespace -->
-        <filter name="trim" />
-        <!-- The PatternReplaceFilter gives you the flexibility to use
-             Java Regular expression to replace any sequence of characters
-             matching a pattern with an arbitrary replacement string, 
-             which may include back references to portions of the original
-             string matched by the pattern.
-             
-             See the Java Regular Expression documentation for more
-             information on pattern and replacement string syntax.
-             
-             http://docs.oracle.com/javase/8/docs/api/java/util/regex/package-summary.html
-          -->
-        <filter name="patternReplace"
-                pattern="([^a-z])" replacement="" replace="all"
-        />
-      </analyzer>
-    </fieldType>
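-    <!-- Illustrative sketch (not part of the original file): with the chain above,
-         an input value such as "  Déjà Vu! " is kept as a single token, lowercased
-         and trimmed, and patternReplace strips everything outside [a-z], yielding
-         the sort key "djvu" (note that accented characters are dropped, not
-         folded, by this chain). -->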
-    
-    <fieldType name="phonetic" stored="false" indexed="true" class="solr.TextField" >
-      <analyzer>
-        <tokenizer name="standard"/>
-        <filter name="doubleMetaphone" inject="false"/>
-      </analyzer>
-    </fieldType>
-
-    <fieldType name="payloads" stored="false" indexed="true" class="solr.TextField" >
-      <analyzer>
-        <tokenizer name="whitespace"/>
-        <!--
-        The DelimitedPayloadTokenFilter can put payloads on tokens... for example,
-        a token of "foo|1.4"  would be indexed as "foo" with a payload of 1.4f
-        Attributes of the DelimitedPayloadTokenFilterFactory:
-          "delimiter" - a one-character delimiter. Default is | (pipe)
-          "encoder"   - how to encode the following value into a payload:
-            float -> org.apache.lucene.analysis.payloads.FloatEncoder
-            integer -> o.a.l.a.p.IntegerEncoder
-            identity -> o.a.l.a.p.IdentityEncoder
-            or a fully qualified class name implementing PayloadEncoder; the encoder must have a no-arg constructor.
-         -->
-        <filter name="delimitedPayload" encoder="float"/>
-      </analyzer>
-    </fieldType>
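-    <!-- Illustrative sketch (not part of the original file): a field value such as
-           important|2.0 boring|0.1
-         would index the tokens "important" and "boring" with float payloads 2.0
-         and 0.1 respectively, which payload-aware functions can then read. -->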
-
-    <!-- lowercases the entire field value, keeping it as a single token.  -->
-    <fieldType name="lowercase" class="solr.TextField" positionIncrementGap="100">
-      <analyzer>
-        <tokenizer name="keyword"/>
-        <filter name="lowercase" />
-      </analyzer>
-    </fieldType>
-
-    <!-- 
-      Example of using PathHierarchyTokenizerFactory at index time, so
-      queries for paths match documents at that path, or in descendent paths
-    -->
-    <fieldType name="descendent_path" class="solr.TextField">
-      <analyzer type="index">
-  <tokenizer name="pathHierarchy" delimiter="/" />
-      </analyzer>
-      <analyzer type="query">
-  <tokenizer name="keyword" />
-      </analyzer>
-    </fieldType>
-    <!-- 
-      Example of using PathHierarchyTokenizerFactory at query time, so
-      queries for paths match documents at that path, or in ancestor paths
-    -->
-    <fieldType name="ancestor_path" class="solr.TextField">
-      <analyzer type="index">
-  <tokenizer name="keyword" />
-      </analyzer>
-      <analyzer type="query">
-  <tokenizer name="pathHierarchy" delimiter="/" />
-      </analyzer>
-    </fieldType>
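-    <!-- Illustrative sketch (not part of the original file): with descendent_path,
-         a document whose field value is "/usr/local" is indexed as the tokens
-         "/usr" and "/usr/local", so a query for "/usr" matches it; with
-         ancestor_path the roles are reversed, and a document at "/usr" matches a
-         query for "/usr/local". -->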
-
-    <!-- since fields of this type are by default not stored or indexed,
-         any data added to them will be ignored outright.  --> 
-    <fieldType name="ignored" stored="false" indexed="false" multiValued="true" class="solr.StrField" />
-
-    <!-- This point type indexes the coordinates as separate fields (subFields)
-      If subFieldType is defined, it references a type, and a dynamic field
-      definition is created matching *___<typename>.  Alternately, if 
-      subFieldSuffix is defined, that is used to create the subFields.
-      Example: if subFieldType="double", then the coordinates would be
-        indexed in fields myloc_0___double,myloc_1___double.
-      Example: if subFieldSuffix="_d" then the coordinates would be indexed
-        in fields myloc_0_d,myloc_1_d
-      The subFields are an implementation detail of the fieldType, and end
-      users normally should not need to know about them.
-     -->
-    <fieldType name="point" class="solr.PointType" dimension="2" subFieldSuffix="_d"/>
-
-    <!-- A specialized field for geospatial search. If indexed, this fieldType must not be multivalued. -->
-    <fieldType name="location" class="solr.LatLonType" subFieldSuffix="_coordinate"/>
-
-    <!-- An alternative geospatial field type new to Solr 4.  It supports multiValued and polygon shapes.
-      For more information about this and other Spatial fields new to Solr 4, see:
-      http://wiki.apache.org/solr/SolrAdaptersForLuceneSpatial4
-    -->
-    <fieldType name="location_rpt" class="solr.SpatialRecursivePrefixTreeFieldType"
-        geo="true" distErrPct="0.025" maxDistErr="0.001" distanceUnits="kilometers" />
-
-   <!-- Money/currency field type. See http://wiki.apache.org/solr/MoneyFieldType
-        Parameters:
-          amountLongSuffix: Required. Refers to a dynamic field for the raw amount sub-field. 
-                              The dynamic field must have a field type that extends LongValueFieldType.
-                              Note: If you expect to use Atomic Updates, this dynamic field may not be stored.
-          codeStrSuffix:    Required. Refers to a dynamic field for the currency code sub-field.
-                              The dynamic field must have a field type that extends StrField.
-                              Note: If you expect to use Atomic Updates, this dynamic field may not be stored.
-          defaultCurrency:  Specifies the default currency if none specified. Defaults to "USD"
-          providerClass:    Lets you plug in other exchange rate provider backends:
-                            solr.FileExchangeRateProvider is the default and takes one parameter:
-                              currencyConfig: name of an xml file holding exchange rates
-                            solr.OpenExchangeRatesOrgProvider uses rates from openexchangerates.org:
-                              ratesFileLocation: URL or path to rates JSON file (default latest.json on the web)
-                              refreshInterval: Number of minutes between each rates fetch (default: 1440, min: 60)
-   -->
-    <fieldType name="currency" class="solr.CurrencyFieldType" amountLongSuffix="_l_ns" codeStrSuffix="_s_ns"
-               defaultCurrency="USD" currencyConfig="currency.xml" />
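-    <!-- Illustrative sketch (not part of the original file): a document could carry
-         a value such as price_c:"10.00,USD", and a hypothetical range query like
-           price_c:[5.00,USD TO 20.00,USD]
-         would consult the configured exchange rates when comparing values across
-         currencies. -->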
-
-
-   <!-- some examples for different languages (generally ordered by ISO code) -->
-
-    <!-- Arabic -->
-    <fieldType name="text_ar" class="solr.TextField" positionIncrementGap="100">
-      <analyzer> 
-        <tokenizer name="standard"/>
-        <!-- for any non-arabic -->
-        <filter name="lowercase"/>
-        <filter name="stop" ignoreCase="true" words="lang/stopwords_ar.txt" />
-        <!-- normalizes ﻯ to ﻱ, etc -->
-        <filter name="arabicNormalization"/>
-        <filter name="arabicStem"/>
-      </analyzer>
-    </fieldType>
-
-    <!-- Bulgarian -->
-    <fieldType name="text_bg" class="solr.TextField" positionIncrementGap="100">
-      <analyzer> 
-        <tokenizer name="standard"/> 
-        <filter name="lowercase"/>
-        <filter name="stop" ignoreCase="true" words="lang/stopwords_bg.txt" /> 
-        <filter name="bulgarianStem"/>
-      </analyzer>
-    </fieldType>
-    
-    <!-- Catalan -->
-    <fieldType name="text_ca" class="solr.TextField" positionIncrementGap="100">
-      <analyzer> 
-        <tokenizer name="standard"/>
-        <!-- removes l', etc -->
-        <filter name="elision" ignoreCase="true" articles="lang/contractions_ca.txt"/>
-        <filter name="lowercase"/>
-        <filter name="stop" ignoreCase="true" words="lang/stopwords_ca.txt" />
-        <filter name="snowballPorter" language="Catalan"/>       
-      </analyzer>
-    </fieldType>
-    
-    <!-- CJK bigram (see text_ja for a Japanese configuration using morphological analysis) -->
-    <fieldType name="text_cjk" class="solr.TextField" positionIncrementGap="100">
-      <analyzer>
-        <tokenizer name="standard"/>
-        <!-- normalize width before bigram, as e.g. half-width dakuten combine  -->
-        <filter name="cjkWidth"/>
-        <!-- for any non-CJK -->
-        <filter name="lowercase"/>
-        <filter name="cjkBigram"/>
-      </analyzer>
-    </fieldType>
-
-    <!-- Kurdish -->
-    <fieldType name="text_ckb" class="solr.TextField" positionIncrementGap="100">
-      <analyzer>
-        <tokenizer name="standard"/>
-        <filter name="soraniNormalization"/>
-        <!-- for any latin text -->
-        <filter name="lowercase"/>
-        <filter name="stop" ignoreCase="true" words="lang/stopwords_ckb.txt"/>
-        <filter name="soraniStem"/>
-      </analyzer>
-    </fieldType>
-
-    <!-- Czech -->
-    <fieldType name="text_cz" class="solr.TextField" positionIncrementGap="100">
-      <analyzer> 
-        <tokenizer name="standard"/>
-        <filter name="lowercase"/>
-        <filter name="stop" ignoreCase="true" words="lang/stopwords_cz.txt" />
-        <filter name="czechStem"/>
-      </analyzer>
-    </fieldType>
-    
-    <!-- Danish -->
-    <fieldType name="text_da" class="solr.TextField" positionIncrementGap="100">
-      <analyzer> 
-        <tokenizer name="standard"/>
-        <filter name="lowercase"/>
-        <filter name="stop" ignoreCase="true" words="lang/stopwords_da.txt" format="snowball" />
-        <filter name="snowballPorter" language="Danish"/>       
-      </analyzer>
-    </fieldType>
-    
-    <!-- German -->
-    <fieldType name="text_de" class="solr.TextField" positionIncrementGap="100">
-      <analyzer> 
-        <tokenizer name="standard"/>
-        <filter name="lowercase"/>
-        <filter name="stop" ignoreCase="true" words="lang/stopwords_de.txt" format="snowball" />
-        <filter name="germanNormalization"/>
-        <filter name="germanLightStem"/>
-        <!-- less aggressive: <filter name="germanMinimalStem"/> -->
-        <!-- more aggressive: <filter name="snowballPorter" language="German2"/> -->
-      </analyzer>
-    </fieldType>
-    
-    <!-- Greek -->
-    <fieldType name="text_el" class="solr.TextField" positionIncrementGap="100">
-      <analyzer> 
-        <tokenizer name="standard"/>
-        <!-- greek specific lowercase for sigma -->
-        <filter name="greekLowercase"/>
-        <filter name="stop" ignoreCase="false" words="lang/stopwords_el.txt" />
-        <filter name="greekStem"/>
-      </analyzer>
-    </fieldType>
-    
-    <!-- Spanish -->
-    <fieldType name="text_es" class="solr.TextField" positionIncrementGap="100">
-      <analyzer> 
-        <tokenizer name="standard"/>
-        <filter name="lowercase"/>
-        <filter name="stop" ignoreCase="true" words="lang/stopwords_es.txt" format="snowball" />
-        <filter name="spanishLightStem"/>
-        <!-- more aggressive: <filter name="snowballPorter" language="Spanish"/> -->
-      </analyzer>
-    </fieldType>
-    
-    <!-- Basque -->
-    <fieldType name="text_eu" class="solr.TextField" positionIncrementGap="100">
-      <analyzer> 
-        <tokenizer name="standard"/>
-        <filter name="lowercase"/>
-        <filter name="stop" ignoreCase="true" words="lang/stopwords_eu.txt" />
-        <filter name="snowballPorter" language="Basque"/>
-      </analyzer>
-    </fieldType>
-    
-    <!-- Persian -->
-    <fieldType name="text_fa" class="solr.TextField" positionIncrementGap="100">
-      <analyzer>
-        <!-- for ZWNJ -->
-        <charFilter name="persian"/>
-        <tokenizer name="standard"/>
-        <filter name="lowercase"/>
-        <filter name="arabicNormalization"/>
-        <filter name="persianNormalization"/>
-        <filter name="stop" ignoreCase="true" words="lang/stopwords_fa.txt" />
-      </analyzer>
-    </fieldType>
-    
-    <!-- Finnish -->
-    <fieldType name="text_fi" class="solr.TextField" positionIncrementGap="100">
-      <analyzer> 
-        <tokenizer name="standard"/>
-        <filter name="lowercase"/>
-        <filter name="stop" ignoreCase="true" words="lang/stopwords_fi.txt" format="snowball" />
-        <filter name="snowballPorter" language="Finnish"/>
-        <!-- less aggressive: <filter name="finnishLightStem"/> -->
-      </analyzer>
-    </fieldType>
-    
-    <!-- French -->
-    <fieldType name="text_fr" class="solr.TextField" positionIncrementGap="100">
-      <analyzer> 
-        <tokenizer name="standard"/>
-        <!-- removes l', etc -->
-        <filter name="elision" ignoreCase="true" articles="lang/contractions_fr.txt"/>
-        <filter name="lowercase"/>
-        <filter name="stop" ignoreCase="true" words="lang/stopwords_fr.txt" format="snowball" />
-        <filter name="frenchLightStem"/>
-        <!-- less aggressive: <filter name="frenchMinimalStem"/> -->
-        <!-- more aggressive: <filter name="snowballPorter" language="French"/> -->
-      </analyzer>
-    </fieldType>
-    
-    <!-- Irish -->
-    <fieldType name="text_ga" class="solr.TextField" positionIncrementGap="100">
-      <analyzer> 
-        <tokenizer name="standard"/>
-        <!-- removes d', etc -->
-        <filter name="elision" ignoreCase="true" articles="lang/contractions_ga.txt"/>
-        <!-- removes n-, etc.; position increments are intentionally false! -->
-        <filter name="stop" ignoreCase="true" words="lang/hyphenations_ga.txt"/>
-        <filter name="irishLowercase"/>
-        <filter name="stop" ignoreCase="true" words="lang/stopwords_ga.txt"/>
-        <filter name="snowballPorter" language="Irish"/>
-      </analyzer>
-    </fieldType>
-    
-    <!-- Galician -->
-    <fieldType name="text_gl" class="solr.TextField" positionIncrementGap="100">
-      <analyzer> 
-        <tokenizer name="standard"/>
-        <filter name="lowercase"/>
-        <filter name="stop" ignoreCase="true" words="lang/stopwords_gl.txt" />
-        <filter name="galicianStem"/>
-        <!-- less aggressive: <filter name="galicianMinimalStem"/> -->
-      </analyzer>
-    </fieldType>
-    
-    <!-- Hindi -->
-    <fieldType name="text_hi" class="solr.TextField" positionIncrementGap="100">
-      <analyzer> 
-        <tokenizer name="standard"/>
-        <filter name="lowercase"/>
-        <!-- normalizes unicode representation -->
-        <filter name="indicNormalization"/>
-        <!-- normalizes variation in spelling -->
-        <filter name="hindiNormalization"/>
-        <filter name="stop" ignoreCase="true" words="lang/stopwords_hi.txt" />
-        <filter name="hindiStem"/>
-      </analyzer>
-    </fieldType>
-    
-    <!-- Hungarian -->
-    <fieldType name="text_hu" class="solr.TextField" positionIncrementGap="100">
-      <analyzer> 
-        <tokenizer name="standard"/>
-        <filter name="lowercase"/>
-        <filter name="stop" ignoreCase="true" words="lang/stopwords_hu.txt" format="snowball" />
-        <filter name="snowballPorter" language="Hungarian"/>
-        <!-- less aggressive: <filter name="hungarianLightStem"/> -->
-      </analyzer>
-    </fieldType>
-    
-    <!-- Armenian -->
-    <fieldType name="text_hy" class="solr.TextField" positionIncrementGap="100">
-      <analyzer> 
-        <tokenizer name="standard"/>
-        <filter name="lowercase"/>
-        <filter name="stop" ignoreCase="true" words="lang/stopwords_hy.txt" />
-        <filter name="snowballPorter" language="Armenian"/>
-      </analyzer>
-    </fieldType>
-    
-    <!-- Indonesian -->
-    <fieldType name="text_id" class="solr.TextField" positionIncrementGap="100">
-      <analyzer> 
-        <tokenizer name="standard"/>
-        <filter name="lowercase"/>
-        <filter name="stop" ignoreCase="true" words="lang/stopwords_id.txt" />
-        <!-- for a less aggressive approach (only inflectional suffixes), set stemDerivational to false -->
-        <filter name="indonesianStem" stemDerivational="true"/>
-      </analyzer>
-    </fieldType>
-    
-    <!-- Italian -->
-    <fieldType name="text_it" class="solr.TextField" positionIncrementGap="100">
-      <analyzer> 
-        <tokenizer name="standard"/>
-        <!-- removes l', etc -->
-        <filter name="elision" ignoreCase="true" articles="lang/contractions_it.txt"/>
-        <filter name="lowercase"/>
-        <filter name="stop" ignoreCase="true" words="lang/stopwords_it.txt" format="snowball" />
-        <filter name="italianLightStem"/>
-        <!-- more aggressive: <filter name="snowballPorter" language="Italian"/> -->
-      </analyzer>
-    </fieldType>
-    
-    <!-- Japanese using morphological analysis (see text_cjk for a configuration using bigramming)
-
-         NOTE: If you want to optimize search for precision, use the default operator AND in your request
-         handler config (q.op).  Use OR if you would like to optimize for recall (default).
-    -->
-    <fieldType name="text_ja" class="solr.TextField" positionIncrementGap="100" autoGeneratePhraseQueries="false">
-      <analyzer>
-      <!-- Kuromoji Japanese morphological analyzer/tokenizer (JapaneseTokenizer)
-
-           Kuromoji has a search mode (default) that does segmentation useful for search.  A heuristic
-           is used to segment compounds into their parts, and the compound itself is kept as a synonym.
-
-           Valid values for attribute mode are:
-              normal: regular segmentation
-              search: segmentation useful for search, with compounds kept as synonyms (default)
-            extended: same as search mode, but unknown words are split into unigrams (experimental)
-
-           For some applications it might be good to use search mode for indexing and normal mode for
-           queries to reduce recall and prevent parts of compounds from being matched and highlighted.
-           Use <analyzer type="index"> and <analyzer type="query"> for this and mode normal in query.
-
-           Kuromoji also has a convenient user dictionary feature that allows overriding the statistical
-           model with your own entries for segmentation, part-of-speech tags and readings without a need
-           to specify weights.  Notice that user dictionaries have not been subject to extensive testing.
-
-           User dictionary attributes are:
-                     userDictionary: user dictionary filename
-             userDictionaryEncoding: user dictionary encoding (default is UTF-8)
-
-           See lang/userdict_ja.txt for a sample user dictionary file.
-
-           Punctuation characters are discarded by default.  Use discardPunctuation="false" to keep them.
-
-           See http://wiki.apache.org/solr/JapaneseLanguageSupport for more on Japanese language support.
-        -->
-        <tokenizer name="japanese" mode="search"/>
-        <!--<tokenizer name="japanese" mode="search" userDictionary="lang/userdict_ja.txt"/>-->
-        <!-- Reduces inflected verbs and adjectives to their base/dictionary forms (辞書形) -->
-        <filter name="japaneseBaseForm"/>
-        <!-- Removes tokens with certain part-of-speech tags -->
-        <filter name="japanesePartOfSpeechStop" tags="lang/stoptags_ja.txt" />
-        <!-- Normalizes full-width romaji to half-width and half-width kana to full-width (Unicode NFKC subset) -->
-        <filter name="cjkWidth"/>
-        <!-- Removes common tokens that are typically not useful for search and have a negative effect on ranking -->
-        <filter name="stop" ignoreCase="true" words="lang/stopwords_ja.txt" />
-        <!-- Normalizes common katakana spelling variations by removing any last long sound character (U+30FC) -->
-        <filter name="japaneseKatakanaStem" minimumLength="4"/>
-        <!-- Lower-cases romaji characters -->
-        <filter name="lowercase"/>
-      </analyzer>
-    </fieldType>
-    
-    <!-- Korean morphological analysis -->
-    <dynamicField name="*_txt_ko" type="text_ko"  indexed="true"  stored="true"/>
-    <fieldType name="text_ko" class="solr.TextField" positionIncrementGap="100">
-      <analyzer>
-        <!-- Nori Korean morphological analyzer/tokenizer (KoreanTokenizer)
-          The Korean (nori) analyzer integrates the Lucene nori analysis module into Solr.
-          It uses the mecab-ko-dic dictionary to perform morphological analysis of Korean texts.
-
-          This dictionary was built with MeCab; it defines a format for the features adapted
-          for the Korean language.
-          
-          Nori also has a convenient user dictionary feature that allows overriding the statistical
-          model with your own entries for segmentation, part-of-speech tags and readings without a need
-          to specify weights. Notice that user dictionaries have not been subject to extensive testing.
-
-          The tokenizer supports multiple schema attributes:
-            * userDictionary: User dictionary path.
-            * userDictionaryEncoding: User dictionary encoding.
-            * decompoundMode: Decompound mode. Either 'none', 'discard', 'mixed'. Default is 'discard'.
-            * outputUnknownUnigrams: If true outputs unigrams for unknown words.
-        -->
-        <tokenizer name="korean" decompoundMode="discard" outputUnknownUnigrams="false"/>
-        <!-- Removes some parts of speech, such as EOMI (Pos.E); you can add a parameter 'tags',
-          listing the tags to remove. By default it removes: 
-          E, IC, J, MAG, MAJ, MM, SP, SSC, SSO, SC, SE, XPN, XSA, XSN, XSV, UNA, NA, VSV
-          This is basically equivalent to stemming.
-        -->
-        <filter name="koreanPartOfSpeechStop" />
-        <!-- Replaces term text with the Hangul transcription of Hanja characters, if applicable: -->
-        <filter name="koreanReadingForm" />
-        <filter name="lowercase" />
-      </analyzer>
-    </fieldType>
-
-    <!-- Latvian -->
-    <fieldType name="text_lv" class="solr.TextField" positionIncrementGap="100">
-      <analyzer> 
-        <tokenizer name="standard"/>
-        <filter name="lowercase"/>
-        <filter name="stop" ignoreCase="true" words="lang/stopwords_lv.txt" />
-        <filter name="latvianStem"/>
-      </analyzer>
-    </fieldType>
-    
-    <!-- Dutch -->
-    <fieldType name="text_nl" class="solr.TextField" positionIncrementGap="100">
-      <analyzer> 
-        <tokenizer name="standard"/>
-        <filter name="lowercase"/>
-        <filter name="stop" ignoreCase="true" words="lang/stopwords_nl.txt" format="snowball" />
-        <filter name="stemmerOverride" dictionary="lang/stemdict_nl.txt" ignoreCase="false"/>
-        <filter name="snowballPorter" language="Dutch"/>
-      </analyzer>
-    </fieldType>
-    
-    <!-- Norwegian -->
-    <fieldType name="text_no" class="solr.TextField" positionIncrementGap="100">
-      <analyzer> 
-        <tokenizer name="standard"/>
-        <filter name="lowercase"/>
-        <filter name="stop" ignoreCase="true" words="lang/stopwords_no.txt" format="snowball" />
-        <filter name="snowballPorter" language="Norwegian"/>
-        <!-- less aggressive: <filter name="norwegianLightStem" variant="nb"/> -->
-        <!-- singular/plural: <filter name="norwegianMinimalStem" variant="nb"/> -->
-        <!-- The "light" and "minimal" stemmers support variants: nb=Bokmål, nn=Nynorsk, no=Both -->
-      </analyzer>
-    </fieldType>
-    
-    <!-- Portuguese -->
-    <fieldType name="text_pt" class="solr.TextField" positionIncrementGap="100">
-      <analyzer> 
-        <tokenizer name="standard"/>
-        <filter name="lowercase"/>
-        <filter name="stop" ignoreCase="true" words="lang/stopwords_pt.txt" format="snowball" />
-        <filter name="portugueseLightStem"/>
-        <!-- less aggressive: <filter name="prtugueseMinimalStem"/> -->
-        <!-- more aggressive: <filter name="snowballPorter" language="Portuguese"/> -->
-        <!-- most aggressive: <filter name="prtugueseStem"/> -->
-      </analyzer>
-    </fieldType>
-    
-    <!-- Romanian -->
-    <fieldType name="text_ro" class="solr.TextField" positionIncrementGap="100">
-      <analyzer> 
-        <tokenizer name="standard"/>
-        <filter name="lowercase"/>
-        <filter name="stop" ignoreCase="true" words="lang/stopwords_ro.txt" />
-        <filter name="snowballPorter" language="Romanian"/>
-      </analyzer>
-    </fieldType>
-    
-    <!-- Russian -->
-    <fieldType name="text_ru" class="solr.TextField" positionIncrementGap="100">
-      <analyzer> 
-        <tokenizer name="standard"/>
-        <filter name="lowercase"/>
-        <filter name="stop" ignoreCase="true" words="lang/stopwords_ru.txt" format="snowball" />
-        <filter name="snowballPorter" language="Russian"/>
-        <!-- less aggressive: <filter name="russianLightStem"/> -->
-      </analyzer>
-    </fieldType>
-    
-    <!-- Swedish -->
-    <fieldType name="text_sv" class="solr.TextField" positionIncrementGap="100">
-      <analyzer> 
-        <tokenizer name="standard"/>
-        <filter name="lowercase"/>
-        <filter name="stop" ignoreCase="true" words="lang/stopwords_sv.txt" format="snowball" />
-        <filter name="snowballPorter" language="Swedish"/>
-        <!-- less aggressive: <filter name="swedishLightStem"/> -->
-      </analyzer>
-    </fieldType>
-    
-    <!-- Thai -->
-    <fieldType name="text_th" class="solr.TextField" positionIncrementGap="100">
-      <analyzer> 
-        <tokenizer name="thai"/>
-        <filter name="lowercase"/>
-        <filter name="stop" ignoreCase="true" words="lang/stopwords_th.txt" />
-      </analyzer>
-    </fieldType>
-    
-    <!-- Turkish -->
-    <fieldType name="text_tr" class="solr.TextField" positionIncrementGap="100">
-      <analyzer> 
-        <tokenizer name="standard"/>
-        <filter name="apostrophe"/>
-        <filter name="turkishLowercase"/>
-        <filter name="stop" ignoreCase="false" words="lang/stopwords_tr.txt" />
-        <filter name="snowballPorter" language="Turkish"/>
-      </analyzer>
-    </fieldType>
-  
-  <!-- Similarity is the scoring routine for each document vs. a query.
-       A custom Similarity or SimilarityFactory may be specified here, but 
-       the default is fine for most applications.  
-       For more info: http://wiki.apache.org/solr/SchemaXml#Similarity
-    -->
-  <!--
-     <similarity class="com.example.solr.CustomSimilarityFactory">
-       <str name="paramkey">param value</str>
-     </similarity>
-    -->
-
-</schema>
diff --git a/solr/example/example-DIH/solr/db/conf/mapping-FoldToASCII.txt b/solr/example/example-DIH/solr/db/conf/mapping-FoldToASCII.txt
deleted file mode 100644
index 9a84b6e..0000000
--- a/solr/example/example-DIH/solr/db/conf/mapping-FoldToASCII.txt
+++ /dev/null
@@ -1,3813 +0,0 @@
-# The ASF licenses this file to You under the Apache License, Version 2.0
-# (the "License"); you may not use this file except in compliance with
-# the License.  You may obtain a copy of the License at
-#
-#     http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-
-# This map converts alphabetic, numeric, and symbolic Unicode characters
-# which are not in the first 127 ASCII characters (the "Basic Latin" Unicode
-# block) into their ASCII equivalents, if one exists.
-#
-# Characters from the following Unicode blocks are converted; however, only
-# those characters with reasonable ASCII alternatives are converted:
-#
-# - C1 Controls and Latin-1 Supplement: http://www.unicode.org/charts/PDF/U0080.pdf
-# - Latin Extended-A: http://www.unicode.org/charts/PDF/U0100.pdf
-# - Latin Extended-B: http://www.unicode.org/charts/PDF/U0180.pdf
-# - Latin Extended Additional: http://www.unicode.org/charts/PDF/U1E00.pdf
-# - Latin Extended-C: http://www.unicode.org/charts/PDF/U2C60.pdf
-# - Latin Extended-D: http://www.unicode.org/charts/PDF/UA720.pdf
-# - IPA Extensions: http://www.unicode.org/charts/PDF/U0250.pdf
-# - Phonetic Extensions: http://www.unicode.org/charts/PDF/U1D00.pdf
-# - Phonetic Extensions Supplement: http://www.unicode.org/charts/PDF/U1D80.pdf
-# - General Punctuation: http://www.unicode.org/charts/PDF/U2000.pdf
-# - Superscripts and Subscripts: http://www.unicode.org/charts/PDF/U2070.pdf
-# - Enclosed Alphanumerics: http://www.unicode.org/charts/PDF/U2460.pdf
-# - Dingbats: http://www.unicode.org/charts/PDF/U2700.pdf
-# - Supplemental Punctuation: http://www.unicode.org/charts/PDF/U2E00.pdf
-# - Alphabetic Presentation Forms: http://www.unicode.org/charts/PDF/UFB00.pdf
-# - Halfwidth and Fullwidth Forms: http://www.unicode.org/charts/PDF/UFF00.pdf
-#  
-# See: http://en.wikipedia.org/wiki/Latin_characters_in_Unicode
-#
-# The set of character conversions supported by this map is a superset of
-# those supported by the map represented by mapping-ISOLatin1Accent.txt.
-#
-# See the bottom of this file for the Perl script used to generate the contents
-# of this file (without this header) from ASCIIFoldingFilter.java.
-
-
-# Syntax:
-#   "source" => "target"
-#     "source".length() > 0 (source cannot be empty.)
-#     "target".length() >= 0 (target can be empty.)
-
-
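A note on how a mapping file in this syntax is consumed: Solr applies it through
a MappingCharFilterFactory placed ahead of the tokenizer, so matching source
sequences are rewritten before tokenization. A minimal sketch using the
long-form class syntax, with a hypothetical fieldType name; this wiring is not
part of this commit:

    <fieldType name="text_folded" class="solr.TextField" positionIncrementGap="100">
      <analyzer>
        <!-- rewrites, e.g., "\u00C0" (A-grave) to "A" using rules like those below -->
        <charFilter class="solr.MappingCharFilterFactory" mapping="mapping-FoldToASCII.txt"/>
        <tokenizer class="solr.StandardTokenizerFactory"/>
        <filter class="solr.LowerCaseFilterFactory"/>
      </analyzer>
    </fieldType>
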
-# À  [LATIN CAPITAL LETTER A WITH GRAVE]
-"\u00C0" => "A"
-
-# Á  [LATIN CAPITAL LETTER A WITH ACUTE]
-"\u00C1" => "A"
-
-# Â  [LATIN CAPITAL LETTER A WITH CIRCUMFLEX]
-"\u00C2" => "A"
-
-# Ã  [LATIN CAPITAL LETTER A WITH TILDE]
-"\u00C3" => "A"
-
-# Ä  [LATIN CAPITAL LETTER A WITH DIAERESIS]
-"\u00C4" => "A"
-
-# Å  [LATIN CAPITAL LETTER A WITH RING ABOVE]
-"\u00C5" => "A"
-
-# Ā  [LATIN CAPITAL LETTER A WITH MACRON]
-"\u0100" => "A"
-
-# Ă  [LATIN CAPITAL LETTER A WITH BREVE]
-"\u0102" => "A"
-
-# Ą  [LATIN CAPITAL LETTER A WITH OGONEK]
-"\u0104" => "A"
-
-# Ə  http://en.wikipedia.org/wiki/Schwa  [LATIN CAPITAL LETTER SCHWA]
-"\u018F" => "A"
-
-# Ǎ  [LATIN CAPITAL LETTER A WITH CARON]
-"\u01CD" => "A"
-
-# Ǟ  [LATIN CAPITAL LETTER A WITH DIAERESIS AND MACRON]
-"\u01DE" => "A"
-
-# Ǡ  [LATIN CAPITAL LETTER A WITH DOT ABOVE AND MACRON]
-"\u01E0" => "A"
-
-# Ǻ  [LATIN CAPITAL LETTER A WITH RING ABOVE AND ACUTE]
-"\u01FA" => "A"
-
-# Ȁ  [LATIN CAPITAL LETTER A WITH DOUBLE GRAVE]
-"\u0200" => "A"
-
-# Ȃ  [LATIN CAPITAL LETTER A WITH INVERTED BREVE]
-"\u0202" => "A"
-
-# Ȧ  [LATIN CAPITAL LETTER A WITH DOT ABOVE]
-"\u0226" => "A"
-
-# Ⱥ  [LATIN CAPITAL LETTER A WITH STROKE]
-"\u023A" => "A"
-
-# ᴀ  [LATIN LETTER SMALL CAPITAL A]
-"\u1D00" => "A"
-
-# Ḁ  [LATIN CAPITAL LETTER A WITH RING BELOW]
-"\u1E00" => "A"
-
-# Ạ  [LATIN CAPITAL LETTER A WITH DOT BELOW]
-"\u1EA0" => "A"
-
-# Ả  [LATIN CAPITAL LETTER A WITH HOOK ABOVE]
-"\u1EA2" => "A"
-
-# Ấ  [LATIN CAPITAL LETTER A WITH CIRCUMFLEX AND ACUTE]
-"\u1EA4" => "A"
-
-# Ầ  [LATIN CAPITAL LETTER A WITH CIRCUMFLEX AND GRAVE]
-"\u1EA6" => "A"
-
-# Ẩ  [LATIN CAPITAL LETTER A WITH CIRCUMFLEX AND HOOK ABOVE]
-"\u1EA8" => "A"
-
-# Ẫ  [LATIN CAPITAL LETTER A WITH CIRCUMFLEX AND TILDE]
-"\u1EAA" => "A"
-
-# Ậ  [LATIN CAPITAL LETTER A WITH CIRCUMFLEX AND DOT BELOW]
-"\u1EAC" => "A"
-
-# Ắ  [LATIN CAPITAL LETTER A WITH BREVE AND ACUTE]
-"\u1EAE" => "A"
-
-# Ằ  [LATIN CAPITAL LETTER A WITH BREVE AND GRAVE]
-"\u1EB0" => "A"
-
-# Ẳ  [LATIN CAPITAL LETTER A WITH BREVE AND HOOK ABOVE]
-"\u1EB2" => "A"
-
-# Ẵ  [LATIN CAPITAL LETTER A WITH BREVE AND TILDE]
-"\u1EB4" => "A"
-
-# Ặ  [LATIN CAPITAL LETTER A WITH BREVE AND DOT BELOW]
-"\u1EB6" => "A"
-
-# Ⓐ  [CIRCLED LATIN CAPITAL LETTER A]
-"\u24B6" => "A"
-
-# A  [FULLWIDTH LATIN CAPITAL LETTER A]
-"\uFF21" => "A"
-
-# à  [LATIN SMALL LETTER A WITH GRAVE]
-"\u00E0" => "a"
-
-# á  [LATIN SMALL LETTER A WITH ACUTE]
-"\u00E1" => "a"
-
-# â  [LATIN SMALL LETTER A WITH CIRCUMFLEX]
-"\u00E2" => "a"
-
-# ã  [LATIN SMALL LETTER A WITH TILDE]
-"\u00E3" => "a"
-
-# ä  [LATIN SMALL LETTER A WITH DIAERESIS]
-"\u00E4" => "a"
-
-# å  [LATIN SMALL LETTER A WITH RING ABOVE]
-"\u00E5" => "a"
-
-# ā  [LATIN SMALL LETTER A WITH MACRON]
-"\u0101" => "a"
-
-# ă  [LATIN SMALL LETTER A WITH BREVE]
-"\u0103" => "a"
-
-# ą  [LATIN SMALL LETTER A WITH OGONEK]
-"\u0105" => "a"
-
-# ǎ  [LATIN SMALL LETTER A WITH CARON]
-"\u01CE" => "a"
-
-# ǟ  [LATIN SMALL LETTER A WITH DIAERESIS AND MACRON]
-"\u01DF" => "a"
-
-# ǡ  [LATIN SMALL LETTER A WITH DOT ABOVE AND MACRON]
-"\u01E1" => "a"
-
-# ǻ  [LATIN SMALL LETTER A WITH RING ABOVE AND ACUTE]
-"\u01FB" => "a"
-
-# ȁ  [LATIN SMALL LETTER A WITH DOUBLE GRAVE]
-"\u0201" => "a"
-
-# ȃ  [LATIN SMALL LETTER A WITH INVERTED BREVE]
-"\u0203" => "a"
-
-# ȧ  [LATIN SMALL LETTER A WITH DOT ABOVE]
-"\u0227" => "a"
-
-# ɐ  [LATIN SMALL LETTER TURNED A]
-"\u0250" => "a"
-
-# ə  [LATIN SMALL LETTER SCHWA]
-"\u0259" => "a"
-
-# ɚ  [LATIN SMALL LETTER SCHWA WITH HOOK]
-"\u025A" => "a"
-
-# ᶏ  [LATIN SMALL LETTER A WITH RETROFLEX HOOK]
-"\u1D8F" => "a"
-
-# ᶕ  [LATIN SMALL LETTER SCHWA WITH RETROFLEX HOOK]
-"\u1D95" => "a"
-
-# ḁ  [LATIN SMALL LETTER A WITH RING BELOW]
-"\u1E01" => "a"
-
-# ẚ  [LATIN SMALL LETTER A WITH RIGHT HALF RING]
-"\u1E9A" => "a"
-
-# ạ  [LATIN SMALL LETTER A WITH DOT BELOW]
-"\u1EA1" => "a"
-
-# ả  [LATIN SMALL LETTER A WITH HOOK ABOVE]
-"\u1EA3" => "a"
-
-# ấ  [LATIN SMALL LETTER A WITH CIRCUMFLEX AND ACUTE]
-"\u1EA5" => "a"
-
-# ầ  [LATIN SMALL LETTER A WITH CIRCUMFLEX AND GRAVE]
-"\u1EA7" => "a"
-
-# ẩ  [LATIN SMALL LETTER A WITH CIRCUMFLEX AND HOOK ABOVE]
-"\u1EA9" => "a"
-
-# ẫ  [LATIN SMALL LETTER A WITH CIRCUMFLEX AND TILDE]
-"\u1EAB" => "a"
-
-# ậ  [LATIN SMALL LETTER A WITH CIRCUMFLEX AND DOT BELOW]
-"\u1EAD" => "a"
-
-# ắ  [LATIN SMALL LETTER A WITH BREVE AND ACUTE]
-"\u1EAF" => "a"
-
-# ằ  [LATIN SMALL LETTER A WITH BREVE AND GRAVE]
-"\u1EB1" => "a"
-
-# ẳ  [LATIN SMALL LETTER A WITH BREVE AND HOOK ABOVE]
-"\u1EB3" => "a"
-
-# ẵ  [LATIN SMALL LETTER A WITH BREVE AND TILDE]
-"\u1EB5" => "a"
-
-# ặ  [LATIN SMALL LETTER A WITH BREVE AND DOT BELOW]
-"\u1EB7" => "a"
-
-# ₐ  [LATIN SUBSCRIPT SMALL LETTER A]
-"\u2090" => "a"
-
-# ₔ  [LATIN SUBSCRIPT SMALL LETTER SCHWA]
-"\u2094" => "a"
-
-# ⓐ  [CIRCLED LATIN SMALL LETTER A]
-"\u24D0" => "a"
-
-# ⱥ  [LATIN SMALL LETTER A WITH STROKE]
-"\u2C65" => "a"
-
-# Ɐ  [LATIN CAPITAL LETTER TURNED A]
-"\u2C6F" => "a"
-
-# a  [FULLWIDTH LATIN SMALL LETTER A]
-"\uFF41" => "a"
-
-# Ꜳ  [LATIN CAPITAL LETTER AA]
-"\uA732" => "AA"
-
-# Æ  [LATIN CAPITAL LETTER AE]
-"\u00C6" => "AE"
-
-# Ǣ  [LATIN CAPITAL LETTER AE WITH MACRON]
-"\u01E2" => "AE"
-
-# Ǽ  [LATIN CAPITAL LETTER AE WITH ACUTE]
-"\u01FC" => "AE"
-
-# ᴁ  [LATIN LETTER SMALL CAPITAL AE]
-"\u1D01" => "AE"
-
-# Ꜵ  [LATIN CAPITAL LETTER AO]
-"\uA734" => "AO"
-
-# Ꜷ  [LATIN CAPITAL LETTER AU]
-"\uA736" => "AU"
-
-# Ꜹ  [LATIN CAPITAL LETTER AV]
-"\uA738" => "AV"
-
-# Ꜻ  [LATIN CAPITAL LETTER AV WITH HORIZONTAL BAR]
-"\uA73A" => "AV"
-
-# Ꜽ  [LATIN CAPITAL LETTER AY]
-"\uA73C" => "AY"
-
-# ⒜  [PARENTHESIZED LATIN SMALL LETTER A]
-"\u249C" => "(a)"
-
-# ꜳ  [LATIN SMALL LETTER AA]
-"\uA733" => "aa"
-
-# æ  [LATIN SMALL LETTER AE]
-"\u00E6" => "ae"
-
-# ǣ  [LATIN SMALL LETTER AE WITH MACRON]
-"\u01E3" => "ae"
-
-# ǽ  [LATIN SMALL LETTER AE WITH ACUTE]
-"\u01FD" => "ae"
-
-# ᴂ  [LATIN SMALL LETTER TURNED AE]
-"\u1D02" => "ae"
-
-# ꜵ  [LATIN SMALL LETTER AO]
-"\uA735" => "ao"
-
-# ꜷ  [LATIN SMALL LETTER AU]
-"\uA737" => "au"
-
-# ꜹ  [LATIN SMALL LETTER AV]
-"\uA739" => "av"
-
-# ꜻ  [LATIN SMALL LETTER AV WITH HORIZONTAL BAR]
-"\uA73B" => "av"
-
-# ꜽ  [LATIN SMALL LETTER AY]
-"\uA73D" => "ay"
-
-# Ɓ  [LATIN CAPITAL LETTER B WITH HOOK]
-"\u0181" => "B"
-
-# Ƃ  [LATIN CAPITAL LETTER B WITH TOPBAR]
-"\u0182" => "B"
-
-# Ƀ  [LATIN CAPITAL LETTER B WITH STROKE]
-"\u0243" => "B"
-
-# ʙ  [LATIN LETTER SMALL CAPITAL B]
-"\u0299" => "B"
-
-# ᴃ  [LATIN LETTER SMALL CAPITAL BARRED B]
-"\u1D03" => "B"
-
-# Ḃ  [LATIN CAPITAL LETTER B WITH DOT ABOVE]
-"\u1E02" => "B"
-
-# Ḅ  [LATIN CAPITAL LETTER B WITH DOT BELOW]
-"\u1E04" => "B"
-
-# Ḇ  [LATIN CAPITAL LETTER B WITH LINE BELOW]
-"\u1E06" => "B"
-
-# Ⓑ  [CIRCLED LATIN CAPITAL LETTER B]
-"\u24B7" => "B"
-
-# B  [FULLWIDTH LATIN CAPITAL LETTER B]
-"\uFF22" => "B"
-
-# ƀ  [LATIN SMALL LETTER B WITH STROKE]
-"\u0180" => "b"
-
-# ƃ  [LATIN SMALL LETTER B WITH TOPBAR]
-"\u0183" => "b"
-
-# ɓ  [LATIN SMALL LETTER B WITH HOOK]
-"\u0253" => "b"
-
-# ᵬ  [LATIN SMALL LETTER B WITH MIDDLE TILDE]
-"\u1D6C" => "b"
-
-# ᶀ  [LATIN SMALL LETTER B WITH PALATAL HOOK]
-"\u1D80" => "b"
-
-# ḃ  [LATIN SMALL LETTER B WITH DOT ABOVE]
-"\u1E03" => "b"
-
-# ḅ  [LATIN SMALL LETTER B WITH DOT BELOW]
-"\u1E05" => "b"
-
-# ḇ  [LATIN SMALL LETTER B WITH LINE BELOW]
-"\u1E07" => "b"
-
-# ⓑ  [CIRCLED LATIN SMALL LETTER B]
-"\u24D1" => "b"
-
-# b  [FULLWIDTH LATIN SMALL LETTER B]
-"\uFF42" => "b"
-
-# ⒝  [PARENTHESIZED LATIN SMALL LETTER B]
-"\u249D" => "(b)"
-
-# Ç  [LATIN CAPITAL LETTER C WITH CEDILLA]
-"\u00C7" => "C"
-
-# Ć  [LATIN CAPITAL LETTER C WITH ACUTE]
-"\u0106" => "C"
-
-# Ĉ  [LATIN CAPITAL LETTER C WITH CIRCUMFLEX]
-"\u0108" => "C"
-
-# Ċ  [LATIN CAPITAL LETTER C WITH DOT ABOVE]
-"\u010A" => "C"
-
-# Č  [LATIN CAPITAL LETTER C WITH CARON]
-"\u010C" => "C"
-
-# Ƈ  [LATIN CAPITAL LETTER C WITH HOOK]
-"\u0187" => "C"
-
-# Ȼ  [LATIN CAPITAL LETTER C WITH STROKE]
-"\u023B" => "C"
-
-# ʗ  [LATIN LETTER STRETCHED C]
-"\u0297" => "C"
-
-# ᴄ  [LATIN LETTER SMALL CAPITAL C]
-"\u1D04" => "C"
-
-# Ḉ  [LATIN CAPITAL LETTER C WITH CEDILLA AND ACUTE]
-"\u1E08" => "C"
-
-# Ⓒ  [CIRCLED LATIN CAPITAL LETTER C]
-"\u24B8" => "C"
-
-# C  [FULLWIDTH LATIN CAPITAL LETTER C]
-"\uFF23" => "C"
-
-# ç  [LATIN SMALL LETTER C WITH CEDILLA]
-"\u00E7" => "c"
-
-# ć  [LATIN SMALL LETTER C WITH ACUTE]
-"\u0107" => "c"
-
-# ĉ  [LATIN SMALL LETTER C WITH CIRCUMFLEX]
-"\u0109" => "c"
-
-# ċ  [LATIN SMALL LETTER C WITH DOT ABOVE]
-"\u010B" => "c"
-
-# č  [LATIN SMALL LETTER C WITH CARON]
-"\u010D" => "c"
-
-# ƈ  [LATIN SMALL LETTER C WITH HOOK]
-"\u0188" => "c"
-
-# ȼ  [LATIN SMALL LETTER C WITH STROKE]
-"\u023C" => "c"
-
-# ɕ  [LATIN SMALL LETTER C WITH CURL]
-"\u0255" => "c"
-
-# ḉ  [LATIN SMALL LETTER C WITH CEDILLA AND ACUTE]
-"\u1E09" => "c"
-
-# ↄ  [LATIN SMALL LETTER REVERSED C]
-"\u2184" => "c"
-
-# ⓒ  [CIRCLED LATIN SMALL LETTER C]
-"\u24D2" => "c"
-
-# Ꜿ  [LATIN CAPITAL LETTER REVERSED C WITH DOT]
-"\uA73E" => "c"
-
-# ꜿ  [LATIN SMALL LETTER REVERSED C WITH DOT]
-"\uA73F" => "c"
-
-# c  [FULLWIDTH LATIN SMALL LETTER C]
-"\uFF43" => "c"
-
-# ⒞  [PARENTHESIZED LATIN SMALL LETTER C]
-"\u249E" => "(c)"
-
-# Ð  [LATIN CAPITAL LETTER ETH]
-"\u00D0" => "D"
-
-# Ď  [LATIN CAPITAL LETTER D WITH CARON]
-"\u010E" => "D"
-
-# Đ  [LATIN CAPITAL LETTER D WITH STROKE]
-"\u0110" => "D"
-
-# Ɖ  [LATIN CAPITAL LETTER AFRICAN D]
-"\u0189" => "D"
-
-# Ɗ  [LATIN CAPITAL LETTER D WITH HOOK]
-"\u018A" => "D"
-
-# Ƌ  [LATIN CAPITAL LETTER D WITH TOPBAR]
-"\u018B" => "D"
-
-# ᴅ  [LATIN LETTER SMALL CAPITAL D]
-"\u1D05" => "D"
-
-# ᴆ  [LATIN LETTER SMALL CAPITAL ETH]
-"\u1D06" => "D"
-
-# Ḋ  [LATIN CAPITAL LETTER D WITH DOT ABOVE]
-"\u1E0A" => "D"
-
-# Ḍ  [LATIN CAPITAL LETTER D WITH DOT BELOW]
-"\u1E0C" => "D"
-
-# Ḏ  [LATIN CAPITAL LETTER D WITH LINE BELOW]
-"\u1E0E" => "D"
-
-# Ḑ  [LATIN CAPITAL LETTER D WITH CEDILLA]
-"\u1E10" => "D"
-
-# Ḓ  [LATIN CAPITAL LETTER D WITH CIRCUMFLEX BELOW]
-"\u1E12" => "D"
-
-# Ⓓ  [CIRCLED LATIN CAPITAL LETTER D]
-"\u24B9" => "D"
-
-# Ꝺ  [LATIN CAPITAL LETTER INSULAR D]
-"\uA779" => "D"
-
-# D  [FULLWIDTH LATIN CAPITAL LETTER D]
-"\uFF24" => "D"
-
-# ð  [LATIN SMALL LETTER ETH]
-"\u00F0" => "d"
-
-# ď  [LATIN SMALL LETTER D WITH CARON]
-"\u010F" => "d"
-
-# đ  [LATIN SMALL LETTER D WITH STROKE]
-"\u0111" => "d"
-
-# ƌ  [LATIN SMALL LETTER D WITH TOPBAR]
-"\u018C" => "d"
-
-# ȡ  [LATIN SMALL LETTER D WITH CURL]
-"\u0221" => "d"
-
-# ɖ  [LATIN SMALL LETTER D WITH TAIL]
-"\u0256" => "d"
-
-# ɗ  [LATIN SMALL LETTER D WITH HOOK]
-"\u0257" => "d"
-
-# ᵭ  [LATIN SMALL LETTER D WITH MIDDLE TILDE]
-"\u1D6D" => "d"
-
-# ᶁ  [LATIN SMALL LETTER D WITH PALATAL HOOK]
-"\u1D81" => "d"
-
-# ᶑ  [LATIN SMALL LETTER D WITH HOOK AND TAIL]
-"\u1D91" => "d"
-
-# ḋ  [LATIN SMALL LETTER D WITH DOT ABOVE]
-"\u1E0B" => "d"
-
-# ḍ  [LATIN SMALL LETTER D WITH DOT BELOW]
-"\u1E0D" => "d"
-
-# ḏ  [LATIN SMALL LETTER D WITH LINE BELOW]
-"\u1E0F" => "d"
-
-# ḑ  [LATIN SMALL LETTER D WITH CEDILLA]
-"\u1E11" => "d"
-
-# ḓ  [LATIN SMALL LETTER D WITH CIRCUMFLEX BELOW]
-"\u1E13" => "d"
-
-# ⓓ  [CIRCLED LATIN SMALL LETTER D]
-"\u24D3" => "d"
-
-# ꝺ  [LATIN SMALL LETTER INSULAR D]
-"\uA77A" => "d"
-
-# d  [FULLWIDTH LATIN SMALL LETTER D]
-"\uFF44" => "d"
-
-# DŽ  [LATIN CAPITAL LETTER DZ WITH CARON]
-"\u01C4" => "DZ"
-
-# DZ  [LATIN CAPITAL LETTER DZ]
-"\u01F1" => "DZ"
-
-# Dž  [LATIN CAPITAL LETTER D WITH SMALL LETTER Z WITH CARON]
-"\u01C5" => "Dz"
-
-# Dz  [LATIN CAPITAL LETTER D WITH SMALL LETTER Z]
-"\u01F2" => "Dz"
-
-# ⒟  [PARENTHESIZED LATIN SMALL LETTER D]
-"\u249F" => "(d)"
-
-# ȸ  [LATIN SMALL LETTER DB DIGRAPH]
-"\u0238" => "db"
-
-# dž  [LATIN SMALL LETTER DZ WITH CARON]
-"\u01C6" => "dz"
-
-# dz  [LATIN SMALL LETTER DZ]
-"\u01F3" => "dz"
-
-# ʣ  [LATIN SMALL LETTER DZ DIGRAPH]
-"\u02A3" => "dz"
-
-# ʥ  [LATIN SMALL LETTER DZ DIGRAPH WITH CURL]
-"\u02A5" => "dz"
-
-# È  [LATIN CAPITAL LETTER E WITH GRAVE]
-"\u00C8" => "E"
-
-# É  [LATIN CAPITAL LETTER E WITH ACUTE]
-"\u00C9" => "E"
-
-# Ê  [LATIN CAPITAL LETTER E WITH CIRCUMFLEX]
-"\u00CA" => "E"
-
-# Ë  [LATIN CAPITAL LETTER E WITH DIAERESIS]
-"\u00CB" => "E"
-
-# Ē  [LATIN CAPITAL LETTER E WITH MACRON]
-"\u0112" => "E"
-
-# Ĕ  [LATIN CAPITAL LETTER E WITH BREVE]
-"\u0114" => "E"
-
-# Ė  [LATIN CAPITAL LETTER E WITH DOT ABOVE]
-"\u0116" => "E"
-
-# Ę  [LATIN CAPITAL LETTER E WITH OGONEK]
-"\u0118" => "E"
-
-# Ě  [LATIN CAPITAL LETTER E WITH CARON]
-"\u011A" => "E"
-
-# Ǝ  [LATIN CAPITAL LETTER REVERSED E]
-"\u018E" => "E"
-
-# Ɛ  [LATIN CAPITAL LETTER OPEN E]
-"\u0190" => "E"
-
-# Ȅ  [LATIN CAPITAL LETTER E WITH DOUBLE GRAVE]
-"\u0204" => "E"
-
-# Ȇ  [LATIN CAPITAL LETTER E WITH INVERTED BREVE]
-"\u0206" => "E"
-
-# Ȩ  [LATIN CAPITAL LETTER E WITH CEDILLA]
-"\u0228" => "E"
-
-# Ɇ  [LATIN CAPITAL LETTER E WITH STROKE]
-"\u0246" => "E"
-
-# ᴇ  [LATIN LETTER SMALL CAPITAL E]
-"\u1D07" => "E"
-
-# Ḕ  [LATIN CAPITAL LETTER E WITH MACRON AND GRAVE]
-"\u1E14" => "E"
-
-# Ḗ  [LATIN CAPITAL LETTER E WITH MACRON AND ACUTE]
-"\u1E16" => "E"
-
-# Ḙ  [LATIN CAPITAL LETTER E WITH CIRCUMFLEX BELOW]
-"\u1E18" => "E"
-
-# Ḛ  [LATIN CAPITAL LETTER E WITH TILDE BELOW]
-"\u1E1A" => "E"
-
-# Ḝ  [LATIN CAPITAL LETTER E WITH CEDILLA AND BREVE]
-"\u1E1C" => "E"
-
-# Ẹ  [LATIN CAPITAL LETTER E WITH DOT BELOW]
-"\u1EB8" => "E"
-
-# Ẻ  [LATIN CAPITAL LETTER E WITH HOOK ABOVE]
-"\u1EBA" => "E"
-
-# Ẽ  [LATIN CAPITAL LETTER E WITH TILDE]
-"\u1EBC" => "E"
-
-# Ế  [LATIN CAPITAL LETTER E WITH CIRCUMFLEX AND ACUTE]
-"\u1EBE" => "E"
-
-# Ề  [LATIN CAPITAL LETTER E WITH CIRCUMFLEX AND GRAVE]
-"\u1EC0" => "E"
-
-# Ể  [LATIN CAPITAL LETTER E WITH CIRCUMFLEX AND HOOK ABOVE]
-"\u1EC2" => "E"
-
-# Ễ  [LATIN CAPITAL LETTER E WITH CIRCUMFLEX AND TILDE]
-"\u1EC4" => "E"
-
-# Ệ  [LATIN CAPITAL LETTER E WITH CIRCUMFLEX AND DOT BELOW]
-"\u1EC6" => "E"
-
-# Ⓔ  [CIRCLED LATIN CAPITAL LETTER E]
-"\u24BA" => "E"
-
-# ⱻ  [LATIN LETTER SMALL CAPITAL TURNED E]
-"\u2C7B" => "E"
-
-# E  [FULLWIDTH LATIN CAPITAL LETTER E]
-"\uFF25" => "E"
-
-# è  [LATIN SMALL LETTER E WITH GRAVE]
-"\u00E8" => "e"
-
-# é  [LATIN SMALL LETTER E WITH ACUTE]
-"\u00E9" => "e"
-
-# ê  [LATIN SMALL LETTER E WITH CIRCUMFLEX]
-"\u00EA" => "e"
-
-# ë  [LATIN SMALL LETTER E WITH DIAERESIS]
-"\u00EB" => "e"
-
-# ē  [LATIN SMALL LETTER E WITH MACRON]
-"\u0113" => "e"
-
-# ĕ  [LATIN SMALL LETTER E WITH BREVE]
-"\u0115" => "e"
-
-# ė  [LATIN SMALL LETTER E WITH DOT ABOVE]
-"\u0117" => "e"
-
-# ę  [LATIN SMALL LETTER E WITH OGONEK]
-"\u0119" => "e"
-
-# ě  [LATIN SMALL LETTER E WITH CARON]
-"\u011B" => "e"
-
-# ǝ  [LATIN SMALL LETTER TURNED E]
-"\u01DD" => "e"
-
-# ȅ  [LATIN SMALL LETTER E WITH DOUBLE GRAVE]
-"\u0205" => "e"
-
-# ȇ  [LATIN SMALL LETTER E WITH INVERTED BREVE]
-"\u0207" => "e"
-
-# ȩ  [LATIN SMALL LETTER E WITH CEDILLA]
-"\u0229" => "e"
-
-# ɇ  [LATIN SMALL LETTER E WITH STROKE]
-"\u0247" => "e"
-
-# ɘ  [LATIN SMALL LETTER REVERSED E]
-"\u0258" => "e"
-
-# ɛ  [LATIN SMALL LETTER OPEN E]
-"\u025B" => "e"
-
-# ɜ  [LATIN SMALL LETTER REVERSED OPEN E]
-"\u025C" => "e"
-
-# ɝ  [LATIN SMALL LETTER REVERSED OPEN E WITH HOOK]
-"\u025D" => "e"
-
-# ɞ  [LATIN SMALL LETTER CLOSED REVERSED OPEN E]
-"\u025E" => "e"
-
-# ʚ  [LATIN SMALL LETTER CLOSED OPEN E]
-"\u029A" => "e"
-
-# ᴈ  [LATIN SMALL LETTER TURNED OPEN E]
-"\u1D08" => "e"
-
-# ᶒ  [LATIN SMALL LETTER E WITH RETROFLEX HOOK]
-"\u1D92" => "e"
-
-# ᶓ  [LATIN SMALL LETTER OPEN E WITH RETROFLEX HOOK]
-"\u1D93" => "e"
-
-# ᶔ  [LATIN SMALL LETTER REVERSED OPEN E WITH RETROFLEX HOOK]
-"\u1D94" => "e"
-
-# ḕ  [LATIN SMALL LETTER E WITH MACRON AND GRAVE]
-"\u1E15" => "e"
-
-# ḗ  [LATIN SMALL LETTER E WITH MACRON AND ACUTE]
-"\u1E17" => "e"
-
-# ḙ  [LATIN SMALL LETTER E WITH CIRCUMFLEX BELOW]
-"\u1E19" => "e"
-
-# ḛ  [LATIN SMALL LETTER E WITH TILDE BELOW]
-"\u1E1B" => "e"
-
-# ḝ  [LATIN SMALL LETTER E WITH CEDILLA AND BREVE]
-"\u1E1D" => "e"
-
-# ẹ  [LATIN SMALL LETTER E WITH DOT BELOW]
-"\u1EB9" => "e"
-
-# ẻ  [LATIN SMALL LETTER E WITH HOOK ABOVE]
-"\u1EBB" => "e"
-
-# ẽ  [LATIN SMALL LETTER E WITH TILDE]
-"\u1EBD" => "e"
-
-# ế  [LATIN SMALL LETTER E WITH CIRCUMFLEX AND ACUTE]
-"\u1EBF" => "e"
-
-# ề  [LATIN SMALL LETTER E WITH CIRCUMFLEX AND GRAVE]
-"\u1EC1" => "e"
-
-# ể  [LATIN SMALL LETTER E WITH CIRCUMFLEX AND HOOK ABOVE]
-"\u1EC3" => "e"
-
-# ễ  [LATIN SMALL LETTER E WITH CIRCUMFLEX AND TILDE]
-"\u1EC5" => "e"
-
-# ệ  [LATIN SMALL LETTER E WITH CIRCUMFLEX AND DOT BELOW]
-"\u1EC7" => "e"
-
-# ₑ  [LATIN SUBSCRIPT SMALL LETTER E]
-"\u2091" => "e"
-
-# ⓔ  [CIRCLED LATIN SMALL LETTER E]
-"\u24D4" => "e"
-
-# ⱸ  [LATIN SMALL LETTER E WITH NOTCH]
-"\u2C78" => "e"
-
-# e  [FULLWIDTH LATIN SMALL LETTER E]
-"\uFF45" => "e"
-
-# ⒠  [PARENTHESIZED LATIN SMALL LETTER E]
-"\u24A0" => "(e)"
-
-# Ƒ  [LATIN CAPITAL LETTER F WITH HOOK]
-"\u0191" => "F"
-
-# Ḟ  [LATIN CAPITAL LETTER F WITH DOT ABOVE]
-"\u1E1E" => "F"
-
-# Ⓕ  [CIRCLED LATIN CAPITAL LETTER F]
-"\u24BB" => "F"
-
-# ꜰ  [LATIN LETTER SMALL CAPITAL F]
-"\uA730" => "F"
-
-# Ꝼ  [LATIN CAPITAL LETTER INSULAR F]
-"\uA77B" => "F"
-
-# ꟻ  [LATIN EPIGRAPHIC LETTER REVERSED F]
-"\uA7FB" => "F"
-
-# F  [FULLWIDTH LATIN CAPITAL LETTER F]
-"\uFF26" => "F"
-
-# ƒ  [LATIN SMALL LETTER F WITH HOOK]
-"\u0192" => "f"
-
-# ᵮ  [LATIN SMALL LETTER F WITH MIDDLE TILDE]
-"\u1D6E" => "f"
-
-# ᶂ  [LATIN SMALL LETTER F WITH PALATAL HOOK]
-"\u1D82" => "f"
-
-# ḟ  [LATIN SMALL LETTER F WITH DOT ABOVE]
-"\u1E1F" => "f"
-
-# ẛ  [LATIN SMALL LETTER LONG S WITH DOT ABOVE]
-"\u1E9B" => "f"
-
-# ⓕ  [CIRCLED LATIN SMALL LETTER F]
-"\u24D5" => "f"
-
-# ꝼ  [LATIN SMALL LETTER INSULAR F]
-"\uA77C" => "f"
-
-# f  [FULLWIDTH LATIN SMALL LETTER F]
-"\uFF46" => "f"
-
-# ⒡  [PARENTHESIZED LATIN SMALL LETTER F]
-"\u24A1" => "(f)"
-
-# ff  [LATIN SMALL LIGATURE FF]
-"\uFB00" => "ff"
-
-# ffi  [LATIN SMALL LIGATURE FFI]
-"\uFB03" => "ffi"
-
-# ffl  [LATIN SMALL LIGATURE FFL]
-"\uFB04" => "ffl"
-
-# fi  [LATIN SMALL LIGATURE FI]
-"\uFB01" => "fi"
-
-# fl  [LATIN SMALL LIGATURE FL]
-"\uFB02" => "fl"
-
-# Ĝ  [LATIN CAPITAL LETTER G WITH CIRCUMFLEX]
-"\u011C" => "G"
-
-# Ğ  [LATIN CAPITAL LETTER G WITH BREVE]
-"\u011E" => "G"
-
-# Ġ  [LATIN CAPITAL LETTER G WITH DOT ABOVE]
-"\u0120" => "G"
-
-# Ģ  [LATIN CAPITAL LETTER G WITH CEDILLA]
-"\u0122" => "G"
-
-# Ɠ  [LATIN CAPITAL LETTER G WITH HOOK]
-"\u0193" => "G"
-
-# Ǥ  [LATIN CAPITAL LETTER G WITH STROKE]
-"\u01E4" => "G"
-
-# ǥ  [LATIN SMALL LETTER G WITH STROKE]
-"\u01E5" => "G"
-
-# Ǧ  [LATIN CAPITAL LETTER G WITH CARON]
-"\u01E6" => "G"
-
-# ǧ  [LATIN SMALL LETTER G WITH CARON]
-"\u01E7" => "G"
-
-# Ǵ  [LATIN CAPITAL LETTER G WITH ACUTE]
-"\u01F4" => "G"
-
-# ɢ  [LATIN LETTER SMALL CAPITAL G]
-"\u0262" => "G"
-
-# ʛ  [LATIN LETTER SMALL CAPITAL G WITH HOOK]
-"\u029B" => "G"
-
-# Ḡ  [LATIN CAPITAL LETTER G WITH MACRON]
-"\u1E20" => "G"
-
-# Ⓖ  [CIRCLED LATIN CAPITAL LETTER G]
-"\u24BC" => "G"
-
-# Ᵹ  [LATIN CAPITAL LETTER INSULAR G]
-"\uA77D" => "G"
-
-# Ꝿ  [LATIN CAPITAL LETTER TURNED INSULAR G]
-"\uA77E" => "G"
-
-# G  [FULLWIDTH LATIN CAPITAL LETTER G]
-"\uFF27" => "G"
-
-# ĝ  [LATIN SMALL LETTER G WITH CIRCUMFLEX]
-"\u011D" => "g"
-
-# ğ  [LATIN SMALL LETTER G WITH BREVE]
-"\u011F" => "g"
-
-# ġ  [LATIN SMALL LETTER G WITH DOT ABOVE]
-"\u0121" => "g"
-
-# ģ  [LATIN SMALL LETTER G WITH CEDILLA]
-"\u0123" => "g"
-
-# ǵ  [LATIN SMALL LETTER G WITH ACUTE]
-"\u01F5" => "g"
-
-# ɠ  [LATIN SMALL LETTER G WITH HOOK]
-"\u0260" => "g"
-
-# ɡ  [LATIN SMALL LETTER SCRIPT G]
-"\u0261" => "g"
-
-# ᵷ  [LATIN SMALL LETTER TURNED G]
-"\u1D77" => "g"
-
-# ᵹ  [LATIN SMALL LETTER INSULAR G]
-"\u1D79" => "g"
-
-# ᶃ  [LATIN SMALL LETTER G WITH PALATAL HOOK]
-"\u1D83" => "g"
-
-# ḡ  [LATIN SMALL LETTER G WITH MACRON]
-"\u1E21" => "g"
-
-# ⓖ  [CIRCLED LATIN SMALL LETTER G]
-"\u24D6" => "g"
-
-# ꝿ  [LATIN SMALL LETTER TURNED INSULAR G]
-"\uA77F" => "g"
-
-# g  [FULLWIDTH LATIN SMALL LETTER G]
-"\uFF47" => "g"
-
-# ⒢  [PARENTHESIZED LATIN SMALL LETTER G]
-"\u24A2" => "(g)"
-
-# Ĥ  [LATIN CAPITAL LETTER H WITH CIRCUMFLEX]
-"\u0124" => "H"
-
-# Ħ  [LATIN CAPITAL LETTER H WITH STROKE]
-"\u0126" => "H"
-
-# Ȟ  [LATIN CAPITAL LETTER H WITH CARON]
-"\u021E" => "H"
-
-# ʜ  [LATIN LETTER SMALL CAPITAL H]
-"\u029C" => "H"
-
-# Ḣ  [LATIN CAPITAL LETTER H WITH DOT ABOVE]
-"\u1E22" => "H"
-
-# Ḥ  [LATIN CAPITAL LETTER H WITH DOT BELOW]
-"\u1E24" => "H"
-
-# Ḧ  [LATIN CAPITAL LETTER H WITH DIAERESIS]
-"\u1E26" => "H"
-
-# Ḩ  [LATIN CAPITAL LETTER H WITH CEDILLA]
-"\u1E28" => "H"
-
-# Ḫ  [LATIN CAPITAL LETTER H WITH BREVE BELOW]
-"\u1E2A" => "H"
-
-# Ⓗ  [CIRCLED LATIN CAPITAL LETTER H]
-"\u24BD" => "H"
-
-# Ⱨ  [LATIN CAPITAL LETTER H WITH DESCENDER]
-"\u2C67" => "H"
-
-# Ⱶ  [LATIN CAPITAL LETTER HALF H]
-"\u2C75" => "H"
-
-# H  [FULLWIDTH LATIN CAPITAL LETTER H]
-"\uFF28" => "H"
-
-# ĥ  [LATIN SMALL LETTER H WITH CIRCUMFLEX]
-"\u0125" => "h"
-
-# ħ  [LATIN SMALL LETTER H WITH STROKE]
-"\u0127" => "h"
-
-# ȟ  [LATIN SMALL LETTER H WITH CARON]
-"\u021F" => "h"
-
-# ɥ  [LATIN SMALL LETTER TURNED H]
-"\u0265" => "h"
-
-# ɦ  [LATIN SMALL LETTER H WITH HOOK]
-"\u0266" => "h"
-
-# ʮ  [LATIN SMALL LETTER TURNED H WITH FISHHOOK]
-"\u02AE" => "h"
-
-# ʯ  [LATIN SMALL LETTER TURNED H WITH FISHHOOK AND TAIL]
-"\u02AF" => "h"
-
-# ḣ  [LATIN SMALL LETTER H WITH DOT ABOVE]
-"\u1E23" => "h"
-
-# ḥ  [LATIN SMALL LETTER H WITH DOT BELOW]
-"\u1E25" => "h"
-
-# ḧ  [LATIN SMALL LETTER H WITH DIAERESIS]
-"\u1E27" => "h"
-
-# ḩ  [LATIN SMALL LETTER H WITH CEDILLA]
-"\u1E29" => "h"
-
-# ḫ  [LATIN SMALL LETTER H WITH BREVE BELOW]
-"\u1E2B" => "h"
-
-# ẖ  [LATIN SMALL LETTER H WITH LINE BELOW]
-"\u1E96" => "h"
-
-# ⓗ  [CIRCLED LATIN SMALL LETTER H]
-"\u24D7" => "h"
-
-# ⱨ  [LATIN SMALL LETTER H WITH DESCENDER]
-"\u2C68" => "h"
-
-# ⱶ  [LATIN SMALL LETTER HALF H]
-"\u2C76" => "h"
-
-# h  [FULLWIDTH LATIN SMALL LETTER H]
-"\uFF48" => "h"
-
-# Ƕ  http://en.wikipedia.org/wiki/Hwair  [LATIN CAPITAL LETTER HWAIR]
-"\u01F6" => "HV"
-
-# ⒣  [PARENTHESIZED LATIN SMALL LETTER H]
-"\u24A3" => "(h)"
-
-# ƕ  [LATIN SMALL LETTER HV]
-"\u0195" => "hv"
-
-# Ì  [LATIN CAPITAL LETTER I WITH GRAVE]
-"\u00CC" => "I"
-
-# Í  [LATIN CAPITAL LETTER I WITH ACUTE]
-"\u00CD" => "I"
-
-# Î  [LATIN CAPITAL LETTER I WITH CIRCUMFLEX]
-"\u00CE" => "I"
-
-# Ï  [LATIN CAPITAL LETTER I WITH DIAERESIS]
-"\u00CF" => "I"
-
-# Ĩ  [LATIN CAPITAL LETTER I WITH TILDE]
-"\u0128" => "I"
-
-# Ī  [LATIN CAPITAL LETTER I WITH MACRON]
-"\u012A" => "I"
-
-# Ĭ  [LATIN CAPITAL LETTER I WITH BREVE]
-"\u012C" => "I"
-
-# Į  [LATIN CAPITAL LETTER I WITH OGONEK]
-"\u012E" => "I"
-
-# İ  [LATIN CAPITAL LETTER I WITH DOT ABOVE]
-"\u0130" => "I"
-
-# Ɩ  [LATIN CAPITAL LETTER IOTA]
-"\u0196" => "I"
-
-# Ɨ  [LATIN CAPITAL LETTER I WITH STROKE]
-"\u0197" => "I"
-
-# Ǐ  [LATIN CAPITAL LETTER I WITH CARON]
-"\u01CF" => "I"
-
-# Ȉ  [LATIN CAPITAL LETTER I WITH DOUBLE GRAVE]
-"\u0208" => "I"
-
-# Ȋ  [LATIN CAPITAL LETTER I WITH INVERTED BREVE]
-"\u020A" => "I"
-
-# ɪ  [LATIN LETTER SMALL CAPITAL I]
-"\u026A" => "I"
-
-# ᵻ  [LATIN SMALL CAPITAL LETTER I WITH STROKE]
-"\u1D7B" => "I"
-
-# Ḭ  [LATIN CAPITAL LETTER I WITH TILDE BELOW]
-"\u1E2C" => "I"
-
-# Ḯ  [LATIN CAPITAL LETTER I WITH DIAERESIS AND ACUTE]
-"\u1E2E" => "I"
-
-# Ỉ  [LATIN CAPITAL LETTER I WITH HOOK ABOVE]
-"\u1EC8" => "I"
-
-# Ị  [LATIN CAPITAL LETTER I WITH DOT BELOW]
-"\u1ECA" => "I"
-
-# Ⓘ  [CIRCLED LATIN CAPITAL LETTER I]
-"\u24BE" => "I"
-
-# ꟾ  [LATIN EPIGRAPHIC LETTER I LONGA]
-"\uA7FE" => "I"
-
-# I  [FULLWIDTH LATIN CAPITAL LETTER I]
-"\uFF29" => "I"
-
-# ì  [LATIN SMALL LETTER I WITH GRAVE]
-"\u00EC" => "i"
-
-# í  [LATIN SMALL LETTER I WITH ACUTE]
-"\u00ED" => "i"
-
-# î  [LATIN SMALL LETTER I WITH CIRCUMFLEX]
-"\u00EE" => "i"
-
-# ï  [LATIN SMALL LETTER I WITH DIAERESIS]
-"\u00EF" => "i"
-
-# ĩ  [LATIN SMALL LETTER I WITH TILDE]
-"\u0129" => "i"
-
-# ī  [LATIN SMALL LETTER I WITH MACRON]
-"\u012B" => "i"
-
-# ĭ  [LATIN SMALL LETTER I WITH BREVE]
-"\u012D" => "i"
-
-# į  [LATIN SMALL LETTER I WITH OGONEK]
-"\u012F" => "i"
-
-# ı  [LATIN SMALL LETTER DOTLESS I]
-"\u0131" => "i"
-
-# ǐ  [LATIN SMALL LETTER I WITH CARON]
-"\u01D0" => "i"
-
-# ȉ  [LATIN SMALL LETTER I WITH DOUBLE GRAVE]
-"\u0209" => "i"
-
-# ȋ  [LATIN SMALL LETTER I WITH INVERTED BREVE]
-"\u020B" => "i"
-
-# ɨ  [LATIN SMALL LETTER I WITH STROKE]
-"\u0268" => "i"
-
-# ᴉ  [LATIN SMALL LETTER TURNED I]
-"\u1D09" => "i"
-
-# ᵢ  [LATIN SUBSCRIPT SMALL LETTER I]
-"\u1D62" => "i"
-
-# ᵼ  [LATIN SMALL LETTER IOTA WITH STROKE]
-"\u1D7C" => "i"
-
-# ᶖ  [LATIN SMALL LETTER I WITH RETROFLEX HOOK]
-"\u1D96" => "i"
-
-# ḭ  [LATIN SMALL LETTER I WITH TILDE BELOW]
-"\u1E2D" => "i"
-
-# ḯ  [LATIN SMALL LETTER I WITH DIAERESIS AND ACUTE]
-"\u1E2F" => "i"
-
-# ỉ  [LATIN SMALL LETTER I WITH HOOK ABOVE]
-"\u1EC9" => "i"
-
-# ị  [LATIN SMALL LETTER I WITH DOT BELOW]
-"\u1ECB" => "i"
-
-# ⁱ  [SUPERSCRIPT LATIN SMALL LETTER I]
-"\u2071" => "i"
-
-# ⓘ  [CIRCLED LATIN SMALL LETTER I]
-"\u24D8" => "i"
-
-# i  [FULLWIDTH LATIN SMALL LETTER I]
-"\uFF49" => "i"
-
-# IJ  [LATIN CAPITAL LIGATURE IJ]
-"\u0132" => "IJ"
-
-# ⒤  [PARENTHESIZED LATIN SMALL LETTER I]
-"\u24A4" => "(i)"
-
-# ij  [LATIN SMALL LIGATURE IJ]
-"\u0133" => "ij"
-
-# Ĵ  [LATIN CAPITAL LETTER J WITH CIRCUMFLEX]
-"\u0134" => "J"
-
-# Ɉ  [LATIN CAPITAL LETTER J WITH STROKE]
-"\u0248" => "J"
-
-# ᴊ  [LATIN LETTER SMALL CAPITAL J]
-"\u1D0A" => "J"
-
-# Ⓙ  [CIRCLED LATIN CAPITAL LETTER J]
-"\u24BF" => "J"
-
-# J  [FULLWIDTH LATIN CAPITAL LETTER J]
-"\uFF2A" => "J"
-
-# ĵ  [LATIN SMALL LETTER J WITH CIRCUMFLEX]
-"\u0135" => "j"
-
-# ǰ  [LATIN SMALL LETTER J WITH CARON]
-"\u01F0" => "j"
-
-# ȷ  [LATIN SMALL LETTER DOTLESS J]
-"\u0237" => "j"
-
-# ɉ  [LATIN SMALL LETTER J WITH STROKE]
-"\u0249" => "j"
-
-# ɟ  [LATIN SMALL LETTER DOTLESS J WITH STROKE]
-"\u025F" => "j"
-
-# ʄ  [LATIN SMALL LETTER DOTLESS J WITH STROKE AND HOOK]
-"\u0284" => "j"
-
-# ʝ  [LATIN SMALL LETTER J WITH CROSSED-TAIL]
-"\u029D" => "j"
-
-# ⓙ  [CIRCLED LATIN SMALL LETTER J]
-"\u24D9" => "j"
-
-# ⱼ  [LATIN SUBSCRIPT SMALL LETTER J]
-"\u2C7C" => "j"
-
-# j  [FULLWIDTH LATIN SMALL LETTER J]
-"\uFF4A" => "j"
-
-# ⒥  [PARENTHESIZED LATIN SMALL LETTER J]
-"\u24A5" => "(j)"
-
-# Ķ  [LATIN CAPITAL LETTER K WITH CEDILLA]
-"\u0136" => "K"
-
-# Ƙ  [LATIN CAPITAL LETTER K WITH HOOK]
-"\u0198" => "K"
-
-# Ǩ  [LATIN CAPITAL LETTER K WITH CARON]
-"\u01E8" => "K"
-
-# ᴋ  [LATIN LETTER SMALL CAPITAL K]
-"\u1D0B" => "K"
-
-# Ḱ  [LATIN CAPITAL LETTER K WITH ACUTE]
-"\u1E30" => "K"
-
-# Ḳ  [LATIN CAPITAL LETTER K WITH DOT BELOW]
-"\u1E32" => "K"
-
-# Ḵ  [LATIN CAPITAL LETTER K WITH LINE BELOW]
-"\u1E34" => "K"
-
-# Ⓚ  [CIRCLED LATIN CAPITAL LETTER K]
-"\u24C0" => "K"
-
-# Ⱪ  [LATIN CAPITAL LETTER K WITH DESCENDER]
-"\u2C69" => "K"
-
-# Ꝁ  [LATIN CAPITAL LETTER K WITH STROKE]
-"\uA740" => "K"
-
-# Ꝃ  [LATIN CAPITAL LETTER K WITH DIAGONAL STROKE]
-"\uA742" => "K"
-
-# Ꝅ  [LATIN CAPITAL LETTER K WITH STROKE AND DIAGONAL STROKE]
-"\uA744" => "K"
-
-# K  [FULLWIDTH LATIN CAPITAL LETTER K]
-"\uFF2B" => "K"
-
-# ķ  [LATIN SMALL LETTER K WITH CEDILLA]
-"\u0137" => "k"
-
-# ƙ  [LATIN SMALL LETTER K WITH HOOK]
-"\u0199" => "k"
-
-# ǩ  [LATIN SMALL LETTER K WITH CARON]
-"\u01E9" => "k"
-
-# ʞ  [LATIN SMALL LETTER TURNED K]
-"\u029E" => "k"
-
-# ᶄ  [LATIN SMALL LETTER K WITH PALATAL HOOK]
-"\u1D84" => "k"
-
-# ḱ  [LATIN SMALL LETTER K WITH ACUTE]
-"\u1E31" => "k"
-
-# ḳ  [LATIN SMALL LETTER K WITH DOT BELOW]
-"\u1E33" => "k"
-
-# ḵ  [LATIN SMALL LETTER K WITH LINE BELOW]
-"\u1E35" => "k"
-
-# ⓚ  [CIRCLED LATIN SMALL LETTER K]
-"\u24DA" => "k"
-
-# ⱪ  [LATIN SMALL LETTER K WITH DESCENDER]
-"\u2C6A" => "k"
-
-# ꝁ  [LATIN SMALL LETTER K WITH STROKE]
-"\uA741" => "k"
-
-# ꝃ  [LATIN SMALL LETTER K WITH DIAGONAL STROKE]
-"\uA743" => "k"
-
-# ꝅ  [LATIN SMALL LETTER K WITH STROKE AND DIAGONAL STROKE]
-"\uA745" => "k"
-
-# k  [FULLWIDTH LATIN SMALL LETTER K]
-"\uFF4B" => "k"
-
-# ⒦  [PARENTHESIZED LATIN SMALL LETTER K]
-"\u24A6" => "(k)"
-
-# Ĺ  [LATIN CAPITAL LETTER L WITH ACUTE]
-"\u0139" => "L"
-
-# Ļ  [LATIN CAPITAL LETTER L WITH CEDILLA]
-"\u013B" => "L"
-
-# Ľ  [LATIN CAPITAL LETTER L WITH CARON]
-"\u013D" => "L"
-
-# Ŀ  [LATIN CAPITAL LETTER L WITH MIDDLE DOT]
-"\u013F" => "L"
-
-# Ł  [LATIN CAPITAL LETTER L WITH STROKE]
-"\u0141" => "L"
-
-# Ƚ  [LATIN CAPITAL LETTER L WITH BAR]
-"\u023D" => "L"
-
-# ʟ  [LATIN LETTER SMALL CAPITAL L]
-"\u029F" => "L"
-
-# ᴌ  [LATIN LETTER SMALL CAPITAL L WITH STROKE]
-"\u1D0C" => "L"
-
-# Ḷ  [LATIN CAPITAL LETTER L WITH DOT BELOW]
-"\u1E36" => "L"
-
-# Ḹ  [LATIN CAPITAL LETTER L WITH DOT BELOW AND MACRON]
-"\u1E38" => "L"
-
-# Ḻ  [LATIN CAPITAL LETTER L WITH LINE BELOW]
-"\u1E3A" => "L"
-
-# Ḽ  [LATIN CAPITAL LETTER L WITH CIRCUMFLEX BELOW]
-"\u1E3C" => "L"
-
-# Ⓛ  [CIRCLED LATIN CAPITAL LETTER L]
-"\u24C1" => "L"
-
-# Ⱡ  [LATIN CAPITAL LETTER L WITH DOUBLE BAR]
-"\u2C60" => "L"
-
-# Ɫ  [LATIN CAPITAL LETTER L WITH MIDDLE TILDE]
-"\u2C62" => "L"
-
-# Ꝇ  [LATIN CAPITAL LETTER BROKEN L]
-"\uA746" => "L"
-
-# Ꝉ  [LATIN CAPITAL LETTER L WITH HIGH STROKE]
-"\uA748" => "L"
-
-# Ꞁ  [LATIN CAPITAL LETTER TURNED L]
-"\uA780" => "L"
-
-# L  [FULLWIDTH LATIN CAPITAL LETTER L]
-"\uFF2C" => "L"
-
-# ĺ  [LATIN SMALL LETTER L WITH ACUTE]
-"\u013A" => "l"
-
-# ļ  [LATIN SMALL LETTER L WITH CEDILLA]
-"\u013C" => "l"
-
-# ľ  [LATIN SMALL LETTER L WITH CARON]
-"\u013E" => "l"
-
-# ŀ  [LATIN SMALL LETTER L WITH MIDDLE DOT]
-"\u0140" => "l"
-
-# ł  [LATIN SMALL LETTER L WITH STROKE]
-"\u0142" => "l"
-
-# ƚ  [LATIN SMALL LETTER L WITH BAR]
-"\u019A" => "l"
-
-# ȴ  [LATIN SMALL LETTER L WITH CURL]
-"\u0234" => "l"
-
-# ɫ  [LATIN SMALL LETTER L WITH MIDDLE TILDE]
-"\u026B" => "l"
-
-# ɬ  [LATIN SMALL LETTER L WITH BELT]
-"\u026C" => "l"
-
-# ɭ  [LATIN SMALL LETTER L WITH RETROFLEX HOOK]
-"\u026D" => "l"
-
-# ᶅ  [LATIN SMALL LETTER L WITH PALATAL HOOK]
-"\u1D85" => "l"
-
-# ḷ  [LATIN SMALL LETTER L WITH DOT BELOW]
-"\u1E37" => "l"
-
-# ḹ  [LATIN SMALL LETTER L WITH DOT BELOW AND MACRON]
-"\u1E39" => "l"
-
-# ḻ  [LATIN SMALL LETTER L WITH LINE BELOW]
-"\u1E3B" => "l"
-
-# ḽ  [LATIN SMALL LETTER L WITH CIRCUMFLEX BELOW]
-"\u1E3D" => "l"
-
-# ⓛ  [CIRCLED LATIN SMALL LETTER L]
-"\u24DB" => "l"
-
-# ⱡ  [LATIN SMALL LETTER L WITH DOUBLE BAR]
-"\u2C61" => "l"
-
-# ꝇ  [LATIN SMALL LETTER BROKEN L]
-"\uA747" => "l"
-
-# ꝉ  [LATIN SMALL LETTER L WITH HIGH STROKE]
-"\uA749" => "l"
-
-# ꞁ  [LATIN SMALL LETTER TURNED L]
-"\uA781" => "l"
-
-# l  [FULLWIDTH LATIN SMALL LETTER L]
-"\uFF4C" => "l"
-
-# LJ  [LATIN CAPITAL LETTER LJ]
-"\u01C7" => "LJ"
-
-# Ỻ  [LATIN CAPITAL LETTER MIDDLE-WELSH LL]
-"\u1EFA" => "LL"
-
-# Lj  [LATIN CAPITAL LETTER L WITH SMALL LETTER J]
-"\u01C8" => "Lj"
-
-# ⒧  [PARENTHESIZED LATIN SMALL LETTER L]
-"\u24A7" => "(l)"
-
-# lj  [LATIN SMALL LETTER LJ]
-"\u01C9" => "lj"
-
-# ỻ  [LATIN SMALL LETTER MIDDLE-WELSH LL]
-"\u1EFB" => "ll"
-
-# ʪ  [LATIN SMALL LETTER LS DIGRAPH]
-"\u02AA" => "ls"
-
-# ʫ  [LATIN SMALL LETTER LZ DIGRAPH]
-"\u02AB" => "lz"
-
-# Ɯ  [LATIN CAPITAL LETTER TURNED M]
-"\u019C" => "M"
-
-# ᴍ  [LATIN LETTER SMALL CAPITAL M]
-"\u1D0D" => "M"
-
-# Ḿ  [LATIN CAPITAL LETTER M WITH ACUTE]
-"\u1E3E" => "M"
-
-# Ṁ  [LATIN CAPITAL LETTER M WITH DOT ABOVE]
-"\u1E40" => "M"
-
-# Ṃ  [LATIN CAPITAL LETTER M WITH DOT BELOW]
-"\u1E42" => "M"
-
-# Ⓜ  [CIRCLED LATIN CAPITAL LETTER M]
-"\u24C2" => "M"
-
-# Ɱ  [LATIN CAPITAL LETTER M WITH HOOK]
-"\u2C6E" => "M"
-
-# ꟽ  [LATIN EPIGRAPHIC LETTER INVERTED M]
-"\uA7FD" => "M"
-
-# ꟿ  [LATIN EPIGRAPHIC LETTER ARCHAIC M]
-"\uA7FF" => "M"
-
-# M  [FULLWIDTH LATIN CAPITAL LETTER M]
-"\uFF2D" => "M"
-
-# ɯ  [LATIN SMALL LETTER TURNED M]
-"\u026F" => "m"
-
-# ɰ  [LATIN SMALL LETTER TURNED M WITH LONG LEG]
-"\u0270" => "m"
-
-# ɱ  [LATIN SMALL LETTER M WITH HOOK]
-"\u0271" => "m"
-
-# ᵯ  [LATIN SMALL LETTER M WITH MIDDLE TILDE]
-"\u1D6F" => "m"
-
-# ᶆ  [LATIN SMALL LETTER M WITH PALATAL HOOK]
-"\u1D86" => "m"
-
-# ḿ  [LATIN SMALL LETTER M WITH ACUTE]
-"\u1E3F" => "m"
-
-# ṁ  [LATIN SMALL LETTER M WITH DOT ABOVE]
-"\u1E41" => "m"
-
-# ṃ  [LATIN SMALL LETTER M WITH DOT BELOW]
-"\u1E43" => "m"
-
-# ⓜ  [CIRCLED LATIN SMALL LETTER M]
-"\u24DC" => "m"
-
-# m  [FULLWIDTH LATIN SMALL LETTER M]
-"\uFF4D" => "m"
-
-# ⒨  [PARENTHESIZED LATIN SMALL LETTER M]
-"\u24A8" => "(m)"
-
-# Ñ  [LATIN CAPITAL LETTER N WITH TILDE]
-"\u00D1" => "N"
-
-# Ń  [LATIN CAPITAL LETTER N WITH ACUTE]
-"\u0143" => "N"
-
-# Ņ  [LATIN CAPITAL LETTER N WITH CEDILLA]
-"\u0145" => "N"
-
-# Ň  [LATIN CAPITAL LETTER N WITH CARON]
-"\u0147" => "N"
-
-# Ŋ  http://en.wikipedia.org/wiki/Eng_(letter)  [LATIN CAPITAL LETTER ENG]
-"\u014A" => "N"
-
-# Ɲ  [LATIN CAPITAL LETTER N WITH LEFT HOOK]
-"\u019D" => "N"
-
-# Ǹ  [LATIN CAPITAL LETTER N WITH GRAVE]
-"\u01F8" => "N"
-
-# Ƞ  [LATIN CAPITAL LETTER N WITH LONG RIGHT LEG]
-"\u0220" => "N"
-
-# ɴ  [LATIN LETTER SMALL CAPITAL N]
-"\u0274" => "N"
-
-# ᴎ  [LATIN LETTER SMALL CAPITAL REVERSED N]
-"\u1D0E" => "N"
-
-# Ṅ  [LATIN CAPITAL LETTER N WITH DOT ABOVE]
-"\u1E44" => "N"
-
-# Ṇ  [LATIN CAPITAL LETTER N WITH DOT BELOW]
-"\u1E46" => "N"
-
-# Ṉ  [LATIN CAPITAL LETTER N WITH LINE BELOW]
-"\u1E48" => "N"
-
-# Ṋ  [LATIN CAPITAL LETTER N WITH CIRCUMFLEX BELOW]
-"\u1E4A" => "N"
-
-# Ⓝ  [CIRCLED LATIN CAPITAL LETTER N]
-"\u24C3" => "N"
-
-# N  [FULLWIDTH LATIN CAPITAL LETTER N]
-"\uFF2E" => "N"
-
-# ñ  [LATIN SMALL LETTER N WITH TILDE]
-"\u00F1" => "n"
-
-# ń  [LATIN SMALL LETTER N WITH ACUTE]
-"\u0144" => "n"
-
-# ņ  [LATIN SMALL LETTER N WITH CEDILLA]
-"\u0146" => "n"
-
-# ň  [LATIN SMALL LETTER N WITH CARON]
-"\u0148" => "n"
-
-# ʼn  [LATIN SMALL LETTER N PRECEDED BY APOSTROPHE]
-"\u0149" => "n"
-
-# ŋ  http://en.wikipedia.org/wiki/Eng_(letter)  [LATIN SMALL LETTER ENG]
-"\u014B" => "n"
-
-# ƞ  [LATIN SMALL LETTER N WITH LONG RIGHT LEG]
-"\u019E" => "n"
-
-# ǹ  [LATIN SMALL LETTER N WITH GRAVE]
-"\u01F9" => "n"
-
-# ȵ  [LATIN SMALL LETTER N WITH CURL]
-"\u0235" => "n"
-
-# ɲ  [LATIN SMALL LETTER N WITH LEFT HOOK]
-"\u0272" => "n"
-
-# ɳ  [LATIN SMALL LETTER N WITH RETROFLEX HOOK]
-"\u0273" => "n"
-
-# ᵰ  [LATIN SMALL LETTER N WITH MIDDLE TILDE]
-"\u1D70" => "n"
-
-# ᶇ  [LATIN SMALL LETTER N WITH PALATAL HOOK]
-"\u1D87" => "n"
-
-# ṅ  [LATIN SMALL LETTER N WITH DOT ABOVE]
-"\u1E45" => "n"
-
-# ṇ  [LATIN SMALL LETTER N WITH DOT BELOW]
-"\u1E47" => "n"
-
-# ṉ  [LATIN SMALL LETTER N WITH LINE BELOW]
-"\u1E49" => "n"
-
-# ṋ  [LATIN SMALL LETTER N WITH CIRCUMFLEX BELOW]
-"\u1E4B" => "n"
-
-# ⁿ  [SUPERSCRIPT LATIN SMALL LETTER N]
-"\u207F" => "n"
-
-# ⓝ  [CIRCLED LATIN SMALL LETTER N]
-"\u24DD" => "n"
-
-# n  [FULLWIDTH LATIN SMALL LETTER N]
-"\uFF4E" => "n"
-
-# NJ  [LATIN CAPITAL LETTER NJ]
-"\u01CA" => "NJ"
-
-# Nj  [LATIN CAPITAL LETTER N WITH SMALL LETTER J]
-"\u01CB" => "Nj"
-
-# ⒩  [PARENTHESIZED LATIN SMALL LETTER N]
-"\u24A9" => "(n)"
-
-# nj  [LATIN SMALL LETTER NJ]
-"\u01CC" => "nj"
-
-# Ò  [LATIN CAPITAL LETTER O WITH GRAVE]
-"\u00D2" => "O"
-
-# Ó  [LATIN CAPITAL LETTER O WITH ACUTE]
-"\u00D3" => "O"
-
-# Ô  [LATIN CAPITAL LETTER O WITH CIRCUMFLEX]
-"\u00D4" => "O"
-
-# Õ  [LATIN CAPITAL LETTER O WITH TILDE]
-"\u00D5" => "O"
-
-# Ö  [LATIN CAPITAL LETTER O WITH DIAERESIS]
-"\u00D6" => "O"
-
-# Ø  [LATIN CAPITAL LETTER O WITH STROKE]
-"\u00D8" => "O"
-
-# Ō  [LATIN CAPITAL LETTER O WITH MACRON]
-"\u014C" => "O"
-
-# Ŏ  [LATIN CAPITAL LETTER O WITH BREVE]
-"\u014E" => "O"
-
-# Ő  [LATIN CAPITAL LETTER O WITH DOUBLE ACUTE]
-"\u0150" => "O"
-
-# Ɔ  [LATIN CAPITAL LETTER OPEN O]
-"\u0186" => "O"
-
-# Ɵ  [LATIN CAPITAL LETTER O WITH MIDDLE TILDE]
-"\u019F" => "O"
-
-# Ơ  [LATIN CAPITAL LETTER O WITH HORN]
-"\u01A0" => "O"
-
-# Ǒ  [LATIN CAPITAL LETTER O WITH CARON]
-"\u01D1" => "O"
-
-# Ǫ  [LATIN CAPITAL LETTER O WITH OGONEK]
-"\u01EA" => "O"
-
-# Ǭ  [LATIN CAPITAL LETTER O WITH OGONEK AND MACRON]
-"\u01EC" => "O"
-
-# Ǿ  [LATIN CAPITAL LETTER O WITH STROKE AND ACUTE]
-"\u01FE" => "O"
-
-# Ȍ  [LATIN CAPITAL LETTER O WITH DOUBLE GRAVE]
-"\u020C" => "O"
-
-# Ȏ  [LATIN CAPITAL LETTER O WITH INVERTED BREVE]
-"\u020E" => "O"
-
-# Ȫ  [LATIN CAPITAL LETTER O WITH DIAERESIS AND MACRON]
-"\u022A" => "O"
-
-# Ȭ  [LATIN CAPITAL LETTER O WITH TILDE AND MACRON]
-"\u022C" => "O"
-
-# Ȯ  [LATIN CAPITAL LETTER O WITH DOT ABOVE]
-"\u022E" => "O"
-
-# Ȱ  [LATIN CAPITAL LETTER O WITH DOT ABOVE AND MACRON]
-"\u0230" => "O"
-
-# ᴏ  [LATIN LETTER SMALL CAPITAL O]
-"\u1D0F" => "O"
-
-# ᴐ  [LATIN LETTER SMALL CAPITAL OPEN O]
-"\u1D10" => "O"
-
-# Ṍ  [LATIN CAPITAL LETTER O WITH TILDE AND ACUTE]
-"\u1E4C" => "O"
-
-# Ṏ  [LATIN CAPITAL LETTER O WITH TILDE AND DIAERESIS]
-"\u1E4E" => "O"
-
-# Ṑ  [LATIN CAPITAL LETTER O WITH MACRON AND GRAVE]
-"\u1E50" => "O"
-
-# Ṓ  [LATIN CAPITAL LETTER O WITH MACRON AND ACUTE]
-"\u1E52" => "O"
-
-# Ọ  [LATIN CAPITAL LETTER O WITH DOT BELOW]
-"\u1ECC" => "O"
-
-# Ỏ  [LATIN CAPITAL LETTER O WITH HOOK ABOVE]
-"\u1ECE" => "O"
-
-# Ố  [LATIN CAPITAL LETTER O WITH CIRCUMFLEX AND ACUTE]
-"\u1ED0" => "O"
-
-# Ồ  [LATIN CAPITAL LETTER O WITH CIRCUMFLEX AND GRAVE]
-"\u1ED2" => "O"
-
-# Ổ  [LATIN CAPITAL LETTER O WITH CIRCUMFLEX AND HOOK ABOVE]
-"\u1ED4" => "O"
-
-# Ỗ  [LATIN CAPITAL LETTER O WITH CIRCUMFLEX AND TILDE]
-"\u1ED6" => "O"
-
-# Ộ  [LATIN CAPITAL LETTER O WITH CIRCUMFLEX AND DOT BELOW]
-"\u1ED8" => "O"
-
-# Ớ  [LATIN CAPITAL LETTER O WITH HORN AND ACUTE]
-"\u1EDA" => "O"
-
-# Ờ  [LATIN CAPITAL LETTER O WITH HORN AND GRAVE]
-"\u1EDC" => "O"
-
-# Ở  [LATIN CAPITAL LETTER O WITH HORN AND HOOK ABOVE]
-"\u1EDE" => "O"
-
-# Ỡ  [LATIN CAPITAL LETTER O WITH HORN AND TILDE]
-"\u1EE0" => "O"
-
-# Ợ  [LATIN CAPITAL LETTER O WITH HORN AND DOT BELOW]
-"\u1EE2" => "O"
-
-# Ⓞ  [CIRCLED LATIN CAPITAL LETTER O]
-"\u24C4" => "O"
-
-# Ꝋ  [LATIN CAPITAL LETTER O WITH LONG STROKE OVERLAY]
-"\uA74A" => "O"
-
-# Ꝍ  [LATIN CAPITAL LETTER O WITH LOOP]
-"\uA74C" => "O"
-
-# O  [FULLWIDTH LATIN CAPITAL LETTER O]
-"\uFF2F" => "O"
-
-# ò  [LATIN SMALL LETTER O WITH GRAVE]
-"\u00F2" => "o"
-
-# ó  [LATIN SMALL LETTER O WITH ACUTE]
-"\u00F3" => "o"
-
-# ô  [LATIN SMALL LETTER O WITH CIRCUMFLEX]
-"\u00F4" => "o"
-
-# õ  [LATIN SMALL LETTER O WITH TILDE]
-"\u00F5" => "o"
-
-# ö  [LATIN SMALL LETTER O WITH DIAERESIS]
-"\u00F6" => "o"
-
-# ø  [LATIN SMALL LETTER O WITH STROKE]
-"\u00F8" => "o"
-
-# ō  [LATIN SMALL LETTER O WITH MACRON]
-"\u014D" => "o"
-
-# ŏ  [LATIN SMALL LETTER O WITH BREVE]
-"\u014F" => "o"
-
-# ő  [LATIN SMALL LETTER O WITH DOUBLE ACUTE]
-"\u0151" => "o"
-
-# ơ  [LATIN SMALL LETTER O WITH HORN]
-"\u01A1" => "o"
-
-# ǒ  [LATIN SMALL LETTER O WITH CARON]
-"\u01D2" => "o"
-
-# ǫ  [LATIN SMALL LETTER O WITH OGONEK]
-"\u01EB" => "o"
-
-# ǭ  [LATIN SMALL LETTER O WITH OGONEK AND MACRON]
-"\u01ED" => "o"
-
-# ǿ  [LATIN SMALL LETTER O WITH STROKE AND ACUTE]
-"\u01FF" => "o"
-
-# ȍ  [LATIN SMALL LETTER O WITH DOUBLE GRAVE]
-"\u020D" => "o"
-
-# ȏ  [LATIN SMALL LETTER O WITH INVERTED BREVE]
-"\u020F" => "o"
-
-# ȫ  [LATIN SMALL LETTER O WITH DIAERESIS AND MACRON]
-"\u022B" => "o"
-
-# ȭ  [LATIN SMALL LETTER O WITH TILDE AND MACRON]
-"\u022D" => "o"
-
-# ȯ  [LATIN SMALL LETTER O WITH DOT ABOVE]
-"\u022F" => "o"
-
-# ȱ  [LATIN SMALL LETTER O WITH DOT ABOVE AND MACRON]
-"\u0231" => "o"
-
-# ɔ  [LATIN SMALL LETTER OPEN O]
-"\u0254" => "o"
-
-# ɵ  [LATIN SMALL LETTER BARRED O]
-"\u0275" => "o"
-
-# ᴖ  [LATIN SMALL LETTER TOP HALF O]
-"\u1D16" => "o"
-
-# ᴗ  [LATIN SMALL LETTER BOTTOM HALF O]
-"\u1D17" => "o"
-
-# ᶗ  [LATIN SMALL LETTER OPEN O WITH RETROFLEX HOOK]
-"\u1D97" => "o"
-
-# ṍ  [LATIN SMALL LETTER O WITH TILDE AND ACUTE]
-"\u1E4D" => "o"
-
-# ṏ  [LATIN SMALL LETTER O WITH TILDE AND DIAERESIS]
-"\u1E4F" => "o"
-
-# ṑ  [LATIN SMALL LETTER O WITH MACRON AND GRAVE]
-"\u1E51" => "o"
-
-# ṓ  [LATIN SMALL LETTER O WITH MACRON AND ACUTE]
-"\u1E53" => "o"
-
-# ọ  [LATIN SMALL LETTER O WITH DOT BELOW]
-"\u1ECD" => "o"
-
-# ỏ  [LATIN SMALL LETTER O WITH HOOK ABOVE]
-"\u1ECF" => "o"
-
-# ố  [LATIN SMALL LETTER O WITH CIRCUMFLEX AND ACUTE]
-"\u1ED1" => "o"
-
-# ồ  [LATIN SMALL LETTER O WITH CIRCUMFLEX AND GRAVE]
-"\u1ED3" => "o"
-
-# ổ  [LATIN SMALL LETTER O WITH CIRCUMFLEX AND HOOK ABOVE]
-"\u1ED5" => "o"
-
-# ỗ  [LATIN SMALL LETTER O WITH CIRCUMFLEX AND TILDE]
-"\u1ED7" => "o"
-
-# ộ  [LATIN SMALL LETTER O WITH CIRCUMFLEX AND DOT BELOW]
-"\u1ED9" => "o"
-
-# ớ  [LATIN SMALL LETTER O WITH HORN AND ACUTE]
-"\u1EDB" => "o"
-
-# ờ  [LATIN SMALL LETTER O WITH HORN AND GRAVE]
-"\u1EDD" => "o"
-
-# ở  [LATIN SMALL LETTER O WITH HORN AND HOOK ABOVE]
-"\u1EDF" => "o"
-
-# ỡ  [LATIN SMALL LETTER O WITH HORN AND TILDE]
-"\u1EE1" => "o"
-
-# ợ  [LATIN SMALL LETTER O WITH HORN AND DOT BELOW]
-"\u1EE3" => "o"
-
-# ₒ  [LATIN SUBSCRIPT SMALL LETTER O]
-"\u2092" => "o"
-
-# ⓞ  [CIRCLED LATIN SMALL LETTER O]
-"\u24DE" => "o"
-
-# ⱺ  [LATIN SMALL LETTER O WITH LOW RING INSIDE]
-"\u2C7A" => "o"
-
-# ꝋ  [LATIN SMALL LETTER O WITH LONG STROKE OVERLAY]
-"\uA74B" => "o"
-
-# ꝍ  [LATIN SMALL LETTER O WITH LOOP]
-"\uA74D" => "o"
-
-# o  [FULLWIDTH LATIN SMALL LETTER O]
-"\uFF4F" => "o"
-
-# Œ  [LATIN CAPITAL LIGATURE OE]
-"\u0152" => "OE"
-
-# ɶ  [LATIN LETTER SMALL CAPITAL OE]
-"\u0276" => "OE"
-
-# Ꝏ  [LATIN CAPITAL LETTER OO]
-"\uA74E" => "OO"
-
-# Ȣ  http://en.wikipedia.org/wiki/OU  [LATIN CAPITAL LETTER OU]
-"\u0222" => "OU"
-
-# ᴕ  [LATIN LETTER SMALL CAPITAL OU]
-"\u1D15" => "OU"
-
-# ⒪  [PARENTHESIZED LATIN SMALL LETTER O]
-"\u24AA" => "(o)"
-
-# œ  [LATIN SMALL LIGATURE OE]
-"\u0153" => "oe"
-
-# ᴔ  [LATIN SMALL LETTER TURNED OE]
-"\u1D14" => "oe"
-
-# ꝏ  [LATIN SMALL LETTER OO]
-"\uA74F" => "oo"
-
-# ȣ  http://en.wikipedia.org/wiki/OU  [LATIN SMALL LETTER OU]
-"\u0223" => "ou"
-
-# Ƥ  [LATIN CAPITAL LETTER P WITH HOOK]
-"\u01A4" => "P"
-
-# ᴘ  [LATIN LETTER SMALL CAPITAL P]
-"\u1D18" => "P"
-
-# Ṕ  [LATIN CAPITAL LETTER P WITH ACUTE]
-"\u1E54" => "P"
-
-# Ṗ  [LATIN CAPITAL LETTER P WITH DOT ABOVE]
-"\u1E56" => "P"
-
-# Ⓟ  [CIRCLED LATIN CAPITAL LETTER P]
-"\u24C5" => "P"
-
-# Ᵽ  [LATIN CAPITAL LETTER P WITH STROKE]
-"\u2C63" => "P"
-
-# Ꝑ  [LATIN CAPITAL LETTER P WITH STROKE THROUGH DESCENDER]
-"\uA750" => "P"
-
-# Ꝓ  [LATIN CAPITAL LETTER P WITH FLOURISH]
-"\uA752" => "P"
-
-# Ꝕ  [LATIN CAPITAL LETTER P WITH SQUIRREL TAIL]
-"\uA754" => "P"
-
-# P  [FULLWIDTH LATIN CAPITAL LETTER P]
-"\uFF30" => "P"
-
-# ƥ  [LATIN SMALL LETTER P WITH HOOK]
-"\u01A5" => "p"
-
-# ᵱ  [LATIN SMALL LETTER P WITH MIDDLE TILDE]
-"\u1D71" => "p"
-
-# ᵽ  [LATIN SMALL LETTER P WITH STROKE]
-"\u1D7D" => "p"
-
-# ᶈ  [LATIN SMALL LETTER P WITH PALATAL HOOK]
-"\u1D88" => "p"
-
-# ṕ  [LATIN SMALL LETTER P WITH ACUTE]
-"\u1E55" => "p"
-
-# ṗ  [LATIN SMALL LETTER P WITH DOT ABOVE]
-"\u1E57" => "p"
-
-# ⓟ  [CIRCLED LATIN SMALL LETTER P]
-"\u24DF" => "p"
-
-# ꝑ  [LATIN SMALL LETTER P WITH STROKE THROUGH DESCENDER]
-"\uA751" => "p"
-
-# ꝓ  [LATIN SMALL LETTER P WITH FLOURISH]
-"\uA753" => "p"
-
-# ꝕ  [LATIN SMALL LETTER P WITH SQUIRREL TAIL]
-"\uA755" => "p"
-
-# ꟼ  [LATIN EPIGRAPHIC LETTER REVERSED P]
-"\uA7FC" => "p"
-
-# p  [FULLWIDTH LATIN SMALL LETTER P]
-"\uFF50" => "p"
-
-# ⒫  [PARENTHESIZED LATIN SMALL LETTER P]
-"\u24AB" => "(p)"
-
-# Ɋ  [LATIN CAPITAL LETTER SMALL Q WITH HOOK TAIL]
-"\u024A" => "Q"
-
-# Ⓠ  [CIRCLED LATIN CAPITAL LETTER Q]
-"\u24C6" => "Q"
-
-# Ꝗ  [LATIN CAPITAL LETTER Q WITH STROKE THROUGH DESCENDER]
-"\uA756" => "Q"
-
-# Ꝙ  [LATIN CAPITAL LETTER Q WITH DIAGONAL STROKE]
-"\uA758" => "Q"
-
-# Q  [FULLWIDTH LATIN CAPITAL LETTER Q]
-"\uFF31" => "Q"
-
-# ĸ  http://en.wikipedia.org/wiki/Kra_(letter)  [LATIN SMALL LETTER KRA]
-"\u0138" => "q"
-
-# ɋ  [LATIN SMALL LETTER Q WITH HOOK TAIL]
-"\u024B" => "q"
-
-# ʠ  [LATIN SMALL LETTER Q WITH HOOK]
-"\u02A0" => "q"
-
-# ⓠ  [CIRCLED LATIN SMALL LETTER Q]
-"\u24E0" => "q"
-
-# ꝗ  [LATIN SMALL LETTER Q WITH STROKE THROUGH DESCENDER]
-"\uA757" => "q"
-
-# ꝙ  [LATIN SMALL LETTER Q WITH DIAGONAL STROKE]
-"\uA759" => "q"
-
-# q  [FULLWIDTH LATIN SMALL LETTER Q]
-"\uFF51" => "q"
-
-# ⒬  [PARENTHESIZED LATIN SMALL LETTER Q]
-"\u24AC" => "(q)"
-
-# ȹ  [LATIN SMALL LETTER QP DIGRAPH]
-"\u0239" => "qp"
-
-# Ŕ  [LATIN CAPITAL LETTER R WITH ACUTE]
-"\u0154" => "R"
-
-# Ŗ  [LATIN CAPITAL LETTER R WITH CEDILLA]
-"\u0156" => "R"
-
-# Ř  [LATIN CAPITAL LETTER R WITH CARON]
-"\u0158" => "R"
-
-# Ȑ  [LATIN CAPITAL LETTER R WITH DOUBLE GRAVE]
-"\u0210" => "R"
-
-# Ȓ  [LATIN CAPITAL LETTER R WITH INVERTED BREVE]
-"\u0212" => "R"
-
-# Ɍ  [LATIN CAPITAL LETTER R WITH STROKE]
-"\u024C" => "R"
-
-# ʀ  [LATIN LETTER SMALL CAPITAL R]
-"\u0280" => "R"
-
-# ʁ  [LATIN LETTER SMALL CAPITAL INVERTED R]
-"\u0281" => "R"
-
-# ᴙ  [LATIN LETTER SMALL CAPITAL REVERSED R]
-"\u1D19" => "R"
-
-# ᴚ  [LATIN LETTER SMALL CAPITAL TURNED R]
-"\u1D1A" => "R"
-
-# Ṙ  [LATIN CAPITAL LETTER R WITH DOT ABOVE]
-"\u1E58" => "R"
-
-# Ṛ  [LATIN CAPITAL LETTER R WITH DOT BELOW]
-"\u1E5A" => "R"
-
-# Ṝ  [LATIN CAPITAL LETTER R WITH DOT BELOW AND MACRON]
-"\u1E5C" => "R"
-
-# Ṟ  [LATIN CAPITAL LETTER R WITH LINE BELOW]
-"\u1E5E" => "R"
-
-# Ⓡ  [CIRCLED LATIN CAPITAL LETTER R]
-"\u24C7" => "R"
-
-# Ɽ  [LATIN CAPITAL LETTER R WITH TAIL]
-"\u2C64" => "R"
-
-# Ꝛ  [LATIN CAPITAL LETTER R ROTUNDA]
-"\uA75A" => "R"
-
-# Ꞃ  [LATIN CAPITAL LETTER INSULAR R]
-"\uA782" => "R"
-
-# R  [FULLWIDTH LATIN CAPITAL LETTER R]
-"\uFF32" => "R"
-
-# ŕ  [LATIN SMALL LETTER R WITH ACUTE]
-"\u0155" => "r"
-
-# ŗ  [LATIN SMALL LETTER R WITH CEDILLA]
-"\u0157" => "r"
-
-# ř  [LATIN SMALL LETTER R WITH CARON]
-"\u0159" => "r"
-
-# ȑ  [LATIN SMALL LETTER R WITH DOUBLE GRAVE]
-"\u0211" => "r"
-
-# ȓ  [LATIN SMALL LETTER R WITH INVERTED BREVE]
-"\u0213" => "r"
-
-# ɍ  [LATIN SMALL LETTER R WITH STROKE]
-"\u024D" => "r"
-
-# ɼ  [LATIN SMALL LETTER R WITH LONG LEG]
-"\u027C" => "r"
-
-# ɽ  [LATIN SMALL LETTER R WITH TAIL]
-"\u027D" => "r"
-
-# ɾ  [LATIN SMALL LETTER R WITH FISHHOOK]
-"\u027E" => "r"
-
-# ɿ  [LATIN SMALL LETTER REVERSED R WITH FISHHOOK]
-"\u027F" => "r"
-
-# ᵣ  [LATIN SUBSCRIPT SMALL LETTER R]
-"\u1D63" => "r"
-
-# ᵲ  [LATIN SMALL LETTER R WITH MIDDLE TILDE]
-"\u1D72" => "r"
-
-# ᵳ  [LATIN SMALL LETTER R WITH FISHHOOK AND MIDDLE TILDE]
-"\u1D73" => "r"
-
-# ᶉ  [LATIN SMALL LETTER R WITH PALATAL HOOK]
-"\u1D89" => "r"
-
-# ṙ  [LATIN SMALL LETTER R WITH DOT ABOVE]
-"\u1E59" => "r"
-
-# ṛ  [LATIN SMALL LETTER R WITH DOT BELOW]
-"\u1E5B" => "r"
-
-# ṝ  [LATIN SMALL LETTER R WITH DOT BELOW AND MACRON]
-"\u1E5D" => "r"
-
-# ṟ  [LATIN SMALL LETTER R WITH LINE BELOW]
-"\u1E5F" => "r"
-
-# ⓡ  [CIRCLED LATIN SMALL LETTER R]
-"\u24E1" => "r"
-
-# ꝛ  [LATIN SMALL LETTER R ROTUNDA]
-"\uA75B" => "r"
-
-# ꞃ  [LATIN SMALL LETTER INSULAR R]
-"\uA783" => "r"
-
-# r  [FULLWIDTH LATIN SMALL LETTER R]
-"\uFF52" => "r"
-
-# ⒭  [PARENTHESIZED LATIN SMALL LETTER R]
-"\u24AD" => "(r)"
-
-# Ś  [LATIN CAPITAL LETTER S WITH ACUTE]
-"\u015A" => "S"
-
-# Ŝ  [LATIN CAPITAL LETTER S WITH CIRCUMFLEX]
-"\u015C" => "S"
-
-# Ş  [LATIN CAPITAL LETTER S WITH CEDILLA]
-"\u015E" => "S"
-
-# Š  [LATIN CAPITAL LETTER S WITH CARON]
-"\u0160" => "S"
-
-# Ș  [LATIN CAPITAL LETTER S WITH COMMA BELOW]
-"\u0218" => "S"
-
-# Ṡ  [LATIN CAPITAL LETTER S WITH DOT ABOVE]
-"\u1E60" => "S"
-
-# Ṣ  [LATIN CAPITAL LETTER S WITH DOT BELOW]
-"\u1E62" => "S"
-
-# Ṥ  [LATIN CAPITAL LETTER S WITH ACUTE AND DOT ABOVE]
-"\u1E64" => "S"
-
-# Ṧ  [LATIN CAPITAL LETTER S WITH CARON AND DOT ABOVE]
-"\u1E66" => "S"
-
-# Ṩ  [LATIN CAPITAL LETTER S WITH DOT BELOW AND DOT ABOVE]
-"\u1E68" => "S"
-
-# Ⓢ  [CIRCLED LATIN CAPITAL LETTER S]
-"\u24C8" => "S"
-
-# ꜱ  [LATIN LETTER SMALL CAPITAL S]
-"\uA731" => "S"
-
-# ꞅ  [LATIN SMALL LETTER INSULAR S]
-"\uA785" => "S"
-
-# S  [FULLWIDTH LATIN CAPITAL LETTER S]
-"\uFF33" => "S"
-
-# ś  [LATIN SMALL LETTER S WITH ACUTE]
-"\u015B" => "s"
-
-# ŝ  [LATIN SMALL LETTER S WITH CIRCUMFLEX]
-"\u015D" => "s"
-
-# ş  [LATIN SMALL LETTER S WITH CEDILLA]
-"\u015F" => "s"
-
-# š  [LATIN SMALL LETTER S WITH CARON]
-"\u0161" => "s"
-
-# ſ  http://en.wikipedia.org/wiki/Long_S  [LATIN SMALL LETTER LONG S]
-"\u017F" => "s"
-
-# ș  [LATIN SMALL LETTER S WITH COMMA BELOW]
-"\u0219" => "s"
-
-# ȿ  [LATIN SMALL LETTER S WITH SWASH TAIL]
-"\u023F" => "s"
-
-# ʂ  [LATIN SMALL LETTER S WITH HOOK]
-"\u0282" => "s"
-
-# ᵴ  [LATIN SMALL LETTER S WITH MIDDLE TILDE]
-"\u1D74" => "s"
-
-# ᶊ  [LATIN SMALL LETTER S WITH PALATAL HOOK]
-"\u1D8A" => "s"
-
-# ṡ  [LATIN SMALL LETTER S WITH DOT ABOVE]
-"\u1E61" => "s"
-
-# ṣ  [LATIN SMALL LETTER S WITH DOT BELOW]
-"\u1E63" => "s"
-
-# ṥ  [LATIN SMALL LETTER S WITH ACUTE AND DOT ABOVE]
-"\u1E65" => "s"
-
-# ṧ  [LATIN SMALL LETTER S WITH CARON AND DOT ABOVE]
-"\u1E67" => "s"
-
-# ṩ  [LATIN SMALL LETTER S WITH DOT BELOW AND DOT ABOVE]
-"\u1E69" => "s"
-
-# ẜ  [LATIN SMALL LETTER LONG S WITH DIAGONAL STROKE]
-"\u1E9C" => "s"
-
-# ẝ  [LATIN SMALL LETTER LONG S WITH HIGH STROKE]
-"\u1E9D" => "s"
-
-# ⓢ  [CIRCLED LATIN SMALL LETTER S]
-"\u24E2" => "s"
-
-# Ꞅ  [LATIN CAPITAL LETTER INSULAR S]
-"\uA784" => "s"
-
-# s  [FULLWIDTH LATIN SMALL LETTER S]
-"\uFF53" => "s"
-
-# ẞ  [LATIN CAPITAL LETTER SHARP S]
-"\u1E9E" => "SS"
-
-# ⒮  [PARENTHESIZED LATIN SMALL LETTER S]
-"\u24AE" => "(s)"
-
-# ß  [LATIN SMALL LETTER SHARP S]
-"\u00DF" => "ss"
-
-# st  [LATIN SMALL LIGATURE ST]
-"\uFB06" => "st"
-
-# Ţ  [LATIN CAPITAL LETTER T WITH CEDILLA]
-"\u0162" => "T"
-
-# Ť  [LATIN CAPITAL LETTER T WITH CARON]
-"\u0164" => "T"
-
-# Ŧ  [LATIN CAPITAL LETTER T WITH STROKE]
-"\u0166" => "T"
-
-# Ƭ  [LATIN CAPITAL LETTER T WITH HOOK]
-"\u01AC" => "T"
-
-# Ʈ  [LATIN CAPITAL LETTER T WITH RETROFLEX HOOK]
-"\u01AE" => "T"
-
-# Ț  [LATIN CAPITAL LETTER T WITH COMMA BELOW]
-"\u021A" => "T"
-
-# Ⱦ  [LATIN CAPITAL LETTER T WITH DIAGONAL STROKE]
-"\u023E" => "T"
-
-# ᴛ  [LATIN LETTER SMALL CAPITAL T]
-"\u1D1B" => "T"
-
-# Ṫ  [LATIN CAPITAL LETTER T WITH DOT ABOVE]
-"\u1E6A" => "T"
-
-# Ṭ  [LATIN CAPITAL LETTER T WITH DOT BELOW]
-"\u1E6C" => "T"
-
-# Ṯ  [LATIN CAPITAL LETTER T WITH LINE BELOW]
-"\u1E6E" => "T"
-
-# Ṱ  [LATIN CAPITAL LETTER T WITH CIRCUMFLEX BELOW]
-"\u1E70" => "T"
-
-# Ⓣ  [CIRCLED LATIN CAPITAL LETTER T]
-"\u24C9" => "T"
-
-# Ꞇ  [LATIN CAPITAL LETTER INSULAR T]
-"\uA786" => "T"
-
-# T  [FULLWIDTH LATIN CAPITAL LETTER T]
-"\uFF34" => "T"
-
-# ţ  [LATIN SMALL LETTER T WITH CEDILLA]
-"\u0163" => "t"
-
-# ť  [LATIN SMALL LETTER T WITH CARON]
-"\u0165" => "t"
-
-# ŧ  [LATIN SMALL LETTER T WITH STROKE]
-"\u0167" => "t"
-
-# ƫ  [LATIN SMALL LETTER T WITH PALATAL HOOK]
-"\u01AB" => "t"
-
-# ƭ  [LATIN SMALL LETTER T WITH HOOK]
-"\u01AD" => "t"
-
-# ț  [LATIN SMALL LETTER T WITH COMMA BELOW]
-"\u021B" => "t"
-
-# ȶ  [LATIN SMALL LETTER T WITH CURL]
-"\u0236" => "t"
-
-# ʇ  [LATIN SMALL LETTER TURNED T]
-"\u0287" => "t"
-
-# ʈ  [LATIN SMALL LETTER T WITH RETROFLEX HOOK]
-"\u0288" => "t"
-
-# ᵵ  [LATIN SMALL LETTER T WITH MIDDLE TILDE]
-"\u1D75" => "t"
-
-# ṫ  [LATIN SMALL LETTER T WITH DOT ABOVE]
-"\u1E6B" => "t"
-
-# ṭ  [LATIN SMALL LETTER T WITH DOT BELOW]
-"\u1E6D" => "t"
-
-# ṯ  [LATIN SMALL LETTER T WITH LINE BELOW]
-"\u1E6F" => "t"
-
-# ṱ  [LATIN SMALL LETTER T WITH CIRCUMFLEX BELOW]
-"\u1E71" => "t"
-
-# ẗ  [LATIN SMALL LETTER T WITH DIAERESIS]
-"\u1E97" => "t"
-
-# ⓣ  [CIRCLED LATIN SMALL LETTER T]
-"\u24E3" => "t"
-
-# ⱦ  [LATIN SMALL LETTER T WITH DIAGONAL STROKE]
-"\u2C66" => "t"
-
-# t  [FULLWIDTH LATIN SMALL LETTER T]
-"\uFF54" => "t"
-
-# Þ  [LATIN CAPITAL LETTER THORN]
-"\u00DE" => "TH"
-
-# Ꝧ  [LATIN CAPITAL LETTER THORN WITH STROKE THROUGH DESCENDER]
-"\uA766" => "TH"
-
-# Ꜩ  [LATIN CAPITAL LETTER TZ]
-"\uA728" => "TZ"
-
-# ⒯  [PARENTHESIZED LATIN SMALL LETTER T]
-"\u24AF" => "(t)"
-
-# ʨ  [LATIN SMALL LETTER TC DIGRAPH WITH CURL]
-"\u02A8" => "tc"
-
-# þ  [LATIN SMALL LETTER THORN]
-"\u00FE" => "th"
-
-# ᵺ  [LATIN SMALL LETTER TH WITH STRIKETHROUGH]
-"\u1D7A" => "th"
-
-# ꝧ  [LATIN SMALL LETTER THORN WITH STROKE THROUGH DESCENDER]
-"\uA767" => "th"
-
-# ʦ  [LATIN SMALL LETTER TS DIGRAPH]
-"\u02A6" => "ts"
-
-# ꜩ  [LATIN SMALL LETTER TZ]
-"\uA729" => "tz"
-
-# Ù  [LATIN CAPITAL LETTER U WITH GRAVE]
-"\u00D9" => "U"
-
-# Ú  [LATIN CAPITAL LETTER U WITH ACUTE]
-"\u00DA" => "U"
-
-# Û  [LATIN CAPITAL LETTER U WITH CIRCUMFLEX]
-"\u00DB" => "U"
-
-# Ü  [LATIN CAPITAL LETTER U WITH DIAERESIS]
-"\u00DC" => "U"
-
-# Ũ  [LATIN CAPITAL LETTER U WITH TILDE]
-"\u0168" => "U"
-
-# Ū  [LATIN CAPITAL LETTER U WITH MACRON]
-"\u016A" => "U"
-
-# Ŭ  [LATIN CAPITAL LETTER U WITH BREVE]
-"\u016C" => "U"
-
-# Ů  [LATIN CAPITAL LETTER U WITH RING ABOVE]
-"\u016E" => "U"
-
-# Ű  [LATIN CAPITAL LETTER U WITH DOUBLE ACUTE]
-"\u0170" => "U"
-
-# Ų  [LATIN CAPITAL LETTER U WITH OGONEK]
-"\u0172" => "U"
-
-# Ư  [LATIN CAPITAL LETTER U WITH HORN]
-"\u01AF" => "U"
-
-# Ǔ  [LATIN CAPITAL LETTER U WITH CARON]
-"\u01D3" => "U"
-
-# Ǖ  [LATIN CAPITAL LETTER U WITH DIAERESIS AND MACRON]
-"\u01D5" => "U"
-
-# Ǘ  [LATIN CAPITAL LETTER U WITH DIAERESIS AND ACUTE]
-"\u01D7" => "U"
-
-# Ǚ  [LATIN CAPITAL LETTER U WITH DIAERESIS AND CARON]
-"\u01D9" => "U"
-
-# Ǜ  [LATIN CAPITAL LETTER U WITH DIAERESIS AND GRAVE]
-"\u01DB" => "U"
-
-# Ȕ  [LATIN CAPITAL LETTER U WITH DOUBLE GRAVE]
-"\u0214" => "U"
-
-# Ȗ  [LATIN CAPITAL LETTER U WITH INVERTED BREVE]
-"\u0216" => "U"
-
-# Ʉ  [LATIN CAPITAL LETTER U BAR]
-"\u0244" => "U"
-
-# ᴜ  [LATIN LETTER SMALL CAPITAL U]
-"\u1D1C" => "U"
-
-# ᵾ  [LATIN SMALL CAPITAL LETTER U WITH STROKE]
-"\u1D7E" => "U"
-
-# Ṳ  [LATIN CAPITAL LETTER U WITH DIAERESIS BELOW]
-"\u1E72" => "U"
-
-# Ṵ  [LATIN CAPITAL LETTER U WITH TILDE BELOW]
-"\u1E74" => "U"
-
-# Ṷ  [LATIN CAPITAL LETTER U WITH CIRCUMFLEX BELOW]
-"\u1E76" => "U"
-
-# Ṹ  [LATIN CAPITAL LETTER U WITH TILDE AND ACUTE]
-"\u1E78" => "U"
-
-# Ṻ  [LATIN CAPITAL LETTER U WITH MACRON AND DIAERESIS]
-"\u1E7A" => "U"
-
-# Ụ  [LATIN CAPITAL LETTER U WITH DOT BELOW]
-"\u1EE4" => "U"
-
-# Ủ  [LATIN CAPITAL LETTER U WITH HOOK ABOVE]
-"\u1EE6" => "U"
-
-# Ứ  [LATIN CAPITAL LETTER U WITH HORN AND ACUTE]
-"\u1EE8" => "U"
-
-# Ừ  [LATIN CAPITAL LETTER U WITH HORN AND GRAVE]
-"\u1EEA" => "U"
-
-# Ử  [LATIN CAPITAL LETTER U WITH HORN AND HOOK ABOVE]
-"\u1EEC" => "U"
-
-# Ữ  [LATIN CAPITAL LETTER U WITH HORN AND TILDE]
-"\u1EEE" => "U"
-
-# Ự  [LATIN CAPITAL LETTER U WITH HORN AND DOT BELOW]
-"\u1EF0" => "U"
-
-# Ⓤ  [CIRCLED LATIN CAPITAL LETTER U]
-"\u24CA" => "U"
-
-# U  [FULLWIDTH LATIN CAPITAL LETTER U]
-"\uFF35" => "U"
-
-# ù  [LATIN SMALL LETTER U WITH GRAVE]
-"\u00F9" => "u"
-
-# ú  [LATIN SMALL LETTER U WITH ACUTE]
-"\u00FA" => "u"
-
-# û  [LATIN SMALL LETTER U WITH CIRCUMFLEX]
-"\u00FB" => "u"
-
-# ü  [LATIN SMALL LETTER U WITH DIAERESIS]
-"\u00FC" => "u"
-
-# ũ  [LATIN SMALL LETTER U WITH TILDE]
-"\u0169" => "u"
-
-# ū  [LATIN SMALL LETTER U WITH MACRON]
-"\u016B" => "u"
-
-# ŭ  [LATIN SMALL LETTER U WITH BREVE]
-"\u016D" => "u"
-
-# ů  [LATIN SMALL LETTER U WITH RING ABOVE]
-"\u016F" => "u"
-
-# ű  [LATIN SMALL LETTER U WITH DOUBLE ACUTE]
-"\u0171" => "u"
-
-# ų  [LATIN SMALL LETTER U WITH OGONEK]
-"\u0173" => "u"
-
-# ư  [LATIN SMALL LETTER U WITH HORN]
-"\u01B0" => "u"
-
-# ǔ  [LATIN SMALL LETTER U WITH CARON]
-"\u01D4" => "u"
-
-# ǖ  [LATIN SMALL LETTER U WITH DIAERESIS AND MACRON]
-"\u01D6" => "u"
-
-# ǘ  [LATIN SMALL LETTER U WITH DIAERESIS AND ACUTE]
-"\u01D8" => "u"
-
-# ǚ  [LATIN SMALL LETTER U WITH DIAERESIS AND CARON]
-"\u01DA" => "u"
-
-# ǜ  [LATIN SMALL LETTER U WITH DIAERESIS AND GRAVE]
-"\u01DC" => "u"
-
-# ȕ  [LATIN SMALL LETTER U WITH DOUBLE GRAVE]
-"\u0215" => "u"
-
-# ȗ  [LATIN SMALL LETTER U WITH INVERTED BREVE]
-"\u0217" => "u"
-
-# ʉ  [LATIN SMALL LETTER U BAR]
-"\u0289" => "u"
-
-# ᵤ  [LATIN SUBSCRIPT SMALL LETTER U]
-"\u1D64" => "u"
-
-# ᶙ  [LATIN SMALL LETTER U WITH RETROFLEX HOOK]
-"\u1D99" => "u"
-
-# ṳ  [LATIN SMALL LETTER U WITH DIAERESIS BELOW]
-"\u1E73" => "u"
-
-# ṵ  [LATIN SMALL LETTER U WITH TILDE BELOW]
-"\u1E75" => "u"
-
-# ṷ  [LATIN SMALL LETTER U WITH CIRCUMFLEX BELOW]
-"\u1E77" => "u"
-
-# ṹ  [LATIN SMALL LETTER U WITH TILDE AND ACUTE]
-"\u1E79" => "u"
-
-# ṻ  [LATIN SMALL LETTER U WITH MACRON AND DIAERESIS]
-"\u1E7B" => "u"
-
-# ụ  [LATIN SMALL LETTER U WITH DOT BELOW]
-"\u1EE5" => "u"
-
-# ủ  [LATIN SMALL LETTER U WITH HOOK ABOVE]
-"\u1EE7" => "u"
-
-# ứ  [LATIN SMALL LETTER U WITH HORN AND ACUTE]
-"\u1EE9" => "u"
-
-# ừ  [LATIN SMALL LETTER U WITH HORN AND GRAVE]
-"\u1EEB" => "u"
-
-# ử  [LATIN SMALL LETTER U WITH HORN AND HOOK ABOVE]
-"\u1EED" => "u"
-
-# ữ  [LATIN SMALL LETTER U WITH HORN AND TILDE]
-"\u1EEF" => "u"
-
-# ự  [LATIN SMALL LETTER U WITH HORN AND DOT BELOW]
-"\u1EF1" => "u"
-
-# ⓤ  [CIRCLED LATIN SMALL LETTER U]
-"\u24E4" => "u"
-
-# u  [FULLWIDTH LATIN SMALL LETTER U]
-"\uFF55" => "u"
-
-# ⒰  [PARENTHESIZED LATIN SMALL LETTER U]
-"\u24B0" => "(u)"
-
-# ᵫ  [LATIN SMALL LETTER UE]
-"\u1D6B" => "ue"
-
-# Ʋ  [LATIN CAPITAL LETTER V WITH HOOK]
-"\u01B2" => "V"
-
-# Ʌ  [LATIN CAPITAL LETTER TURNED V]
-"\u0245" => "V"
-
-# ᴠ  [LATIN LETTER SMALL CAPITAL V]
-"\u1D20" => "V"
-
-# Ṽ  [LATIN CAPITAL LETTER V WITH TILDE]
-"\u1E7C" => "V"
-
-# Ṿ  [LATIN CAPITAL LETTER V WITH DOT BELOW]
-"\u1E7E" => "V"
-
-# Ỽ  [LATIN CAPITAL LETTER MIDDLE-WELSH V]
-"\u1EFC" => "V"
-
-# Ⓥ  [CIRCLED LATIN CAPITAL LETTER V]
-"\u24CB" => "V"
-
-# Ꝟ  [LATIN CAPITAL LETTER V WITH DIAGONAL STROKE]
-"\uA75E" => "V"
-
-# Ꝩ  [LATIN CAPITAL LETTER VEND]
-"\uA768" => "V"
-
-# V  [FULLWIDTH LATIN CAPITAL LETTER V]
-"\uFF36" => "V"
-
-# ʋ  [LATIN SMALL LETTER V WITH HOOK]
-"\u028B" => "v"
-
-# ʌ  [LATIN SMALL LETTER TURNED V]
-"\u028C" => "v"
-
-# ᵥ  [LATIN SUBSCRIPT SMALL LETTER V]
-"\u1D65" => "v"
-
-# ᶌ  [LATIN SMALL LETTER V WITH PALATAL HOOK]
-"\u1D8C" => "v"
-
-# ṽ  [LATIN SMALL LETTER V WITH TILDE]
-"\u1E7D" => "v"
-
-# ṿ  [LATIN SMALL LETTER V WITH DOT BELOW]
-"\u1E7F" => "v"
-
-# ⓥ  [CIRCLED LATIN SMALL LETTER V]
-"\u24E5" => "v"
-
-# ⱱ  [LATIN SMALL LETTER V WITH RIGHT HOOK]
-"\u2C71" => "v"
-
-# ⱴ  [LATIN SMALL LETTER V WITH CURL]
-"\u2C74" => "v"
-
-# ꝟ  [LATIN SMALL LETTER V WITH DIAGONAL STROKE]
-"\uA75F" => "v"
-
-# v  [FULLWIDTH LATIN SMALL LETTER V]
-"\uFF56" => "v"
-
-# Ꝡ  [LATIN CAPITAL LETTER VY]
-"\uA760" => "VY"
-
-# ⒱  [PARENTHESIZED LATIN SMALL LETTER V]
-"\u24B1" => "(v)"
-
-# ꝡ  [LATIN SMALL LETTER VY]
-"\uA761" => "vy"
-
-# Ŵ  [LATIN CAPITAL LETTER W WITH CIRCUMFLEX]
-"\u0174" => "W"
-
-# Ƿ  http://en.wikipedia.org/wiki/Wynn  [LATIN CAPITAL LETTER WYNN]
-"\u01F7" => "W"
-
-# ᴡ  [LATIN LETTER SMALL CAPITAL W]
-"\u1D21" => "W"
-
-# Ẁ  [LATIN CAPITAL LETTER W WITH GRAVE]
-"\u1E80" => "W"
-
-# Ẃ  [LATIN CAPITAL LETTER W WITH ACUTE]
-"\u1E82" => "W"
-
-# Ẅ  [LATIN CAPITAL LETTER W WITH DIAERESIS]
-"\u1E84" => "W"
-
-# Ẇ  [LATIN CAPITAL LETTER W WITH DOT ABOVE]
-"\u1E86" => "W"
-
-# Ẉ  [LATIN CAPITAL LETTER W WITH DOT BELOW]
-"\u1E88" => "W"
-
-# Ⓦ  [CIRCLED LATIN CAPITAL LETTER W]
-"\u24CC" => "W"
-
-# Ⱳ  [LATIN CAPITAL LETTER W WITH HOOK]
-"\u2C72" => "W"
-
-# W  [FULLWIDTH LATIN CAPITAL LETTER W]
-"\uFF37" => "W"
-
-# ŵ  [LATIN SMALL LETTER W WITH CIRCUMFLEX]
-"\u0175" => "w"
-
-# ƿ  http://en.wikipedia.org/wiki/Wynn  [LATIN LETTER WYNN]
-"\u01BF" => "w"
-
-# ʍ  [LATIN SMALL LETTER TURNED W]
-"\u028D" => "w"
-
-# ẁ  [LATIN SMALL LETTER W WITH GRAVE]
-"\u1E81" => "w"
-
-# ẃ  [LATIN SMALL LETTER W WITH ACUTE]
-"\u1E83" => "w"
-
-# ẅ  [LATIN SMALL LETTER W WITH DIAERESIS]
-"\u1E85" => "w"
-
-# ẇ  [LATIN SMALL LETTER W WITH DOT ABOVE]
-"\u1E87" => "w"
-
-# ẉ  [LATIN SMALL LETTER W WITH DOT BELOW]
-"\u1E89" => "w"
-
-# ẘ  [LATIN SMALL LETTER W WITH RING ABOVE]
-"\u1E98" => "w"
-
-# ⓦ  [CIRCLED LATIN SMALL LETTER W]
-"\u24E6" => "w"
-
-# ⱳ  [LATIN SMALL LETTER W WITH HOOK]
-"\u2C73" => "w"
-
-# w  [FULLWIDTH LATIN SMALL LETTER W]
-"\uFF57" => "w"
-
-# ⒲  [PARENTHESIZED LATIN SMALL LETTER W]
-"\u24B2" => "(w)"
-
-# Ẋ  [LATIN CAPITAL LETTER X WITH DOT ABOVE]
-"\u1E8A" => "X"
-
-# Ẍ  [LATIN CAPITAL LETTER X WITH DIAERESIS]
-"\u1E8C" => "X"
-
-# Ⓧ  [CIRCLED LATIN CAPITAL LETTER X]
-"\u24CD" => "X"
-
-# X  [FULLWIDTH LATIN CAPITAL LETTER X]
-"\uFF38" => "X"
-
-# ᶍ  [LATIN SMALL LETTER X WITH PALATAL HOOK]
-"\u1D8D" => "x"
-
-# ẋ  [LATIN SMALL LETTER X WITH DOT ABOVE]
-"\u1E8B" => "x"
-
-# ẍ  [LATIN SMALL LETTER X WITH DIAERESIS]
-"\u1E8D" => "x"
-
-# ₓ  [LATIN SUBSCRIPT SMALL LETTER X]
-"\u2093" => "x"
-
-# ⓧ  [CIRCLED LATIN SMALL LETTER X]
-"\u24E7" => "x"
-
-# x  [FULLWIDTH LATIN SMALL LETTER X]
-"\uFF58" => "x"
-
-# ⒳  [PARENTHESIZED LATIN SMALL LETTER X]
-"\u24B3" => "(x)"
-
-# Ý  [LATIN CAPITAL LETTER Y WITH ACUTE]
-"\u00DD" => "Y"
-
-# Ŷ  [LATIN CAPITAL LETTER Y WITH CIRCUMFLEX]
-"\u0176" => "Y"
-
-# Ÿ  [LATIN CAPITAL LETTER Y WITH DIAERESIS]
-"\u0178" => "Y"
-
-# Ƴ  [LATIN CAPITAL LETTER Y WITH HOOK]
-"\u01B3" => "Y"
-
-# Ȳ  [LATIN CAPITAL LETTER Y WITH MACRON]
-"\u0232" => "Y"
-
-# Ɏ  [LATIN CAPITAL LETTER Y WITH STROKE]
-"\u024E" => "Y"
-
-# ʏ  [LATIN LETTER SMALL CAPITAL Y]
-"\u028F" => "Y"
-
-# Ẏ  [LATIN CAPITAL LETTER Y WITH DOT ABOVE]
-"\u1E8E" => "Y"
-
-# Ỳ  [LATIN CAPITAL LETTER Y WITH GRAVE]
-"\u1EF2" => "Y"
-
-# Ỵ  [LATIN CAPITAL LETTER Y WITH DOT BELOW]
-"\u1EF4" => "Y"
-
-# Ỷ  [LATIN CAPITAL LETTER Y WITH HOOK ABOVE]
-"\u1EF6" => "Y"
-
-# Ỹ  [LATIN CAPITAL LETTER Y WITH TILDE]
-"\u1EF8" => "Y"
-
-# Ỿ  [LATIN CAPITAL LETTER Y WITH LOOP]
-"\u1EFE" => "Y"
-
-# Ⓨ  [CIRCLED LATIN CAPITAL LETTER Y]
-"\u24CE" => "Y"
-
-# Y  [FULLWIDTH LATIN CAPITAL LETTER Y]
-"\uFF39" => "Y"
-
-# ý  [LATIN SMALL LETTER Y WITH ACUTE]
-"\u00FD" => "y"
-
-# ÿ  [LATIN SMALL LETTER Y WITH DIAERESIS]
-"\u00FF" => "y"
-
-# ŷ  [LATIN SMALL LETTER Y WITH CIRCUMFLEX]
-"\u0177" => "y"
-
-# ƴ  [LATIN SMALL LETTER Y WITH HOOK]
-"\u01B4" => "y"
-
-# ȳ  [LATIN SMALL LETTER Y WITH MACRON]
-"\u0233" => "y"
-
-# ɏ  [LATIN SMALL LETTER Y WITH STROKE]
-"\u024F" => "y"
-
-# ʎ  [LATIN SMALL LETTER TURNED Y]
-"\u028E" => "y"
-
-# ẏ  [LATIN SMALL LETTER Y WITH DOT ABOVE]
-"\u1E8F" => "y"
-
-# ẙ  [LATIN SMALL LETTER Y WITH RING ABOVE]
-"\u1E99" => "y"
-
-# ỳ  [LATIN SMALL LETTER Y WITH GRAVE]
-"\u1EF3" => "y"
-
-# ỵ  [LATIN SMALL LETTER Y WITH DOT BELOW]
-"\u1EF5" => "y"
-
-# ỷ  [LATIN SMALL LETTER Y WITH HOOK ABOVE]
-"\u1EF7" => "y"
-
-# ỹ  [LATIN SMALL LETTER Y WITH TILDE]
-"\u1EF9" => "y"
-
-# ỿ  [LATIN SMALL LETTER Y WITH LOOP]
-"\u1EFF" => "y"
-
-# ⓨ  [CIRCLED LATIN SMALL LETTER Y]
-"\u24E8" => "y"
-
-# y  [FULLWIDTH LATIN SMALL LETTER Y]
-"\uFF59" => "y"
-
-# ⒴  [PARENTHESIZED LATIN SMALL LETTER Y]
-"\u24B4" => "(y)"
-
-# Ź  [LATIN CAPITAL LETTER Z WITH ACUTE]
-"\u0179" => "Z"
-
-# Ż  [LATIN CAPITAL LETTER Z WITH DOT ABOVE]
-"\u017B" => "Z"
-
-# Ž  [LATIN CAPITAL LETTER Z WITH CARON]
-"\u017D" => "Z"
-
-# Ƶ  [LATIN CAPITAL LETTER Z WITH STROKE]
-"\u01B5" => "Z"
-
-# Ȝ  http://en.wikipedia.org/wiki/Yogh  [LATIN CAPITAL LETTER YOGH]
-"\u021C" => "Z"
-
-# Ȥ  [LATIN CAPITAL LETTER Z WITH HOOK]
-"\u0224" => "Z"
-
-# ᴢ  [LATIN LETTER SMALL CAPITAL Z]
-"\u1D22" => "Z"
-
-# Ẑ  [LATIN CAPITAL LETTER Z WITH CIRCUMFLEX]
-"\u1E90" => "Z"
-
-# Ẓ  [LATIN CAPITAL LETTER Z WITH DOT BELOW]
-"\u1E92" => "Z"
-
-# Ẕ  [LATIN CAPITAL LETTER Z WITH LINE BELOW]
-"\u1E94" => "Z"
-
-# Ⓩ  [CIRCLED LATIN CAPITAL LETTER Z]
-"\u24CF" => "Z"
-
-# Ⱬ  [LATIN CAPITAL LETTER Z WITH DESCENDER]
-"\u2C6B" => "Z"
-
-# Ꝣ  [LATIN CAPITAL LETTER VISIGOTHIC Z]
-"\uA762" => "Z"
-
-# Z  [FULLWIDTH LATIN CAPITAL LETTER Z]
-"\uFF3A" => "Z"
-
-# ź  [LATIN SMALL LETTER Z WITH ACUTE]
-"\u017A" => "z"
-
-# ż  [LATIN SMALL LETTER Z WITH DOT ABOVE]
-"\u017C" => "z"
-
-# ž  [LATIN SMALL LETTER Z WITH CARON]
-"\u017E" => "z"
-
-# ƶ  [LATIN SMALL LETTER Z WITH STROKE]
-"\u01B6" => "z"
-
-# ȝ  http://en.wikipedia.org/wiki/Yogh  [LATIN SMALL LETTER YOGH]
-"\u021D" => "z"
-
-# ȥ  [LATIN SMALL LETTER Z WITH HOOK]
-"\u0225" => "z"
-
-# ɀ  [LATIN SMALL LETTER Z WITH SWASH TAIL]
-"\u0240" => "z"
-
-# ʐ  [LATIN SMALL LETTER Z WITH RETROFLEX HOOK]
-"\u0290" => "z"
-
-# ʑ  [LATIN SMALL LETTER Z WITH CURL]
-"\u0291" => "z"
-
-# ᵶ  [LATIN SMALL LETTER Z WITH MIDDLE TILDE]
-"\u1D76" => "z"
-
-# ᶎ  [LATIN SMALL LETTER Z WITH PALATAL HOOK]
-"\u1D8E" => "z"
-
-# ẑ  [LATIN SMALL LETTER Z WITH CIRCUMFLEX]
-"\u1E91" => "z"
-
-# ẓ  [LATIN SMALL LETTER Z WITH DOT BELOW]
-"\u1E93" => "z"
-
-# ẕ  [LATIN SMALL LETTER Z WITH LINE BELOW]
-"\u1E95" => "z"
-
-# ⓩ  [CIRCLED LATIN SMALL LETTER Z]
-"\u24E9" => "z"
-
-# ⱬ  [LATIN SMALL LETTER Z WITH DESCENDER]
-"\u2C6C" => "z"
-
-# ꝣ  [LATIN SMALL LETTER VISIGOTHIC Z]
-"\uA763" => "z"
-
-# z  [FULLWIDTH LATIN SMALL LETTER Z]
-"\uFF5A" => "z"
-
-# ⒵  [PARENTHESIZED LATIN SMALL LETTER Z]
-"\u24B5" => "(z)"
-
-# ⁰  [SUPERSCRIPT ZERO]
-"\u2070" => "0"
-
-# ₀  [SUBSCRIPT ZERO]
-"\u2080" => "0"
-
-# ⓪  [CIRCLED DIGIT ZERO]
-"\u24EA" => "0"
-
-# ⓿  [NEGATIVE CIRCLED DIGIT ZERO]
-"\u24FF" => "0"
-
-# 0  [FULLWIDTH DIGIT ZERO]
-"\uFF10" => "0"
-
-# ¹  [SUPERSCRIPT ONE]
-"\u00B9" => "1"
-
-# ₁  [SUBSCRIPT ONE]
-"\u2081" => "1"
-
-# ①  [CIRCLED DIGIT ONE]
-"\u2460" => "1"
-
-# ⓵  [DOUBLE CIRCLED DIGIT ONE]
-"\u24F5" => "1"
-
-# ❶  [DINGBAT NEGATIVE CIRCLED DIGIT ONE]
-"\u2776" => "1"
-
-# ➀  [DINGBAT CIRCLED SANS-SERIF DIGIT ONE]
-"\u2780" => "1"
-
-# ➊  [DINGBAT NEGATIVE CIRCLED SANS-SERIF DIGIT ONE]
-"\u278A" => "1"
-
-# 1  [FULLWIDTH DIGIT ONE]
-"\uFF11" => "1"
-
-# ⒈  [DIGIT ONE FULL STOP]
-"\u2488" => "1."
-
-# ⑴  [PARENTHESIZED DIGIT ONE]
-"\u2474" => "(1)"
-
-# ²  [SUPERSCRIPT TWO]
-"\u00B2" => "2"
-
-# ₂  [SUBSCRIPT TWO]
-"\u2082" => "2"
-
-# ②  [CIRCLED DIGIT TWO]
-"\u2461" => "2"
-
-# ⓶  [DOUBLE CIRCLED DIGIT TWO]
-"\u24F6" => "2"
-
-# ❷  [DINGBAT NEGATIVE CIRCLED DIGIT TWO]
-"\u2777" => "2"
-
-# ➁  [DINGBAT CIRCLED SANS-SERIF DIGIT TWO]
-"\u2781" => "2"
-
-# ➋  [DINGBAT NEGATIVE CIRCLED SANS-SERIF DIGIT TWO]
-"\u278B" => "2"
-
-# 2  [FULLWIDTH DIGIT TWO]
-"\uFF12" => "2"
-
-# ⒉  [DIGIT TWO FULL STOP]
-"\u2489" => "2."
-
-# ⑵  [PARENTHESIZED DIGIT TWO]
-"\u2475" => "(2)"
-
-# ³  [SUPERSCRIPT THREE]
-"\u00B3" => "3"
-
-# ₃  [SUBSCRIPT THREE]
-"\u2083" => "3"
-
-# ③  [CIRCLED DIGIT THREE]
-"\u2462" => "3"
-
-# ⓷  [DOUBLE CIRCLED DIGIT THREE]
-"\u24F7" => "3"
-
-# ❸  [DINGBAT NEGATIVE CIRCLED DIGIT THREE]
-"\u2778" => "3"
-
-# ➂  [DINGBAT CIRCLED SANS-SERIF DIGIT THREE]
-"\u2782" => "3"
-
-# ➌  [DINGBAT NEGATIVE CIRCLED SANS-SERIF DIGIT THREE]
-"\u278C" => "3"
-
-# 3  [FULLWIDTH DIGIT THREE]
-"\uFF13" => "3"
-
-# ⒊  [DIGIT THREE FULL STOP]
-"\u248A" => "3."
-
-# ⑶  [PARENTHESIZED DIGIT THREE]
-"\u2476" => "(3)"
-
-# ⁴  [SUPERSCRIPT FOUR]
-"\u2074" => "4"
-
-# ₄  [SUBSCRIPT FOUR]
-"\u2084" => "4"
-
-# ④  [CIRCLED DIGIT FOUR]
-"\u2463" => "4"
-
-# ⓸  [DOUBLE CIRCLED DIGIT FOUR]
-"\u24F8" => "4"
-
-# ❹  [DINGBAT NEGATIVE CIRCLED DIGIT FOUR]
-"\u2779" => "4"
-
-# ➃  [DINGBAT CIRCLED SANS-SERIF DIGIT FOUR]
-"\u2783" => "4"
-
-# ➍  [DINGBAT NEGATIVE CIRCLED SANS-SERIF DIGIT FOUR]
-"\u278D" => "4"
-
-# 4  [FULLWIDTH DIGIT FOUR]
-"\uFF14" => "4"
-
-# ⒋  [DIGIT FOUR FULL STOP]
-"\u248B" => "4."
-
-# ⑷  [PARENTHESIZED DIGIT FOUR]
-"\u2477" => "(4)"
-
-# ⁵  [SUPERSCRIPT FIVE]
-"\u2075" => "5"
-
-# ₅  [SUBSCRIPT FIVE]
-"\u2085" => "5"
-
-# ⑤  [CIRCLED DIGIT FIVE]
-"\u2464" => "5"
-
-# ⓹  [DOUBLE CIRCLED DIGIT FIVE]
-"\u24F9" => "5"
-
-# ❺  [DINGBAT NEGATIVE CIRCLED DIGIT FIVE]
-"\u277A" => "5"
-
-# ➄  [DINGBAT CIRCLED SANS-SERIF DIGIT FIVE]
-"\u2784" => "5"
-
-# ➎  [DINGBAT NEGATIVE CIRCLED SANS-SERIF DIGIT FIVE]
-"\u278E" => "5"
-
-# 5  [FULLWIDTH DIGIT FIVE]
-"\uFF15" => "5"
-
-# ⒌  [DIGIT FIVE FULL STOP]
-"\u248C" => "5."
-
-# ⑸  [PARENTHESIZED DIGIT FIVE]
-"\u2478" => "(5)"
-
-# ⁶  [SUPERSCRIPT SIX]
-"\u2076" => "6"
-
-# ₆  [SUBSCRIPT SIX]
-"\u2086" => "6"
-
-# ⑥  [CIRCLED DIGIT SIX]
-"\u2465" => "6"
-
-# ⓺  [DOUBLE CIRCLED DIGIT SIX]
-"\u24FA" => "6"
-
-# ❻  [DINGBAT NEGATIVE CIRCLED DIGIT SIX]
-"\u277B" => "6"
-
-# ➅  [DINGBAT CIRCLED SANS-SERIF DIGIT SIX]
-"\u2785" => "6"
-
-# ➏  [DINGBAT NEGATIVE CIRCLED SANS-SERIF DIGIT SIX]
-"\u278F" => "6"
-
-# 6  [FULLWIDTH DIGIT SIX]
-"\uFF16" => "6"
-
-# ⒍  [DIGIT SIX FULL STOP]
-"\u248D" => "6."
-
-# ⑹  [PARENTHESIZED DIGIT SIX]
-"\u2479" => "(6)"
-
-# ⁷  [SUPERSCRIPT SEVEN]
-"\u2077" => "7"
-
-# ₇  [SUBSCRIPT SEVEN]
-"\u2087" => "7"
-
-# ⑦  [CIRCLED DIGIT SEVEN]
-"\u2466" => "7"
-
-# ⓻  [DOUBLE CIRCLED DIGIT SEVEN]
-"\u24FB" => "7"
-
-# ❼  [DINGBAT NEGATIVE CIRCLED DIGIT SEVEN]
-"\u277C" => "7"
-
-# ➆  [DINGBAT CIRCLED SANS-SERIF DIGIT SEVEN]
-"\u2786" => "7"
-
-# ➐  [DINGBAT NEGATIVE CIRCLED SANS-SERIF DIGIT SEVEN]
-"\u2790" => "7"
-
-# 7  [FULLWIDTH DIGIT SEVEN]
-"\uFF17" => "7"
-
-# ⒎  [DIGIT SEVEN FULL STOP]
-"\u248E" => "7."
-
-# ⑺  [PARENTHESIZED DIGIT SEVEN]
-"\u247A" => "(7)"
-
-# ⁸  [SUPERSCRIPT EIGHT]
-"\u2078" => "8"
-
-# ₈  [SUBSCRIPT EIGHT]
-"\u2088" => "8"
-
-# ⑧  [CIRCLED DIGIT EIGHT]
-"\u2467" => "8"
-
-# ⓼  [DOUBLE CIRCLED DIGIT EIGHT]
-"\u24FC" => "8"
-
-# ❽  [DINGBAT NEGATIVE CIRCLED DIGIT EIGHT]
-"\u277D" => "8"
-
-# ➇  [DINGBAT CIRCLED SANS-SERIF DIGIT EIGHT]
-"\u2787" => "8"
-
-# ➑  [DINGBAT NEGATIVE CIRCLED SANS-SERIF DIGIT EIGHT]
-"\u2791" => "8"
-
-# 8  [FULLWIDTH DIGIT EIGHT]
-"\uFF18" => "8"
-
-# ⒏  [DIGIT EIGHT FULL STOP]
-"\u248F" => "8."
-
-# ⑻  [PARENTHESIZED DIGIT EIGHT]
-"\u247B" => "(8)"
-
-# ⁹  [SUPERSCRIPT NINE]
-"\u2079" => "9"
-
-# ₉  [SUBSCRIPT NINE]
-"\u2089" => "9"
-
-# ⑨  [CIRCLED DIGIT NINE]
-"\u2468" => "9"
-
-# ⓽  [DOUBLE CIRCLED DIGIT NINE]
-"\u24FD" => "9"
-
-# ❾  [DINGBAT NEGATIVE CIRCLED DIGIT NINE]
-"\u277E" => "9"
-
-# ➈  [DINGBAT CIRCLED SANS-SERIF DIGIT NINE]
-"\u2788" => "9"
-
-# ➒  [DINGBAT NEGATIVE CIRCLED SANS-SERIF DIGIT NINE]
-"\u2792" => "9"
-
-# 9  [FULLWIDTH DIGIT NINE]
-"\uFF19" => "9"
-
-# ⒐  [DIGIT NINE FULL STOP]
-"\u2490" => "9."
-
-# ⑼  [PARENTHESIZED DIGIT NINE]
-"\u247C" => "(9)"
-
-# ⑩  [CIRCLED NUMBER TEN]
-"\u2469" => "10"
-
-# ⓾  [DOUBLE CIRCLED NUMBER TEN]
-"\u24FE" => "10"
-
-# ❿  [DINGBAT NEGATIVE CIRCLED NUMBER TEN]
-"\u277F" => "10"
-
-# ➉  [DINGBAT CIRCLED SANS-SERIF NUMBER TEN]
-"\u2789" => "10"
-
-# ➓  [DINGBAT NEGATIVE CIRCLED SANS-SERIF NUMBER TEN]
-"\u2793" => "10"
-
-# ⒑  [NUMBER TEN FULL STOP]
-"\u2491" => "10."
-
-# ⑽  [PARENTHESIZED NUMBER TEN]
-"\u247D" => "(10)"
-
-# ⑪  [CIRCLED NUMBER ELEVEN]
-"\u246A" => "11"
-
-# ⓫  [NEGATIVE CIRCLED NUMBER ELEVEN]
-"\u24EB" => "11"
-
-# ⒒  [NUMBER ELEVEN FULL STOP]
-"\u2492" => "11."
-
-# ⑾  [PARENTHESIZED NUMBER ELEVEN]
-"\u247E" => "(11)"
-
-# ⑫  [CIRCLED NUMBER TWELVE]
-"\u246B" => "12"
-
-# ⓬  [NEGATIVE CIRCLED NUMBER TWELVE]
-"\u24EC" => "12"
-
-# ⒓  [NUMBER TWELVE FULL STOP]
-"\u2493" => "12."
-
-# ⑿  [PARENTHESIZED NUMBER TWELVE]
-"\u247F" => "(12)"
-
-# ⑬  [CIRCLED NUMBER THIRTEEN]
-"\u246C" => "13"
-
-# ⓭  [NEGATIVE CIRCLED NUMBER THIRTEEN]
-"\u24ED" => "13"
-
-# ⒔  [NUMBER THIRTEEN FULL STOP]
-"\u2494" => "13."
-
-# ⒀  [PARENTHESIZED NUMBER THIRTEEN]
-"\u2480" => "(13)"
-
-# ⑭  [CIRCLED NUMBER FOURTEEN]
-"\u246D" => "14"
-
-# ⓮  [NEGATIVE CIRCLED NUMBER FOURTEEN]
-"\u24EE" => "14"
-
-# ⒕  [NUMBER FOURTEEN FULL STOP]
-"\u2495" => "14."
-
-# ⒁  [PARENTHESIZED NUMBER FOURTEEN]
-"\u2481" => "(14)"
-
-# ⑮  [CIRCLED NUMBER FIFTEEN]
-"\u246E" => "15"
-
-# ⓯  [NEGATIVE CIRCLED NUMBER FIFTEEN]
-"\u24EF" => "15"
-
-# ⒖  [NUMBER FIFTEEN FULL STOP]
-"\u2496" => "15."
-
-# ⒂  [PARENTHESIZED NUMBER FIFTEEN]
-"\u2482" => "(15)"
-
-# ⑯  [CIRCLED NUMBER SIXTEEN]
-"\u246F" => "16"
-
-# ⓰  [NEGATIVE CIRCLED NUMBER SIXTEEN]
-"\u24F0" => "16"
-
-# ⒗  [NUMBER SIXTEEN FULL STOP]
-"\u2497" => "16."
-
-# ⒃  [PARENTHESIZED NUMBER SIXTEEN]
-"\u2483" => "(16)"
-
-# ⑰  [CIRCLED NUMBER SEVENTEEN]
-"\u2470" => "17"
-
-# ⓱  [NEGATIVE CIRCLED NUMBER SEVENTEEN]
-"\u24F1" => "17"
-
-# ⒘  [NUMBER SEVENTEEN FULL STOP]
-"\u2498" => "17."
-
-# ⒄  [PARENTHESIZED NUMBER SEVENTEEN]
-"\u2484" => "(17)"
-
-# ⑱  [CIRCLED NUMBER EIGHTEEN]
-"\u2471" => "18"
-
-# ⓲  [NEGATIVE CIRCLED NUMBER EIGHTEEN]
-"\u24F2" => "18"
-
-# ⒙  [NUMBER EIGHTEEN FULL STOP]
-"\u2499" => "18."
-
-# ⒅  [PARENTHESIZED NUMBER EIGHTEEN]
-"\u2485" => "(18)"
-
-# ⑲  [CIRCLED NUMBER NINETEEN]
-"\u2472" => "19"
-
-# ⓳  [NEGATIVE CIRCLED NUMBER NINETEEN]
-"\u24F3" => "19"
-
-# ⒚  [NUMBER NINETEEN FULL STOP]
-"\u249A" => "19."
-
-# ⒆  [PARENTHESIZED NUMBER NINETEEN]
-"\u2486" => "(19)"
-
-# ⑳  [CIRCLED NUMBER TWENTY]
-"\u2473" => "20"
-
-# ⓴  [NEGATIVE CIRCLED NUMBER TWENTY]
-"\u24F4" => "20"
-
-# ⒛  [NUMBER TWENTY FULL STOP]
-"\u249B" => "20."
-
-# ⒇  [PARENTHESIZED NUMBER TWENTY]
-"\u2487" => "(20)"
-
-# «  [LEFT-POINTING DOUBLE ANGLE QUOTATION MARK]
-"\u00AB" => "\""
-
-# »  [RIGHT-POINTING DOUBLE ANGLE QUOTATION MARK]
-"\u00BB" => "\""
-
-# “  [LEFT DOUBLE QUOTATION MARK]
-"\u201C" => "\""
-
-# ”  [RIGHT DOUBLE QUOTATION MARK]
-"\u201D" => "\""
-
-# „  [DOUBLE LOW-9 QUOTATION MARK]
-"\u201E" => "\""
-
-# ″  [DOUBLE PRIME]
-"\u2033" => "\""
-
-# ‶  [REVERSED DOUBLE PRIME]
-"\u2036" => "\""
-
-# ❝  [HEAVY DOUBLE TURNED COMMA QUOTATION MARK ORNAMENT]
-"\u275D" => "\""
-
-# ❞  [HEAVY DOUBLE COMMA QUOTATION MARK ORNAMENT]
-"\u275E" => "\""
-
-# ❮  [HEAVY LEFT-POINTING ANGLE QUOTATION MARK ORNAMENT]
-"\u276E" => "\""
-
-# ❯  [HEAVY RIGHT-POINTING ANGLE QUOTATION MARK ORNAMENT]
-"\u276F" => "\""
-
-# "  [FULLWIDTH QUOTATION MARK]
-"\uFF02" => "\""
-
-# ‘  [LEFT SINGLE QUOTATION MARK]
-"\u2018" => "\'"
-
-# ’  [RIGHT SINGLE QUOTATION MARK]
-"\u2019" => "\'"
-
-# ‚  [SINGLE LOW-9 QUOTATION MARK]
-"\u201A" => "\'"
-
-# ‛  [SINGLE HIGH-REVERSED-9 QUOTATION MARK]
-"\u201B" => "\'"
-
-# ′  [PRIME]
-"\u2032" => "\'"
-
-# ‵  [REVERSED PRIME]
-"\u2035" => "\'"
-
-# ‹  [SINGLE LEFT-POINTING ANGLE QUOTATION MARK]
-"\u2039" => "\'"
-
-# ›  [SINGLE RIGHT-POINTING ANGLE QUOTATION MARK]
-"\u203A" => "\'"
-
-# ❛  [HEAVY SINGLE TURNED COMMA QUOTATION MARK ORNAMENT]
-"\u275B" => "\'"
-
-# ❜  [HEAVY SINGLE COMMA QUOTATION MARK ORNAMENT]
-"\u275C" => "\'"
-
-# '  [FULLWIDTH APOSTROPHE]
-"\uFF07" => "\'"
-
-# ‐  [HYPHEN]
-"\u2010" => "-"
-
-# ‑  [NON-BREAKING HYPHEN]
-"\u2011" => "-"
-
-# ‒  [FIGURE DASH]
-"\u2012" => "-"
-
-# –  [EN DASH]
-"\u2013" => "-"
-
-# —  [EM DASH]
-"\u2014" => "-"
-
-# ⁻  [SUPERSCRIPT MINUS]
-"\u207B" => "-"
-
-# ₋  [SUBSCRIPT MINUS]
-"\u208B" => "-"
-
-# -  [FULLWIDTH HYPHEN-MINUS]
-"\uFF0D" => "-"
-
-# ⁅  [LEFT SQUARE BRACKET WITH QUILL]
-"\u2045" => "["
-
-# ❲  [LIGHT LEFT TORTOISE SHELL BRACKET ORNAMENT]
-"\u2772" => "["
-
-# [  [FULLWIDTH LEFT SQUARE BRACKET]
-"\uFF3B" => "["
-
-# ⁆  [RIGHT SQUARE BRACKET WITH QUILL]
-"\u2046" => "]"
-
-# ❳  [LIGHT RIGHT TORTOISE SHELL BRACKET ORNAMENT]
-"\u2773" => "]"
-
-# ]  [FULLWIDTH RIGHT SQUARE BRACKET]
-"\uFF3D" => "]"
-
-# ⁽  [SUPERSCRIPT LEFT PARENTHESIS]
-"\u207D" => "("
-
-# ₍  [SUBSCRIPT LEFT PARENTHESIS]
-"\u208D" => "("
-
-# ❨  [MEDIUM LEFT PARENTHESIS ORNAMENT]
-"\u2768" => "("
-
-# ❪  [MEDIUM FLATTENED LEFT PARENTHESIS ORNAMENT]
-"\u276A" => "("
-
-# (  [FULLWIDTH LEFT PARENTHESIS]
-"\uFF08" => "("
-
-# ⸨  [LEFT DOUBLE PARENTHESIS]
-"\u2E28" => "(("
-
-# ⁾  [SUPERSCRIPT RIGHT PARENTHESIS]
-"\u207E" => ")"
-
-# ₎  [SUBSCRIPT RIGHT PARENTHESIS]
-"\u208E" => ")"
-
-# ❩  [MEDIUM RIGHT PARENTHESIS ORNAMENT]
-"\u2769" => ")"
-
-# ❫  [MEDIUM FLATTENED RIGHT PARENTHESIS ORNAMENT]
-"\u276B" => ")"
-
-# )  [FULLWIDTH RIGHT PARENTHESIS]
-"\uFF09" => ")"
-
-# ⸩  [RIGHT DOUBLE PARENTHESIS]
-"\u2E29" => "))"
-
-# ❬  [MEDIUM LEFT-POINTING ANGLE BRACKET ORNAMENT]
-"\u276C" => "<"
-
-# ❰  [HEAVY LEFT-POINTING ANGLE BRACKET ORNAMENT]
-"\u2770" => "<"
-
-# <  [FULLWIDTH LESS-THAN SIGN]
-"\uFF1C" => "<"
-
-# ❭  [MEDIUM RIGHT-POINTING ANGLE BRACKET ORNAMENT]
-"\u276D" => ">"
-
-# ❱  [HEAVY RIGHT-POINTING ANGLE BRACKET ORNAMENT]
-"\u2771" => ">"
-
-# >  [FULLWIDTH GREATER-THAN SIGN]
-"\uFF1E" => ">"
-
-# ❴  [MEDIUM LEFT CURLY BRACKET ORNAMENT]
-"\u2774" => "{"
-
-# {  [FULLWIDTH LEFT CURLY BRACKET]
-"\uFF5B" => "{"
-
-# ❵  [MEDIUM RIGHT CURLY BRACKET ORNAMENT]
-"\u2775" => "}"
-
-# }  [FULLWIDTH RIGHT CURLY BRACKET]
-"\uFF5D" => "}"
-
-# ⁺  [SUPERSCRIPT PLUS SIGN]
-"\u207A" => "+"
-
-# ₊  [SUBSCRIPT PLUS SIGN]
-"\u208A" => "+"
-
-# +  [FULLWIDTH PLUS SIGN]
-"\uFF0B" => "+"
-
-# ⁼  [SUPERSCRIPT EQUALS SIGN]
-"\u207C" => "="
-
-# ₌  [SUBSCRIPT EQUALS SIGN]
-"\u208C" => "="
-
-# =  [FULLWIDTH EQUALS SIGN]
-"\uFF1D" => "="
-
-# !  [FULLWIDTH EXCLAMATION MARK]
-"\uFF01" => "!"
-
-# ‼  [DOUBLE EXCLAMATION MARK]
-"\u203C" => "!!"
-
-# ⁉  [EXCLAMATION QUESTION MARK]
-"\u2049" => "!?"
-
-# #  [FULLWIDTH NUMBER SIGN]
-"\uFF03" => "#"
-
-# $  [FULLWIDTH DOLLAR SIGN]
-"\uFF04" => "$"
-
-# ⁒  [COMMERCIAL MINUS SIGN]
-"\u2052" => "%"
-
-# %  [FULLWIDTH PERCENT SIGN]
-"\uFF05" => "%"
-
-# &  [FULLWIDTH AMPERSAND]
-"\uFF06" => "&"
-
-# ⁎  [LOW ASTERISK]
-"\u204E" => "*"
-
-# *  [FULLWIDTH ASTERISK]
-"\uFF0A" => "*"
-
-# ,  [FULLWIDTH COMMA]
-"\uFF0C" => ","
-
-# .  [FULLWIDTH FULL STOP]
-"\uFF0E" => "."
-
-# ⁄  [FRACTION SLASH]
-"\u2044" => "/"
-
-# /  [FULLWIDTH SOLIDUS]
-"\uFF0F" => "/"
-
-# :  [FULLWIDTH COLON]
-"\uFF1A" => ":"
-
-# ⁏  [REVERSED SEMICOLON]
-"\u204F" => ";"
-
-# ;  [FULLWIDTH SEMICOLON]
-"\uFF1B" => ";"
-
-# ?  [FULLWIDTH QUESTION MARK]
-"\uFF1F" => "?"
-
-# ⁇  [DOUBLE QUESTION MARK]
-"\u2047" => "??"
-
-# ⁈  [QUESTION EXCLAMATION MARK]
-"\u2048" => "?!"
-
-# @  [FULLWIDTH COMMERCIAL AT]
-"\uFF20" => "@"
-
-# \  [FULLWIDTH REVERSE SOLIDUS]
-"\uFF3C" => "\\"
-
-# ‸  [CARET]
-"\u2038" => "^"
-
-# ^  [FULLWIDTH CIRCUMFLEX ACCENT]
-"\uFF3E" => "^"
-
-# _  [FULLWIDTH LOW LINE]
-"\uFF3F" => "_"
-
-# ⁓  [SWUNG DASH]
-"\u2053" => "~"
-
-# ~  [FULLWIDTH TILDE]
-"\uFF5E" => "~"
-
-################################################################
-# Below is the Perl script used to generate the above mappings #
-# from ASCIIFoldingFilter.java:                                #
-################################################################
-#
-# #!/usr/bin/perl
-#
-# use warnings;
-# use strict;
-# 
-# my @source_chars = ();
-# my @source_char_descriptions = ();
-# my $target = '';
-# 
-# while (<>) {
-#   if (/case\s+'(\\u[A-F0-9]+)':\s*\/\/\s*(.*)/i) {
-#     push @source_chars, $1;
-#     push @source_char_descriptions, $2;
-#     next;
-#   }
-#   if (/output\[[^\]]+\]\s*=\s*'(\\'|\\\\|.)'/) {
-#     $target .= $1;
-#     next;
-#   }
-#   if (/break;/) {
-#     $target = "\\\"" if ($target eq '"');
-#     for my $source_char_num (0..$#source_chars) {
-#       print "# $source_char_descriptions[$source_char_num]\n";
-#       print "\"$source_chars[$source_char_num]\" => \"$target\"\n\n";
-#     }
-#     @source_chars = ();
-#     @source_char_descriptions = ();
-#     $target = '';
-#   }
-# }
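-#
-# For reference, a sketch of the ASCIIFoldingFilter.java input the regexes
-# above match; the exact index expression and spacing are illustrative
-# assumptions, not lines copied from the Java source:
-#
-#   case '\u00C0':  // LATIN CAPITAL LETTER A WITH GRAVE
-#     output[outputPos++] = 'A';
-#     break;
-#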
diff --git a/solr/example/example-DIH/solr/db/conf/mapping-ISOLatin1Accent.txt b/solr/example/example-DIH/solr/db/conf/mapping-ISOLatin1Accent.txt
deleted file mode 100644
index ede7742..0000000
--- a/solr/example/example-DIH/solr/db/conf/mapping-ISOLatin1Accent.txt
+++ /dev/null
@@ -1,246 +0,0 @@
-# The ASF licenses this file to You under the Apache License, Version 2.0
-# (the "License"); you may not use this file except in compliance with
-# the License.  You may obtain a copy of the License at
-#
-#     http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-# Syntax:
-#   "source" => "target"
-#     "source".length() > 0 (source cannot be empty.)
-#     "target".length() >= 0 (target can be empty.)
-
-# example:
-#   "À" => "A"
-#   "\u00C0" => "A"
-#   "\u00C0" => "\u0041"
-#   "ß" => "ss"
-#   "\t" => " "
-#   "\n" => ""
-
-# À => A
-"\u00C0" => "A"
-
-# Á => A
-"\u00C1" => "A"
-
-# Â => A
-"\u00C2" => "A"
-
-# Ã => A
-"\u00C3" => "A"
-
-# Ä => A
-"\u00C4" => "A"
-
-# Å => A
-"\u00C5" => "A"
-
-# Æ => AE
-"\u00C6" => "AE"
-
-# Ç => C
-"\u00C7" => "C"
-
-# È => E
-"\u00C8" => "E"
-
-# É => E
-"\u00C9" => "E"
-
-# Ê => E
-"\u00CA" => "E"
-
-# Ë => E
-"\u00CB" => "E"
-
-# Ì => I
-"\u00CC" => "I"
-
-# Í => I
-"\u00CD" => "I"
-
-# Î => I
-"\u00CE" => "I"
-
-# Ï => I
-"\u00CF" => "I"
-
-# IJ => IJ
-"\u0132" => "IJ"
-
-# Ð => D
-"\u00D0" => "D"
-
-# Ñ => N
-"\u00D1" => "N"
-
-# Ò => O
-"\u00D2" => "O"
-
-# Ó => O
-"\u00D3" => "O"
-
-# Ô => O
-"\u00D4" => "O"
-
-# Õ => O
-"\u00D5" => "O"
-
-# Ö => O
-"\u00D6" => "O"
-
-# Ø => O
-"\u00D8" => "O"
-
-# Œ => OE
-"\u0152" => "OE"
-
-# Þ => TH
-"\u00DE" => "TH"
-
-# Ù => U
-"\u00D9" => "U"
-
-# Ú => U
-"\u00DA" => "U"
-
-# Û => U
-"\u00DB" => "U"
-
-# Ü => U
-"\u00DC" => "U"
-
-# Ý => Y
-"\u00DD" => "Y"
-
-# Ÿ => Y
-"\u0178" => "Y"
-
-# à => a
-"\u00E0" => "a"
-
-# á => a
-"\u00E1" => "a"
-
-# â => a
-"\u00E2" => "a"
-
-# ã => a
-"\u00E3" => "a"
-
-# ä => a
-"\u00E4" => "a"
-
-# å => a
-"\u00E5" => "a"
-
-# æ => ae
-"\u00E6" => "ae"
-
-# ç => c
-"\u00E7" => "c"
-
-# è => e
-"\u00E8" => "e"
-
-# é => e
-"\u00E9" => "e"
-
-# ê => e
-"\u00EA" => "e"
-
-# ë => e
-"\u00EB" => "e"
-
-# ì => i
-"\u00EC" => "i"
-
-# í => i
-"\u00ED" => "i"
-
-# î => i
-"\u00EE" => "i"
-
-# ï => i
-"\u00EF" => "i"
-
-# ij => ij
-"\u0133" => "ij"
-
-# ð => d
-"\u00F0" => "d"
-
-# ñ => n
-"\u00F1" => "n"
-
-# ò => o
-"\u00F2" => "o"
-
-# ó => o
-"\u00F3" => "o"
-
-# ô => o
-"\u00F4" => "o"
-
-# õ => o
-"\u00F5" => "o"
-
-# ö => o
-"\u00F6" => "o"
-
-# ø => o
-"\u00F8" => "o"
-
-# œ => oe
-"\u0153" => "oe"
-
-# ß => ss
-"\u00DF" => "ss"
-
-# þ => th
-"\u00FE" => "th"
-
-# ù => u
-"\u00F9" => "u"
-
-# ú => u
-"\u00FA" => "u"
-
-# û => u
-"\u00FB" => "u"
-
-# ü => u
-"\u00FC" => "u"
-
-# ý => y
-"\u00FD" => "y"
-
-# ÿ => y
-"\u00FF" => "y"
-
-# ff => ff
-"\uFB00" => "ff"
-
-# fi => fi
-"\uFB01" => "fi"
-
-# fl => fl
-"\uFB02" => "fl"
-
-# ffi => ffi
-"\uFB03" => "ffi"
-
-# ffl => ffl
-"\uFB04" => "ffl"
-
-# ſt => ft
-"\uFB05" => "ft"
-
-# st => st
-"\uFB06" => "st"
diff --git a/solr/example/example-DIH/solr/db/conf/protwords.txt b/solr/example/example-DIH/solr/db/conf/protwords.txt
deleted file mode 100644
index 1dfc0ab..0000000
--- a/solr/example/example-DIH/solr/db/conf/protwords.txt
+++ /dev/null
@@ -1,21 +0,0 @@
-# The ASF licenses this file to You under the Apache License, Version 2.0
-# (the "License"); you may not use this file except in compliance with
-# the License.  You may obtain a copy of the License at
-#
-#     http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-#-----------------------------------------------------------------------
-# Use a protected word file to protect against the stemmer reducing two
-# unrelated words to the same base word.
-
-# Some non-words that normally won't be encountered,
-# just to test that they won't be stemmed.
-dontstems
-zwhacky
-
diff --git a/solr/example/example-DIH/solr/db/conf/solrconfig.xml b/solr/example/example-DIH/solr/db/conf/solrconfig.xml
deleted file mode 100644
index 1127093..0000000
--- a/solr/example/example-DIH/solr/db/conf/solrconfig.xml
+++ /dev/null
@@ -1,1342 +0,0 @@
-<?xml version="1.0" encoding="UTF-8" ?>
-<!--
- Licensed to the Apache Software Foundation (ASF) under one or more
- contributor license agreements.  See the NOTICE file distributed with
- this work for additional information regarding copyright ownership.
- The ASF licenses this file to You under the Apache License, Version 2.0
- (the "License"); you may not use this file except in compliance with
- the License.  You may obtain a copy of the License at
-
-     http://www.apache.org/licenses/LICENSE-2.0
-
- Unless required by applicable law or agreed to in writing, software
- distributed under the License is distributed on an "AS IS" BASIS,
- WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- See the License for the specific language governing permissions and
- limitations under the License.
--->
-
-<!--
-     For more details about configurations options that may appear in
-     this file, see http://wiki.apache.org/solr/SolrConfigXml.
--->
-<config>
-  <!-- In all configuration below, a prefix of "solr." for class names
-       is an alias that causes solr to search appropriate packages,
-       including org.apache.solr.(search|update|request|core|analysis)
-
-       You may also specify a fully qualified Java classname if you
-       have your own custom plugins.
-    -->
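-
-  <!-- For illustration (a hedged example, not text from the original file):
-       class="solr.CaffeineCache" is shorthand that resolves to the fully
-       qualified class="org.apache.solr.search.CaffeineCache"; either form
-       may be used. -->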
-
-  <!-- Controls what version of Lucene various components of Solr
-       adhere to.  Generally, you want to use the latest version to
-       get all bug fixes and improvements. It is highly recommended
-       that you fully re-index after changing this setting as it can
-       affect both how text is indexed and queried.
-  -->
-  <luceneMatchVersion>9.0.0</luceneMatchVersion>
-
-  <!-- <lib/> directives can be used to instruct Solr to load any Jars
-       identified and use them to resolve any "plugins" specified in
-       your solrconfig.xml or schema.xml (ie: Analyzers, Request
-       Handlers, etc...).
-
-       All directories and paths are resolved relative to the
-       instanceDir.
-
-       Please note that <lib/> directives are processed in the order
-       that they appear in your solrconfig.xml file, and are "stacked"
-       on top of each other when building a ClassLoader - so if you have
-       plugin jars with dependencies on other jars, the "lower level"
-       dependency jars should be loaded first.
-
-       If a "./lib" directory exists in your instanceDir, all files
-       found in it are included as if you had used the following
-       syntax...
-
-              <lib dir="./lib" />
-    -->
-
-  <!-- A 'dir' option by itself adds any files found in the directory
-       to the classpath, this is useful for including all jars in a
-       directory.
-
-       When a 'regex' is specified in addition to a 'dir', only the
-       files in that directory which completely match the regex
-       (anchored on both ends) will be included.
-
-       If a 'dir' option (with or without a regex) is used and nothing
-       is found that matches, a warning will be logged.
-
-       The examples below can be used to load some solr-contribs along
-       with their external dependencies.
-    -->
-  <lib dir="${solr.install.dir:../../../..}/dist/" regex="solr-dataimporthandler-.*\.jar" />
-
-  <lib dir="${solr.install.dir:../../../..}/contrib/extraction/lib" regex=".*\.jar" />
-  <lib dir="${solr.install.dir:../../../..}/dist/" regex="solr-cell-\d.*\.jar" />
-
-  <lib dir="${solr.install.dir:../../../..}/contrib/langid/lib/" regex=".*\.jar" />
-  <lib dir="${solr.install.dir:../../../..}/dist/" regex="solr-langid-\d.*\.jar" />
-
-  <lib dir="${solr.install.dir:../../../..}/contrib/velocity/lib" regex=".*\.jar" />
-  <lib dir="${solr.install.dir:../../../..}/dist/" regex="solr-velocity-\d.*\.jar" />
-
-  <!-- an exact 'path' can be used instead of a 'dir' to specify a
-       specific jar file.  This will cause a serious error to be logged
-       if it can't be loaded.
-    -->
-  <!--
-     <lib path="../a-jar-that-does-not-exist.jar" />
-  -->
-
-  <!-- Data Directory
-
-       Used to specify an alternate directory to hold all index data
-       other than the default ./data under the Solr home.  If
-       replication is in use, this should match the replication
-       configuration.
-    -->
-  <dataDir>${solr.data.dir:}</dataDir>
-
-
-  <!-- The DirectoryFactory to use for indexes.
-
-       solr.StandardDirectoryFactory is filesystem
-       based and tries to pick the best implementation for the current
-       JVM and platform.  solr.NRTCachingDirectoryFactory, the default,
-       wraps solr.StandardDirectoryFactory and caches small files in memory
-       for better NRT performance.
-
-       One can force a particular implementation via solr.MMapDirectoryFactory
-       or solr.NIOFSDirectoryFactory.
-
-       solr.RAMDirectoryFactory is memory based and not persistent.
-    -->
-  <directoryFactory name="DirectoryFactory"
-                    class="${solr.directoryFactory:solr.NRTCachingDirectoryFactory}"/>
-
-  <!-- The CodecFactory for defining the format of the inverted index.
-       The default implementation is SchemaCodecFactory, which is the official Lucene
-       index format, but hooks into the schema to provide per-field customization of
-       the postings lists and per-document values in the fieldType element
-       (postingsFormat/docValuesFormat). Note that most of the alternative implementations
-       are experimental, so if you choose to customize the index format, it's a good
-       idea to convert back to the official format e.g. via IndexWriter.addIndexes(IndexReader)
-       before upgrading to a newer version to avoid unnecessary reindexing.
-  -->
-  <codecFactory class="solr.SchemaCodecFactory"/>
-
-  <!-- ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-       Index Config - These settings control low-level behavior of indexing
-       Most example settings here show the default value, but are commented
-       out, to more easily see where customizations have been made.
-
-       Note: This replaces <indexDefaults> and <mainIndex> from older versions
-       ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ -->
-  <indexConfig>
-    <!-- maxFieldLength was removed in 4.0. To get similar behavior, include a
-         LimitTokenCountFilterFactory in your fieldType definition. E.g.
-     <filter class="solr.LimitTokenCountFilterFactory" maxTokenCount="10000"/>
-    -->
-    <!-- Maximum time to wait for a write lock (ms) for an IndexWriter. Default: 1000 -->
-    <!-- <writeLockTimeout>1000</writeLockTimeout>  -->
-
-    <!-- Expert: Enabling the compound file format will use fewer files for the index,
-         using fewer file descriptors at the expense of decreased performance.
-         Default in Lucene is "true". Default in Solr is "false" (since 3.6). -->
-    <!-- <useCompoundFile>false</useCompoundFile> -->
-
-    <!-- ramBufferSizeMB sets the amount of RAM that may be used by Lucene
-         indexing for buffering added documents and deletions before they are
-         flushed to the Directory.
-         maxBufferedDocs sets a limit on the number of documents buffered
-         before flushing.
-         If both ramBufferSizeMB and maxBufferedDocs is set, then
-         Lucene will flush based on whichever limit is hit first.
-         The default is 100 MB.  -->
-    <!-- <ramBufferSizeMB>100</ramBufferSizeMB> -->
-    <!-- <maxBufferedDocs>1000</maxBufferedDocs> -->
-
-    <!-- Expert: Merge Policy
-         The Merge Policy in Lucene controls how merging of segments is done.
-         The default since Solr/Lucene 3.3 is TieredMergePolicy.
-         The default since Lucene 2.3 was the LogByteSizeMergePolicy,
-         Even older versions of Lucene used LogDocMergePolicy.
-     -->
-    <!--
-        <mergePolicyFactory class="solr.TieredMergePolicyFactory">
-          <int name="maxMergeAtOnce">10</int>
-          <int name="segmentsPerTier">10</int>
-        </mergePolicyFactory>
-     -->
-
-    <!-- Expert: Merge Scheduler
-         The Merge Scheduler in Lucene controls how merges are
-         performed.  The ConcurrentMergeScheduler (Lucene 2.3 default)
-         can perform merges in the background using separate threads.
-         The SerialMergeScheduler (Lucene 2.2 default) does not.
-     -->
-    <!--
-       <mergeScheduler class="org.apache.lucene.index.ConcurrentMergeScheduler"/>
-       -->
-
-    <!-- LockFactory
-
-         This option specifies which Lucene LockFactory implementation
-         to use.
-
-         single = SingleInstanceLockFactory - suggested for a
-                  read-only index or when there is no possibility of
-                  another process trying to modify the index.
-         native = NativeFSLockFactory - uses OS native file locking.
-                  Do not use when multiple solr webapps in the same
-                  JVM are attempting to share a single index.
-         simple = SimpleFSLockFactory  - uses a plain file for locking
-
-         Defaults: 'native' is the default for Solr 3.6 and later;
-                   otherwise 'simple' is the default.
-
-         More details on the nuances of each LockFactory...
-         http://wiki.apache.org/lucene-java/AvailableLockFactories
-    -->
-    <lockType>${solr.lock.type:native}</lockType>
-
-    <!-- Commit Deletion Policy
-         Custom deletion policies can be specified here. The class must
-         implement org.apache.lucene.index.IndexDeletionPolicy.
-
-         The default Solr IndexDeletionPolicy implementation supports
-         deleting index commit points on number of commits, age of
-         commit point and optimized status.
-
-         The latest commit point should always be preserved regardless
-         of the criteria.
-    -->
-    <!--
-    <deletionPolicy class="solr.SolrDeletionPolicy">
-    -->
-      <!-- The number of commit points to be kept -->
-      <!-- <str name="maxCommitsToKeep">1</str> -->
-      <!-- The number of optimized commit points to be kept -->
-      <!-- <str name="maxOptimizedCommitsToKeep">0</str> -->
-      <!--
-          Delete all commit points once they have reached the given age.
-          Supports DateMathParser syntax e.g.
-        -->
-      <!--
-         <str name="maxCommitAge">30MINUTES</str>
-         <str name="maxCommitAge">1DAY</str>
-      -->
-    <!--
-    </deletionPolicy>
-    -->
-
-    <!-- Lucene Infostream
-
-         To aid in advanced debugging, Lucene provides an "InfoStream"
-         of detailed information when indexing.
-
-         Setting the value to true will instruct the underlying Lucene
-         IndexWriter to write its info stream to solr's log. By default,
-         this is enabled here, and controlled through log4j2.xml
-      -->
-     <infoStream>true</infoStream>
-  </indexConfig>
-
-
-  <!-- JMX
-
-       This example enables JMX if and only if an existing MBeanServer
-       is found, use this if you want to configure JMX through JVM
-       parameters. Remove this to disable exposing Solr configuration
-       and statistics to JMX.
-
-       For more details see http://wiki.apache.org/solr/SolrJmx
-    -->
-  <jmx />
-  <!-- If you want to connect to a particular server, specify the
-       agentId
-    -->
-  <!-- <jmx agentId="myAgent" /> -->
-  <!-- If you want to start a new MBeanServer, specify the serviceUrl -->
-  <!-- <jmx serviceUrl="service:jmx:rmi:///jndi/rmi://localhost:9999/solr"/>
-    -->
-
-  <!-- The default high-performance update handler -->
-  <updateHandler class="solr.DirectUpdateHandler2">
-
-    <!-- Enables a transaction log, used for real-time get, durability,
-         and SolrCloud replica recovery.  The log can grow as big as
-         uncommitted changes to the index, so use of a hard autoCommit
-         is recommended (see below).
-         "dir" - the target directory for transaction logs, defaults to the
-                solr data directory.  -->
-    <updateLog>
-      <str name="dir">${solr.ulog.dir:}</str>
-    </updateLog>
-
-    <!-- AutoCommit
-
-         Perform a hard commit automatically under certain conditions.
-         Instead of enabling autoCommit, consider using "commitWithin"
-         when adding documents.
-
-         http://wiki.apache.org/solr/UpdateXmlMessages
-
-         maxDocs - Maximum number of documents to add since the last
-                   commit before automatically triggering a new commit.
-
-         maxTime - Maximum amount of time in ms that is allowed to pass
-                   since a document was added before automatically
-                   triggering a new commit.
-         openSearcher - if false, the commit causes recent index changes
-           to be flushed to stable storage, but does not cause a new
-           searcher to be opened to make those changes visible.
-
-         If the updateLog is enabled, then it's highly recommended to
-         have some sort of hard autoCommit to limit the log size.
-      -->
-     <autoCommit>
-       <maxTime>${solr.autoCommit.maxTime:15000}</maxTime>
-       <openSearcher>false</openSearcher>
-     </autoCommit>
-
-    <!-- softAutoCommit is like autoCommit except it causes a
-         'soft' commit which only ensures that changes are visible
-         but does not ensure that data is synced to disk.  This is
-         faster and more near-realtime friendly than a hard commit.
-      -->
-
-     <autoSoftCommit>
-       <maxTime>${solr.autoSoftCommit.maxTime:-1}</maxTime>
-     </autoSoftCommit>
-
-    <!-- Update Related Event Listeners
-
-         Various IndexWriter related events can trigger Listeners to
-         take actions.
-
-         postCommit - fired after every commit or optimize command
-         postOptimize - fired after every optimize command
-      -->
-
-  </updateHandler>
-
-  <!-- IndexReaderFactory
-
-       Use the following format to specify a custom IndexReaderFactory,
-       which allows for alternate IndexReader implementations.
-
-       ** Experimental Feature **
-
-       Please note - Using a custom IndexReaderFactory may prevent
-       certain other features from working. The API to
-       IndexReaderFactory may change without warning or may even be
-       removed from future releases if the problems cannot be
-       resolved.
-
-
-       ** Features that may not work with custom IndexReaderFactory **
-
-       The ReplicationHandler assumes a disk-resident index. Using a
-       custom IndexReader implementation may cause incompatibility
-       with ReplicationHandler and may cause replication to not work
-       correctly. See SOLR-1366 for details.
-
-    -->
-  <!--
-  <indexReaderFactory name="IndexReaderFactory" class="package.class">
-    <str name="someArg">Some Value</str>
-  </indexReaderFactory>
-  -->
-
-  <!-- ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-       Query section - these settings control query time things like caches
-       ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ -->
-  <query>
-    <!-- Max Boolean Clauses
-
-         Maximum number of clauses in each BooleanQuery; an exception
-         is thrown if this limit is exceeded.
-
-         ** WARNING **
-
-         This option actually modifies a global Lucene property that
-         will affect all SolrCores.  If multiple solrconfig.xml files
-         disagree on this property, the value at any given moment will
-         be based on the last SolrCore to be initialized.
-
-      -->
-    <maxBooleanClauses>${solr.max.booleanClauses:1024}</maxBooleanClauses>
-
-
-    <!-- Solr Internal Query Caches
-         Starting with Solr 9.0, the default cache implementation is CaffeineCache.
-    -->
-
-    <!-- Filter Cache
-
-         Cache used by SolrIndexSearcher for filters (DocSets),
-         unordered sets of *all* documents that match a query.  When a
-         new searcher is opened, its caches may be prepopulated or
-         "autowarmed" using data from caches in the old searcher.
-         autowarmCount is the number of items to prepopulate.
-
-         Parameters:
-           class - the SolrCache implementation
-           size - the maximum number of entries in the cache
-           initialSize - the initial capacity (number of entries) of
-               the cache.  (see java.util.HashMap)
-           autowarmCount - the number of entries to prepopulate from
-               an old cache.
-      -->
-    <filterCache class="solr.CaffeineCache"
-                 size="512"
-                 initialSize="512"
-                 autowarmCount="0"/>
-
-    <!-- Query Result Cache
-
-         Caches results of searches - ordered lists of document ids
-         (DocList) based on a query, a sort, and the range of documents requested.
-      -->
-    <queryResultCache class="solr.CaffeineCache"
-                     size="512"
-                     initialSize="512"
-                     autowarmCount="0"/>
-
-    <!-- Document Cache
-
-         Caches Lucene Document objects (the stored fields for each
-         document).  Since Lucene internal document ids are transient,
-         this cache will not be autowarmed.
-      -->
-    <documentCache class="solr.CaffeineCache"
-                   size="512"
-                   initialSize="512"
-                   autowarmCount="0"/>
-
-    <!-- custom cache currently used by block join -->
-    <cache name="perSegFilter"
-      class="solr.search.CaffeineCache"
-      size="10"
-      initialSize="0"
-      autowarmCount="10"
-      regenerator="solr.NoOpRegenerator" />
-
-    <!-- Field Value Cache
-
-         Cache used to hold field values that are quickly accessible
-         by document id.  The fieldValueCache is created by default
-         even if not configured here.
-      -->
-    <!--
-       <fieldValueCache class="solr.CaffeineCache"
-                        size="512"
-                        autowarmCount="128"
-                        showItems="32" />
-      -->
-
-    <!-- Custom Cache
-
-         Example of a generic cache.  These caches may be accessed by
-         name through SolrIndexSearcher.getCache(), cacheLookup(), and
-         cacheInsert().  The purpose is to enable easy caching of
-         user/application level data.  The regenerator argument should
-         be specified as an implementation of solr.CacheRegenerator
-         if autowarming is desired.
-      -->
-    <!--
-       <cache name="myUserCache"
-              class="solr.CaffeineCache"
-              size="4096"
-              initialSize="1024"
-              autowarmCount="1024"
-              regenerator="com.mycompany.MyRegenerator"
-              />
-      -->
-
-
-    <!-- Lazy Field Loading
-
-         If true, stored fields that are not requested will be loaded
-         lazily.  This can result in a significant speed improvement
-         if the usual case is to not load all stored fields,
-         especially if the skipped fields are large compressed text
-         fields.
-    -->
-    <enableLazyFieldLoading>true</enableLazyFieldLoading>
-
-   <!-- Use Filter For Sorted Query
-
-        A possible optimization that attempts to use a filter to
-        satisfy a search.  If the requested sort does not include
-        score, then the filterCache will be checked for a filter
-        matching the query. If found, the filter will be used as the
-        source of document ids, and then the sort will be applied to
-        that.
-
-        For most situations, this will not be useful unless you
-        frequently get the same search repeatedly with different sort
-        options, and none of them ever use "score".
-     -->
-   <!--
-      <useFilterForSortedQuery>true</useFilterForSortedQuery>
-     -->
-
-   <!-- Result Window Size
-
-        An optimization for use with the queryResultCache.  When a search
-        is requested, a superset of the requested number of document ids
-        are collected.  For example, if a search for a particular query
-        requests matching documents 10 through 19, and queryResultWindowSize is 50,
-        then documents 0 through 49 will be collected and cached.  Any further
-        requests in that range can be satisfied via the cache.
-     -->
-   <queryResultWindowSize>20</queryResultWindowSize>
-
-   <!-- Maximum number of documents to cache for any entry in the
-        queryResultCache.
-     -->
-   <queryResultMaxDocsCached>200</queryResultMaxDocsCached>
-
-   <!-- Query Related Event Listeners
-
-        Various IndexSearcher related events can trigger Listeners to
-        take actions.
-
-        newSearcher - fired whenever a new searcher is being prepared
-        and there is a current searcher handling requests (aka
-        registered).  It can be used to prime certain caches to
-        prevent long request times for certain requests.
-
-        firstSearcher - fired whenever a new searcher is being
-        prepared but there is no current registered searcher to handle
-        requests or to gain autowarming data from.
-
-
-     -->
-    <!-- QuerySenderListener takes an array of NamedList and executes a
-         local query request for each NamedList in sequence.
-      -->
-    <listener event="newSearcher" class="solr.QuerySenderListener">
-      <arr name="queries">
-        <!--
-           <lst><str name="q">solr</str><str name="sort">price asc</str></lst>
-           <lst><str name="q">rocks</str><str name="sort">weight asc</str></lst>
-          -->
-      </arr>
-    </listener>
-    <listener event="firstSearcher" class="solr.QuerySenderListener">
-      <arr name="queries">
-        <lst>
-          <str name="q">static firstSearcher warming in solrconfig.xml</str>
-        </lst>
-      </arr>
-    </listener>
-
-    <!-- Use Cold Searcher
-
-         If a search request comes in and there is no current
-         registered searcher, then immediately register the still
-         warming searcher and use it.  If "false" then all requests
-         will block until the first searcher is done warming.
-      -->
-    <useColdSearcher>false</useColdSearcher>
-
-  </query>
-
-
-  <!-- Request Dispatcher
-
-       This section contains instructions for how the SolrDispatchFilter
-       should behave when processing requests for this SolrCore.
-    -->
-  <requestDispatcher>
-    <!-- Request Parsing
-
-         These settings indicate how Solr Requests may be parsed, and
-         what restrictions may be placed on the ContentStreams from
-         those requests
-
-         enableRemoteStreaming - enables use of the stream.file
-         and stream.url parameters for specifying remote streams.
-
-         multipartUploadLimitInKB - specifies the max size (in KiB) of
-         Multipart File Uploads that Solr will allow in a Request.
-
-         formdataUploadLimitInKB - specifies the max size (in KiB) of
-         form data (application/x-www-form-urlencoded) sent via
-         POST. You can use POST to pass request parameters not
-         fitting into the URL.
-
-         addHttpRequestToContext - if set to true, it will instruct
-         the requestParsers to include the original HttpServletRequest
-         object in the context map of the SolrQueryRequest under the
-         key "httpRequest". It will not be used by any of the existing
-         Solr components, but may be useful when developing custom
-         plugins.
-
-         *** WARNING ***
-         Before enabling remote streaming, you should make sure your
-         system has authentication enabled.
-
-    <requestParsers enableRemoteStreaming="false"
-                    multipartUploadLimitInKB="-1"
-                    formdataUploadLimitInKB="-1"
-                    addHttpRequestToContext="false"/>
-      -->
-
-    <!-- HTTP Caching
-
-         Set HTTP caching related parameters (for proxy caches and clients).
-
-         The options below instruct Solr not to output any HTTP Caching
-         related headers
-      -->
-    <httpCaching never304="true" />
-    <!-- If you include a <cacheControl> directive, it will be used to
-         generate a Cache-Control header (as well as an Expires header
-         if the value contains "max-age=")
-
-         By default, no Cache-Control header is generated.
-
-         You can use the <cacheControl> option even if you have set
-         never304="true"
-      -->
-    <!--
-       <httpCaching never304="true" >
-         <cacheControl>max-age=30, public</cacheControl>
-       </httpCaching>
-      -->
-    <!-- To enable Solr to respond with automatically generated HTTP
-         Caching headers, and to respond to Cache Validation requests
-         correctly, set the value of never304="false"
-
-         This will cause Solr to generate Last-Modified and ETag
-         headers based on the properties of the Index.
-
-         The following options can also be specified to affect the
-         values of these headers...
-
-         lastModFrom - the default value is "openTime" which means the
-         Last-Modified value (and validation against If-Modified-Since
-         requests) will all be relative to when the current Searcher
-         was opened.  You can change it to lastModFrom="dirLastMod" if
-         you want the value to exactly correspond to when the physical
-         index was last modified.
-
-         etagSeed="..." is an option you can change to force the ETag
-         header (and validation against If-None-Match requests) to be
-         different even if the index has not changed (ie: when making
-         significant changes to your config file)
-
-         (lastModFrom and etagSeed are both ignored if you use
-         the never304="true" option)
-      -->
-    <!--
-       <httpCaching lastModFrom="openTime"
-                    etagSeed="Solr">
-         <cacheControl>max-age=30, public</cacheControl>
-       </httpCaching>
-      -->
-  </requestDispatcher>
-
-  <!-- Request Handlers
-
-       http://wiki.apache.org/solr/SolrRequestHandler
-
-       Incoming queries will be dispatched to a specific handler by name
-       based on the path specified in the request.
-
-       If a Request Handler is declared with startup="lazy", then it will
-       not be initialized until the first request that uses it.
-
-    -->
-
-  <requestHandler name="/dataimport" class="solr.DataImportHandler">
-    <lst name="defaults">
-      <str name="config">db-data-config.xml</str>
-    </lst>
-  </requestHandler>
-
-  <!-- SearchHandler
-
-       http://wiki.apache.org/solr/SearchHandler
-
-       For processing Search Queries, the primary Request Handler
-       provided with Solr is "SearchHandler" It delegates to a sequent
-       of SearchComponents (see below) and supports distributed
-       queries across multiple shards
-    -->
-  <requestHandler name="/select" class="solr.SearchHandler">
-    <!-- default values for query parameters can be specified, these
-         will be overridden by parameters in the request
-      -->
-     <lst name="defaults">
-       <str name="echoParams">explicit</str>
-       <int name="rows">10</int>
-       <str name="df">text</str>
-       <!-- Change from JSON to XML format (the default prior to Solr 7.0)
-          <str name="wt">xml</str> 
-         -->
-     </lst>
-    <!-- In addition to defaults, "appends" params can be specified
-         to identify values which should be appended to the list of
-         multi-val params from the query (or the existing "defaults").
-      -->
-    <!-- In this example, the param "fq=instock:true" would be appended to
-         any query time fq params the user may specify, as a mechanism for
-         partitioning the index, independent of any user selected filtering
-         that may also be desired (perhaps as a result of faceted searching).
-
-         NOTE: there is *absolutely* nothing a client can do to prevent these
-         "appends" values from being used, so don't use this mechanism
-         unless you are sure you always want it.
-      -->
-    <!--
-       <lst name="appends">
-         <str name="fq">inStock:true</str>
-       </lst>
-      -->
-    <!-- "invariants" are a way of letting the Solr maintainer lock down
-         the options available to Solr clients.  Any params values
-         specified here are used regardless of what values may be specified
-         in either the query, the "defaults", or the "appends" params.
-
-         In this example, the facet.field and facet.query params would
-         be fixed, limiting the facets clients can use.  Faceting is
-         not turned on by default - but if the client does specify
-         facet=true in the request, these are the only facets they
-         will be able to see counts for; regardless of what other
-         facet.field or facet.query params they may specify.
-
-         NOTE: there is *absolutely* nothing a client can do to prevent these
-         "invariants" values from being used, so don't use this mechanism
-         unless you are sure you always want it.
-      -->
-    <!--
-       <lst name="invariants">
-         <str name="facet.field">cat</str>
-         <str name="facet.field">manu_exact</str>
-         <str name="facet.query">price:[* TO 500]</str>
-         <str name="facet.query">price:[500 TO *]</str>
-       </lst>
-      -->
-    <!-- If the default list of SearchComponents is not desired, that
-         list can either be overridden completely, or components can be
-         prepended or appended to the default list.  (see below)
-      -->
-    <!--
-       <arr name="components">
-         <str>nameOfCustomComponent1</str>
-         <str>nameOfCustomComponent2</str>
-       </arr>
-      -->
-    </requestHandler>
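
As a usage sketch, the "defaults" above are just that: any of them can be overridden per request. Assuming the example `db` core at the default local address, this overrides `rows` while the `df` default stays in effect:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class SelectDemo {
  public static void main(String[] args) throws Exception {
    // rows=5 overrides the handler's default of 10; df=text still applies.
    HttpRequest req = HttpRequest.newBuilder(URI.create(
        "http://localhost:8983/solr/db/select?q=*:*&rows=5")).build();
    System.out.println(HttpClient.newHttpClient()
        .send(req, HttpResponse.BodyHandlers.ofString()).body());
  }
}
```
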
-
-  <!-- A request handler that returns indented JSON by default -->
-  <requestHandler name="/query" class="solr.SearchHandler">
-     <lst name="defaults">
-       <str name="echoParams">explicit</str>
-       <str name="wt">json</str>
-       <str name="indent">true</str>
-       <str name="df">text</str>
-     </lst>
-  </requestHandler>
-
-
-  <!-- A Robust Example
-
-       This example SearchHandler declaration shows off usage of the
-       SearchHandler with many defaults declared
-
-       Note that the same Request Handler class (SearchHandler) can
-       be registered multiple times with different names (and
-       different init parameters)
-    -->
-  <requestHandler name="/browse" class="solr.SearchHandler">
-    <lst name="defaults">
-      <str name="echoParams">explicit</str>
-
-      <!-- VelocityResponseWriter settings -->
-      <str name="wt">velocity</str>
-      <str name="v.template">browse</str>
-      <str name="v.layout">layout</str>
-
-      <!-- Query settings -->
-      <str name="defType">edismax</str>
-      <str name="q.alt">*:*</str>
-      <str name="rows">10</str>
-      <str name="fl">*,score</str>
-
-      <!-- Faceting defaults -->
-      <str name="facet">on</str>
-      <str name="facet.mincount">1</str>
-    </lst>
-  </requestHandler>
-
-  <initParams path="/update/**,/query,/select,/tvrh,/elevate,/spell,/browse">
-    <lst name="defaults">
-      <str name="df">text</str>
-    </lst>
-  </initParams>
-
-  <!-- Solr Cell Update Request Handler
-
-       http://wiki.apache.org/solr/ExtractingRequestHandler
-
-    -->
-  <requestHandler name="/update/extract"
-                  startup="lazy"
-                  class="solr.extraction.ExtractingRequestHandler" >
-    <lst name="defaults">
-      <str name="lowernames">true</str>
-      <str name="uprefix">ignored_</str>
-
-      <!-- capture link hrefs but ignore div attributes -->
-      <str name="captureAttr">true</str>
-      <str name="fmap.a">links</str>
-      <str name="fmap.div">ignored_</str>
-    </lst>
-  </requestHandler>
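
A hedged sketch of feeding a binary document to this handler, assuming the local `db` core; `sample.pdf` is a placeholder file, `literal.id` supplies the uniqueKey value, and extracted fields with no schema match pick up the `ignored_` prefix per `uprefix` above:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.file.Path;

public class ExtractDemo {
  public static void main(String[] args) throws Exception {
    // POST a placeholder PDF; literal.id sets the document id, commit=true makes it visible.
    HttpRequest req = HttpRequest.newBuilder(URI.create(
          "http://localhost:8983/solr/db/update/extract?literal.id=doc1&commit=true"))
        .header("Content-Type", "application/pdf")
        .POST(HttpRequest.BodyPublishers.ofFile(Path.of("sample.pdf")))
        .build();
    System.out.println(HttpClient.newHttpClient()
        .send(req, HttpResponse.BodyHandlers.ofString()).body());
  }
}
```
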
-
-  <!-- Search Components
-
-       Search components are registered to SolrCore and used by
-       instances of SearchHandler (which can access them by name)
-
-       By default, the following components are available:
-
-       <searchComponent name="query"     class="solr.QueryComponent" />
-       <searchComponent name="facet"     class="solr.FacetComponent" />
-       <searchComponent name="mlt"       class="solr.MoreLikeThisComponent" />
-       <searchComponent name="highlight" class="solr.HighlightComponent" />
-       <searchComponent name="stats"     class="solr.StatsComponent" />
-       <searchComponent name="debug"     class="solr.DebugComponent" />
-
-       Default configuration in a requestHandler would look like:
-
-       <arr name="components">
-         <str>query</str>
-         <str>facet</str>
-         <str>mlt</str>
-         <str>highlight</str>
-         <str>stats</str>
-         <str>debug</str>
-       </arr>
-
-       If you register a searchComponent to one of the standard names,
-       that will be used instead of the default.
-
-       To insert components before or after the 'standard' components, use:
-
-       <arr name="first-components">
-         <str>myFirstComponentName</str>
-       </arr>
-
-       <arr name="last-components">
-         <str>myLastComponentName</str>
-       </arr>
-
-       NOTE: The component registered with the name "debug" will
-       always be executed after the "last-components"
-
-     -->
-
-   <!-- Spell Check
-
-        The spell check component can return a list of alternative spelling
-        suggestions.
-
-        http://wiki.apache.org/solr/SpellCheckComponent
-     -->
-  <searchComponent name="spellcheck" class="solr.SpellCheckComponent">
-
-    <str name="queryAnalyzerFieldType">text_general</str>
-
-    <!-- Multiple "Spell Checkers" can be declared and used by this
-         component
-      -->
-
-    <!-- a spellchecker built from a field of the main index -->
-    <lst name="spellchecker">
-      <str name="name">default</str>
-      <str name="field">text</str>
-      <str name="classname">solr.DirectSolrSpellChecker</str>
-      <!-- the spellcheck distance measure used; the default is the internal Levenshtein -->
-      <str name="distanceMeasure">internal</str>
-      <!-- minimum accuracy needed to be considered a valid spellcheck suggestion -->
-      <float name="accuracy">0.5</float>
-      <!-- the maximum #edits we consider when enumerating terms: can be 1 or 2 -->
-      <int name="maxEdits">2</int>
-      <!-- the minimum shared prefix when enumerating terms -->
-      <int name="minPrefix">1</int>
-      <!-- maximum number of inspections per result. -->
-      <int name="maxInspections">5</int>
-      <!-- minimum length of a query term to be considered for correction -->
-      <int name="minQueryLength">4</int>
-      <!-- maximum fraction of documents a query term may appear in and still be considered for correction -->
-      <float name="maxQueryFrequency">0.01</float>
-      <!-- uncomment this to require suggestions to occur in 1% of the documents
-        <float name="thresholdTokenFrequency">.01</float>
-      -->
-    </lst>
-
-    <!-- a spellchecker that can break or combine words.  See "/spell" handler below for usage -->
-    <lst name="spellchecker">
-      <str name="name">wordbreak</str>
-      <str name="classname">solr.WordBreakSolrSpellChecker</str>
-      <str name="field">name</str>
-      <str name="combineWords">true</str>
-      <str name="breakWords">true</str>
-      <int name="maxChanges">10</int>
-    </lst>
-
-    <!-- a spellchecker that uses a different distance measure -->
-    <!--
-       <lst name="spellchecker">
-         <str name="name">jarowinkler</str>
-         <str name="field">spell</str>
-         <str name="classname">solr.DirectSolrSpellChecker</str>
-         <str name="distanceMeasure">
-           org.apache.lucene.search.spell.JaroWinklerDistance
-         </str>
-       </lst>
-     -->
-
-    <!-- a spellchecker that uses an alternate comparator
-
-         comparatorClass can be one of:
-          1. score (default)
-          2. freq (Frequency first, then score)
-          3. A fully qualified class name
-      -->
-    <!--
-       <lst name="spellchecker">
-         <str name="name">freq</str>
-         <str name="field">lowerfilt</str>
-         <str name="classname">solr.DirectSolrSpellChecker</str>
-         <str name="comparatorClass">freq</str>
-      -->
-
-    <!-- A spellchecker that reads the list of words from a file -->
-    <!--
-       <lst name="spellchecker">
-         <str name="classname">solr.FileBasedSpellChecker</str>
-         <str name="name">file</str>
-         <str name="sourceLocation">spellings.txt</str>
-         <str name="characterEncoding">UTF-8</str>
-         <str name="spellcheckIndexDir">spellcheckerFile</str>
-       </lst>
-      -->
-  </searchComponent>
-
-  <!-- A request handler for demonstrating the spellcheck component.
-
-       NOTE: This is purely an example.  The whole purpose of the
-       SpellCheckComponent is to hook it into the request handler that
-       handles your normal user queries so that a separate request is
-       not needed to get suggestions.
-
-       IN OTHER WORDS, THERE IS A REALLY GOOD CHANCE THE SETUP BELOW IS
-       NOT WHAT YOU WANT FOR YOUR PRODUCTION SYSTEM!
-
-       See http://wiki.apache.org/solr/SpellCheckComponent for details
-       on the request parameters.
-    -->
-  <requestHandler name="/spell" class="solr.SearchHandler" startup="lazy">
-    <lst name="defaults">
-      <str name="df">text</str>
-      <!-- Solr will use suggestions from both the 'default' and the
-           'wordbreak' spellcheckers and combine them.  Collations
-           (re-written queries) can include a combination of
-           corrections from both spellcheckers -->
-      <str name="spellcheck.dictionary">default</str>
-      <str name="spellcheck.dictionary">wordbreak</str>
-      <str name="spellcheck">on</str>
-      <str name="spellcheck.extendedResults">true</str>
-      <str name="spellcheck.count">10</str>
-      <str name="spellcheck.alternativeTermCount">5</str>
-      <str name="spellcheck.maxResultsForSuggest">5</str>
-      <str name="spellcheck.collate">true</str>
-      <str name="spellcheck.collateExtendedResults">true</str>
-      <str name="spellcheck.maxCollationTries">10</str>
-      <str name="spellcheck.maxCollations">5</str>
-    </lst>
-    <arr name="last-components">
-      <str>spellcheck</str>
-    </arr>
-  </requestHandler>
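
A minimal sketch of querying this handler, assuming the local `db` core; because the defaults above already turn spellcheck on and select both dictionaries, a misspelled `q` is all that is required:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class SpellDemo {
  public static void main(String[] args) throws Exception {
    // "pizzza" is a deliberately misspelled sample term; suggestions and
    // collations come back per the handler defaults above.
    HttpRequest req = HttpRequest.newBuilder(URI.create(
        "http://localhost:8983/solr/db/spell?q=pizzza")).build();
    System.out.println(HttpClient.newHttpClient()
        .send(req, HttpResponse.BodyHandlers.ofString()).body());
  }
}
```
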
-
-  <searchComponent name="suggest" class="solr.SuggestComponent">
-    <lst name="suggester">
-      <str name="name">mySuggester</str>
-      <str name="lookupImpl">FuzzyLookupFactory</str>      <!-- org.apache.solr.spelling.suggest.fst -->
-      <str name="dictionaryImpl">DocumentDictionaryFactory</str>     <!-- org.apache.solr.spelling.suggest.HighFrequencyDictionaryFactory -->
-      <str name="field">cat</str>
-      <str name="weightField">price</str>
-      <str name="suggestAnalyzerFieldType">string</str>
-    </lst>
-  </searchComponent>
-
-  <requestHandler name="/suggest" class="solr.SearchHandler" startup="lazy">
-    <lst name="defaults">
-      <str name="suggest">true</str>
-      <str name="suggest.count">10</str>
-    </lst>
-    <arr name="components">
-      <str>suggest</str>
-    </arr>
-  </requestHandler>
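
A sketch of calling the suggest handler, assuming the local `db` core; note that `suggest.dictionary` must name the suggester explicitly, since the defaults above set only `suggest` and `suggest.count` (the prefix "el" is an arbitrary placeholder):

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class SuggestDemo {
  public static void main(String[] args) throws Exception {
    // suggest.dictionary must name the suggester configured above ("mySuggester").
    HttpRequest req = HttpRequest.newBuilder(URI.create(
        "http://localhost:8983/solr/db/suggest?suggest.q=el&suggest.dictionary=mySuggester"))
        .build();
    System.out.println(HttpClient.newHttpClient()
        .send(req, HttpResponse.BodyHandlers.ofString()).body());
  }
}
```
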
-  <!-- Term Vector Component
-
-       http://wiki.apache.org/solr/TermVectorComponent
-    -->
-  <searchComponent name="tvComponent" class="solr.TermVectorComponent"/>
-
-  <!-- A request handler for demonstrating the term vector component
-
-       This is purely an example.
-
-       In reality you will likely want to add the component to your
-       already specified request handlers.
-    -->
-  <requestHandler name="/tvrh" class="solr.SearchHandler" startup="lazy">
-    <lst name="defaults">
-      <str name="df">text</str>
-      <bool name="tv">true</bool>
-    </lst>
-    <arr name="last-components">
-      <str>tvComponent</str>
-    </arr>
-  </requestHandler>
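
A sketch of exercising this handler, assuming the local `db` core and fields indexed with term vectors enabled; `tv.tf` and `tv.df` additionally request term frequency and document frequency:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class TermVectorDemo {
  public static void main(String[] args) throws Exception {
    // tv=true is already defaulted by the handler; tv.tf and tv.df add frequency stats.
    HttpRequest req = HttpRequest.newBuilder(URI.create(
        "http://localhost:8983/solr/db/tvrh?q=*:*&tv.tf=true&tv.df=true")).build();
    System.out.println(HttpClient.newHttpClient()
        .send(req, HttpResponse.BodyHandlers.ofString()).body());
  }
}
```
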
-
-  <!-- Terms Component
-
-       http://wiki.apache.org/solr/TermsComponent
-
-       A component to return terms and document frequency of those
-       terms
-    -->
-  <searchComponent name="terms" class="solr.TermsComponent"/>
-
-  <!-- A request handler for demonstrating the terms component -->
-  <requestHandler name="/terms" class="solr.SearchHandler" startup="lazy">
-     <lst name="defaults">
-      <bool name="terms">true</bool>
-      <bool name="distrib">false</bool>
-    </lst>
-    <arr name="components">
-      <str>terms</str>
-    </arr>
-  </requestHandler>
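
A sketch of enumerating indexed terms via this handler, assuming the local `db` core; `terms.fl` picks the field to enumerate and `terms.limit` caps the result (the field name `name` matches the wordbreak spellchecker field above):

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class TermsDemo {
  public static void main(String[] args) throws Exception {
    // terms=true is defaulted by the handler; this lists up to 5 terms from "name".
    HttpRequest req = HttpRequest.newBuilder(URI.create(
        "http://localhost:8983/solr/db/terms?terms.fl=name&terms.limit=5")).build();
    System.out.println(HttpClient.newHttpClient()
        .send(req, HttpResponse.BodyHandlers.ofString()).body());
  }
}
```
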
-
-
-  <!-- Query Elevation Component
-
-       http://wiki.apache.org/solr/QueryElevationComponent
-
-       A search component that enables you to configure the top
-       results for a given query, regardless of the normal Lucene
-       scoring.
-    -->
-  <searchComponent name="elevator" class="solr.QueryElevationComponent" >
-    <!-- pick a fieldType to analyze queries -->
-    <str name="queryFieldType">string</str>
-    <str name="config-file">elevate.xml</str>
-  </searchComponent>
-
-  <!-- A request handler for demonstrating the elevator component -->
-  <requestHandler name="/elevate" class="solr.SearchHandler" startup="lazy">
-    <lst name="defaults">
-      <str name="echoParams">explicit</str>
-      <str name="df">text</str>
-    </lst>
-    <arr name="last-components">
-      <str>elevator</str>
-    </arr>
-  </requestHandler>
-
-  <!-- Highlighting Component
-
-       http://wiki.apache.org/solr/HighlightingParameters
-    -->
-  <searchComponent class="solr.HighlightComponent" name="highlight">
-    <highlighting>
-      <!-- Configure the standard fragmenter -->
-      <!-- This could most likely be commented out in the "default" case -->
-      <fragmenter name="gap"
-                  default="true"
-                  class="solr.highlight.GapFragmenter">
-        <lst name="defaults">
-          <int name="hl.fragsize">100</int>
-        </lst>
-      </fragmenter>
-
-      <!-- A regular-expression-based fragmenter
-           (for sentence extraction)
-        -->
-      <fragmenter name="regex"
-                  class="solr.highlight.RegexFragmenter">
-        <lst name="defaults">
-          <!-- slightly smaller fragsizes work better because of slop -->
-          <int name="hl.fragsize">70</int>
-          <!-- allow 50% slop on fragment sizes -->
-          <float name="hl.regex.slop">0.5</float>
-          <!-- a basic sentence pattern -->
-          <str name="hl.regex.pattern">[-\w ,/\n\&quot;&apos;]{20,200}</str>
-        </lst>
-      </fragmenter>
-
-      <!-- Configure the standard formatter -->
-      <formatter name="html"
-                 default="true"
-                 class="solr.highlight.HtmlFormatter">
-        <lst name="defaults">
-          <str name="hl.simple.pre"><![CDATA[<em>]]></str>
-          <str name="hl.simple.post"><![CDATA[</em>]]></str>
-        </lst>
-      </formatter>
-
-      <!-- Configure the standard encoder -->
-      <encoder name="html"
-               class="solr.highlight.HtmlEncoder" />
-
-      <!-- Configure the standard fragListBuilder -->
-      <fragListBuilder name="simple"
-                       class="solr.highlight.SimpleFragListBuilder"/>
-
-      <!-- Configure the single fragListBuilder -->
-      <fragListBuilder name="single"
-                       class="solr.highlight.SingleFragListBuilder"/>
-
-      <!-- Configure the weighted fragListBuilder -->
-      <fragListBuilder name="weighted"
-                       default="true"
-                       class="solr.highlight.WeightedFragListBuilder"/>
-
-      <!-- default tag FragmentsBuilder -->
-      <fragmentsBuilder name="default"
-                        default="true"
-                        class="solr.highlight.ScoreOrderFragmentsBuilder">
-        <!--
-        <lst name="defaults">
-          <str name="hl.multiValuedSeparatorChar">/</str>
-        </lst>
-        -->
-      </fragmentsBuilder>
-
-      <!-- multi-colored tag FragmentsBuilder -->
-      <fragmentsBuilder name="colored"
-                        class="solr.highlight.ScoreOrderFragmentsBuilder">
-        <lst name="defaults">
-          <str name="hl.tag.pre"><![CDATA[
-               <b style="background:yellow">,<b style="background:lawgreen">,
-               <b style="background:aquamarine">,<b style="background:magenta">,
-               <b style="background:palegreen">,<b style="background:coral">,
-               <b style="background:wheat">,<b style="background:khaki">,
-               <b style="background:lime">,<b style="background:deepskyblue">]]></str>
-          <str name="hl.tag.post"><![CDATA[</b>]]></str>
-        </lst>
-      </fragmentsBuilder>
-
-      <boundaryScanner name="default"
-                       default="true"
-                       class="solr.highlight.SimpleBoundaryScanner">
-        <lst name="defaults">
-          <str name="hl.bs.maxScan">10</str>
-          <str name="hl.bs.chars">.,!? &#9;&#10;&#13;</str>
-        </lst>
-      </boundaryScanner>
-
-      <boundaryScanner name="breakIterator"
-                       class="solr.highlight.BreakIteratorBoundaryScanner">
-        <lst name="defaults">
-          <!-- type should be one of CHARACTER, WORD (default), LINE, or SENTENCE -->
-          <str name="hl.bs.type">WORD</str>
-          <!-- language and country are used when constructing the Locale object, -->
-          <!-- which in turn is used to get the BreakIterator instance -->
-          <str name="hl.bs.language">en</str>
-          <str name="hl.bs.country">US</str>
-        </lst>
-      </boundaryScanner>
-    </highlighting>
-  </searchComponent>
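
A sketch of requesting highlighted snippets through `/select`, assuming the local `db` core and a searchable `name` field; matches come back wrapped in `<em>` tags per the html formatter configured above:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class HighlightDemo {
  public static void main(String[] args) throws Exception {
    // hl=true turns the component on; hl.fl picks the field(s) to highlight.
    HttpRequest req = HttpRequest.newBuilder(URI.create(
        "http://localhost:8983/solr/db/select?q=name:pizza&hl=true&hl.fl=name")).build();
    System.out.println(HttpClient.newHttpClient()
        .send(req, HttpResponse.BodyHandlers.ofString()).body());
  }
}
```
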
-
-  <!-- Update Processors
-
-       Chains of Update Processor Factories for dealing with Update
-       Requests can be declared, and then used by name in Update
-       Request Processors
-
-       http://wiki.apache.org/solr/UpdateRequestProcessor
-
-    -->
-  <!-- Deduplication
-
-       An example dedup update processor that creates the "id" field
-       on the fly based on the hash code of some other fields.  This
-       example has overwriteDupes set to false since we are using the
-       id field as the signatureField and Solr will maintain
-       uniqueness based on that anyway.
-
-    -->
-  <!--
-     <updateRequestProcessorChain name="dedupe">
-       <processor class="solr.processor.SignatureUpdateProcessorFactory">
-         <bool name="enabled">true</bool>
-         <str name="signatureField">id</str>
-         <bool name="overwriteDupes">false</bool>
-         <str name="fields">name,features,cat</str>
-         <str name="signatureClass">solr.processor.Lookup3Signature</str>
-       </processor>
-       <processor class="solr.LogUpdateProcessorFactory" />
-       <processor class="solr.RunUpdateProcessorFactory" />
-     </updateRequestProcessorChain>
-    -->
-
-  <!-- Language identification
-
-       This example update chain identifies the language of the incoming
-       documents using the langid contrib. The detected language is
-       written to field language_s. No field name mapping is done.
-       The fields used for detection are text, title, subject and description,
-       making this example suitable for detecting languages from full-text
-       rich documents injected via ExtractingRequestHandler.
-       See more about langId at http://wiki.apache.org/solr/LanguageDetection
-    -->
-    <!--
-     <updateRequestProcessorChain name="langid">
-       <processor class="org.apache.solr.update.processor.TikaLanguageIdentifierUpdateProcessorFactory">
-         <str name="langid.fl">text,title,subject,description</str>
-         <str name="langid.langField">language_s</str>
-         <str name="langid.fallback">en</str>
-       </processor>
-       <processor class="solr.LogUpdateProcessorFactory" />
-       <processor class="solr.RunUpdateProcessorFactory" />
-     </updateRequestProcessorChain>
-    -->
-
-  <!-- Script update processor
-
-    This example hooks in an update processor implemented using JavaScript.
-
-    See more about the script update processor at http://wiki.apache.org/solr/ScriptUpdateProcessor
-  -->
-  <!--
-    <updateRequestProcessorChain name="script">
-      <processor class="solr.StatelessScriptUpdateProcessorFactory">
-        <str name="script">update-script.js</str>
-        <lst name="params">
-          <str name="config_param">example config parameter</str>
-        </lst>
-      </processor>
-      <processor class="solr.RunUpdateProcessorFactory" />
-    </updateRequestProcessorChain>
-  -->
-
-  <!-- Response Writers
-
-       http://wiki.apache.org/solr/QueryResponseWriter
-
-       Request responses will be written using the writer specified by
-       the 'wt' request parameter matching the name of a registered
-       writer.
-
-       The "default" writer is the default and will be used if 'wt' is
-       not specified in the request.
-    -->
-  <!-- The following response writers are implicitly configured unless
-       overridden...
-    -->
-  <!--
-     <queryResponseWriter name="xml"
-                          default="true"
-                          class="solr.XMLResponseWriter" />
-     <queryResponseWriter name="json" class="solr.JSONResponseWriter"/>
-     <queryResponseWriter name="python" class="solr.PythonResponseWriter"/>
-     <queryResponseWriter name="ruby" class="solr.RubyResponseWriter"/>
-     <queryResponseWriter name="php" class="solr.PHPResponseWriter"/>
-     <queryResponseWriter name="phps" class="solr.PHPSerializedResponseWriter"/>
-     <queryResponseWriter name="csv" class="solr.CSVResponseWriter"/>
-     <queryResponseWriter name="schema.xml" class="solr.SchemaXmlResponseWriter"/>
-    -->
-
-  <queryResponseWriter name="json" class="solr.JSONResponseWriter">
-     <!-- For the purposes of the tutorial, JSON responses are written as
-      plain text so that they are easy to read in *any* browser.
-      If you want a MIME type of "application/json", just remove this override.
-     -->
-    <str name="content-type">text/plain; charset=UTF-8</str>
-  </queryResponseWriter>
-
-  <!--
-     Custom response writers can be declared as needed...
-    -->
-  <queryResponseWriter name="velocity" class="solr.VelocityResponseWriter" startup="lazy">
-    <str name="template.base.dir">${velocity.template.base.dir:}</str>
-  </queryResponseWriter>
-
-  <!-- XSLT response writer transforms the XML output by any xslt file found
-       in Solr's conf/xslt directory.  Changes to XSLT files are re-checked
-       every xsltCacheLifetimeSeconds seconds.
-    -->
-  <queryResponseWriter name="xslt" class="solr.XSLTResponseWriter">
-    <int name="xsltCacheLifetimeSeconds">5</int>
-  </queryResponseWriter>
-
-  <!-- Query Parsers
-
-       https://lucene.apache.org/solr/guide/query-syntax-and-parsing.html
-
-       Multiple QParserPlugins can be registered by name, and then
-       used in either the "defType" param for the QueryComponent (used
-       by SearchHandler) or in LocalParams
-    -->
-  <!-- example of registering a query parser -->
-  <!--
-     <queryParser name="myparser" class="com.mycompany.MyQParserPlugin"/>
-    -->
-
-  <!-- Function Parsers
-
-       http://wiki.apache.org/solr/FunctionQuery
-
-       Multiple ValueSourceParsers can be registered by name, and then
-       used as function names when using the "func" QParser.
-    -->
-  <!-- example of registering a custom function parser  -->
-  <!--
-     <valueSourceParser name="myfunc"
-                        class="com.mycompany.MyValueSourceParser" />
-    -->
-
-
-  <!-- Document Transformers
-       http://wiki.apache.org/solr/DocTransformers
-    -->
-  <!--
-     Could be something like:
-     <transformer name="db" class="com.mycompany.LoadFromDatabaseTransformer" >
-       <int name="connection">jdbc://....</int>
-     </transformer>
-
-     To add a constant value to all docs, use:
-     <transformer name="mytrans2" class="org.apache.solr.response.transform.ValueAugmenterFactory" >
-       <int name="value">5</int>
-     </transformer>
-
-     If you want the user to still be able to change it with _value:something_ use this:
-     <transformer name="mytrans3" class="org.apache.solr.response.transform.ValueAugmenterFactory" >
-       <double name="defaultValue">5</double>
-     </transformer>
-
-      If you are using the QueryElevationComponent, you may wish to mark documents that get boosted.  The
-      EditorialMarkerFactory will do exactly that:
-     <transformer name="qecBooster" class="org.apache.solr.response.transform.EditorialMarkerFactory" />
-    -->
-
-</config>
diff --git a/solr/example/example-DIH/solr/db/conf/spellings.txt b/solr/example/example-DIH/solr/db/conf/spellings.txt
deleted file mode 100644
index d7ede6f..0000000
--- a/solr/example/example-DIH/solr/db/conf/spellings.txt
+++ /dev/null
@@ -1,2 +0,0 @@
-pizza
-history
\ No newline at end of file
diff --git a/solr/example/example-DIH/solr/db/conf/stopwords.txt b/solr/example/example-DIH/solr/db/conf/stopwords.txt
deleted file mode 100644
index ae1e83e..0000000
--- a/solr/example/example-DIH/solr/db/conf/stopwords.txt
+++ /dev/null
@@ -1,14 +0,0 @@
-# Licensed to the Apache Software Foundation (ASF) under one or more
-# contributor license agreements.  See the NOTICE file distributed with
-# this work for additional information regarding copyright ownership.
-# The ASF licenses this file to You under the Apache License, Version 2.0
-# (the "License"); you may not use this file except in compliance with
-# the License.  You may obtain a copy of the License at
-#
-#     http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
diff --git a/solr/example/example-DIH/solr/db/conf/synonyms.txt b/solr/example/example-DIH/solr/db/conf/synonyms.txt
deleted file mode 100644
index eab4ee8..0000000
--- a/solr/example/example-DIH/solr/db/conf/synonyms.txt
+++ /dev/null
@@ -1,29 +0,0 @@
-# The ASF licenses this file to You under the Apache License, Version 2.0
-# (the "License"); you may not use this file except in compliance with
-# the License.  You may obtain a copy of the License at
-#
-#     http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-#-----------------------------------------------------------------------
-#some test synonym mappings unlikely to appear in real input text
-aaafoo => aaabar
-bbbfoo => bbbfoo bbbbar
-cccfoo => cccbar cccbaz
-fooaaa,baraaa,bazaaa
-
-# Some synonym groups specific to this example
-GB,gib,gigabyte,gigabytes
-MB,mib,megabyte,megabytes
-Television, Televisions, TV, TVs
-#notice we use "gib" instead of "GiB" so any WordDelimiterGraphFilter coming
-#after us won't split it into two words.
-
-# Synonym mappings can be used for spelling correction too
-pixima => pixma
-
diff --git a/solr/example/example-DIH/solr/db/conf/update-script.js b/solr/example/example-DIH/solr/db/conf/update-script.js
deleted file mode 100644
index 49b07f9..0000000
--- a/solr/example/example-DIH/solr/db/conf/update-script.js
+++ /dev/null
@@ -1,53 +0,0 @@
-/*
-  This is a basic skeleton JavaScript update processor.
-
-  In order for this to be executed, it must be properly wired into solrconfig.xml; by default it is commented out in
-  the example solrconfig.xml and must be uncommented to be enabled.
-
-  See http://wiki.apache.org/solr/ScriptUpdateProcessor for more details.
-*/
-
-function processAdd(cmd) {
-
-  doc = cmd.solrDoc;  // org.apache.solr.common.SolrInputDocument
-  id = doc.getFieldValue("id");
-  logger.info("update-script#processAdd: id=" + id);
-
-// Set a field value:
-//  doc.setField("foo_s", "whatever");
-
-// Get a configuration parameter:
-//  config_param = params.get('config_param');  // "params" only exists if processor configured with <lst name="params">
-
-// Get a request parameter:
-// some_param = req.getParams().get("some_param")
-
-// Add a field of field names that match a pattern:
-//   - Potentially useful to determine the fields/attributes represented in a result set, via faceting on field_name_ss
-//  field_names = doc.getFieldNames().toArray();
-//  for(i=0; i < field_names.length; i++) {
-//    field_name = field_names[i];
-//    if (/attr_.*/.test(field_name)) { doc.addField("attribute_ss", field_names[i]); }
-//  }
-
-}
-
-function processDelete(cmd) {
-  // no-op
-}
-
-function processMergeIndexes(cmd) {
-  // no-op
-}
-
-function processCommit(cmd) {
-  // no-op
-}
-
-function processRollback(cmd) {
-  // no-op
-}
-
-function finish() {
-  // no-op
-}
diff --git a/solr/example/example-DIH/solr/db/conf/xslt/example.xsl b/solr/example/example-DIH/solr/db/conf/xslt/example.xsl
deleted file mode 100644
index b899270..0000000
--- a/solr/example/example-DIH/solr/db/conf/xslt/example.xsl
+++ /dev/null
@@ -1,132 +0,0 @@
-<?xml version='1.0' encoding='UTF-8'?>
-
-<!-- 
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- -->
-
-<!-- 
-  Simple transform of Solr query results to HTML
- -->
-<xsl:stylesheet version='1.0'
-    xmlns:xsl='http://www.w3.org/1999/XSL/Transform'
->
-
-  <xsl:output media-type="text/html" encoding="UTF-8"/> 
-  
-  <xsl:variable name="title" select="concat('Solr search results (',response/result/@numFound,' documents)')"/>
-  
-  <xsl:template match='/'>
-    <html>
-      <head>
-        <title><xsl:value-of select="$title"/></title>
-        <xsl:call-template name="css"/>
-      </head>
-      <body>
-        <h1><xsl:value-of select="$title"/></h1>
-        <div class="note">
-          This has been formatted by the sample "example.xsl" transform -
-          use your own XSLT to get a nicer page
-        </div>
-        <xsl:apply-templates select="response/result/doc"/>
-      </body>
-    </html>
-  </xsl:template>
-  
-  <xsl:template match="doc">
-    <xsl:variable name="pos" select="position()"/>
-    <div class="doc">
-      <table width="100%">
-        <xsl:apply-templates>
-          <xsl:with-param name="pos"><xsl:value-of select="$pos"/></xsl:with-param>
-        </xsl:apply-templates>
-      </table>
-    </div>
-  </xsl:template>
-
-  <xsl:template match="doc/*[@name='score']" priority="100">
-    <xsl:param name="pos"></xsl:param>
-    <tr>
-      <td class="name">
-        <xsl:value-of select="@name"/>
-      </td>
-      <td class="value">
-        <xsl:value-of select="."/>
-
-        <xsl:if test="boolean(//lst[@name='explain'])">
-          <xsl:element name="a">
-            <!-- can't allow whitespace here -->
-            <xsl:attribute name="href">javascript:toggle("<xsl:value-of select="concat('exp-',$pos)" />");</xsl:attribute>?</xsl:element>
-          <br/>
-          <xsl:element name="div">
-            <xsl:attribute name="class">exp</xsl:attribute>
-            <xsl:attribute name="id">
-              <xsl:value-of select="concat('exp-',$pos)" />
-            </xsl:attribute>
-            <xsl:value-of select="//lst[@name='explain']/str[position()=$pos]"/>
-          </xsl:element>
-        </xsl:if>
-      </td>
-    </tr>
-  </xsl:template>
-
-  <xsl:template match="doc/arr" priority="100">
-    <tr>
-      <td class="name">
-        <xsl:value-of select="@name"/>
-      </td>
-      <td class="value">
-        <ul>
-        <xsl:for-each select="*">
-          <li><xsl:value-of select="."/></li>
-        </xsl:for-each>
-        </ul>
-      </td>
-    </tr>
-  </xsl:template>
-
-
-  <xsl:template match="doc/*">
-    <tr>
-      <td class="name">
-        <xsl:value-of select="@name"/>
-      </td>
-      <td class="value">
-        <xsl:value-of select="."/>
-      </td>
-    </tr>
-  </xsl:template>
-
-  <xsl:template match="*"/>
-  
-  <xsl:template name="css">
-    <script>
-      function toggle(id) {
-        var obj = document.getElementById(id);
-        obj.style.display = (obj.style.display != 'block') ? 'block' : 'none';
-      }
-    </script>
-    <style type="text/css">
-      body { font-family: "Lucida Grande", sans-serif }
-      td.name { font-style: italic; font-size:80%; }
-      td { vertical-align: top; }
-      ul { margin: 0px; margin-left: 1em; padding: 0px; }
-      .note { font-size:80%; }
-      .doc { margin-top: 1em; border-top: solid grey 1px; }
-      .exp { display: none; font-family: monospace; white-space: pre; }
-    </style>
-  </xsl:template>
-
-</xsl:stylesheet>
diff --git a/solr/example/example-DIH/solr/db/conf/xslt/example_atom.xsl b/solr/example/example-DIH/solr/db/conf/xslt/example_atom.xsl
deleted file mode 100644
index b6c2315..0000000
--- a/solr/example/example-DIH/solr/db/conf/xslt/example_atom.xsl
+++ /dev/null
@@ -1,67 +0,0 @@
-<?xml version='1.0' encoding='UTF-8'?>
-
-<!-- 
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- -->
-
-<!-- 
-  Simple transform of Solr query results to Atom
- -->
-
-<xsl:stylesheet version='1.0'
-    xmlns:xsl='http://www.w3.org/1999/XSL/Transform'>
-
-  <xsl:output
-       method="xml"
-       encoding="utf-8"
-       media-type="application/xml"
-  />
-
-  <xsl:template match='/'>
-    <xsl:variable name="query" select="response/lst[@name='responseHeader']/lst[@name='params']/str[@name='q']"/>
-    <feed xmlns="http://www.w3.org/2005/Atom">
-      <title>Example Solr Atom 1.0 Feed</title>
-      <subtitle>
-       This has been formatted by the sample "example_atom.xsl" transform -
-       use your own XSLT to get a nicer Atom feed.
-      </subtitle>
-      <author>
-        <name>Apache Solr</name>
-        <email>solr-user@lucene.apache.org</email>
-      </author>
-      <link rel="self" type="application/atom+xml" 
-            href="http://localhost:8983/solr/q={$query}&amp;wt=xslt&amp;tr=atom.xsl"/>
-      <updated>
-        <xsl:value-of select="response/result/doc[position()=1]/date[@name='timestamp']"/>
-      </updated>
-      <id>tag:localhost,2007:example</id>
-      <xsl:apply-templates select="response/result/doc"/>
-    </feed>
-  </xsl:template>
-    
-  <!-- search results xslt -->
-  <xsl:template match="doc">
-    <xsl:variable name="id" select="str[@name='id']"/>
-    <entry>
-      <title><xsl:value-of select="str[@name='name']"/></title>
-      <link href="http://localhost:8983/solr/select?q={$id}"/>
-      <id>tag:localhost,2007:<xsl:value-of select="$id"/></id>
-      <summary><xsl:value-of select="arr[@name='features']"/></summary>
-      <updated><xsl:value-of select="date[@name='timestamp']"/></updated>
-    </entry>
-  </xsl:template>
-
-</xsl:stylesheet>
diff --git a/solr/example/example-DIH/solr/db/conf/xslt/example_rss.xsl b/solr/example/example-DIH/solr/db/conf/xslt/example_rss.xsl
deleted file mode 100644
index c8ab5bf..0000000
--- a/solr/example/example-DIH/solr/db/conf/xslt/example_rss.xsl
+++ /dev/null
@@ -1,66 +0,0 @@
-<?xml version='1.0' encoding='UTF-8'?>
-
-<!-- 
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- -->
-
-<!-- 
-  Simple transform of Solr query results to RSS
- -->
-
-<xsl:stylesheet version='1.0'
-    xmlns:xsl='http://www.w3.org/1999/XSL/Transform'>
-
-  <xsl:output
-       method="xml"
-       encoding="utf-8"
-       media-type="application/xml"
-  />
-  <xsl:template match='/'>
-    <rss version="2.0">
-       <channel>
-         <title>Example Solr RSS 2.0 Feed</title>
-         <link>http://localhost:8983/solr</link>
-         <description>
-          This has been formatted by the sample "example_rss.xsl" transform -
-          use your own XSLT to get a nicer RSS feed.
-         </description>
-         <language>en-us</language>
-         <docs>http://localhost:8983/solr</docs>
-         <xsl:apply-templates select="response/result/doc"/>
-       </channel>
-    </rss>
-  </xsl:template>
-  
-  <!-- search results xslt -->
-  <xsl:template match="doc">
-    <xsl:variable name="id" select="str[@name='id']"/>
-    <xsl:variable name="timestamp" select="date[@name='timestamp']"/>
-    <item>
-      <title><xsl:value-of select="str[@name='name']"/></title>
-      <link>
-        http://localhost:8983/solr/select?q=id:<xsl:value-of select="$id"/>
-      </link>
-      <description>
-        <xsl:value-of select="arr[@name='features']"/>
-      </description>
-      <pubDate><xsl:value-of select="$timestamp"/></pubDate>
-      <guid>
-        http://localhost:8983/solr/select?q=id:<xsl:value-of select="$id"/>
-      </guid>
-    </item>
-  </xsl:template>
-</xsl:stylesheet>
diff --git a/solr/example/example-DIH/solr/db/conf/xslt/luke.xsl b/solr/example/example-DIH/solr/db/conf/xslt/luke.xsl
deleted file mode 100644
index 05fb5bf..0000000
--- a/solr/example/example-DIH/solr/db/conf/xslt/luke.xsl
+++ /dev/null
@@ -1,337 +0,0 @@
-<?xml version="1.0" encoding="UTF-8"?>
-<!--
-    Licensed to the Apache Software Foundation (ASF) under one or more
-    contributor license agreements.  See the NOTICE file distributed with
-    this work for additional information regarding copyright ownership.
-    The ASF licenses this file to You under the Apache License, Version 2.0
-    (the "License"); you may not use this file except in compliance with
-    the License.  You may obtain a copy of the License at
-    
-    http://www.apache.org/licenses/LICENSE-2.0
-    
-    Unless required by applicable law or agreed to in writing, software
-    distributed under the License is distributed on an "AS IS" BASIS,
-    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-    See the License for the specific language governing permissions and
-    limitations under the License.
--->
-
-
-<!-- 
-  Display the luke request handler with graphs
- -->
-<xsl:stylesheet
-    xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
-    xmlns="http://www.w3.org/1999/xhtml"
-    version="1.0"
-    >
-    <xsl:output
-        method="html"
-        encoding="UTF-8"
-        media-type="text/html"
-        doctype-public="-//W3C//DTD XHTML 1.0 Strict//EN"
-        doctype-system="http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd"
-    />
-
-    <xsl:variable name="title">Solr Luke Request Handler Response</xsl:variable>
-
-    <xsl:template match="/">
-        <html xmlns="http://www.w3.org/1999/xhtml">
-            <head>
-                <link rel="stylesheet" type="text/css" href="solr-admin.css"/>
-                <link rel="icon" href="favicon.ico" type="image/x-icon"/>
-                <link rel="shortcut icon" href="favicon.ico" type="image/x-icon"/>
-                <title>
-                    <xsl:value-of select="$title"/>
-                </title>
-                <xsl:call-template name="css"/>
-
-            </head>
-            <body>
-                <h1>
-                    <xsl:value-of select="$title"/>
-                </h1>
-                <div class="doc">
-                    <ul>
-                        <xsl:if test="response/lst[@name='index']">
-                            <li>
-                                <a href="#index">Index Statistics</a>
-                            </li>
-                        </xsl:if>
-                        <xsl:if test="response/lst[@name='fields']">
-                            <li>
-                                <a href="#fields">Field Statistics</a>
-                                <ul>
-                                    <xsl:for-each select="response/lst[@name='fields']/lst">
-                                        <li>
-                                            <a href="#{@name}">
-                                                <xsl:value-of select="@name"/>
-                                            </a>
-                                        </li>
-                                    </xsl:for-each>
-                                </ul>
-                            </li>
-                        </xsl:if>
-                        <xsl:if test="response/lst[@name='doc']">
-                            <li>
-                                <a href="#doc">Document statistics</a>
-                            </li>
-                        </xsl:if>
-                    </ul>
-                </div>
-                <xsl:if test="response/lst[@name='index']">
-                    <h2><a name="index"/>Index Statistics</h2>
-                    <xsl:apply-templates select="response/lst[@name='index']"/>
-                </xsl:if>
-                <xsl:if test="response/lst[@name='fields']">
-                    <h2><a name="fields"/>Field Statistics</h2>
-                    <xsl:apply-templates select="response/lst[@name='fields']"/>
-                </xsl:if>
-                <xsl:if test="response/lst[@name='doc']">
-                    <h2><a name="doc"/>Document statistics</h2>
-                    <xsl:apply-templates select="response/lst[@name='doc']"/>
-                </xsl:if>
-            </body>
-        </html>
-    </xsl:template>
-
-    <xsl:template match="lst">
-        <xsl:if test="parent::lst">
-            <tr>
-                <td colspan="2">
-                    <div class="doc">
-                        <xsl:call-template name="list"/>
-                    </div>
-                </td>
-            </tr>
-        </xsl:if>
-        <xsl:if test="not(parent::lst)">
-            <div class="doc">
-                <xsl:call-template name="list"/>
-            </div>
-        </xsl:if>
-    </xsl:template>
-
-    <xsl:template name="list">
-        <xsl:if test="count(child::*)>0">
-            <table>
-                <thead>
-                    <tr>
-                        <th colspan="2">
-                            <p>
-                                <a name="{@name}"/>
-                            </p>
-                            <xsl:value-of select="@name"/>
-                        </th>
-                    </tr>
-                </thead>
-                <tbody>
-                    <xsl:choose>
-                        <xsl:when
-                            test="@name='histogram'">
-                            <tr>
-                                <td colspan="2">
-                                    <xsl:call-template name="histogram"/>
-                                </td>
-                            </tr>
-                        </xsl:when>
-                        <xsl:otherwise>
-                            <xsl:apply-templates/>
-                        </xsl:otherwise>
-                    </xsl:choose>
-                </tbody>
-            </table>
-        </xsl:if>
-    </xsl:template>
-
-    <xsl:template name="histogram">
-        <div class="doc">
-            <xsl:call-template name="barchart">
-                <xsl:with-param name="max_bar_width">50</xsl:with-param>
-                <xsl:with-param name="iwidth">800</xsl:with-param>
-                <xsl:with-param name="iheight">160</xsl:with-param>
-                <xsl:with-param name="fill">blue</xsl:with-param>
-            </xsl:call-template>
-        </div>
-    </xsl:template>
-
-    <xsl:template name="barchart">
-        <xsl:param name="max_bar_width"/>
-        <xsl:param name="iwidth"/>
-        <xsl:param name="iheight"/>
-        <xsl:param name="fill"/>
-        <xsl:variable name="max">
-            <xsl:for-each select="int">
-                <xsl:sort data-type="number" order="descending"/>
-                <xsl:if test="position()=1">
-                    <xsl:value-of select="."/>
-                </xsl:if>
-            </xsl:for-each>
-        </xsl:variable>
-        <xsl:variable name="bars">
-           <xsl:value-of select="count(int)"/>
-        </xsl:variable>
-        <xsl:variable name="bar_width">
-           <xsl:choose>
-             <xsl:when test="$max_bar_width &lt; ($iwidth div $bars)">
-               <xsl:value-of select="$max_bar_width"/>
-             </xsl:when>
-             <xsl:otherwise>
-               <xsl:value-of select="$iwidth div $bars"/>
-             </xsl:otherwise>
-           </xsl:choose>
-        </xsl:variable>
-        <table class="histogram">
-           <tbody>
-              <tr>
-                <xsl:for-each select="int">
-                   <td>
-                 <xsl:value-of select="."/>
-                 <div class="histogram">
-                  <xsl:attribute name="style">background-color: <xsl:value-of select="$fill"/>; width: <xsl:value-of select="$bar_width"/>px; height: <xsl:value-of select="($iheight*number(.)) div $max"/>px;</xsl:attribute>
-                 </div>
-                   </td> 
-                </xsl:for-each>
-              </tr>
-              <tr>
-                <xsl:for-each select="int">
-                   <td>
-                       <xsl:value-of select="@name"/>
-                   </td>
-                </xsl:for-each>
-              </tr>
-           </tbody>
-        </table>
-    </xsl:template>
-
-    <xsl:template name="keyvalue">
-        <xsl:choose>
-            <xsl:when test="@name">
-                <tr>
-                    <td class="name">
-                        <xsl:value-of select="@name"/>
-                    </td>
-                    <td class="value">
-                        <xsl:value-of select="."/>
-                    </td>
-                </tr>
-            </xsl:when>
-            <xsl:otherwise>
-                <xsl:value-of select="."/>
-            </xsl:otherwise>
-        </xsl:choose>
-    </xsl:template>
-
-    <xsl:template match="int|bool|long|float|double|uuid|date">
-        <xsl:call-template name="keyvalue"/>
-    </xsl:template>
-
-    <xsl:template match="arr">
-        <tr>
-            <td class="name">
-                <xsl:value-of select="@name"/>
-            </td>
-            <td class="value">
-                <ul>
-                    <xsl:for-each select="child::*">
-                        <li>
-                            <xsl:apply-templates/>
-                        </li>
-                    </xsl:for-each>
-                </ul>
-            </td>
-        </tr>
-    </xsl:template>
-
-    <xsl:template match="str">
-        <xsl:choose>
-            <xsl:when test="@name='schema' or @name='index' or @name='flags'">
-                <xsl:call-template name="schema"/>
-            </xsl:when>
-            <xsl:otherwise>
-                <xsl:call-template name="keyvalue"/>
-            </xsl:otherwise>
-        </xsl:choose>
-    </xsl:template>
-
-    <xsl:template name="schema">
-        <tr>
-            <td class="name">
-                <xsl:value-of select="@name"/>
-            </td>
-            <td class="value">
-                <xsl:if test="contains(.,'unstored')">
-                    <xsl:value-of select="."/>
-                </xsl:if>
-                <xsl:if test="not(contains(.,'unstored'))">
-                    <xsl:call-template name="infochar2string">
-                        <xsl:with-param name="charList">
-                            <xsl:value-of select="."/>
-                        </xsl:with-param>
-                    </xsl:call-template>
-                </xsl:if>
-            </td>
-        </tr>
-    </xsl:template>
-
-    <xsl:template name="infochar2string">
-        <xsl:param name="i">1</xsl:param>
-        <xsl:param name="charList"/>
-
-        <xsl:variable name="char">
-            <xsl:value-of select="substring($charList,$i,1)"/>
-        </xsl:variable>
-        <xsl:choose>
-            <xsl:when test="$char='I'">
-                <xsl:value-of select="/response/lst[@name='info']/lst/str[@name='I']"/> - </xsl:when>
-            <xsl:when test="$char='T'">
-                <xsl:value-of select="/response/lst[@name='info']/lst/str[@name='T']"/> - </xsl:when>
-            <xsl:when test="$char='S'">
-                <xsl:value-of select="/response/lst[@name='info']/lst/str[@name='S']"/> - </xsl:when>
-            <xsl:when test="$char='M'">
-                <xsl:value-of select="/response/lst[@name='info']/lst/str[@name='M']"/> - </xsl:when>
-            <xsl:when test="$char='V'">
-                <xsl:value-of select="/response/lst[@name='info']/lst/str[@name='V']"/> - </xsl:when>
-            <xsl:when test="$char='o'">
-                <xsl:value-of select="/response/lst[@name='info']/lst/str[@name='o']"/> - </xsl:when>
-            <xsl:when test="$char='p'">
-                <xsl:value-of select="/response/lst[@name='info']/lst/str[@name='p']"/> - </xsl:when>
-            <xsl:when test="$char='O'">
-                <xsl:value-of select="/response/lst[@name='info']/lst/str[@name='O']"/> - </xsl:when>
-            <xsl:when test="$char='L'">
-                <xsl:value-of select="/response/lst[@name='info']/lst/str[@name='L']"/> - </xsl:when>
-            <xsl:when test="$char='B'">
-                <xsl:value-of select="/response/lst[@name='info']/lst/str[@name='B']"/> - </xsl:when>
-            <xsl:when test="$char='C'">
-                <xsl:value-of select="/response/lst[@name='info']/lst/str[@name='C']"/> - </xsl:when>
-            <xsl:when test="$char='f'">
-                <xsl:value-of select="/response/lst[@name='info']/lst/str[@name='f']"/> - </xsl:when>
-            <xsl:when test="$char='l'">
-                <xsl:value-of select="/response/lst[@name='info']/lst/str[@name='l']"/> -
-            </xsl:when>
-        </xsl:choose>
-
-        <xsl:if test="not($i>=string-length($charList))">
-            <xsl:call-template name="infochar2string">
-                <xsl:with-param name="i">
-                    <xsl:value-of select="$i+1"/>
-                </xsl:with-param>
-                <xsl:with-param name="charList">
-                    <xsl:value-of select="$charList"/>
-                </xsl:with-param>
-            </xsl:call-template>
-        </xsl:if>
-    </xsl:template>
-    <xsl:template name="css">
-        <style type="text/css">
-            <![CDATA[
-            td.name {font-style: italic; font-size:80%; }
-            .doc { margin: 0.5em; border: solid grey 1px; }
-            .exp { display: none; font-family: monospace; white-space: pre; }
-            div.histogram { background: none repeat scroll 0%; -moz-background-clip: -moz-initial; -moz-background-origin: -moz-initial; -moz-background-inline-policy: -moz-initial;}
-            table.histogram { width: auto; vertical-align: bottom; }
-            table.histogram td, table.histogram th { text-align: center; vertical-align: bottom; border-bottom: 1px solid #ff9933; width: auto; }
-            ]]>
-        </style>
-    </xsl:template>
-</xsl:stylesheet>
diff --git a/solr/example/example-DIH/solr/db/conf/xslt/updateXml.xsl b/solr/example/example-DIH/solr/db/conf/xslt/updateXml.xsl
deleted file mode 100644
index 7c4a48e..0000000
--- a/solr/example/example-DIH/solr/db/conf/xslt/updateXml.xsl
+++ /dev/null
@@ -1,70 +0,0 @@
-<!-- 
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- -->
-
-<!--
-  Simple transform of Solr query response into Solr Update XML compliant XML.
-  When used in the xslt response writer you will get Update XML as output.
-  But you can also store a query response XML to disk and feed this XML to
-  the XSLTUpdateRequestHandler to index the content. Provided as example only.
-  See http://wiki.apache.org/solr/XsltUpdateRequestHandler for more info
- -->
-<xsl:stylesheet version='1.0' xmlns:xsl='http://www.w3.org/1999/XSL/Transform'>
-  <xsl:output media-type="text/xml" method="xml" indent="yes"/>
-
-  <xsl:template match='/'>
-    <add>
-        <xsl:apply-templates select="response/result/doc"/>
-    </add>
-  </xsl:template>
-  
-  <!-- Ignore score (makes no sense to index) -->
-  <xsl:template match="doc/*[@name='score']" priority="100">
-  </xsl:template>
-
-  <xsl:template match="doc">
-    <xsl:variable name="pos" select="position()"/>
-    <doc>
-        <xsl:apply-templates>
-          <xsl:with-param name="pos"><xsl:value-of select="$pos"/></xsl:with-param>
-        </xsl:apply-templates>
-    </doc>
-  </xsl:template>
-
-  <!-- Flatten arrays to duplicate field lines -->
-  <xsl:template match="doc/arr" priority="100">
-      <xsl:variable name="fn" select="@name"/>
-      
-      <xsl:for-each select="*">
-        <xsl:element name="field">
-            <xsl:attribute name="name"><xsl:value-of select="$fn"/></xsl:attribute>
-              <xsl:value-of select="."/>
-        </xsl:element>
-      </xsl:for-each>
-  </xsl:template>
-
-
-  <xsl:template match="doc/*">
-      <xsl:variable name="fn" select="@name"/>
-
-       <xsl:element name="field">
-        <xsl:attribute name="name"><xsl:value-of select="$fn"/></xsl:attribute>
-        <xsl:value-of select="."/>
-       </xsl:element>
-  </xsl:template>
-
-  <xsl:template match="*"/>
-</xsl:stylesheet>
diff --git a/solr/example/example-DIH/solr/db/core.properties b/solr/example/example-DIH/solr/db/core.properties
deleted file mode 100644
index e69de29..0000000
--- a/solr/example/example-DIH/solr/db/core.properties
+++ /dev/null
diff --git a/solr/example/example-DIH/solr/mail/conf/clustering/carrot2/kmeans-attributes.xml b/solr/example/example-DIH/solr/mail/conf/clustering/carrot2/kmeans-attributes.xml
deleted file mode 100644
index d802465..0000000
--- a/solr/example/example-DIH/solr/mail/conf/clustering/carrot2/kmeans-attributes.xml
+++ /dev/null
@@ -1,19 +0,0 @@
-<!-- 
-  Default configuration for the bisecting k-means clustering algorithm.
-  
-  This file can be loaded (and saved) by Carrot2 Workbench.
-  http://project.carrot2.org/download.html
--->
-<attribute-sets default="attributes">
-    <attribute-set id="attributes">
-      <value-set>
-        <label>attributes</label>
-          <attribute key="MultilingualClustering.defaultLanguage">
-            <value type="org.carrot2.core.LanguageCode" value="ENGLISH"/>
-          </attribute>
-          <attribute key="MultilingualClustering.languageAggregationStrategy">
-            <value type="org.carrot2.text.clustering.MultilingualClustering$LanguageAggregationStrategy" value="FLATTEN_MAJOR_LANGUAGE"/>
-          </attribute>
-      </value-set>
-  </attribute-set>
-</attribute-sets>
diff --git a/solr/example/example-DIH/solr/mail/conf/clustering/carrot2/lingo-attributes.xml b/solr/example/example-DIH/solr/mail/conf/clustering/carrot2/lingo-attributes.xml
deleted file mode 100644
index 4bf1360..0000000
--- a/solr/example/example-DIH/solr/mail/conf/clustering/carrot2/lingo-attributes.xml
+++ /dev/null
@@ -1,24 +0,0 @@
-<!-- 
-  Default configuration for the Lingo clustering algorithm.
-
-  This file can be loaded (and saved) by Carrot2 Workbench.
-  http://project.carrot2.org/download.html
--->
-<attribute-sets default="attributes">
-    <attribute-set id="attributes">
-      <value-set>
-        <label>attributes</label>
-          <!-- 
-          The language to assume for clustered documents.
-          For a list of allowed values, see: 
-          http://download.carrot2.org/stable/manual/#section.attribute.lingo.MultilingualClustering.defaultLanguage
-          -->
-          <attribute key="MultilingualClustering.defaultLanguage">
-            <value type="org.carrot2.core.LanguageCode" value="ENGLISH"/>
-          </attribute>
-          <attribute key="LingoClusteringAlgorithm.desiredClusterCountBase">
-            <value type="java.lang.Integer" value="20"/>
-          </attribute>
-      </value-set>
-  </attribute-set>
-</attribute-sets>
\ No newline at end of file
diff --git a/solr/example/example-DIH/solr/mail/conf/clustering/carrot2/stc-attributes.xml b/solr/example/example-DIH/solr/mail/conf/clustering/carrot2/stc-attributes.xml
deleted file mode 100644
index c1bf110..0000000
--- a/solr/example/example-DIH/solr/mail/conf/clustering/carrot2/stc-attributes.xml
+++ /dev/null
@@ -1,19 +0,0 @@
-<!-- 
-  Default configuration for the STC clustering algorithm.
-
-  This file can be loaded (and saved) by Carrot2 Workbench.
-  http://project.carrot2.org/download.html
--->
-<attribute-sets default="attributes">
-    <attribute-set id="attributes">
-      <value-set>
-        <label>attributes</label>
-          <attribute key="MultilingualClustering.defaultLanguage">
-            <value type="org.carrot2.core.LanguageCode" value="ENGLISH"/>
-          </attribute>
-          <attribute key="MultilingualClustering.languageAggregationStrategy">
-            <value type="org.carrot2.text.clustering.MultilingualClustering$LanguageAggregationStrategy" value="FLATTEN_MAJOR_LANGUAGE"/>
-          </attribute>
-      </value-set>
-  </attribute-set>
-</attribute-sets>
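The three Carrot2 attribute files removed in this change (kmeans, lingo, stc) share one layout. If you ever need to inspect such a file outside the Carrot2 Workbench, a stdlib sketch like the following is enough; the file name is illustrative:

```python
# Flatten a Carrot2 attribute-set file into a dict. All three variants
# use the same attribute-sets > attribute-set > value-set > attribute shape.
import xml.etree.ElementTree as ET

def load_attributes(path):
    root = ET.parse(path).getroot()
    return {a.get("key"): a.find("value").get("value")
            for a in root.iter("attribute") if a.find("value") is not None}

print(load_attributes("stc-attributes.xml"))
# {'MultilingualClustering.defaultLanguage': 'ENGLISH',
#  'MultilingualClustering.languageAggregationStrategy': 'FLATTEN_MAJOR_LANGUAGE'}
```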
diff --git a/solr/example/example-DIH/solr/mail/conf/currency.xml b/solr/example/example-DIH/solr/mail/conf/currency.xml
deleted file mode 100644
index 3a9c58a..0000000
--- a/solr/example/example-DIH/solr/mail/conf/currency.xml
+++ /dev/null
@@ -1,67 +0,0 @@
-<?xml version="1.0" ?>
-<!--
- Licensed to the Apache Software Foundation (ASF) under one or more
- contributor license agreements.  See the NOTICE file distributed with
- this work for additional information regarding copyright ownership.
- The ASF licenses this file to You under the Apache License, Version 2.0
- (the "License"); you may not use this file except in compliance with
- the License.  You may obtain a copy of the License at
-
-     http://www.apache.org/licenses/LICENSE-2.0
-
- Unless required by applicable law or agreed to in writing, software
- distributed under the License is distributed on an "AS IS" BASIS,
- WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- See the License for the specific language governing permissions and
- limitations under the License.
--->
-
-<!-- Example exchange rates file for CurrencyField type named "currency" in example schema -->
-
-<currencyConfig version="1.0">
-  <rates>
-    <!-- Updated from http://www.exchangerate.com/ on 2011-09-27 -->
-    <rate from="USD" to="ARS" rate="4.333871" comment="ARGENTINA Peso" />
-    <rate from="USD" to="AUD" rate="1.025768" comment="AUSTRALIA Dollar" />
-    <rate from="USD" to="EUR" rate="0.743676" comment="European Euro" />
-    <rate from="USD" to="BRL" rate="1.881093" comment="BRAZIL Real" />
-    <rate from="USD" to="CAD" rate="1.030815" comment="CANADA Dollar" />
-    <rate from="USD" to="CLP" rate="519.0996" comment="CHILE Peso" />
-    <rate from="USD" to="CNY" rate="6.387310" comment="CHINA Yuan" />
-    <rate from="USD" to="CZK" rate="18.47134" comment="CZECH REP. Koruna" />
-    <rate from="USD" to="DKK" rate="5.515436" comment="DENMARK Krone" />
-    <rate from="USD" to="HKD" rate="7.801922" comment="HONG KONG Dollar" />
-    <rate from="USD" to="HUF" rate="215.6169" comment="HUNGARY Forint" />
-    <rate from="USD" to="ISK" rate="118.1280" comment="ICELAND Krona" />
-    <rate from="USD" to="INR" rate="49.49088" comment="INDIA Rupee" />
-    <rate from="USD" to="XDR" rate="0.641358" comment="INTNL MON. FUND SDR" />
-    <rate from="USD" to="ILS" rate="3.709739" comment="ISRAEL Sheqel" />
-    <rate from="USD" to="JPY" rate="76.32419" comment="JAPAN Yen" />
-    <rate from="USD" to="KRW" rate="1169.173" comment="KOREA (SOUTH) Won" />
-    <rate from="USD" to="KWD" rate="0.275142" comment="KUWAIT Dinar" />
-    <rate from="USD" to="MXN" rate="13.85895" comment="MEXICO Peso" />
-    <rate from="USD" to="NZD" rate="1.285159" comment="NEW ZEALAND Dollar" />
-    <rate from="USD" to="NOK" rate="5.859035" comment="NORWAY Krone" />
-    <rate from="USD" to="PKR" rate="87.57007" comment="PAKISTAN Rupee" />
-    <rate from="USD" to="PEN" rate="2.730683" comment="PERU Sol" />
-    <rate from="USD" to="PHP" rate="43.62039" comment="PHILIPPINES Peso" />
-    <rate from="USD" to="PLN" rate="3.310139" comment="POLAND Zloty" />
-    <rate from="USD" to="RON" rate="3.100932" comment="ROMANIA Leu" />
-    <rate from="USD" to="RUB" rate="32.14663" comment="RUSSIA Ruble" />
-    <rate from="USD" to="SAR" rate="3.750465" comment="SAUDI ARABIA Riyal" />
-    <rate from="USD" to="SGD" rate="1.299352" comment="SINGAPORE Dollar" />
-    <rate from="USD" to="ZAR" rate="8.329761" comment="SOUTH AFRICA Rand" />
-    <rate from="USD" to="SEK" rate="6.883442" comment="SWEDEN Krona" />
-    <rate from="USD" to="CHF" rate="0.906035" comment="SWITZERLAND Franc" />
-    <rate from="USD" to="TWD" rate="30.40283" comment="TAIWAN Dollar" />
-    <rate from="USD" to="THB" rate="30.89487" comment="THAILAND Baht" />
-    <rate from="USD" to="AED" rate="3.672955" comment="U.A.E. Dirham" />
-    <rate from="USD" to="UAH" rate="7.988582" comment="UKRAINE Hryvnia" />
-    <rate from="USD" to="GBP" rate="0.647910" comment="UNITED KINGDOM Pound" />
-    
-    <!-- Cross-rates for some common currencies -->
-    <rate from="EUR" to="GBP" rate="0.869914" />  
-    <rate from="EUR" to="NOK" rate="7.800095" />  
-    <rate from="GBP" to="NOK" rate="8.966508" />  
-  </rates>
-</currencyConfig>
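Every rate in the file above is quoted from USD, plus a few explicit cross-rates. Below is a sketch of how a consumer can resolve an arbitrary pair: use a direct rate, else the inverse of the opposite direction, else pivot through USD. Solr's own CurrencyField provider is more involved; this only illustrates the data in the file:

```python
# Resolve a conversion rate from currency.xml-style data.
rates = {
    ("USD", "EUR"): 0.743676,
    ("USD", "GBP"): 0.647910,
    ("EUR", "GBP"): 0.869914,  # explicit cross-rate from the file
}

def rate(frm, to):
    if frm == to:
        return 1.0
    if (frm, to) in rates:
        return rates[(frm, to)]
    if (to, frm) in rates:
        return 1.0 / rates[(to, frm)]
    return rate(frm, "USD") * rate("USD", to)  # pivot through the USD quotes

print(rate("GBP", "EUR"))  # ~1.1495, the inverse of the listed EUR->GBP rate
```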
diff --git a/solr/example/example-DIH/solr/mail/conf/elevate.xml b/solr/example/example-DIH/solr/mail/conf/elevate.xml
deleted file mode 100644
index 2c09ebe..0000000
--- a/solr/example/example-DIH/solr/mail/conf/elevate.xml
+++ /dev/null
@@ -1,42 +0,0 @@
-<?xml version="1.0" encoding="UTF-8" ?>
-<!--
- Licensed to the Apache Software Foundation (ASF) under one or more
- contributor license agreements.  See the NOTICE file distributed with
- this work for additional information regarding copyright ownership.
- The ASF licenses this file to You under the Apache License, Version 2.0
- (the "License"); you may not use this file except in compliance with
- the License.  You may obtain a copy of the License at
-
-     http://www.apache.org/licenses/LICENSE-2.0
-
- Unless required by applicable law or agreed to in writing, software
- distributed under the License is distributed on an "AS IS" BASIS,
- WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- See the License for the specific language governing permissions and
- limitations under the License.
--->
-
-<!-- If this file is found in the config directory, it will only be
-     loaded once at startup.  If it is found in Solr's data
-     directory, it will be re-loaded on every commit.
-
-   See http://wiki.apache.org/solr/QueryElevationComponent for more info
-
--->
-<elevate>
- <!-- Query elevation examples
-  <query text="foo bar">
-    <doc id="1" />
-    <doc id="2" />
-    <doc id="3" />
-  </query>
-
-for use with techproducts example
- 
-  <query text="ipod">
-    <doc id="MA147LL/A" />  put the actual ipod at the top 
-    <doc id="IW-02" exclude="true" /> exclude this cable
-  </query>
--->
-
-</elevate>
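The commented examples in the file above show the expected shape: a <query text="..."> element whose <doc> children are either elevated or, with exclude="true", filtered out. A sketch that pulls those rules into Python for inspection:

```python
# Parse elevate.xml into {query text: (elevated ids, excluded ids)}.
import xml.etree.ElementTree as ET

def load_elevations(path):
    rules = {}
    for q in ET.parse(path).getroot().iter("query"):
        docs = list(q.iter("doc"))
        rules[q.get("text")] = (
            [d.get("id") for d in docs if d.get("exclude") != "true"],
            [d.get("id") for d in docs if d.get("exclude") == "true"],
        )
    return rules

# With the commented "ipod" example enabled:
# load_elevations("elevate.xml") -> {'ipod': (['MA147LL/A'], ['IW-02'])}
```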
diff --git a/solr/example/example-DIH/solr/mail/conf/lang/contractions_ca.txt b/solr/example/example-DIH/solr/mail/conf/lang/contractions_ca.txt
deleted file mode 100644
index 307a85f..0000000
--- a/solr/example/example-DIH/solr/mail/conf/lang/contractions_ca.txt
+++ /dev/null
@@ -1,8 +0,0 @@
-# Set of Catalan contractions for ElisionFilter
-# TODO: load this as a resource from the analyzer and sync it in build.xml
-d
-l
-m
-n
-s
-t
diff --git a/solr/example/example-DIH/solr/mail/conf/lang/contractions_fr.txt b/solr/example/example-DIH/solr/mail/conf/lang/contractions_fr.txt
deleted file mode 100644
index f1bba51..0000000
--- a/solr/example/example-DIH/solr/mail/conf/lang/contractions_fr.txt
+++ /dev/null
@@ -1,15 +0,0 @@
-# Set of French contractions for ElisionFilter
-# TODO: load this as a resource from the analyzer and sync it in build.xml
-l
-m
-t
-qu
-n
-s
-j
-d
-c
-jusqu
-quoiqu
-lorsqu
-puisqu
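The contraction lists in this change feed Lucene's ElisionFilter, which drops a leading article plus apostrophe so that, for example, "l'avion" indexes as "avion". A rough Python approximation using the French set above (the real filter operates on token streams; this is only the string-level idea):

```python
# Strip a leading elided article from a token, ElisionFilter-style.
FR_CONTRACTIONS = {"l", "m", "t", "qu", "n", "s", "j", "d", "c",
                   "jusqu", "quoiqu", "lorsqu", "puisqu"}

def elide(token, articles=FR_CONTRACTIONS):
    for apos in ("'", "\u2019"):  # ASCII and typographic apostrophes
        head, sep, tail = token.partition(apos)
        if sep and head.lower() in articles:
            return tail
    return token

print(elide("l'avion"), elide("jusqu'ici"), elide("paris"))
# avion ici paris
```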
diff --git a/solr/example/example-DIH/solr/mail/conf/lang/contractions_ga.txt b/solr/example/example-DIH/solr/mail/conf/lang/contractions_ga.txt
deleted file mode 100644
index 9ebe7fa..0000000
--- a/solr/example/example-DIH/solr/mail/conf/lang/contractions_ga.txt
+++ /dev/null
@@ -1,5 +0,0 @@
-# Set of Irish contractions for ElisionFilter
-# TODO: load this as a resource from the analyzer and sync it in build.xml
-d
-m
-b
diff --git a/solr/example/example-DIH/solr/mail/conf/lang/contractions_it.txt b/solr/example/example-DIH/solr/mail/conf/lang/contractions_it.txt
deleted file mode 100644
index cac0409..0000000
--- a/solr/example/example-DIH/solr/mail/conf/lang/contractions_it.txt
+++ /dev/null
@@ -1,23 +0,0 @@
-# Set of Italian contractions for ElisionFilter
-# TODO: load this as a resource from the analyzer and sync it in build.xml
-c
-l 
-all 
-dall 
-dell 
-nell 
-sull 
-coll 
-pell 
-gl 
-agl 
-dagl 
-degl 
-negl 
-sugl 
-un 
-m 
-t 
-s 
-v 
-d
diff --git a/solr/example/example-DIH/solr/mail/conf/lang/hyphenations_ga.txt b/solr/example/example-DIH/solr/mail/conf/lang/hyphenations_ga.txt
deleted file mode 100644
index 4d2642c..0000000
--- a/solr/example/example-DIH/solr/mail/conf/lang/hyphenations_ga.txt
+++ /dev/null
@@ -1,5 +0,0 @@
-# Set of Irish hyphenations for StopFilter
-# TODO: load this as a resource from the analyzer and sync it in build.xml
-h
-n
-t
diff --git a/solr/example/example-DIH/solr/mail/conf/lang/stemdict_nl.txt b/solr/example/example-DIH/solr/mail/conf/lang/stemdict_nl.txt
deleted file mode 100644
index 4410729..0000000
--- a/solr/example/example-DIH/solr/mail/conf/lang/stemdict_nl.txt
+++ /dev/null
@@ -1,6 +0,0 @@
-# Set of overrides for the Dutch stemmer
-# TODO: load this as a resource from the analyzer and sync it in build.xml
-fiets	fiets
-bromfiets	bromfiets
-ei	eier
-kind	kinder
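This file is a stemmer-override dictionary: the left column is looked up before stemming and the right column is emitted verbatim, so frequent words keep hand-picked stems. A sketch of the lookup-first pattern, with a stand-in lambda where a real Dutch stemmer would go:

```python
# Apply stemdict-style overrides before falling back to a stemmer.
OVERRIDES = {"fiets": "fiets", "bromfiets": "bromfiets",
             "ei": "eier", "kind": "kinder"}

def stem(token, fallback=lambda t: t.rstrip("en")):  # fallback is a toy stemmer
    return OVERRIDES.get(token) or fallback(token)

print(stem("kind"), stem("fietsen"))  # kinder fiets
```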
diff --git a/solr/example/example-DIH/solr/mail/conf/lang/stoptags_ja.txt b/solr/example/example-DIH/solr/mail/conf/lang/stoptags_ja.txt
deleted file mode 100644
index 71b7508..0000000
--- a/solr/example/example-DIH/solr/mail/conf/lang/stoptags_ja.txt
+++ /dev/null
@@ -1,420 +0,0 @@
-#
-# This file defines a Japanese stoptag set for JapanesePartOfSpeechStopFilter.
-#
-# Any token with a part-of-speech tag that exactly matches one defined in this
-# file is removed from the token stream.
-#
-# Set your own stoptags by uncommenting the lines below.  Note that comments are
-# not allowed on the same line as a stoptag.  See LUCENE-3745 for frequency lists,
-# etc. that can be useful for building your own stoptag set.
-#
-# The entire possible tagset is provided below for convenience.
-#
-#####
-#  noun: unclassified nouns
-#名詞
-#
-#  noun-common: Common nouns or nouns where the sub-classification is undefined
-#名詞-一般
-#
-#  noun-proper: Proper nouns where the sub-classification is undefined 
-#名詞-固有名詞
-#
-#  noun-proper-misc: miscellaneous proper nouns
-#名詞-固有名詞-一般
-#
-#  noun-proper-person: Personal names where the sub-classification is undefined
-#名詞-固有名詞-人名
-#
-#  noun-proper-person-misc: names that cannot be divided into surname and 
-#  given name; foreign names; names where the surname or given name is unknown.
-#  e.g. お市の方
-#名詞-固有名詞-人名-一般
-#
-#  noun-proper-person-surname: Mainly Japanese surnames.
-#  e.g. 山田
-#名詞-固有名詞-人名-姓
-#
-#  noun-proper-person-given_name: Mainly Japanese given names.
-#  e.g. 太郎
-#名詞-固有名詞-人名-名
-#
-#  noun-proper-organization: Names representing organizations.
-#  e.g. 通産省, NHK
-#名詞-固有名詞-組織
-#
-#  noun-proper-place: Place names where the sub-classification is undefined
-#名詞-固有名詞-地域
-#
-#  noun-proper-place-misc: Place names excluding countries.
-#  e.g. アジア, バルセロナ, 京都
-#名詞-固有名詞-地域-一般
-#
-#  noun-proper-place-country: Country names. 
-#  e.g. 日本, オーストラリア
-#名詞-固有名詞-地域-国
-#
-#  noun-pronoun: Pronouns where the sub-classification is undefined
-#名詞-代名詞
-#
-#  noun-pronoun-misc: miscellaneous pronouns: 
-#  e.g. それ, ここ, あいつ, あなた, あちこち, いくつ, どこか, なに, みなさん, みんな, わたくし, われわれ
-#名詞-代名詞-一般
-#
-#  noun-pronoun-contraction: Spoken language contraction made by combining a 
-#  pronoun and the particle 'wa'.
-#  e.g. ありゃ, こりゃ, こりゃあ, そりゃ, そりゃあ 
-#名詞-代名詞-縮約
-#
-#  noun-adverbial: Temporal nouns such as names of days or months that behave 
-#  like adverbs. Nouns that represent amount or ratios and can be used adverbially,
-#  e.g. 金曜, 一月, 午後, 少量
-#名詞-副詞可能
-#
-#  noun-verbal: Nouns that take arguments with case and can appear followed by 
-#  'suru' and related verbs (する, できる, なさる, くださる)
-#  e.g. インプット, 愛着, 悪化, 悪戦苦闘, 一安心, 下取り
-#名詞-サ変接続
-#
-#  noun-adjective-base: The base form of adjectives, words that appear before な ("na")
-#  e.g. 健康, 安易, 駄目, だめ
-#名詞-形容動詞語幹
-#
-#  noun-numeric: Arabic numbers, Chinese numerals, and counters like 何 (回), 数.
-#  e.g. 0, 1, 2, 何, 数, 幾
-#名詞-数
-#
-#  noun-affix: noun affixes where the sub-classification is undefined
-#名詞-非自立
-#
-#  noun-affix-misc: Of adnominalizers, the case-marker の ("no"), and words that 
-#  attach to the base form of inflectional words, words that cannot be classified 
-#  into any of the other categories below. This category includes indefinite nouns.
-#  e.g. あかつき, 暁, かい, 甲斐, 気, きらい, 嫌い, くせ, 癖, こと, 事, ごと, 毎, しだい, 次第, 
-#       順, せい, 所為, ついで, 序で, つもり, 積もり, 点, どころ, の, はず, 筈, はずみ, 弾み, 
-#       拍子, ふう, ふり, 振り, ほう, 方, 旨, もの, 物, 者, ゆえ, 故, ゆえん, 所以, わけ, 訳,
-#       わり, 割り, 割, ん-口語/, もん-口語/
-#名詞-非自立-一般
-#
-#  noun-affix-adverbial: noun affixes that can behave as adverbs.
-#  e.g. あいだ, 間, あげく, 挙げ句, あと, 後, 余り, 以外, 以降, 以後, 以上, 以前, 一方, うえ, 
-#       上, うち, 内, おり, 折り, かぎり, 限り, きり, っきり, 結果, ころ, 頃, さい, 際, 最中, さなか, 
-#       最中, じたい, 自体, たび, 度, ため, 為, つど, 都度, とおり, 通り, とき, 時, ところ, 所, 
-#       とたん, 途端, なか, 中, のち, 後, ばあい, 場合, 日, ぶん, 分, ほか, 他, まえ, 前, まま, 
-#       儘, 侭, みぎり, 矢先
-#名詞-非自立-副詞可能
-#
-#  noun-affix-aux: noun affixes treated as 助動詞 ("auxiliary verb") in school grammars 
-#  with the stem よう(だ) ("you(da)").
-#  e.g.  よう, やう, 様 (よう)
-#名詞-非自立-助動詞語幹
-#  
-#  noun-affix-adjective-base: noun affixes that can connect to the indeclinable
-#  connection form な (aux "da").
-#  e.g. みたい, ふう
-#名詞-非自立-形容動詞語幹
-#
-#  noun-special: special nouns where the sub-classification is undefined.
-#名詞-特殊
-#
-#  noun-special-aux: The そうだ ("souda") stem form that is used for reporting news, is 
-#  treated as 助動詞 ("auxiliary verb") in school grammars, and attach to the base 
-#  form of inflectional words.
-#  e.g. そう
-#名詞-特殊-助動詞語幹
-#
-#  noun-suffix: noun suffixes where the sub-classification is undefined.
-#名詞-接尾
-#
-#  noun-suffix-misc: Of the nouns or stem forms of other parts of speech that connect 
-#  to ガル or タイ and can combine into compound nouns, words that cannot be classified into
-#  any of the other categories below. In general, this category is more inclusive than 
-#  接尾語 ("suffix") and is usually the last element in a compound noun.
-#  e.g. おき, かた, 方, 甲斐 (がい), がかり, ぎみ, 気味, ぐるみ, (~した) さ, 次第, 済 (ず) み,
-#       よう, (でき)っこ, 感, 観, 性, 学, 類, 面, 用
-#名詞-接尾-一般
-#
-#  noun-suffix-person: Suffixes that form nouns and attach to person names more often
-#  than other nouns.
-#  e.g. 君, 様, 著
-#名詞-接尾-人名
-#
-#  noun-suffix-place: Suffixes that form nouns and attach to place names more often 
-#  than other nouns.
-#  e.g. 町, 市, 県
-#名詞-接尾-地域
-#
-#  noun-suffix-verbal: Of the suffixes that attach to nouns and form nouns, those that 
-#  can appear before スル ("suru").
-#  e.g. 化, 視, 分け, 入り, 落ち, 買い
-#名詞-接尾-サ変接続
-#
-#  noun-suffix-aux: The stem form of そうだ (様態) that is used to indicate conditions, 
-#  is treated as 助動詞 ("auxiliary verb") in school grammars, and attach to the 
-#  conjunctive form of inflectional words.
-#  e.g. そう
-#名詞-接尾-助動詞語幹
-#
-#  noun-suffix-adjective-base: Suffixes that attach to other nouns or the conjunctive 
-#  form of inflectional words and appear before the copula だ ("da").
-#  e.g. 的, げ, がち
-#名詞-接尾-形容動詞語幹
-#
-#  noun-suffix-adverbial: Suffixes that attach to other nouns and can behave as adverbs.
-#  e.g. 後 (ご), 以後, 以降, 以前, 前後, 中, 末, 上, 時 (じ)
-#名詞-接尾-副詞可能
-#
-#  noun-suffix-classifier: Suffixes that attach to numbers and form nouns. This category 
-#  is more inclusive than 助数詞 ("classifier") and includes common nouns that attach 
-#  to numbers.
-#  e.g. 個, つ, 本, 冊, パーセント, cm, kg, カ月, か国, 区画, 時間, 時半
-#名詞-接尾-助数詞
-#
-#  noun-suffix-special: Special suffixes that mainly attach to inflecting words.
-#  e.g. (楽し) さ, (考え) 方
-#名詞-接尾-特殊
-#
-#  noun-suffix-conjunctive: Nouns that behave like conjunctions and join two words 
-#  together.
-#  e.g. (日本) 対 (アメリカ), 対 (アメリカ), (3) 対 (5), (女優) 兼 (主婦)
-#名詞-接続詞的
-#
-#  noun-verbal_aux: Nouns that attach to the conjunctive particle て ("te") and are 
-#  semantically verb-like.
-#  e.g. ごらん, ご覧, 御覧, 頂戴
-#名詞-動詞非自立的
-#
-#  noun-quotation: text that cannot be segmented into words, proverbs, Chinese poetry, 
-#  dialects, English, etc. Currently, the only entry for 名詞 引用文字列 ("noun quotation") 
-#  is いわく ("iwaku").
-#名詞-引用文字列
-#
-#  noun-nai_adjective: Words that appear before the auxiliary verb ない ("nai") and
-#  behave like an adjective.
-#  e.g. 申し訳, 仕方, とんでも, 違い
-#名詞-ナイ形容詞語幹
-#
-#####
-#  prefix: unclassified prefixes
-#接頭詞
-#
-#  prefix-nominal: Prefixes that attach to nouns (including adjective stem forms) 
-#  excluding numerical expressions.
-#  e.g. お (水), 某 (氏), 同 (社), 故 (~氏), 高 (品質), お (見事), ご (立派)
-#接頭詞-名詞接続
-#
-#  prefix-verbal: Prefixes that attach to the imperative form of a verb or a verb
-#  in conjunctive form followed by なる/なさる/くださる.
-#  e.g. お (読みなさい), お (座り)
-#接頭詞-動詞接続
-#
-#  prefix-adjectival: Prefixes that attach to adjectives.
-#  e.g. お (寒いですねえ), バカ (でかい)
-#接頭詞-形容詞接続
-#
-#  prefix-numerical: Prefixes that attach to numerical expressions.
-#  e.g. 約, およそ, 毎時
-#接頭詞-数接続
-#
-#####
-#  verb: unclassified verbs
-#動詞
-#
-#  verb-main:
-#動詞-自立
-#
-#  verb-auxiliary:
-#動詞-非自立
-#
-#  verb-suffix:
-#動詞-接尾
-#
-#####
-#  adjective: unclassified adjectives
-#形容詞
-#
-#  adjective-main:
-#形容詞-自立
-#
-#  adjective-auxiliary:
-#形容詞-非自立
-#
-#  adjective-suffix:
-#形容詞-接尾
-#
-#####
-#  adverb: unclassified adverbs
-#副詞
-#
-#  adverb-misc: Words that can be segmented into one unit and where adnominal 
-#  modification is not possible.
-#  e.g. あいかわらず, 多分
-#副詞-一般
-#
-#  adverb-particle_conjunction: Adverbs that can be followed by の, は, に, 
-#  な, する, だ, etc.
-#  e.g. こんなに, そんなに, あんなに, なにか, なんでも
-#副詞-助詞類接続
-#
-#####
-#  adnominal: Words that only have noun-modifying forms.
-#  e.g. この, その, あの, どの, いわゆる, なんらかの, 何らかの, いろんな, こういう, そういう, ああいう, 
-#       どういう, こんな, そんな, あんな, どんな, 大きな, 小さな, おかしな, ほんの, たいした, 
-#       「(, も) さる (ことながら)」, 微々たる, 堂々たる, 単なる, いかなる, 我が」「同じ, 亡き
-#連体詞
-#
-#####
-#  conjunction: Conjunctions that can occur independently.
-#  e.g. が, けれども, そして, じゃあ, それどころか
-接続詞
-#
-#####
-#  particle: unclassified particles.
-助詞
-#
-#  particle-case: case particles where the subclassification is undefined.
-助詞-格助詞
-#
-#  particle-case-misc: Case particles.
-#  e.g. から, が, で, と, に, へ, より, を, の, にて
-助詞-格助詞-一般
-#
-#  particle-case-quote: the "to" that appears after nouns, a person’s speech, 
-#  quotation marks, expressions of decisions from a meeting, reasons, judgements,
-#  conjectures, etc.
-#  e.g. ( だ) と (述べた.), ( である) と (して執行猶予...)
-助詞-格助詞-引用
-#
-#  particle-case-compound: Compounds of particles and verbs that mainly behave 
-#  like case particles.
-#  e.g. という, といった, とかいう, として, とともに, と共に, でもって, にあたって, に当たって, に当って,
-#       にあたり, に当たり, に当り, に当たる, にあたる, において, に於いて,に於て, における, に於ける, 
-#       にかけ, にかけて, にかんし, に関し, にかんして, に関して, にかんする, に関する, に際し, 
-#       に際して, にしたがい, に従い, に従う, にしたがって, に従って, にたいし, に対し, にたいして, 
-#       に対して, にたいする, に対する, について, につき, につけ, につけて, につれ, につれて, にとって,
-#       にとり, にまつわる, によって, に依って, に因って, により, に依り, に因り, による, に依る, に因る, 
-#       にわたって, にわたる, をもって, を以って, を通じ, を通じて, を通して, をめぐって, をめぐり, をめぐる,
-#       って-口語/, ちゅう-関西弁「という」/, (何) ていう (人)-口語/, っていう-口語/, といふ, とかいふ
-助詞-格助詞-連語
-#
-#  particle-conjunctive:
-#  e.g. から, からには, が, けれど, けれども, けど, し, つつ, て, で, と, ところが, どころか, とも, ども, 
-#       ながら, なり, ので, のに, ば, ものの, や ( した), やいなや, (ころん) じゃ(いけない)-口語/, 
-#       (行っ) ちゃ(いけない)-口語/, (言っ) たって (しかたがない)-口語/, (それがなく)ったって (平気)-口語/
-助詞-接続助詞
-#
-#  particle-dependency:
-#  e.g. こそ, さえ, しか, すら, は, も, ぞ
-助詞-係助詞
-#
-#  particle-adverbial:
-#  e.g. がてら, かも, くらい, 位, ぐらい, しも, (学校) じゃ(これが流行っている)-口語/, 
-#       (それ)じゃあ (よくない)-口語/, ずつ, (私) なぞ, など, (私) なり (に), (先生) なんか (大嫌い)-口語/,
-#       (私) なんぞ, (先生) なんて (大嫌い)-口語/, のみ, だけ, (私) だって-口語/, だに, 
-#       (彼)ったら-口語/, (お茶) でも (いかが), 等 (とう), (今後) とも, ばかり, ばっか-口語/, ばっかり-口語/,
-#       ほど, 程, まで, 迄, (誰) も (が)([助詞-格助詞] および [助詞-係助詞] の前に位置する「も」)
-助詞-副助詞
-#
-#  particle-interjective: particles with interjective grammatical roles.
-#  e.g. (松島) や
-助詞-間投助詞
-#
-#  particle-coordinate:
-#  e.g. と, たり, だの, だり, とか, なり, や, やら
-助詞-並立助詞
-#
-#  particle-final:
-#  e.g. かい, かしら, さ, ぜ, (だ)っけ-口語/, (とまってる) で-方言/, な, ナ, なあ-口語/, ぞ, ね, ネ, 
-#       ねぇ-口語/, ねえ-口語/, ねん-方言/, の, のう-口語/, や, よ, ヨ, よぉ-口語/, わ, わい-口語/
-助詞-終助詞
-#
-#  particle-adverbial/conjunctive/final: The particle "ka" when unknown whether it is 
-#  adverbial, conjunctive, or sentence final. For example:
-#       (a) 「A か B か」. Ex:「(国内で運用する) か,(海外で運用する) か (.)」
-#       (b) Inside an adverb phrase. Ex:「(幸いという) か (, 死者はいなかった.)」
-#           「(祈りが届いたせい) か (, 試験に合格した.)」
-#       (c) 「かのように」. Ex:「(何もなかった) か (のように振る舞った.)」
-#  e.g. か
-助詞-副助詞/並立助詞/終助詞
-#
-#  particle-adnominalizer: The "no" that attaches to nouns and modifies 
-#  non-inflectional words.
-助詞-連体化
-#
-#  particle-adnominalizer: The "ni" and "to" that appear following nouns and adverbs 
-#  that are giongo, giseigo, or gitaigo.
-#  e.g. に, と
-助詞-副詞化
-#
-#  particle-special: A particle that does not fit into one of the above classifications. 
-#  This includes particles that are used in Tanka, Haiku, and other poetry.
-#  e.g. かな, けむ, ( しただろう) に, (あんた) にゃ(わからん), (俺) ん (家)
-助詞-特殊
-#
-#####
-#  auxiliary-verb:
-助動詞
-#
-#####
-#  interjection: Greetings and other exclamations.
-#  e.g. おはよう, おはようございます, こんにちは, こんばんは, ありがとう, どうもありがとう, ありがとうございます, 
-#       いただきます, ごちそうさま, さよなら, さようなら, はい, いいえ, ごめん, ごめんなさい
-#感動詞
-#
-#####
-#  symbol: unclassified Symbols.
-記号
-#
-#  symbol-misc: A general symbol not in one of the categories below.
-#  e.g. [○◎@$〒→+]
-記号-一般
-#
-#  symbol-comma: Commas
-#  e.g. [,、]
-記号-読点
-#
-#  symbol-period: Periods and full stops.
-#  e.g. [..。]
-記号-句点
-#
-#  symbol-space: Full-width whitespace.
-記号-空白
-#
-#  symbol-open_bracket:
-#  e.g. [({‘“『【]
-記号-括弧開
-#
-#  symbol-close_bracket:
-#  e.g. [)}’”』」】]
-記号-括弧閉
-#
-#  symbol-alphabetic:
-#記号-アルファベット
-#
-#####
-#  other: unclassified other
-#その他
-#
-#  other-interjection: Words that are hard to classify as noun-suffixes or 
-#  sentence-final particles.
-#  e.g. (だ)ァ
-その他-間投
-#
-#####
-#  filler: Aizuchi that occurs during a conversation or sounds inserted as filler.
-#  e.g. あの, うんと, えと
-フィラー
-#
-#####
-#  non-verbal: non-verbal sound.
-非言語音
-#
-#####
-#  fragment:
-#語断片
-#
-#####
-#  unknown: unknown part of speech.
-#未知語
-#
-##### End of file
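Everything in the file above is a comment except the handful of uncommented tags (particles, auxiliary verbs, symbols, fillers and similar function-word categories), which are the tags actually stopped. A sketch of what JapanesePartOfSpeechStopFilter does with it:

```python
# Load stoptags_ja.txt and drop tokens whose POS tag is listed.
def load_stoptags(path):
    with open(path, encoding="utf-8") as f:
        return {ln.strip() for ln in f if ln.strip() and not ln.startswith("#")}

def filter_by_pos(tagged_tokens, stoptags):
    # tagged_tokens: iterable of (surface form, POS tag) pairs
    return [tok for tok, pos in tagged_tokens if pos not in stoptags]

tags = load_stoptags("stoptags_ja.txt")
print(filter_by_pos([("東京", "名詞-固有名詞-地域-一般"),
                     ("が", "助詞-格助詞-一般")], tags))
# ['東京']  (the particle が carries a stopped tag)
```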
diff --git a/solr/example/example-DIH/solr/mail/conf/lang/stopwords_ar.txt b/solr/example/example-DIH/solr/mail/conf/lang/stopwords_ar.txt
deleted file mode 100644
index 046829d..0000000
--- a/solr/example/example-DIH/solr/mail/conf/lang/stopwords_ar.txt
+++ /dev/null
@@ -1,125 +0,0 @@
-# This file was created by Jacques Savoy and is distributed under the BSD license.
-# See http://members.unine.ch/jacques.savoy/clef/index.html.
-# Also see http://www.opensource.org/licenses/bsd-license.html
-# Cleaned on October 11, 2009 (not normalized, so use before normalization)
-# This means that when modifying this list, you might need to add some 
-# redundant entries, for example containing forms with both أ and ا
-من
-ومن
-منها
-منه
-في
-وفي
-فيها
-فيه
-و
-ف
-ثم
-او
-أو
-ب
-بها
-به
-ا
-أ
-اى
-اي
-أي
-أى
-لا
-ولا
-الا
-ألا
-إلا
-لكن
-ما
-وما
-كما
-فما
-عن
-مع
-اذا
-إذا
-ان
-أن
-إن
-انها
-أنها
-إنها
-انه
-أنه
-إنه
-بان
-بأن
-فان
-فأن
-وان
-وأن
-وإن
-التى
-التي
-الذى
-الذي
-الذين
-الى
-الي
-إلى
-إلي
-على
-عليها
-عليه
-اما
-أما
-إما
-ايضا
-أيضا
-كل
-وكل
-لم
-ولم
-لن
-ولن
-هى
-هي
-هو
-وهى
-وهي
-وهو
-فهى
-فهي
-فهو
-انت
-أنت
-لك
-لها
-له
-هذه
-هذا
-تلك
-ذلك
-هناك
-كانت
-كان
-يكون
-تكون
-وكانت
-وكان
-غير
-بعض
-قد
-نحو
-بين
-بينما
-منذ
-ضمن
-حيث
-الان
-الآن
-خلال
-بعد
-قبل
-حتى
-عند
-عندما
-لدى
-جميع
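As the header says, this list is meant to run before ArabicNormalizationFilter, which is why near-duplicate spellings such as أن and ان both appear. A toy illustration of the ordering concern (the real normalizer folds more than alef variants):

```python
# Fold hamza-carrying alef forms to bare alef, as a normalizer would.
def normalize_alef(text):
    return text.replace("أ", "ا").replace("إ", "ا").replace("آ", "ا")

stopwords = {"أن", "ان"}  # both spellings are present in the file above
print(normalize_alef("أن") in stopwords)  # True, because ان is listed too
```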
diff --git a/solr/example/example-DIH/solr/mail/conf/lang/stopwords_bg.txt b/solr/example/example-DIH/solr/mail/conf/lang/stopwords_bg.txt
deleted file mode 100644
index 1ae4ba2..0000000
--- a/solr/example/example-DIH/solr/mail/conf/lang/stopwords_bg.txt
+++ /dev/null
@@ -1,193 +0,0 @@
-# This file was created by Jacques Savoy and is distributed under the BSD license.
-# See http://members.unine.ch/jacques.savoy/clef/index.html.
-# Also see http://www.opensource.org/licenses/bsd-license.html
-
-аз
-ако
-ала
-бе
-без
-беше
-би
-бил
-била
-били
-било
-близо
-бъдат
-бъде
-бяха
-
-вас
-ваш
-ваша
-вероятно
-вече
-взема
-ви
-вие
-винаги
-все
-всеки
-всички
-всичко
-всяка
-във
-въпреки
-върху
-
-ги
-главно
-го
-
-да
-дали
-до
-докато
-докога
-дори
-досега
-доста
-
-едва
-един
-ето
-за
-зад
-заедно
-заради
-засега
-затова
-защо
-защото
-
-из
-или
-им
-има
-имат
-иска
-
-каза
-как
-каква
-какво
-както
-какъв
-като
-кога
-когато
-което
-които
-кой
-който
-колко
-която
-къде
-където
-към
-ли
-
-ме
-между
-мен
-ми
-мнозина
-мога
-могат
-може
-моля
-момента
-му
-
-на
-над
-назад
-най
-направи
-напред
-например
-нас
-не
-него
-нея
-ни
-ние
-никой
-нито
-но
-някои
-някой
-няма
-обаче
-около
-освен
-особено
-от
-отгоре
-отново
-още
-пак
-по
-повече
-повечето
-под
-поне
-поради
-после
-почти
-прави
-пред
-преди
-през
-при
-пък
-първо
-
-са
-само
-се
-сега
-си
-скоро
-след
-сме
-според
-сред
-срещу
-сте
-съм
-със
-също
-
-тази
-така
-такива
-такъв
-там
-твой
-те
-тези
-ти
-тн
-то
-това
-тогава
-този
-той
-толкова
-точно
-трябва
-тук
-тъй
-тя
-тях
-
-харесва
-
-че
-често
-чрез
-ще
-щом
-
diff --git a/solr/example/example-DIH/solr/mail/conf/lang/stopwords_ca.txt b/solr/example/example-DIH/solr/mail/conf/lang/stopwords_ca.txt
deleted file mode 100644
index 3da65de..0000000
--- a/solr/example/example-DIH/solr/mail/conf/lang/stopwords_ca.txt
+++ /dev/null
@@ -1,220 +0,0 @@
-# Catalan stopwords from http://github.com/vcl/cue.language (Apache 2 Licensed)
-a
-abans
-ací
-ah
-així
-això
-al
-als
-aleshores
-algun
-alguna
-algunes
-alguns
-alhora
-allà
-allí
-allò
-altra
-altre
-altres
-amb
-ambdós
-ambdues
-apa
-aquell
-aquella
-aquelles
-aquells
-aquest
-aquesta
-aquestes
-aquests
-aquí
-baix
-cada
-cadascú
-cadascuna
-cadascunes
-cadascuns
-com
-contra
-d'un
-d'una
-d'unes
-d'uns
-dalt
-de
-del
-dels
-des
-després
-dins
-dintre
-donat
-doncs
-durant
-e
-eh
-el
-els
-em
-en
-encara
-ens
-entre
-érem
-eren
-éreu
-es
-és
-esta
-està
-estàvem
-estaven
-estàveu
-esteu
-et
-etc
-ets
-fins
-fora
-gairebé
-ha
-han
-has
-havia
-he
-hem
-heu
-hi 
-ho
-i
-igual
-iguals
-ja
-l'hi
-la
-les
-li
-li'n
-llavors
-m'he
-ma
-mal
-malgrat
-mateix
-mateixa
-mateixes
-mateixos
-me
-mentre
-més
-meu
-meus
-meva
-meves
-molt
-molta
-moltes
-molts
-mon
-mons
-n'he
-n'hi
-ne
-ni
-no
-nogensmenys
-només
-nosaltres
-nostra
-nostre
-nostres
-o
-oh
-oi
-on
-pas
-pel
-pels
-per
-però
-perquè
-poc 
-poca
-pocs
-poques
-potser
-propi
-qual
-quals
-quan
-quant 
-que
-què
-quelcom
-qui
-quin
-quina
-quines
-quins
-s'ha
-s'han
-sa
-semblant
-semblants
-ses
-seu 
-seus
-seva
-seva
-seves
-si
-sobre
-sobretot
-sóc
-solament
-sols
-son 
-són
-sons 
-sota
-sou
-t'ha
-t'han
-t'he
-ta
-tal
-també
-tampoc
-tan
-tant
-tanta
-tantes
-teu
-teus
-teva
-teves
-ton
-tons
-tot
-tota
-totes
-tots
-un
-una
-unes
-uns
-us
-va
-vaig
-vam
-van
-vas
-veu
-vosaltres
-vostra
-vostre
-vostres
diff --git a/solr/example/example-DIH/solr/mail/conf/lang/stopwords_ckb.txt b/solr/example/example-DIH/solr/mail/conf/lang/stopwords_ckb.txt
deleted file mode 100644
index 87abf11..0000000
--- a/solr/example/example-DIH/solr/mail/conf/lang/stopwords_ckb.txt
+++ /dev/null
@@ -1,136 +0,0 @@
-# set of Kurdish stopwords
-# note these have been normalized with our scheme (e represented with U+06D5, etc)
-# constructed from:
-# * Fig 5 of "Building A Test Collection For Sorani Kurdish" (Esmaili et al)
-# * "Sorani Kurdish: A Reference Grammar with selected readings" (Thackston)
-# * Corpus-based analysis of 77M word Sorani collection: wikipedia, news, blogs, etc
-
-# and
-و
-# which
-کە
-# of
-ی
-# made/did
-کرد
-# that/which
-ئەوەی
-# on/head
-سەر
-# two
-دوو
-# also
-هەروەها
-# from/that
-لەو
-# makes/does
-دەکات
-# some
-چەند
-# every
-هەر
-
-# demonstratives
-# that
-ئەو
-# this
-ئەم
-
-# personal pronouns
-# I
-من
-# we
-ئێمە
-# you
-تۆ
-# you
-ئێوە
-# he/she/it
-ئەو
-# they
-ئەوان
-
-# prepositions
-# to/with/by
-بە
-پێ
-# without
-بەبێ
-# along with/while/during
-بەدەم
-# in the opinion of
-بەلای
-# according to
-بەپێی
-# before
-بەرلە
-# in the direction of
-بەرەوی
-# in front of/toward
-بەرەوە
-# before/in the face of
-بەردەم
-# without
-بێ
-# except for
-بێجگە
-# for
-بۆ
-# on/in
-دە
-تێ
-# with
-دەگەڵ
-# after
-دوای
-# except for/aside from
-جگە
-# in/from
-لە
-لێ
-# in front of/before/because of
-لەبەر
-# between/among
-لەبەینی
-# concerning/about
-لەبابەت
-# concerning
-لەبارەی
-# instead of
-لەباتی
-# beside
-لەبن
-# instead of
-لەبرێتی
-# behind
-لەدەم
-# with/together with
-لەگەڵ
-# by
-لەلایەن
-# within
-لەناو
-# between/among
-لەنێو
-# for the sake of
-لەپێناوی
-# with respect to
-لەرەوی
-# by means of/for
-لەرێ
-# for the sake of
-لەرێگا
-# on/on top of/according to
-لەسەر
-# under
-لەژێر
-# between/among
-ناو
-# between/among
-نێوان
-# after
-پاش
-# before
-پێش
-# like
-وەک
diff --git a/solr/example/example-DIH/solr/mail/conf/lang/stopwords_cz.txt b/solr/example/example-DIH/solr/mail/conf/lang/stopwords_cz.txt
deleted file mode 100644
index 53c6097..0000000
--- a/solr/example/example-DIH/solr/mail/conf/lang/stopwords_cz.txt
+++ /dev/null
@@ -1,172 +0,0 @@
-a
-s
-k
-o
-i
-u
-v
-z
-dnes
-cz
-tímto
-budeš
-budem
-byli
-jseš
-můj
-svým
-ta
-tomto
-tohle
-tuto
-tyto
-jej
-zda
-proč
-máte
-tato
-kam
-tohoto
-kdo
-kteří
-mi
-nám
-tom
-tomuto
-mít
-nic
-proto
-kterou
-byla
-toho
-protože
-asi
-ho
-naši
-napište
-re
-což
-tím
-takže
-svých
-její
-svými
-jste
-aj
-tu
-tedy
-teto
-bylo
-kde
-ke
-pravé
-ji
-nad
-nejsou
-či
-pod
-téma
-mezi
-přes
-ty
-pak
-vám
-ani
-když
-však
-neg
-jsem
-tento
-článku
-články
-aby
-jsme
-před
-pta
-jejich
-byl
-ještě
-až
-bez
-také
-pouze
-první
-vaše
-která
-nás
-nový
-tipy
-pokud
-může
-strana
-jeho
-své
-jiné
-zprávy
-nové
-není
-vás
-jen
-podle
-zde
-už
-být
-více
-bude
-již
-než
-který
-by
-které
-co
-nebo
-ten
-tak
-má
-při
-od
-po
-jsou
-jak
-další
-ale
-si
-se
-ve
-to
-jako
-za
-zpět
-ze
-do
-pro
-je
-na
-atd
-atp
-jakmile
-přičemž
-já
-on
-ona
-ono
-oni
-ony
-my
-vy
-jí
-ji
-mě
-mne
-jemu
-tomu
-těm
-těmu
-němu
-němuž
-jehož
-jíž
-jelikož
-jež
-jakož
-načež
diff --git a/solr/example/example-DIH/solr/mail/conf/lang/stopwords_da.txt b/solr/example/example-DIH/solr/mail/conf/lang/stopwords_da.txt
deleted file mode 100644
index 42e6145..0000000
--- a/solr/example/example-DIH/solr/mail/conf/lang/stopwords_da.txt
+++ /dev/null
@@ -1,110 +0,0 @@
- | From svn.tartarus.org/snowball/trunk/website/algorithms/danish/stop.txt
- | This file is distributed under the BSD License.
- | See http://snowball.tartarus.org/license.php
- | Also see http://www.opensource.org/licenses/bsd-license.html
- |  - Encoding was converted to UTF-8.
- |  - This notice was added.
- |
- | NOTE: To use this file with StopFilterFactory, you must specify format="snowball"
-
- | A Danish stop word list. Comments begin with vertical bar. Each stop
- | word is at the start of a line.
-
- | This is a ranked list (commonest to rarest) of stopwords derived from
- | a large text sample.
-
-
-og           | and
-i            | in
-jeg          | I
-det          | that (dem. pronoun)/it (pers. pronoun)
-at           | that (in front of a sentence)/to (with infinitive)
-en           | a/an
-den          | it (pers. pronoun)/that (dem. pronoun)
-til          | to/at/for/until/against/by/of/into, more
-er           | present tense of "to be"
-som          | who, as
-på           | on/upon/in/on/at/to/after/of/with/for, on
-de           | they
-med          | with/by/in, along
-han          | he
-af           | of/by/from/off/for/in/with/on, off
-for          | at/for/to/from/by/of/ago, in front/before, because
-ikke         | not
-der          | who/which, there/those
-var          | past tense of "to be"
-mig          | me/myself
-sig          | oneself/himself/herself/itself/themselves
-men          | but
-et           | a/an/one, one (number), someone/somebody/one
-har          | present tense of "to have"
-om           | round/about/for/in/a, about/around/down, if
-vi           | we
-min          | my
-havde        | past tense of "to have"
-ham          | him
-hun          | she
-nu           | now
-over         | over/above/across/by/beyond/past/on/about, over/past
-da           | then, when/as/since
-fra          | from/off/since, off, since
-du           | you
-ud           | out
-sin          | his/her/its/one's
-dem          | them
-os           | us/ourselves
-op           | up
-man          | you/one
-hans         | his
-hvor         | where
-eller        | or
-hvad         | what
-skal         | must/shall etc.
-selv         | myself/youself/herself/ourselves etc., even
-her          | here
-alle         | all/everyone/everybody etc.
-vil          | will (verb)
-blev         | past tense of "to stay/to remain/to get/to become"
-kunne        | could
-ind          | in
-når          | when
-være         | present tense of "to be"
-dog          | however/yet/after all
-noget        | something
-ville        | would
-jo           | you know/you see (adv), yes
-deres        | their/theirs
-efter        | after/behind/according to/for/by/from, later/afterwards
-ned          | down
-skulle       | should
-denne        | this
-end          | than
-dette        | this
-mit          | my/mine
-også         | also
-under        | under/beneath/below/during, below/underneath
-have         | have
-dig          | you
-anden        | other
-hende        | her
-mine         | my
-alt          | everything
-meget        | much/very, plenty of
-sit          | his, her, its, one's
-sine         | his, her, its, one's
-vor          | our
-mod          | against
-disse        | these
-hvis         | if
-din          | your/yours
-nogle        | some
-hos          | by/at
-blive        | be/become
-mange        | many
-ad           | by/through
-bliver       | present tense of "to be/to become"
-hendes       | her/hers
-været        | be
-thi          | for (conj)
-jer          | you
-sådan        | such, like this/like that
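This is the first of the format="snowball" lists in this change: everything after a vertical bar is a comment, and every remaining whitespace-separated token on a line is a stop word (which is also how the multi-column Finnish pronoun table further down works). A sketch of that parsing rule:

```python
# Parse a snowball-format stop word file into a set.
def load_snowball(path):
    words = set()
    with open(path, encoding="utf-8") as f:
        for line in f:
            words.update(line.split("|", 1)[0].split())
    return words

# load_snowball("stopwords_da.txt") -> {'og', 'i', 'jeg', 'det', ...}
```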
diff --git a/solr/example/example-DIH/solr/mail/conf/lang/stopwords_de.txt b/solr/example/example-DIH/solr/mail/conf/lang/stopwords_de.txt
deleted file mode 100644
index 86525e7..0000000
--- a/solr/example/example-DIH/solr/mail/conf/lang/stopwords_de.txt
+++ /dev/null
@@ -1,294 +0,0 @@
- | From svn.tartarus.org/snowball/trunk/website/algorithms/german/stop.txt
- | This file is distributed under the BSD License.
- | See http://snowball.tartarus.org/license.php
- | Also see http://www.opensource.org/licenses/bsd-license.html
- |  - Encoding was converted to UTF-8.
- |  - This notice was added.
- |
- | NOTE: To use this file with StopFilterFactory, you must specify format="snowball"
-
- | A German stop word list. Comments begin with vertical bar. Each stop
- | word is at the start of a line.
-
- | The number of forms in this list is reduced significantly by passing it
- | through the German stemmer.
-
-
-aber           |  but
-
-alle           |  all
-allem
-allen
-aller
-alles
-
-als            |  than, as
-also           |  so
-am             |  an + dem
-an             |  at
-
-ander          |  other
-andere
-anderem
-anderen
-anderer
-anderes
-anderm
-andern
-anderr
-anders
-
-auch           |  also
-auf            |  on
-aus            |  out of
-bei            |  by
-bin            |  am
-bis            |  until
-bist           |  art
-da             |  there
-damit          |  with it
-dann           |  then
-
-der            |  the
-den
-des
-dem
-die
-das
-
-daß            |  that
-
-derselbe       |  the same
-derselben
-denselben
-desselben
-demselben
-dieselbe
-dieselben
-dasselbe
-
-dazu           |  to that
-
-dein           |  thy
-deine
-deinem
-deinen
-deiner
-deines
-
-denn           |  because
-
-derer          |  of those
-dessen         |  of him
-
-dich           |  thee
-dir            |  to thee
-du             |  thou
-
-dies           |  this
-diese
-diesem
-diesen
-dieser
-dieses
-
-
-doch           |  (several meanings)
-dort           |  (over) there
-
-
-durch          |  through
-
-ein            |  a
-eine
-einem
-einen
-einer
-eines
-
-einig          |  some
-einige
-einigem
-einigen
-einiger
-einiges
-
-einmal         |  once
-
-er             |  he
-ihn            |  him
-ihm            |  to him
-
-es             |  it
-etwas          |  something
-
-euer           |  your
-eure
-eurem
-euren
-eurer
-eures
-
-für            |  for
-gegen          |  towards
-gewesen        |  p.p. of sein
-hab            |  have
-habe           |  have
-haben          |  have
-hat            |  has
-hatte          |  had
-hatten         |  had
-hier           |  here
-hin            |  there
-hinter         |  behind
-
-ich            |  I
-mich           |  me
-mir            |  to me
-
-
-ihr            |  you, to her
-ihre
-ihrem
-ihren
-ihrer
-ihres
-euch           |  to you
-
-im             |  in + dem
-in             |  in
-indem          |  while
-ins            |  in + das
-ist            |  is
-
-jede           |  each, every
-jedem
-jeden
-jeder
-jedes
-
-jene           |  that
-jenem
-jenen
-jener
-jenes
-
-jetzt          |  now
-kann           |  can
-
-kein           |  no
-keine
-keinem
-keinen
-keiner
-keines
-
-können         |  can
-könnte         |  could
-machen         |  do
-man            |  one
-
-manche         |  some, many a
-manchem
-manchen
-mancher
-manches
-
-mein           |  my
-meine
-meinem
-meinen
-meiner
-meines
-
-mit            |  with
-muss           |  must
-musste         |  had to
-nach           |  to(wards)
-nicht          |  not
-nichts         |  nothing
-noch           |  still, yet
-nun            |  now
-nur            |  only
-ob             |  whether
-oder           |  or
-ohne           |  without
-sehr           |  very
-
-sein           |  his
-seine
-seinem
-seinen
-seiner
-seines
-
-selbst         |  self
-sich           |  herself
-
-sie            |  they, she
-ihnen          |  to them
-
-sind           |  are
-so             |  so
-
-solche         |  such
-solchem
-solchen
-solcher
-solches
-
-soll           |  shall
-sollte         |  should
-sondern        |  but
-sonst          |  else
-über           |  over
-um             |  about, around
-und            |  and
-
-uns            |  us
-unse
-unsem
-unsen
-unser
-unses
-
-unter          |  under
-viel           |  much
-vom            |  von + dem
-von            |  from
-vor            |  before
-während        |  while
-war            |  was
-waren          |  were
-warst          |  wast
-was            |  what
-weg            |  away, off
-weil           |  because
-weiter         |  further
-
-welche         |  which
-welchem
-welchen
-welcher
-welches
-
-wenn           |  when
-werde          |  will
-werden         |  will
-wie            |  how
-wieder         |  again
-will           |  want
-wir            |  we
-wird           |  will
-wirst          |  willst
-wo             |  where
-wollen         |  want
-wollte         |  wanted
-würde          |  would
-würden         |  would
-zu             |  to
-zum            |  zu + dem
-zur            |  zu + der
-zwar           |  indeed
-zwischen       |  between
-
diff --git a/solr/example/example-DIH/solr/mail/conf/lang/stopwords_el.txt b/solr/example/example-DIH/solr/mail/conf/lang/stopwords_el.txt
deleted file mode 100644
index 232681f..0000000
--- a/solr/example/example-DIH/solr/mail/conf/lang/stopwords_el.txt
+++ /dev/null
@@ -1,78 +0,0 @@
-# Lucene Greek Stopwords list
-# Note: by default this file is used after GreekLowerCaseFilter,
-# so when modifying this file use 'σ' instead of 'ς' 
-ο
-η
-το
-οι
-τα
-του
-τησ
-των
-τον
-την
-και 
-κι
-
-ειμαι
-εισαι
-ειναι
-ειμαστε
-ειστε
-στο
-στον
-στη
-στην
-μα
-αλλα
-απο
-για
-προσ
-με
-σε
-ωσ
-παρα
-αντι
-κατα
-μετα
-θα
-να
-δε
-δεν
-μη
-μην
-επι
-ενω
-εαν
-αν
-τοτε
-που
-πωσ
-ποιοσ
-ποια
-ποιο
-ποιοι
-ποιεσ
-ποιων
-ποιουσ
-αυτοσ
-αυτη
-αυτο
-αυτοι
-αυτων
-αυτουσ
-αυτεσ
-αυτα
-εκεινοσ
-εκεινη
-εκεινο
-εκεινοι
-εκεινεσ
-εκεινα
-εκεινων
-εκεινουσ
-οπωσ
-ομωσ
-ισωσ
-οσο
-οτι
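The note at the top assumes the list is applied after GreekLowerCaseFilter, which lower-cases and folds the final sigma, so entries only ever need 'σ'. A rough Python equivalent of that fold (the real filter also strips some diacritics, omitted here):

```python
# Fold Greek text the way the stop list expects: lower case, no final sigma.
def greek_fold(text):
    return text.lower().replace("ς", "σ")

print(greek_fold("ΑΥΤΟΣ"))  # αυτοσ, the form stored in the file
```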
diff --git a/solr/example/example-DIH/solr/mail/conf/lang/stopwords_en.txt b/solr/example/example-DIH/solr/mail/conf/lang/stopwords_en.txt
deleted file mode 100644
index 2c164c0..0000000
--- a/solr/example/example-DIH/solr/mail/conf/lang/stopwords_en.txt
+++ /dev/null
@@ -1,54 +0,0 @@
-# Licensed to the Apache Software Foundation (ASF) under one or more
-# contributor license agreements.  See the NOTICE file distributed with
-# this work for additional information regarding copyright ownership.
-# The ASF licenses this file to You under the Apache License, Version 2.0
-# (the "License"); you may not use this file except in compliance with
-# the License.  You may obtain a copy of the License at
-#
-#     http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-# a couple of test stopwords to test that the words are really being
-# configured from this file:
-stopworda
-stopwordb
-
-# Standard English stop words taken from Lucene's StopAnalyzer
-a
-an
-and
-are
-as
-at
-be
-but
-by
-for
-if
-in
-into
-is
-it
-no
-not
-of
-on
-or
-such
-that
-the
-their
-then
-there
-these
-they
-this
-to
-was
-will
-with
diff --git a/solr/example/example-DIH/solr/mail/conf/lang/stopwords_es.txt b/solr/example/example-DIH/solr/mail/conf/lang/stopwords_es.txt
deleted file mode 100644
index 487d78c..0000000
--- a/solr/example/example-DIH/solr/mail/conf/lang/stopwords_es.txt
+++ /dev/null
@@ -1,356 +0,0 @@
- | From svn.tartarus.org/snowball/trunk/website/algorithms/spanish/stop.txt
- | This file is distributed under the BSD License.
- | See http://snowball.tartarus.org/license.php
- | Also see http://www.opensource.org/licenses/bsd-license.html
- |  - Encoding was converted to UTF-8.
- |  - This notice was added.
- |
- | NOTE: To use this file with StopFilterFactory, you must specify format="snowball"
-
- | A Spanish stop word list. Comments begin with vertical bar. Each stop
- | word is at the start of a line.
-
-
- | The following is a ranked list (commonest to rarest) of stopwords
- | deriving from a large sample of text.
-
- | Extra words have been added at the end.
-
-de             |  from, of
-la             |  the, her
-que            |  who, that
-el             |  the
-en             |  in
-y              |  and
-a              |  to
-los            |  the, them
-del            |  de + el
-se             |  himself, from him etc
-las            |  the, them
-por            |  for, by, etc
-un             |  a
-para           |  for
-con            |  with
-no             |  no
-una            |  a
-su             |  his, her
-al             |  a + el
-  | es         from SER
-lo             |  him
-como           |  how
-más            |  more
-pero           |  but
-sus            |  su plural
-le             |  to him, her
-ya             |  already
-o              |  or
-  | fue        from SER
-este           |  this
-  | ha         from HABER
-sí             |  himself etc
-porque         |  because
-esta           |  this
-  | son        from SER
-entre          |  between
-  | está     from ESTAR
-cuando         |  when
-muy            |  very
-sin            |  without
-sobre          |  on
-  | ser        from SER
-  | tiene      from TENER
-también        |  also
-me             |  me
-hasta          |  until
-hay            |  there is/are
-donde          |  where
-  | han        from HABER
-quien          |  whom, that
-  | están      from ESTAR
-  | estado     from ESTAR
-desde          |  from
-todo           |  all
-nos            |  us
-durante        |  during
-  | estados    from ESTAR
-todos          |  all
-uno            |  a
-les            |  to them
-ni             |  nor
-contra         |  against
-otros          |  other
-  | fueron     from SER
-ese            |  that
-eso            |  that
-  | había      from HABER
-ante           |  before
-ellos          |  they
-e              |  and (variant of y)
-esto           |  this
-mí             |  me
-antes          |  before
-algunos        |  some
-qué            |  what?
-unos           |  a
-yo             |  I
-otro           |  other
-otras          |  other
-otra           |  other
-él             |  he
-tanto          |  so much, many
-esa            |  that
-estos          |  these
-mucho          |  much, many
-quienes        |  who
-nada           |  nothing
-muchos         |  many
-cual           |  who
-  | sea        from SER
-poco           |  few
-ella           |  she
-estar          |  to be
-  | haber      from HABER
-estas          |  these
-  | estaba     from ESTAR
-  | estamos    from ESTAR
-algunas        |  some
-algo           |  something
-nosotros       |  we
-
-      | other forms
-
-mi             |  me
-mis            |  mi plural
-tú             |  thou
-te             |  thee
-ti             |  thee
-tu             |  thy
-tus            |  tu plural
-ellas          |  they
-nosotras       |  we
-vosotros       |  you
-vosotras       |  you
-os             |  you
-mío            |  mine
-mía            |
-míos           |
-mías           |
-tuyo           |  thine
-tuya           |
-tuyos          |
-tuyas          |
-suyo           |  his, hers, theirs
-suya           |
-suyos          |
-suyas          |
-nuestro        |  ours
-nuestra        |
-nuestros       |
-nuestras       |
-vuestro        |  yours
-vuestra        |
-vuestros       |
-vuestras       |
-esos           |  those
-esas           |  those
-
-               | forms of estar, to be (not including the infinitive):
-estoy
-estás
-está
-estamos
-estáis
-están
-esté
-estés
-estemos
-estéis
-estén
-estaré
-estarás
-estará
-estaremos
-estaréis
-estarán
-estaría
-estarías
-estaríamos
-estaríais
-estarían
-estaba
-estabas
-estábamos
-estabais
-estaban
-estuve
-estuviste
-estuvo
-estuvimos
-estuvisteis
-estuvieron
-estuviera
-estuvieras
-estuviéramos
-estuvierais
-estuvieran
-estuviese
-estuvieses
-estuviésemos
-estuvieseis
-estuviesen
-estando
-estado
-estada
-estados
-estadas
-estad
-
-               | forms of haber, to have (not including the infinitive):
-he
-has
-ha
-hemos
-habéis
-han
-haya
-hayas
-hayamos
-hayáis
-hayan
-habré
-habrás
-habrá
-habremos
-habréis
-habrán
-habría
-habrías
-habríamos
-habríais
-habrían
-había
-habías
-habíamos
-habíais
-habían
-hube
-hubiste
-hubo
-hubimos
-hubisteis
-hubieron
-hubiera
-hubieras
-hubiéramos
-hubierais
-hubieran
-hubiese
-hubieses
-hubiésemos
-hubieseis
-hubiesen
-habiendo
-habido
-habida
-habidos
-habidas
-
-               | forms of ser, to be (not including the infinitive):
-soy
-eres
-es
-somos
-sois
-son
-sea
-seas
-seamos
-seáis
-sean
-seré
-serás
-será
-seremos
-seréis
-serán
-sería
-serías
-seríamos
-seríais
-serían
-era
-eras
-éramos
-erais
-eran
-fui
-fuiste
-fue
-fuimos
-fuisteis
-fueron
-fuera
-fueras
-fuéramos
-fuerais
-fueran
-fuese
-fueses
-fuésemos
-fueseis
-fuesen
-siendo
-sido
-  |  sed also means 'thirst'
-
-               | forms of tener, to have (not including the infinitive):
-tengo
-tienes
-tiene
-tenemos
-tenéis
-tienen
-tenga
-tengas
-tengamos
-tengáis
-tengan
-tendré
-tendrás
-tendrá
-tendremos
-tendréis
-tendrán
-tendría
-tendrías
-tendríamos
-tendríais
-tendrían
-tenía
-tenías
-teníamos
-teníais
-tenían
-tuve
-tuviste
-tuvo
-tuvimos
-tuvisteis
-tuvieron
-tuviera
-tuvieras
-tuviéramos
-tuvierais
-tuvieran
-tuviese
-tuvieses
-tuviésemos
-tuvieseis
-tuviesen
-teniendo
-tenido
-tenida
-tenidos
-tenidas
-tened
-
diff --git a/solr/example/example-DIH/solr/mail/conf/lang/stopwords_eu.txt b/solr/example/example-DIH/solr/mail/conf/lang/stopwords_eu.txt
deleted file mode 100644
index 25f1db9..0000000
--- a/solr/example/example-DIH/solr/mail/conf/lang/stopwords_eu.txt
+++ /dev/null
@@ -1,99 +0,0 @@
-# example set of Basque stopwords
-al
-anitz
-arabera
-asko
-baina
-bat
-batean
-batek
-bati
-batzuei
-batzuek
-batzuetan
-batzuk
-bera
-beraiek
-berau
-berauek
-bere
-berori
-beroriek
-beste
-bezala
-da
-dago
-dira
-ditu
-du
-dute
-edo
-egin
-ere
-eta
-eurak
-ez
-gainera
-gu
-gutxi
-guzti
-haiei
-haiek
-haietan
-hainbeste
-hala
-han
-handik
-hango
-hara
-hari
-hark
-hartan
-hau
-hauei
-hauek
-hauetan
-hemen
-hemendik
-hemengo
-hi
-hona
-honek
-honela
-honetan
-honi
-hor
-hori
-horiei
-horiek
-horietan
-horko
-horra
-horrek
-horrela
-horretan
-horri
-hortik
-hura
-izan
-ni
-noiz
-nola
-non
-nondik
-nongo
-nor
-nora
-ze
-zein
-zen
-zenbait
-zenbat
-zer
-zergatik
-ziren
-zituen
-zu
-zuek
-zuen
-zuten
diff --git a/solr/example/example-DIH/solr/mail/conf/lang/stopwords_fa.txt b/solr/example/example-DIH/solr/mail/conf/lang/stopwords_fa.txt
deleted file mode 100644
index 723641c..0000000
--- a/solr/example/example-DIH/solr/mail/conf/lang/stopwords_fa.txt
+++ /dev/null
@@ -1,313 +0,0 @@
-# This file was created by Jacques Savoy and is distributed under the BSD license.
-# See http://members.unine.ch/jacques.savoy/clef/index.html.
-# Also see http://www.opensource.org/licenses/bsd-license.html
-# Note: by default this file is used after normalization, so when adding entries
-# to this file, use the Arabic 'ي' instead of 'ی'
-انان
-نداشته
-سراسر
-خياه
-ايشان
-وي
-تاكنون
-بيشتري
-دوم
-پس
-ناشي
-وگو
-يا
-داشتند
-سپس
-هنگام
-هرگز
-پنج
-نشان
-امسال
-ديگر
-گروهي
-شدند
-چطور
-ده
-
-دو
-نخستين
-ولي
-چرا
-چه
-وسط
-
-كدام
-قابل
-يك
-رفت
-هفت
-همچنين
-در
-هزار
-بله
-بلي
-شايد
-اما
-شناسي
-گرفته
-دهد
-داشته
-دانست
-داشتن
-خواهيم
-ميليارد
-وقتيكه
-امد
-خواهد
-جز
-اورده
-شده
-بلكه
-خدمات
-شدن
-برخي
-نبود
-بسياري
-جلوگيري
-حق
-كردند
-نوعي
-بعري
-نكرده
-نظير
-نبايد
-بوده
-بودن
-داد
-اورد
-هست
-جايي
-شود
-دنبال
-داده
-بايد
-سابق
-هيچ
-همان
-انجا
-كمتر
-كجاست
-گردد
-كسي
-تر
-مردم
-تان
-دادن
-بودند
-سري
-جدا
-ندارند
-مگر
-يكديگر
-دارد
-دهند
-بنابراين
-هنگامي
-سمت
-جا
-انچه
-خود
-دادند
-زياد
-دارند
-اثر
-بدون
-بهترين
-بيشتر
-البته
-به
-براساس
-بيرون
-كرد
-بعضي
-گرفت
-توي
-اي
-ميليون
-او
-جريان
-تول
-بر
-مانند
-برابر
-باشيم
-مدتي
-گويند
-اكنون
-تا
-تنها
-جديد
-چند
-بي
-نشده
-كردن
-كردم
-گويد
-كرده
-كنيم
-نمي
-نزد
-روي
-قصد
-فقط
-بالاي
-ديگران
-اين
-ديروز
-توسط
-سوم
-ايم
-دانند
-سوي
-استفاده
-شما
-كنار
-داريم
-ساخته
-طور
-امده
-رفته
-نخست
-بيست
-نزديك
-طي
-كنيد
-از
-انها
-تمامي
-داشت
-يكي
-طريق
-اش
-چيست
-روب
-نمايد
-گفت
-چندين
-چيزي
-تواند
-ام
-ايا
-با
-ان
-ايد
-ترين
-اينكه
-ديگري
-راه
-هايي
-بروز
-همچنان
-پاعين
-كس
-حدود
-مختلف
-مقابل
-چيز
-گيرد
-ندارد
-ضد
-همچون
-سازي
-شان
-مورد
-باره
-مرسي
-خويش
-برخوردار
-چون
-خارج
-شش
-هنوز
-تحت
-ضمن
-هستيم
-گفته
-فكر
-بسيار
-پيش
-براي
-روزهاي
-انكه
-نخواهد
-بالا
-كل
-وقتي
-كي
-چنين
-كه
-گيري
-نيست
-است
-كجا
-كند
-نيز
-يابد
-بندي
-حتي
-توانند
-عقب
-خواست
-كنند
-بين
-تمام
-همه
-ما
-باشند
-مثل
-شد
-اري
-باشد
-اره
-طبق
-بعد
-اگر
-صورت
-غير
-جاي
-بيش
-ريزي
-اند
-زيرا
-چگونه
-بار
-لطفا
-مي
-درباره
-من
-ديده
-همين
-گذاري
-برداري
-علت
-گذاشته
-هم
-فوق
-نه
-ها
-شوند
-اباد
-همواره
-هر
-اول
-خواهند
-چهار
-نام
-امروز
-مان
-هاي
-قبل
-كنم
-سعي
-تازه
-را
-هستند
-زير
-جلوي
-عنوان
-بود
diff --git a/solr/example/example-DIH/solr/mail/conf/lang/stopwords_fi.txt b/solr/example/example-DIH/solr/mail/conf/lang/stopwords_fi.txt
deleted file mode 100644
index 4372c9a..0000000
--- a/solr/example/example-DIH/solr/mail/conf/lang/stopwords_fi.txt
+++ /dev/null
@@ -1,97 +0,0 @@
- | From svn.tartarus.org/snowball/trunk/website/algorithms/finnish/stop.txt
- | This file is distributed under the BSD License.
- | See http://snowball.tartarus.org/license.php
- | Also see http://www.opensource.org/licenses/bsd-license.html
- |  - Encoding was converted to UTF-8.
- |  - This notice was added.
- |
- | NOTE: To use this file with StopFilterFactory, you must specify format="snowball"
- 
-| forms of BE
-
-olla
-olen
-olet
-on
-olemme
-olette
-ovat
-ole        | negative form
-
-oli
-olisi
-olisit
-olisin
-olisimme
-olisitte
-olisivat
-olit
-olin
-olimme
-olitte
-olivat
-ollut
-olleet
-
-en         | negation
-et
-ei
-emme
-ette
-eivät
-
-|Nom   Gen    Acc    Part   Iness   Elat    Illat  Adess   Ablat   Allat   Ess    Trans
-minä   minun  minut  minua  minussa minusta minuun minulla minulta minulle               | I
-sinä   sinun  sinut  sinua  sinussa sinusta sinuun sinulla sinulta sinulle               | you
-hän    hänen  hänet  häntä  hänessä hänestä häneen hänellä häneltä hänelle               | he she
-me     meidän meidät meitä  meissä  meistä  meihin meillä  meiltä  meille                | we
-te     teidän teidät teitä  teissä  teistä  teihin teillä  teiltä  teille                | you
-he     heidän heidät heitä  heissä  heistä  heihin heillä  heiltä  heille                | they
-
-tämä   tämän         tätä   tässä   tästä   tähän  tallä   tältä   tälle   tänä   täksi  | this
-tuo    tuon          tuotä  tuossa  tuosta  tuohon tuolla  tuolta  tuolle  tuona  tuoksi | that
-se     sen           sitä   siinä   siitä   siihen sillä   siltä   sille   sinä   siksi  | it
-nämä   näiden        näitä  näissä  näistä  näihin näillä  näiltä  näille  näinä  näiksi | these
-nuo    noiden        noita  noissa  noista  noihin noilla  noilta  noille  noina  noiksi | those
-ne     niiden        niitä  niissä  niistä  niihin niillä  niiltä  niille  niinä  niiksi | they
-
-kuka   kenen kenet   ketä   kenessä kenestä keneen kenellä keneltä kenelle kenenä keneksi| who
-ketkä  keiden ketkä  keitä  keissä  keistä  keihin keillä  keiltä  keille  keinä  keiksi | (pl)
-mikä   minkä minkä   mitä   missä   mistä   mihin  millä   miltä   mille   minä   miksi  | which what
-mitkä                                                                                    | (pl)
-
-joka   jonka         jota   jossa   josta   johon  jolla   jolta   jolle   jona   joksi  | who which
-jotka  joiden        joita  joissa  joista  joihin joilla  joilta  joille  joina  joiksi | (pl)
-
-| conjunctions
-
-että   | that
-ja     | and
-jos    | if
-koska  | because
-kuin   | than
-mutta  | but
-niin   | so
-sekä   | and
-sillä  | for
-tai    | or
-vaan   | but
-vai    | or
-vaikka | although
-
-
-| prepositions
-
-kanssa  | with
-mukaan  | according to
-noin    | about
-poikki  | across
-yli     | over, across
-
-| other
-
-kun    | when
-niin   | so
-nyt    | now
-itse   | self
-
diff --git a/solr/example/example-DIH/solr/mail/conf/lang/stopwords_fr.txt b/solr/example/example-DIH/solr/mail/conf/lang/stopwords_fr.txt
deleted file mode 100644
index 749abae..0000000
--- a/solr/example/example-DIH/solr/mail/conf/lang/stopwords_fr.txt
+++ /dev/null
@@ -1,186 +0,0 @@
- | From svn.tartarus.org/snowball/trunk/website/algorithms/french/stop.txt
- | This file is distributed under the BSD License.
- | See http://snowball.tartarus.org/license.php
- | Also see http://www.opensource.org/licenses/bsd-license.html
- |  - Encoding was converted to UTF-8.
- |  - This notice was added.
- |
- | NOTE: To use this file with StopFilterFactory, you must specify format="snowball"
-
- | A French stop word list. Comments begin with vertical bar. Each stop
- | word is at the start of a line.
-
-au             |  a + le
-aux            |  a + les
-avec           |  with
-ce             |  this
-ces            |  these
-dans           |  with
-de             |  of
-des            |  de + les
-du             |  de + le
-elle           |  she
-en             |  `of them' etc
-et             |  and
-eux            |  them
-il             |  he
-je             |  I
-la             |  the
-le             |  the
-leur           |  their
-lui            |  him
-ma             |  my (fem)
-mais           |  but
-me             |  me
-même           |  same; as in moi-même (myself) etc
-mes            |  me (pl)
-moi            |  me
-mon            |  my (masc)
-ne             |  not
-nos            |  our (pl)
-notre          |  our
-nous           |  we
-on             |  one
-ou             |  where
-par            |  by
-pas            |  not
-pour           |  for
-qu             |  que before vowel
-que            |  that
-qui            |  who
-sa             |  his, her (fem)
-se             |  oneself
-ses            |  his (pl)
-son            |  his, her (masc)
-sur            |  on
-ta             |  thy (fem)
-te             |  thee
-tes            |  thy (pl)
-toi            |  thee
-ton            |  thy (masc)
-tu             |  thou
-un             |  a
-une            |  a
-vos            |  your (pl)
-votre          |  your
-vous           |  you
-
-               |  single letter forms
-
-c              |  c'
-d              |  d'
-j              |  j'
-l              |  l'
-à              |  to, at
-m              |  m'
-n              |  n'
-s              |  s'
-t              |  t'
-y              |  there
-
-               | forms of être (not including the infinitive):
-été
-étée
-étées
-étés
-étant
-suis
-es
-est
-sommes
-êtes
-sont
-serai
-seras
-sera
-serons
-serez
-seront
-serais
-serait
-serions
-seriez
-seraient
-étais
-était
-étions
-étiez
-étaient
-fus
-fut
-fûmes
-fûtes
-furent
-sois
-soit
-soyons
-soyez
-soient
-fusse
-fusses
-fût
-fussions
-fussiez
-fussent
-
-               | forms of avoir (not including the infinitive):
-ayant
-eu
-eue
-eues
-eus
-ai
-as
-avons
-avez
-ont
-aurai
-auras
-aura
-aurons
-aurez
-auront
-aurais
-aurait
-aurions
-auriez
-auraient
-avais
-avait
-avions
-aviez
-avaient
-eut
-eûmes
-eûtes
-eurent
-aie
-aies
-ait
-ayons
-ayez
-aient
-eusse
-eusses
-eût
-eussions
-eussiez
-eussent
-
-               | Later additions (from Jean-Christophe Deschamps)
-ceci           |  this
-cela           |  that
-celà           |  that
-cet            |  this
-cette          |  this
-ici            |  here
-ils            |  they
-les            |  the (pl)
-leurs          |  their (pl)
-quel           |  which
-quels          |  which
-quelle         |  which
-quelles        |  which
-sans           |  without
-soi            |  oneself
-
diff --git a/solr/example/example-DIH/solr/mail/conf/lang/stopwords_ga.txt b/solr/example/example-DIH/solr/mail/conf/lang/stopwords_ga.txt
deleted file mode 100644
index 9ff88d7..0000000
--- a/solr/example/example-DIH/solr/mail/conf/lang/stopwords_ga.txt
+++ /dev/null
@@ -1,110 +0,0 @@
-
-a
-ach
-ag
-agus
-an
-aon
-ar
-arna
-as
-b'
-ba
-beirt
-bhúr
-caoga
-ceathair
-ceathrar
-chomh
-chtó
-chuig
-chun
-cois
-céad
-cúig
-cúigear
-d'
-daichead
-dar
-de
-deich
-deichniúr
-den
-dhá
-do
-don
-dtí
-dá
-dár
-dó
-faoi
-faoin
-faoina
-faoinár
-fara
-fiche
-gach
-gan
-go
-gur
-haon
-hocht
-i
-iad
-idir
-in
-ina
-ins
-inár
-is
-le
-leis
-lena
-lenár
-m'
-mar
-mo
-mé
-na
-nach
-naoi
-naonúr
-ná
-ní
-níor
-nó
-nócha
-ocht
-ochtar
-os
-roimh
-sa
-seacht
-seachtar
-seachtó
-seasca
-seisear
-siad
-sibh
-sinn
-sna
-sé
-sí
-tar
-thar
-thú
-triúr
-trí
-trína
-trínár
-tríocha
-tú
-um
-ár

-é
-éis
-í
-ó
-ón
-óna
-ónár
diff --git a/solr/example/example-DIH/solr/mail/conf/lang/stopwords_gl.txt b/solr/example/example-DIH/solr/mail/conf/lang/stopwords_gl.txt
deleted file mode 100644
index d8760b1..0000000
--- a/solr/example/example-DIH/solr/mail/conf/lang/stopwords_gl.txt
+++ /dev/null
@@ -1,161 +0,0 @@
-# galican stopwords
-a
-aínda
-alí
-aquel
-aquela
-aquelas
-aqueles
-aquilo
-aquí
-ao
-aos
-as
-así
-á
-ben
-cando
-che
-co
-coa
-comigo
-con
-connosco
-contigo
-convosco
-coas
-cos
-cun
-cuns
-cunha
-cunhas
-da
-dalgunha
-dalgunhas
-dalgún
-dalgúns
-das
-de
-del
-dela
-delas
-deles
-desde
-deste
-do
-dos
-dun
-duns
-dunha
-dunhas
-e
-el
-ela
-elas
-eles
-en
-era
-eran
-esa
-esas
-ese
-eses
-esta
-estar
-estaba
-está
-están
-este
-estes
-estiven
-estou
-eu
-é
-facer
-foi
-foron
-fun
-había
-hai
-iso
-isto
-la
-las
-lle
-lles
-lo
-los
-mais
-me
-meu
-meus
-min
-miña
-miñas
-moi
-na
-nas
-neste
-nin
-no
-non
-nos
-nosa
-nosas
-noso
-nosos
-nós
-nun
-nunha
-nuns
-nunhas
-o
-os
-ou
-ó
-ós
-para
-pero
-pode
-pois
-pola
-polas
-polo
-polos
-por
-que
-se
-senón
-ser
-seu
-seus
-sexa
-sido
-sobre
-súa
-súas
-tamén
-tan
-te
-ten
-teñen
-teño
-ter
-teu
-teus
-ti
-tido
-tiña
-tiven
-túa
-túas
-un
-unha
-unhas
-uns
-vos
-vosa
-vosas
-voso
-vosos
-vós
diff --git a/solr/example/example-DIH/solr/mail/conf/lang/stopwords_hi.txt b/solr/example/example-DIH/solr/mail/conf/lang/stopwords_hi.txt
deleted file mode 100644
index 86286bb..0000000
--- a/solr/example/example-DIH/solr/mail/conf/lang/stopwords_hi.txt
+++ /dev/null
@@ -1,235 +0,0 @@
-# Also see http://www.opensource.org/licenses/bsd-license.html
-# See http://members.unine.ch/jacques.savoy/clef/index.html.
-# This file was created by Jacques Savoy and is distributed under the BSD license.
-# Note: by default this file also contains forms normalized by HindiNormalizer 
-# for spelling variation (see section below), such that it can be used whether or 
-# not you enable that feature. When adding additional entries to this list,
-# please add the normalized form as well. 
-अंदर
-अत
-अपना
-अपनी
-अपने
-अभी
-आदि
-आप
-इत्यादि
-इन 
-इनका
-इन्हीं
-इन्हें
-इन्हों
-इस
-इसका
-इसकी
-इसके
-इसमें
-इसी
-इसे
-उन
-उनका
-उनकी
-उनके
-उनको
-उन्हीं
-उन्हें
-उन्हों
-उस
-उसके
-उसी
-उसे
-एक
-एवं
-एस
-ऐसे
-और
-कई
-कर
-करता
-करते
-करना
-करने
-करें
-कहते
-कहा
-का
-काफ़ी
-कि
-कितना
-किन्हें
-किन्हों
-किया
-किर
-किस
-किसी
-किसे
-की
-कुछ
-कुल
-के
-को
-कोई
-कौन
-कौनसा
-गया
-घर
-जब
-जहाँ
-जा
-जितना
-जिन
-जिन्हें
-जिन्हों
-जिस
-जिसे
-जीधर
-जैसा
-जैसे
-जो
-तक
-तब
-तरह
-तिन
-तिन्हें
-तिन्हों
-तिस
-तिसे
-तो
-था
-थी
-थे
-दबारा
-दिया
-दुसरा
-दूसरे
-दो
-द्वारा
-न
-नहीं
-ना
-निहायत
-नीचे
-ने
-पर
-पर  
-पहले
-पूरा
-पे
-फिर
-बनी
-बही
-बहुत
-बाद
-बाला
-बिलकुल
-भी
-भीतर
-मगर
-मानो
-मे
-में
-यदि
-यह
-यहाँ
-यही
-या
-यिह 
-ये
-रखें
-रहा
-रहे
-ऱ्वासा
-लिए
-लिये
-लेकिन
-व
-वर्ग
-वह
-वह 
-वहाँ
-वहीं
-वाले
-वुह 
-वे
-वग़ैरह
-संग
-सकता
-सकते
-सबसे
-सभी
-साथ
-साबुत
-साभ
-सारा
-से
-सो
-ही
-हुआ
-हुई
-हुए
-है
-हैं
-हो
-होता
-होती
-होते
-होना
-होने
-# additional normalized forms of the above
-अपनि
-जेसे
-होति
-सभि
-तिंहों
-इंहों
-दवारा
-इसि
-किंहें
-थि
-उंहों
-ओर
-जिंहें
-वहिं
-अभि
-बनि
-हि
-उंहिं
-उंहें
-हें
-वगेरह
-एसे
-रवासा
-कोन
-निचे
-काफि
-उसि
-पुरा
-भितर
-हे
-बहि
-वहां
-कोइ
-यहां
-जिंहों
-तिंहें
-किसि
-कइ
-यहि
-इंहिं
-जिधर
-इंहें
-अदि
-इतयादि
-हुइ
-कोनसा
-इसकि
-दुसरे
-जहां
-अप
-किंहों
-उनकि
-भि
-वरग
-हुअ
-जेसा
-नहिं
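The header of the Hindi list explains that it mixes surface spellings with their HindiNormalizer forms, so stopping works with or without normalization enabled. When normalization is used, the usual ordering puts it before the stop filter; a hedged sketch (the field type name is illustrative, not taken from this patch):

  <fieldType name="text_hi" class="solr.TextField" positionIncrementGap="100">
    <analyzer>
      <tokenizer class="solr.StandardTokenizerFactory"/>
      <!-- normalize spelling variation first, so either the raw or normalized form hits the list -->
      <filter class="solr.IndicNormalizationFilterFactory"/>
      <filter class="solr.HindiNormalizationFilterFactory"/>
      <filter class="solr.StopFilterFactory" ignoreCase="true" words="lang/stopwords_hi.txt"/>
      <filter class="solr.HindiStemFilterFactory"/>
    </analyzer>
  </fieldType>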
diff --git a/solr/example/example-DIH/solr/mail/conf/lang/stopwords_hu.txt b/solr/example/example-DIH/solr/mail/conf/lang/stopwords_hu.txt
deleted file mode 100644
index 37526da..0000000
--- a/solr/example/example-DIH/solr/mail/conf/lang/stopwords_hu.txt
+++ /dev/null
@@ -1,211 +0,0 @@
- | From svn.tartarus.org/snowball/trunk/website/algorithms/hungarian/stop.txt
- | This file is distributed under the BSD License.
- | See http://snowball.tartarus.org/license.php
- | Also see http://www.opensource.org/licenses/bsd-license.html
- |  - Encoding was converted to UTF-8.
- |  - This notice was added.
- |
- | NOTE: To use this file with StopFilterFactory, you must specify format="snowball"
- 
-| Hungarian stop word list
-| prepared by Anna Tordai
-
-a
-ahogy
-ahol
-aki
-akik
-akkor
-alatt
-által
-általában
-amely
-amelyek
-amelyekben
-amelyeket
-amelyet
-amelynek
-ami
-amit
-amolyan
-amíg
-amikor
-át
-abban
-ahhoz
-annak
-arra
-arról
-az
-azok
-azon
-azt
-azzal
-azért
-aztán
-azután
-azonban
-bár
-be
-belül
-benne
-cikk
-cikkek
-cikkeket
-csak
-de
-e
-eddig
-egész
-egy
-egyes
-egyetlen
-egyéb
-egyik
-egyre
-ekkor
-el
-elég
-ellen
-elő
-először
-előtt
-első
-én
-éppen
-ebben
-ehhez
-emilyen
-ennek
-erre
-ez
-ezt
-ezek
-ezen
-ezzel
-ezért
-és
-fel
-felé
-hanem
-hiszen
-hogy
-hogyan
-igen
-így
-illetve
-ill.
-ill
-ilyen
-ilyenkor
-ison
-ismét
-itt
-jó
-jól
-jobban
-kell
-kellett
-keresztül
-keressünk
-ki
-kívül
-között
-közül
-legalább
-lehet
-lehetett
-legyen
-lenne
-lenni
-lesz
-lett
-maga
-magát
-majd
-majd
-már
-más
-másik
-meg
-még
-mellett
-mert
-mely
-melyek
-mi
-mit
-míg
-miért
-milyen
-mikor
-minden
-mindent
-mindenki
-mindig
-mint
-mintha
-mivel
-most
-nagy
-nagyobb
-nagyon
-ne
-néha
-nekem
-neki
-nem
-néhány
-nélkül
-nincs
-olyan
-ott
-össze

-ők
-őket
-pedig
-persze
-rá
-s
-saját
-sem
-semmi
-sok
-sokat
-sokkal
-számára
-szemben
-szerint
-szinte
-talán
-tehát
-teljes
-tovább
-továbbá
-több
-úgy
-ugyanis
-új
-újabb
-újra
-után
-utána
-utolsó
-vagy
-vagyis
-valaki
-valami
-valamint
-való
-vagyok
-van
-vannak
-volt
-voltam
-voltak
-voltunk
-vissza
-vele
-viszont
-volna
diff --git a/solr/example/example-DIH/solr/mail/conf/lang/stopwords_hy.txt b/solr/example/example-DIH/solr/mail/conf/lang/stopwords_hy.txt
deleted file mode 100644
index 60c1c50..0000000
--- a/solr/example/example-DIH/solr/mail/conf/lang/stopwords_hy.txt
+++ /dev/null
@@ -1,46 +0,0 @@
-# example set of Armenian stopwords.
-այդ
-այլ
-այն
-այս
-դու
-դուք
-եմ
-են
-ենք
-ես
-եք
-է
-էի
-էին
-էինք
-էիր
-էիք
-էր
-ըստ
-թ
-ի
-ին
-իսկ
-իր
-կամ
-համար
-հետ
-հետո
-մենք
-մեջ
-մի
-ն
-նա
-նաև
-նրա
-նրանք
-որ
-որը
-որոնք
-որպես
-ու
-ում
-պիտի
-վրա
-և
diff --git a/solr/example/example-DIH/solr/mail/conf/lang/stopwords_id.txt b/solr/example/example-DIH/solr/mail/conf/lang/stopwords_id.txt
deleted file mode 100644
index 4617f83..0000000
--- a/solr/example/example-DIH/solr/mail/conf/lang/stopwords_id.txt
+++ /dev/null
@@ -1,359 +0,0 @@
-# from appendix D of: A Study of Stemming Effects on Information
-# Retrieval in Bahasa Indonesia
-ada
-adanya
-adalah
-adapun
-agak
-agaknya
-agar
-akan
-akankah
-akhirnya
-aku
-akulah
-amat
-amatlah
-anda
-andalah
-antar
-diantaranya
-antara
-antaranya
-diantara
-apa
-apaan
-mengapa
-apabila
-apakah
-apalagi
-apatah
-atau
-ataukah
-ataupun
-bagai
-bagaikan
-sebagai
-sebagainya
-bagaimana
-bagaimanapun
-sebagaimana
-bagaimanakah
-bagi
-bahkan
-bahwa
-bahwasanya
-sebaliknya
-banyak
-sebanyak
-beberapa
-seberapa
-begini
-beginian
-beginikah
-beginilah
-sebegini
-begitu
-begitukah
-begitulah
-begitupun
-sebegitu
-belum
-belumlah
-sebelum
-sebelumnya
-sebenarnya
-berapa
-berapakah
-berapalah
-berapapun
-betulkah
-sebetulnya
-biasa
-biasanya
-bila
-bilakah
-bisa
-bisakah
-sebisanya
-boleh
-bolehkah
-bolehlah
-buat
-bukan
-bukankah
-bukanlah
-bukannya
-cuma
-percuma
-dahulu
-dalam
-dan
-dapat
-dari
-daripada
-dekat
-demi
-demikian
-demikianlah
-sedemikian
-dengan
-depan
-di
-dia
-dialah
-dini
-diri
-dirinya
-terdiri
-dong
-dulu
-enggak
-enggaknya
-entah
-entahlah
-terhadap
-terhadapnya
-hal
-hampir
-hanya
-hanyalah
-harus
-haruslah
-harusnya
-seharusnya
-hendak
-hendaklah
-hendaknya
-hingga
-sehingga
-ia
-ialah
-ibarat
-ingin
-inginkah
-inginkan
-ini
-inikah
-inilah
-itu
-itukah
-itulah
-jangan
-jangankan
-janganlah
-jika
-jikalau
-juga
-justru
-kala
-kalau
-kalaulah
-kalaupun
-kalian
-kami
-kamilah
-kamu
-kamulah
-kan
-kapan
-kapankah
-kapanpun
-dikarenakan
-karena
-karenanya
-ke
-kecil
-kemudian
-kenapa
-kepada
-kepadanya
-ketika
-seketika
-khususnya
-kini
-kinilah
-kiranya
-sekiranya
-kita
-kitalah
-kok
-lagi
-lagian
-selagi
-lah
-lain
-lainnya
-melainkan
-selaku
-lalu
-melalui
-terlalu
-lama
-lamanya
-selama
-selama
-selamanya
-lebih
-terlebih
-bermacam
-macam
-semacam
-maka
-makanya
-makin
-malah
-malahan
-mampu
-mampukah
-mana
-manakala
-manalagi
-masih
-masihkah
-semasih
-masing
-mau
-maupun
-semaunya
-memang
-mereka
-merekalah
-meski
-meskipun
-semula
-mungkin
-mungkinkah
-nah
-namun
-nanti
-nantinya
-nyaris
-oleh
-olehnya
-seorang
-seseorang
-pada
-padanya
-padahal
-paling
-sepanjang
-pantas
-sepantasnya
-sepantasnyalah
-para
-pasti
-pastilah
-per
-pernah
-pula
-pun
-merupakan
-rupanya
-serupa
-saat
-saatnya
-sesaat
-saja
-sajalah
-saling
-bersama
-sama
-sesama
-sambil
-sampai
-sana
-sangat
-sangatlah
-saya
-sayalah
-se
-sebab
-sebabnya
-sebuah
-tersebut
-tersebutlah
-sedang
-sedangkan
-sedikit
-sedikitnya
-segala
-segalanya
-segera
-sesegera
-sejak
-sejenak
-sekali
-sekalian
-sekalipun
-sesekali
-sekaligus
-sekarang
-sekarang
-sekitar
-sekitarnya
-sela
-selain
-selalu
-seluruh
-seluruhnya
-semakin
-sementara
-sempat
-semua
-semuanya
-sendiri
-sendirinya
-seolah
-seperti
-sepertinya
-sering
-seringnya
-serta
-siapa
-siapakah
-siapapun
-disini
-disinilah
-sini
-sinilah
-sesuatu
-sesuatunya
-suatu
-sesudah
-sesudahnya
-sudah
-sudahkah
-sudahlah
-supaya
-tadi
-tadinya
-tak
-tanpa
-setelah
-telah
-tentang
-tentu
-tentulah
-tentunya
-tertentu
-seterusnya
-tapi
-tetapi
-setiap
-tiap
-setidaknya
-tidak
-tidakkah
-tidaklah
-toh
-waduh
-wah
-wahai
-sewaktu
-walau
-walaupun
-wong
-yaitu
-yakni
-yang
diff --git a/solr/example/example-DIH/solr/mail/conf/lang/stopwords_it.txt b/solr/example/example-DIH/solr/mail/conf/lang/stopwords_it.txt
deleted file mode 100644
index 1219cc7..0000000
--- a/solr/example/example-DIH/solr/mail/conf/lang/stopwords_it.txt
+++ /dev/null
@@ -1,303 +0,0 @@
- | From svn.tartarus.org/snowball/trunk/website/algorithms/italian/stop.txt
- | This file is distributed under the BSD License.
- | See http://snowball.tartarus.org/license.php
- | Also see http://www.opensource.org/licenses/bsd-license.html
- |  - Encoding was converted to UTF-8.
- |  - This notice was added.
- |
- | NOTE: To use this file with StopFilterFactory, you must specify format="snowball"
-
- | An Italian stop word list. Comments begin with vertical bar. Each stop
- | word is at the start of a line.
-
-ad             |  a (to) before vowel
-al             |  a + il
-allo           |  a + lo
-ai             |  a + i
-agli           |  a + gli
-all            |  a + l'
-agl            |  a + gl'
-alla           |  a + la
-alle           |  a + le
-con            |  with
-col            |  con + il
-coi            |  con + i (forms collo, cogli etc are now very rare)
-da             |  from
-dal            |  da + il
-dallo          |  da + lo
-dai            |  da + i
-dagli          |  da + gli
-dall           |  da + l'
-dagl           |  da + gll'
-dalla          |  da + la
-dalle          |  da + le
-di             |  of
-del            |  di + il
-dello          |  di + lo
-dei            |  di + i
-degli          |  di + gli
-dell           |  di + l'
-degl           |  di + gl'
-della          |  di + la
-delle          |  di + le
-in             |  in
-nel            |  in + el
-nello          |  in + lo
-nei            |  in + i
-negli          |  in + gli
-nell           |  in + l'
-negl           |  in + gl'
-nella          |  in + la
-nelle          |  in + le
-su             |  on
-sul            |  su + il
-sullo          |  su + lo
-sui            |  su + i
-sugli          |  su + gli
-sull           |  su + l'
-sugl           |  su + gl'
-sulla          |  su + la
-sulle          |  su + le
-per            |  through, by
-tra            |  among
-contro         |  against
-io             |  I
-tu             |  thou
-lui            |  he
-lei            |  she
-noi            |  we
-voi            |  you
-loro           |  they
-mio            |  my
-mia            |
-miei           |
-mie            |
-tuo            |
-tua            |
-tuoi           |  thy
-tue            |
-suo            |
-sua            |
-suoi           |  his, her
-sue            |
-nostro         |  our
-nostra         |
-nostri         |
-nostre         |
-vostro         |  your
-vostra         |
-vostri         |
-vostre         |
-mi             |  me
-ti             |  thee
-ci             |  us, there
-vi             |  you, there
-lo             |  him, the
-la             |  her, the
-li             |  them
-le             |  them, the
-gli            |  to him, the
-ne             |  from there etc
-il             |  the
-un             |  a
-uno            |  a
-una            |  a
-ma             |  but
-ed             |  and
-se             |  if
-perché         |  why, because
-anche          |  also
-come           |  how
-dov            |  where (as dov')
-dove           |  where
-che            |  who, that
-chi            |  who
-cui            |  whom
-non            |  not
-più            |  more
-quale          |  who, that
-quanto         |  how much
-quanti         |
-quanta         |
-quante         |
-quello         |  that
-quelli         |
-quella         |
-quelle         |
-questo         |  this
-questi         |
-questa         |
-queste         |
-si             |  yes
-tutto          |  all
-tutti          |  all
-
-               |  single letter forms:
-
-a              |  at
-c              |  as c' for ce or ci
-e              |  and
-i              |  the
-l              |  as l'
-o              |  or
-
-               | forms of avere, to have (not including the infinitive):
-
-ho
-hai
-ha
-abbiamo
-avete
-hanno
-abbia
-abbiate
-abbiano
-avrò
-avrai
-avrà
-avremo
-avrete
-avranno
-avrei
-avresti
-avrebbe
-avremmo
-avreste
-avrebbero
-avevo
-avevi
-aveva
-avevamo
-avevate
-avevano
-ebbi
-avesti
-ebbe
-avemmo
-aveste
-ebbero
-avessi
-avesse
-avessimo
-avessero
-avendo
-avuto
-avuta
-avuti
-avute
-
-               | forms of essere, to be (not including the infinitive):
-sono
-sei
-è
-siamo
-siete
-sia
-siate
-siano
-sarò
-sarai
-sarà
-saremo
-sarete
-saranno
-sarei
-saresti
-sarebbe
-saremmo
-sareste
-sarebbero
-ero
-eri
-era
-eravamo
-eravate
-erano
-fui
-fosti
-fu
-fummo
-foste
-furono
-fossi
-fosse
-fossimo
-fossero
-essendo
-
-               | forms of fare, to do (not including the infinitive, fa, fat-):
-faccio
-fai
-facciamo
-fanno
-faccia
-facciate
-facciano
-farò
-farai
-farà
-faremo
-farete
-faranno
-farei
-faresti
-farebbe
-faremmo
-fareste
-farebbero
-facevo
-facevi
-faceva
-facevamo
-facevate
-facevano
-feci
-facesti
-fece
-facemmo
-faceste
-fecero
-facessi
-facesse
-facessimo
-facessero
-facendo
-
-               | forms of stare, to be (not including the infinitive):
-sto
-stai
-sta
-stiamo
-stanno
-stia
-stiate
-stiano
-starò
-starai
-starà
-staremo
-starete
-staranno
-starei
-staresti
-starebbe
-staremmo
-stareste
-starebbero
-stavo
-stavi
-stava
-stavamo
-stavate
-stavano
-stetti
-stesti
-stette
-stemmo
-steste
-stettero
-stessi
-stesse
-stessimo
-stessero
-stando
diff --git a/solr/example/example-DIH/solr/mail/conf/lang/stopwords_ja.txt b/solr/example/example-DIH/solr/mail/conf/lang/stopwords_ja.txt
deleted file mode 100644
index d4321be..0000000
--- a/solr/example/example-DIH/solr/mail/conf/lang/stopwords_ja.txt
+++ /dev/null
@@ -1,127 +0,0 @@
-#
-# This file defines a stopword set for Japanese.
-#
-# This set is made up of hand-picked frequent terms from segmented Japanese Wikipedia.
-# Punctuation characters and frequent kanji have mostly been left out.  See LUCENE-3745
-# for frequency lists, etc. that can be useful for making your own set (if desired)
-#
-# Note that there is an overlap between these stopwords and the terms stopped when used
-# in combination with the JapanesePartOfSpeechStopFilter.  When editing this file, note
-# that comments are not allowed on the same line as stopwords.
-#
-# Also note that stopping is done in a case-insensitive manner.  Change your StopFilter
-# configuration if you need case-sensitive stopping.  Lastly, note that stopping is done
-# using the same character width as the entries in this file.  Since this StopFilter is
-# normally done after a CJKWidthFilter in your chain, you would usually want your romaji
-# entries to be in half-width and your kana entries to be in full-width.
-#
-の
-に
-は
-を
-た
-が
-で
-て
-と
-し
-れ
-さ
-ある
-いる
-も
-する
-から
-な
-こと
-として
-い
-や
-れる
-など
-なっ
-ない
-この
-ため
-その
-あっ
-よう
-また
-もの
-という
-あり
-まで
-られ
-なる
-へ
-か
-だ
-これ
-によって
-により
-おり
-より
-による
-ず
-なり
-られる
-において
-ば
-なかっ
-なく
-しかし
-について
-せ
-だっ
-その後
-できる
-それ
-う
-ので
-なお
-のみ
-でき
-き
-つ
-における
-および
-いう
-さらに
-でも
-ら
-たり
-その他
-に関する
-たち
-ます
-ん
-なら
-に対して
-特に
-せる
-及び
-これら
-とき
-では
-にて
-ほか
-ながら
-うち
-そして
-とともに
-ただし
-かつて
-それぞれ
-または
-お
-ほど
-ものの
-に対する
-ほとんど
-と共に
-といった
-です
-とも
-ところ
-ここ
-##### End of file
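The comments at the top of this file spell out two ordering constraints: stopping is width-sensitive, so it should run after width folding, and it overlaps with part-of-speech stopping. One chain consistent with that advice, sketched with illustrative names (the stoptags file is an assumption, not part of this patch):

  <fieldType name="text_ja" class="solr.TextField" positionIncrementGap="100">
    <analyzer>
      <tokenizer class="solr.JapaneseTokenizerFactory" mode="search"/>
      <filter class="solr.JapanesePartOfSpeechStopFilterFactory" tags="lang/stoptags_ja.txt"/>
      <!-- fold half/full width before stopping, matching the widths used in stopwords_ja.txt -->
      <filter class="solr.CJKWidthFilterFactory"/>
      <filter class="solr.StopFilterFactory" ignoreCase="true" words="lang/stopwords_ja.txt"/>
    </analyzer>
  </fieldType>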
diff --git a/solr/example/example-DIH/solr/mail/conf/lang/stopwords_lv.txt b/solr/example/example-DIH/solr/mail/conf/lang/stopwords_lv.txt
deleted file mode 100644
index e21a23c..0000000
--- a/solr/example/example-DIH/solr/mail/conf/lang/stopwords_lv.txt
+++ /dev/null
@@ -1,172 +0,0 @@
-# Set of Latvian stopwords from A Stemming Algorithm for Latvian, Karlis Kreslins
-# the original list of over 800 forms was refined: 
-#   pronouns, adverbs, interjections were removed
-# 
-# prepositions
-aiz
-ap
-ar
-apakš
-ārpus
-augšpus
-bez
-caur
-dēļ
-gar
-iekš
-iz
-kopš
-labad
-lejpus
-līdz
-no
-otrpus
-pa
-par
-pār
-pēc
-pie
-pirms
-pret
-priekš
-starp
-šaipus
-uz
-viņpus
-virs
-virspus
-zem
-apakšpus
-# Conjunctions
-un
-bet
-jo
-ja
-ka
-lai
-tomēr
-tikko
-turpretī
-arī
-kaut
-gan
-tādēļ
-tā
-ne
-tikvien
-vien
-kā
-ir
-te
-vai
-kamēr
-# Particles
-ar
-diezin
-droši
-diemžēl
-nebūt
-ik
-it
-taču
-nu
-pat
-tiklab
-iekšpus
-nedz
-tik
-nevis
-turpretim
-jeb
-iekam
-iekām
-iekāms
-kolīdz
-līdzko
-tiklīdz
-jebšu
-ő
-tāpēc
-nekā
-itin
-jā
-jau
-jel
-nē
-nezin
-tad
-tikai
-vis
-tak
-iekams
-vien
-# modal verbs
-būt  
-biju 
-biji
-bija
-bijām
-bijāt
-esmu
-esi
-esam
-esat 
-būšu     
-būsi
-būs
-būsim
-būsiet
-tikt
-tiku
-tiki
-tika
-tikām
-tikāt
-tieku
-tiec
-tiek
-tiekam
-tiekat
-tikšu
-tiks
-tiksim
-tiksiet
-tapt
-tapi
-tapāt
-topat
-tapšu
-tapsi
-taps
-tapsim
-tapsiet
-kļūt
-kļuvu
-kļuvi
-kļuva
-kļuvām
-kļuvāt
-kļūstu
-kļūsti
-kļūst
-kļūstam
-kļūstat
-kļūšu
-kļūsi
-kļūs
-kļūsim
-kļūsiet
-# verbs
-varēt
-varēju
-varējām
-varēšu
-varēsim
-var
-varēji
-varējāt
-varēsi
-varēsiet
-varat
-varēja
-varēs
diff --git a/solr/example/example-DIH/solr/mail/conf/lang/stopwords_nl.txt b/solr/example/example-DIH/solr/mail/conf/lang/stopwords_nl.txt
deleted file mode 100644
index 47a2aea..0000000
--- a/solr/example/example-DIH/solr/mail/conf/lang/stopwords_nl.txt
+++ /dev/null
@@ -1,119 +0,0 @@
- | From svn.tartarus.org/snowball/trunk/website/algorithms/dutch/stop.txt
- | This file is distributed under the BSD License.
- | See http://snowball.tartarus.org/license.php
- | Also see http://www.opensource.org/licenses/bsd-license.html
- |  - Encoding was converted to UTF-8.
- |  - This notice was added.
- |
- | NOTE: To use this file with StopFilterFactory, you must specify format="snowball"
-
- | A Dutch stop word list. Comments begin with vertical bar. Each stop
- | word is at the start of a line.
-
- | This is a ranked list (commonest to rarest) of stopwords derived from
- | a large sample of Dutch text.
-
- | Dutch stop words frequently exhibit homonym clashes. These are indicated
- | clearly below.
-
-de             |  the
-en             |  and
-van            |  of, from
-ik             |  I, the ego
-te             |  (1) chez, at etc, (2) to, (3) too
-dat            |  that, which
-die            |  that, those, who, which
-in             |  in, inside
-een            |  a, an, one
-hij            |  he
-het            |  the, it
-niet           |  not, nothing, naught
-zijn           |  (1) to be, being, (2) his, one's, its
-is             |  is
-was            |  (1) was, past tense of all persons sing. of 'zijn' (to be) (2) wax, (3) the washing, (4) rise of river
-op             |  on, upon, at, in, up, used up
-aan            |  on, upon, to (as dative)
-met            |  with, by
-als            |  like, such as, when
-voor           |  (1) before, in front of, (2) furrow
-had            |  had, past tense all persons sing. of 'hebben' (have)
-er             |  there
-maar           |  but, only
-om             |  round, about, for etc
-hem            |  him
-dan            |  then
-zou            |  should/would, past tense all persons sing. of 'zullen'
-of             |  or, whether, if
-wat            |  what, something, anything
-mijn           |  possessive and noun 'mine'
-men            |  people, 'one'
-dit            |  this
-zo             |  so, thus, in this way
-door           |  through by
-over           |  over, across
-ze             |  she, her, they, them
-zich           |  oneself
-bij            |  (1) a bee, (2) by, near, at
-ook            |  also, too
-tot            |  till, until
-je             |  you
-mij            |  me
-uit            |  out of, from
-der            |  Old Dutch form of 'van der' still found in surnames
-daar           |  (1) there, (2) because
-haar           |  (1) her, their, them, (2) hair
-naar           |  (1) unpleasant, unwell etc, (2) towards, (3) as
-heb            |  present first person sing. of 'to have'
-hoe            |  how, why
-heeft          |  present third person sing. of 'to have'
-hebben         |  'to have' and various parts thereof
-deze           |  this
-u              |  you
-want           |  (1) for, (2) mitten, (3) rigging
-nog            |  yet, still
-zal            |  'shall', first and third person sing. of verb 'zullen' (will)
-me             |  me
-zij            |  she, they
-nu             |  now
-ge             |  'thou', still used in Belgium and south Netherlands
-geen           |  none
-omdat          |  because
-iets           |  something, somewhat
-worden         |  to become, grow, get
-toch           |  yet, still
-al             |  all, every, each
-waren          |  (1) 'were' (2) to wander, (3) wares, (3)
-veel           |  much, many
-meer           |  (1) more, (2) lake
-doen           |  to do, to make
-toen           |  then, when
-moet           |  noun 'spot/mote' and present form of 'to must'
-ben            |  (1) am, (2) 'are' in interrogative second person singular of 'to be'
-zonder         |  without
-kan            |  noun 'can' and present form of 'to be able'
-hun            |  their, them
-dus            |  so, consequently
-alles          |  all, everything, anything
-onder          |  under, beneath
-ja             |  yes, of course
-eens           |  once, one day
-hier           |  here
-wie            |  who
-werd           |  imperfect third person sing. of 'become'
-altijd         |  always
-doch           |  yet, but etc
-wordt          |  present third person sing. of 'become'
-wezen          |  (1) to be, (2) 'been' as in 'been fishing', (3) orphans
-kunnen         |  to be able
-ons            |  us/our
-zelf           |  self
-tegen          |  against, towards, at
-na             |  after, near
-reeds          |  already
-wil            |  (1) present tense of 'want', (2) 'will', noun, (3) fender
-kon            |  could; past tense of 'to be able'
-niets          |  nothing
-uw             |  your
-iemand         |  somebody
-geweest        |  been; past participle of 'be'
-andere         |  other
diff --git a/solr/example/example-DIH/solr/mail/conf/lang/stopwords_no.txt b/solr/example/example-DIH/solr/mail/conf/lang/stopwords_no.txt
deleted file mode 100644
index a7a2c28..0000000
--- a/solr/example/example-DIH/solr/mail/conf/lang/stopwords_no.txt
+++ /dev/null
@@ -1,194 +0,0 @@
- | From svn.tartarus.org/snowball/trunk/website/algorithms/norwegian/stop.txt
- | This file is distributed under the BSD License.
- | See http://snowball.tartarus.org/license.php
- | Also see http://www.opensource.org/licenses/bsd-license.html
- |  - Encoding was converted to UTF-8.
- |  - This notice was added.
- |
- | NOTE: To use this file with StopFilterFactory, you must specify format="snowball"
-
- | A Norwegian stop word list. Comments begin with vertical bar. Each stop
- | word is at the start of a line.
-
- | This stop word list is for the dominant bokmål dialect. Words unique
- | to nynorsk are marked *.
-
- | Revised by Jan Bruusgaard <Jan.Bruusgaard@ssb.no>, Jan 2005
-
-og             | and
-i              | in
-jeg            | I
-det            | it/this/that
-at             | to (w. inf.)
-en             | a/an
-et             | a/an
-den            | it/this/that
-til            | to
-er             | is/am/are
-som            | who/that
-på             | on
-de             | they / you(formal)
-med            | with
-han            | he
-av             | of
-ikke           | not
-ikkje          | not *
-der            | there
-så             | so
-var            | was/were
-meg            | me
-seg            | you
-men            | but
-ett            | one
-har            | have
-om             | about
-vi             | we
-min            | my
-mitt           | my
-ha             | have
-hadde          | had
-hun            | she
-nå             | now
-over           | over
-da             | when/as
-ved            | by/know
-fra            | from
-du             | you
-ut             | out
-sin            | your
-dem            | them
-oss            | us
-opp            | up
-man            | you/one
-kan            | can
-hans           | his
-hvor           | where
-eller          | or
-hva            | what
-skal           | shall/must
-selv           | self (reflective)
-sjøl           | self (reflective)
-her            | here
-alle           | all
-vil            | will
-bli            | become
-ble            | became
-blei           | became *
-blitt          | have become
-kunne          | could
-inn            | in
-når            | when
-være           | be
-kom            | come
-noen           | some
-noe            | some
-ville          | would
-dere           | you
-som            | who/which/that
-deres          | their/theirs
-kun            | only/just
-ja             | yes
-etter          | after
-ned            | down
-skulle         | should
-denne          | this
-for            | for/because
-deg            | you
-si             | hers/his
-sine           | hers/his
-sitt           | hers/his
-mot            | against
-å              | to
-meget          | much
-hvorfor        | why
-dette          | this
-disse          | these/those
-uten           | without
-hvordan        | how
-ingen          | none
-din            | your
-ditt           | your
-blir           | become
-samme          | same
-hvilken        | which
-hvilke         | which (plural)
-sånn           | such a
-inni           | inside/within
-mellom         | between
-vår            | our
-hver           | each
-hvem           | who
-vors           | us/ours
-hvis           | whose
-både           | both
-bare           | only/just
-enn            | than
-fordi          | as/because
-før            | before
-mange          | many
-også           | also
-slik           | just
-vært           | been
-være           | to be
-båe            | both *
-begge          | both
-siden          | since
-dykk           | your *
-dykkar         | yours *
-dei            | they *
-deira          | them *
-deires         | theirs *
-deim           | them *
-di             | your (fem.) *
-då             | as/when *
-eg             | I *
-ein            | a/an *
-eit            | a/an *
-eitt           | a/an *
-elles          | or *
-honom          | he *
-hjå            | at *
-ho             | she *
-hoe            | she *
-henne          | her
-hennar         | her/hers
-hennes         | hers
-hoss           | how *
-hossen         | how *
-ikkje          | not *
-ingi           | noone *
-inkje          | noone *
-korleis        | how *
-korso          | how *
-kva            | what/which *
-kvar           | where *
-kvarhelst      | where *
-kven           | who/whom *
-kvi            | why *
-kvifor         | why *
-me             | we *
-medan          | while *
-mi             | my *
-mine           | my *
-mykje          | much *
-no             | now *
-nokon          | some (masc./neut.) *
-noka           | some (fem.) *
-nokor          | some *
-noko           | some *
-nokre          | some *
-si             | his/hers *
-sia            | since *
-sidan          | since *
-so             | so *
-somt           | some *
-somme          | some *
-um             | about*
-upp            | up *
-vere           | be *
-vore           | was *
-verte          | become *
-vort           | become *
-varte          | became *
-vart           | became *
-
diff --git a/solr/example/example-DIH/solr/mail/conf/lang/stopwords_pt.txt b/solr/example/example-DIH/solr/mail/conf/lang/stopwords_pt.txt
deleted file mode 100644
index acfeb01..0000000
--- a/solr/example/example-DIH/solr/mail/conf/lang/stopwords_pt.txt
+++ /dev/null
@@ -1,253 +0,0 @@
- | From svn.tartarus.org/snowball/trunk/website/algorithms/portuguese/stop.txt
- | This file is distributed under the BSD License.
- | See http://snowball.tartarus.org/license.php
- | Also see http://www.opensource.org/licenses/bsd-license.html
- |  - Encoding was converted to UTF-8.
- |  - This notice was added.
- |
- | NOTE: To use this file with StopFilterFactory, you must specify format="snowball"
-
- | A Portuguese stop word list. Comments begin with vertical bar. Each stop
- | word is at the start of a line.
-
-
- | The following is a ranked list (commonest to rarest) of stopwords
- | deriving from a large sample of text.
-
- | Extra words have been added at the end.
-
-de             |  of, from
-a              |  the; to, at; her
-o              |  the; him
-que            |  who, that
-e              |  and
-do             |  de + o
-da             |  de + a
-em             |  in
-um             |  a
-para           |  for
-  | é          from SER
-com            |  with
-não            |  not, no
-uma            |  a
-os             |  the; them
-no             |  em + o
-se             |  himself etc
-na             |  em + a
-por            |  for
-mais           |  more
-as             |  the; them
-dos            |  de + os
-como           |  as, like
-mas            |  but
-  | foi        from SER
-ao             |  a + o
-ele            |  he
-das            |  de + as
-  | tem        from TER
-à              |  a + a
-seu            |  his
-sua            |  her
-ou             |  or
-  | ser        from SER
-quando         |  when
-muito          |  much
-  | há         from HAV
-nos            |  em + os; us
-já             |  already, now
-  | está       from EST
-eu             |  I
-também         |  also
-só             |  only, just
-pelo           |  per + o
-pela           |  per + a
-até            |  up to
-isso           |  that
-ela            |  he
-entre          |  between
-  | era        from SER
-depois         |  after
-sem            |  without
-mesmo          |  same
-aos            |  a + os
-  | ter        from TER
-seus           |  his
-quem           |  whom
-nas            |  em + as
-me             |  me
-esse           |  that
-eles           |  they
-  | estão      from EST
-você           |  you
-  | tinha      from TER
-  | foram      from SER
-essa           |  that
-num            |  em + um
-nem            |  nor
-suas           |  her
-meu            |  my
-às             |  a + as
-minha          |  my
-  | têm        from TER
-numa           |  em + uma
-pelos          |  per + os
-elas           |  they
-  | havia      from HAV
-  | seja       from SER
-qual           |  which
-  | será       from SER
-nós            |  we
-  | tenho      from TER
-lhe            |  to him, her
-deles          |  of them
-essas          |  those
-esses          |  those
-pelas          |  per + as
-este           |  this
-  | fosse      from SER
-dele           |  of him
-
- | other words. There are many contractions such as naquele = em+aquele,
- | mo = me+o, but they are rare.
- | Indefinite article plural forms are also rare.
-
-tu             |  thou
-te             |  thee
-vocês          |  you (plural)
-vos            |  you
-lhes           |  to them
-meus           |  my
-minhas
-teu            |  thy
-tua
-teus
-tuas
-nosso          | our
-nossa
-nossos
-nossas
-
-dela           |  of her
-delas          |  of them
-
-esta           |  this
-estes          |  these
-estas          |  these
-aquele         |  that
-aquela         |  that
-aqueles        |  those
-aquelas        |  those
-isto           |  this
-aquilo         |  that
-
-               | forms of estar, to be (not including the infinitive):
-estou
-está
-estamos
-estão
-estive
-esteve
-estivemos
-estiveram
-estava
-estávamos
-estavam
-estivera
-estivéramos
-esteja
-estejamos
-estejam
-estivesse
-estivéssemos
-estivessem
-estiver
-estivermos
-estiverem
-
-               | forms of haver, to have (not including the infinitive):
-hei
-há
-havemos
-hão
-houve
-houvemos
-houveram
-houvera
-houvéramos
-haja
-hajamos
-hajam
-houvesse
-houvéssemos
-houvessem
-houver
-houvermos
-houverem
-houverei
-houverá
-houveremos
-houverão
-houveria
-houveríamos
-houveriam
-
-               | forms of ser, to be (not including the infinitive):
-sou
-somos
-são
-era
-éramos
-eram
-fui
-foi
-fomos
-foram
-fora
-fôramos
-seja
-sejamos
-sejam
-fosse
-fôssemos
-fossem
-for
-formos
-forem
-serei
-será
-seremos
-serão
-seria
-seríamos
-seriam
-
-               | forms of ter, to have (not including the infinitive):
-tenho
-tem
-temos
-tém
-tinha
-tínhamos
-tinham
-tive
-teve
-tivemos
-tiveram
-tivera
-tivéramos
-tenha
-tenhamos
-tenham
-tivesse
-tivéssemos
-tivessem
-tiver
-tivermos
-tiverem
-terei
-terá
-teremos
-terão
-teria
-teríamos
-teriam
diff --git a/solr/example/example-DIH/solr/mail/conf/lang/stopwords_ro.txt b/solr/example/example-DIH/solr/mail/conf/lang/stopwords_ro.txt
deleted file mode 100644
index 4fdee90..0000000
--- a/solr/example/example-DIH/solr/mail/conf/lang/stopwords_ro.txt
+++ /dev/null
@@ -1,233 +0,0 @@
-# This file was created by Jacques Savoy and is distributed under the BSD license.
-# See http://members.unine.ch/jacques.savoy/clef/index.html.
-# Also see http://www.opensource.org/licenses/bsd-license.html
-acea
-aceasta
-această
-aceea
-acei
-aceia
-acel
-acela
-acele
-acelea
-acest
-acesta
-aceste
-acestea
-aceşti
-aceştia
-acolo
-acum
-ai
-aia
-aibă
-aici
-al
-ăla
-ale
-alea
-ălea
-altceva
-altcineva
-am
-ar
-are
-aş
-aşadar
-asemenea
-asta
-ăsta
-astăzi
-astea
-ăstea
-ăştia
-asupra
-aţi
-au
-avea
-avem
-aveţi
-azi
-bine
-bucur
-bună
-ca
-că
-căci
-când
-care
-cărei
-căror
-cărui
-cât
-câte
-câţi
-către
-câtva
-ce
-cel
-ceva
-chiar
-cînd
-cine
-cineva
-cît
-cîte
-cîţi
-cîtva
-contra
-cu
-cum
-cumva
-curând
-curînd
-da
-dă
-dacă
-dar
-datorită
-de
-deci
-deja
-deoarece
-departe
-deşi
-din
-dinaintea
-dintr
-dintre
-drept
-după
-ea
-ei
-el
-ele
-eram
-este
-eşti
-eu
-face
-fără
-fi
-fie
-fiecare
-fii
-fim
-fiţi
-iar
-ieri
-îi
-îl
-îmi
-împotriva
-în 
-înainte
-înaintea
-încât
-încît
-încotro
-între
-întrucât
-întrucît
-îţi
-la
-lângă
-le
-li
-lîngă
-lor
-lui
-mă
-mâine
-mea
-mei
-mele
-mereu
-meu
-mi
-mine
-mult
-multă
-mulţi
-ne
-nicăieri
-nici
-nimeni
-nişte
-noastră
-noastre
-noi
-noştri
-nostru
-nu
-ori
-oricând
-oricare
-oricât
-orice
-oricînd
-oricine
-oricît
-oricum
-oriunde
-până
-pe
-pentru
-peste
-pînă
-poate
-pot
-prea
-prima
-primul
-prin
-printr
-sa
-să
-săi
-sale
-sau
-său
-se
-şi
-sînt
-sîntem
-sînteţi
-spre
-sub
-sunt
-suntem
-sunteţi
-ta
-tăi
-tale
-tău
-te
-ţi
-ţie
-tine
-toată
-toate
-tot
-toţi
-totuşi
-tu
-un
-una
-unde
-undeva
-unei
-unele
-uneori
-unor
-vă
-vi
-voastră
-voastre
-voi
-voştri
-vostru
-vouă
-vreo
-vreun
diff --git a/solr/example/example-DIH/solr/mail/conf/lang/stopwords_ru.txt b/solr/example/example-DIH/solr/mail/conf/lang/stopwords_ru.txt
deleted file mode 100644
index 5527140..0000000
--- a/solr/example/example-DIH/solr/mail/conf/lang/stopwords_ru.txt
+++ /dev/null
@@ -1,243 +0,0 @@
- | From svn.tartarus.org/snowball/trunk/website/algorithms/russian/stop.txt
- | This file is distributed under the BSD License.
- | See http://snowball.tartarus.org/license.php
- | Also see http://www.opensource.org/licenses/bsd-license.html
- |  - Encoding was converted to UTF-8.
- |  - This notice was added.
- |
- | NOTE: To use this file with StopFilterFactory, you must specify format="snowball"
-
- | a russian stop word list. comments begin with vertical bar. each stop
- | word is at the start of a line.
-
- | this is a ranked list (commonest to rarest) of stopwords derived from
- | a large text sample.
-
- | letter `ё' is translated to `е'.
-
-и              | and
-в              | in/into
-во             | alternative form
-не             | not
-что            | what/that
-он             | he
-на             | on/onto
-я              | i
-с              | from
-со             | alternative form
-как            | how
-а              | milder form of `no' (but)
-то             | conjunction and form of `that'
-все            | all
-она            | she
-так            | so, thus
-его            | him
-но             | but
-да             | yes/and
-ты             | thou
-к              | towards, by
-у              | around, chez
-же             | intensifier particle
-вы             | you
-за             | beyond, behind
-бы             | conditional/subj. particle
-по             | up to, along
-только         | only
-ее             | her
-мне            | to me
-было           | it was
-вот            | here is/are, particle
-от             | away from
-меня           | me
-еще            | still, yet, more
-нет            | no, there isnt/arent
-о              | about
-из             | out of
-ему            | to him
-теперь         | now
-когда          | when
-даже           | even
-ну             | so, well
-вдруг          | suddenly
-ли             | interrogative particle
-если           | if
-уже            | already, but homonym of `narrower'
-или            | or
-ни             | neither
-быть           | to be
-был            | he was
-него           | prepositional form of его
-до             | up to
-вас            | you accusative
-нибудь         | indef. suffix preceded by hyphen
-опять          | again
-уж             | already, but homonym of `adder'
-вам            | to you
-сказал         | he said
-ведь           | particle `after all'
-там            | there
-потом          | then
-себя           | oneself
-ничего         | nothing
-ей             | to her
-может          | usually with `быть' as `maybe'
-они            | they
-тут            | here
-где            | where
-есть           | there is/are
-надо           | got to, must
-ней            | prepositional form of  ей
-для            | for
-мы             | we
-тебя           | thee
-их             | them, their
-чем            | than
-была           | she was
-сам            | self
-чтоб           | in order to
-без            | without
-будто          | as if
-человек        | man, person, one
-чего           | genitive form of `what'
-раз            | once
-тоже           | also
-себе           | to oneself
-под            | beneath
-жизнь          | life
-будет          | will be
-ж              | short form of intensifer particle `же'
-тогда          | then
-кто            | who
-этот           | this
-говорил        | was saying
-того           | genitive form of `that'
-потому         | for that reason
-этого          | genitive form of `this'
-какой          | which
-совсем         | altogether
-ним            | prepositional form of `его', `они'
-здесь          | here
-этом           | prepositional form of `этот'
-один           | one
-почти          | almost
-мой            | my
-тем            | instrumental/dative plural of `тот', `то'
-чтобы          | full form of `in order that'
-нее            | her (acc.)
-кажется        | it seems
-сейчас         | now
-были           | they were
-куда           | where to
-зачем          | why
-сказать        | to say
-всех           | all (acc., gen. preposn. plural)
-никогда        | never
-сегодня        | today
-можно          | possible, one can
-при            | by
-наконец        | finally
-два            | two
-об             | alternative form of `о', about
-другой         | another
-хоть           | even
-после          | after
-над            | above
-больше         | more
-тот            | that one (masc.)
-через          | across, in
-эти            | these
-нас            | us
-про            | about
-всего          | in all, only, of all
-них            | prepositional form of `они' (they)
-какая          | which, feminine
-много          | lots
-разве          | interrogative particle
-сказала        | she said
-три            | three
-эту            | this, acc. fem. sing.
-моя            | my, feminine
-впрочем        | moreover, besides
-хорошо         | good
-свою           | ones own, acc. fem. sing.
-этой           | oblique form of `эта', fem. `this'
-перед          | in front of
-иногда         | sometimes
-лучше          | better
-чуть           | a little
-том            | preposn. form of `that one'
-нельзя         | one must not
-такой          | such a one
-им             | to them
-более          | more
-всегда         | always
-конечно        | of course
-всю            | acc. fem. sing of `all'
-между          | between
-
-
-  | b: some paradigms
-  |
-  | personal pronouns
-  |
-  | я  меня  мне  мной  [мною]
-  | ты  тебя  тебе  тобой  [тобою]
-  | он  его  ему  им  [него, нему, ним]
-  | она  ее  эи  ею  [нее, нэи, нею]
-  | оно  его  ему  им  [него, нему, ним]
-  |
-  | мы  нас  нам  нами
-  | вы  вас  вам  вами
-  | они  их  им  ими  [них, ним, ними]
-  |
-  |   себя  себе  собой   [собою]
-  |
-  | demonstrative pronouns: этот (this), тот (that)
-  |
-  | этот  эта  это  эти
-  | этого  эты  это  эти
-  | этого  этой  этого  этих
-  | этому  этой  этому  этим
-  | этим  этой  этим  [этою]  этими
-  | этом  этой  этом  этих
-  |
-  | тот  та  то  те
-  | того  ту  то  те
-  | того  той  того  тех
-  | тому  той  тому  тем
-  | тем  той  тем  [тою]  теми
-  | том  той  том  тех
-  |
-  | determinative pronouns
-  |
-  | (a) весь (all)
-  |
-  | весь  вся  все  все
-  | всего  всю  все  все
-  | всего  всей  всего  всех
-  | всему  всей  всему  всем
-  | всем  всей  всем  [всею]  всеми
-  | всем  всей  всем  всех
-  |
-  | (b) сам (himself etc)
-  |
-  | сам  сама  само  сами
-  | самого саму  само  самих
-  | самого самой самого  самих
-  | самому самой самому  самим
-  | самим  самой  самим  [самою]  самими
-  | самом самой самом  самих
-  |
-  | stems of verbs `to be', `to have', `to do' and modal
-  |
-  | быть  бы  буд  быв  есть  суть
-  | име
-  | дел
-  | мог   мож  мочь
-  | уме
-  | хоч  хот
-  | долж
-  | можн
-  | нужн
-  | нельзя
-
diff --git a/solr/example/example-DIH/solr/mail/conf/lang/stopwords_sv.txt b/solr/example/example-DIH/solr/mail/conf/lang/stopwords_sv.txt
deleted file mode 100644
index 096f87f..0000000
--- a/solr/example/example-DIH/solr/mail/conf/lang/stopwords_sv.txt
+++ /dev/null
@@ -1,133 +0,0 @@
- | From svn.tartarus.org/snowball/trunk/website/algorithms/swedish/stop.txt
- | This file is distributed under the BSD License.
- | See http://snowball.tartarus.org/license.php
- | Also see http://www.opensource.org/licenses/bsd-license.html
- |  - Encoding was converted to UTF-8.
- |  - This notice was added.
- |
- | NOTE: To use this file with StopFilterFactory, you must specify format="snowball"
-
- | A Swedish stop word list. Comments begin with vertical bar. Each stop
- | word is at the start of a line.
-
- | This is a ranked list (commonest to rarest) of stopwords derived from
- | a large text sample.
-
- | Swedish stop words occasionally exhibit homonym clashes. For example
- |  så = so, but also seed. These are indicated clearly below.
-
-och            | and
-det            | it, this/that
-att            | to (with infinitive)
-i              | in, at
-en             | a
-jag            | I
-hon            | she
-som            | who, that
-han            | he
-på             | on
-den            | it, this/that
-med            | with
-var            | where, each
-sig            | him(self) etc
-för            | for
-så             | so (also: seed)
-till           | to
-är             | is
-men            | but
-ett            | a
-om             | if; around, about
-hade           | had
-de             | they, these/those
-av             | of
-icke           | not, no
-mig            | me
-du             | you
-henne          | her
-då             | then, when
-sin            | his
-nu             | now
-har            | have
-inte           | inte någon = no one
-hans           | his
-honom          | him
-skulle         | 'sake'
-hennes         | her
-där            | there
-min            | my
-man            | one (pronoun)
-ej             | nor
-vid            | at, by, on (also: vast)
-kunde          | could
-något          | some etc
-från           | from, off
-ut             | out
-när            | when
-efter          | after, behind
-upp            | up
-vi             | we
-dem            | them
-vara           | be
-vad            | what
-över           | over
-än             | than
-dig            | you
-kan            | can
-sina           | his
-här            | here
-ha             | have
-mot            | towards
-alla           | all
-under          | under (also: wonder)
-någon          | some etc
-eller          | or (else)
-allt           | all
-mycket         | much
-sedan          | since
-ju             | why
-denna          | this/that
-själv          | myself, yourself etc
-detta          | this/that
-åt             | to
-utan           | without
-varit          | was
-hur            | how
-ingen          | no
-mitt           | my
-ni             | you
-bli            | to be, become
-blev           | from bli
-oss            | us
-din            | thy
-dessa          | these/those
-några          | some etc
-deras          | their
-blir           | from bli
-mina           | my
-samma          | (the) same
-vilken         | who, that
-er             | you, your
-sådan          | such a
-vår            | our
-blivit         | from bli
-dess           | its
-inom           | within
-mellan         | between
-sådant         | such a
-varför         | why
-varje          | each
-vilka          | who, that
-ditt           | thy
-vem            | who
-vilket         | who, that
-sitta          | his
-sådana         | such a
-vart           | each
-dina           | thy
-vars           | whose
-vårt           | our
-våra           | our
-ert            | your
-era            | your
-vilkas         | whose
-
diff --git a/solr/example/example-DIH/solr/mail/conf/lang/stopwords_th.txt b/solr/example/example-DIH/solr/mail/conf/lang/stopwords_th.txt
deleted file mode 100644
index 07f0fab..0000000
--- a/solr/example/example-DIH/solr/mail/conf/lang/stopwords_th.txt
+++ /dev/null
@@ -1,119 +0,0 @@
-# Thai stopwords from:
-# "Opinion Detection in Thai Political News Columns
-# Based on Subjectivity Analysis"
-# Khampol Sukhum, Supot Nitsuwat, and Choochart Haruechaiyasak
-ไว้
-ไม่
-ไป
-ได้
-ให้
-ใน
-โดย
-แห่ง
-แล้ว
-และ
-แรก
-แบบ
-แต่
-เอง
-เห็น
-เลย
-เริ่ม
-เรา
-เมื่อ
-เพื่อ
-เพราะ
-เป็นการ
-เป็น
-เปิดเผย
-เปิด
-เนื่องจาก
-เดียวกัน
-เดียว
-เช่น
-เฉพาะ
-เคย
-เข้า
-เขา
-อีก
-อาจ
-อะไร
-ออก
-อย่าง
-อยู่
-อยาก
-หาก
-หลาย
-หลังจาก
-หลัง
-หรือ
-หนึ่ง
-ส่วน
-ส่ง
-สุด
-สําหรับ
-ว่า
-วัน
-ลง
-ร่วม
-ราย
-รับ
-ระหว่าง
-รวม
-ยัง
-มี
-มาก
-มา
-พร้อม
-พบ
-ผ่าน
-ผล
-บาง
-น่า
-นี้
-นํา
-นั้น
-นัก
-นอกจาก
-ทุก
-ที่สุด
-ที่
-ทําให้
-ทํา
-ทาง
-ทั้งนี้
-ทั้ง
-ถ้า
-ถูก
-ถึง
-ต้อง
-ต่างๆ
-ต่าง
-ต่อ
-ตาม
-ตั้งแต่
-ตั้ง
-ด้าน
-ด้วย
-ดัง
-ซึ่ง
-ช่วง
-จึง
-จาก
-จัด
-จะ
-คือ
-ความ
-ครั้ง
-คง
-ขึ้น
-ของ
-ขอ
-ขณะ
-ก่อน
-ก็
-การ
-กับ
-กัน
-กว่า
-กล่าว
diff --git a/solr/example/example-DIH/solr/mail/conf/lang/stopwords_tr.txt b/solr/example/example-DIH/solr/mail/conf/lang/stopwords_tr.txt
deleted file mode 100644
index 84d9408..0000000
--- a/solr/example/example-DIH/solr/mail/conf/lang/stopwords_tr.txt
+++ /dev/null
@@ -1,212 +0,0 @@
-# Turkish stopwords from LUCENE-559
-# merged with the list from "Information Retrieval on Turkish Texts"
-#   (http://www.users.muohio.edu/canf/papers/JASIST2008offPrint.pdf)
-acaba
-altmış
-altı
-ama
-ancak
-arada
-aslında
-ayrıca
-bana
-bazı
-belki
-ben
-benden
-beni
-benim
-beri
-beş
-bile
-bin
-bir
-birçok
-biri
-birkaç
-birkez
-birşey
-birşeyi
-biz
-bize
-bizden
-bizi
-bizim
-böyle
-böylece
-bu
-buna
-bunda
-bundan
-bunlar
-bunları
-bunların
-bunu
-bunun
-burada
-çok
-çünkü
-da
-daha
-dahi
-de
-defa
-değil
-diğer
-diye
-doksan
-dokuz
-dolayı
-dolayısıyla
-dört
-edecek
-eden
-ederek
-edilecek
-ediliyor
-edilmesi
-ediyor
-eğer
-elli
-en
-etmesi
-etti
-ettiği
-ettiğini
-gibi
-göre
-halen
-hangi
-hatta
-hem
-henüz
-hep
-hepsi
-her
-herhangi
-herkesin
-hiç
-hiçbir
-için
-iki
-ile
-ilgili
-ise
-işte
-itibaren
-itibariyle
-kadar
-karşın
-katrilyon
-kendi
-kendilerine
-kendini
-kendisi
-kendisine
-kendisini
-kez
-ki
-kim
-kimden
-kime
-kimi
-kimse
-kırk
-milyar
-milyon
-mu
-mü
-mı
-nasıl
-ne
-neden
-nedenle
-nerde
-nerede
-nereye
-niye
-niçin
-o
-olan
-olarak
-oldu
-olduğu
-olduğunu
-olduklarını
-olmadı
-olmadığı
-olmak
-olması
-olmayan
-olmaz
-olsa
-olsun
-olup
-olur
-olursa
-oluyor
-on
-ona
-ondan
-onlar
-onlardan
-onları
-onların
-onu
-onun
-otuz
-oysa
-öyle
-pek
-rağmen
-sadece
-sanki
-sekiz
-seksen
-sen
-senden
-seni
-senin
-siz
-sizden
-sizi
-sizin
-şey
-şeyden
-şeyi
-şeyler
-şöyle
-şu
-şuna
-şunda
-şundan
-şunları
-şunu
-tarafından
-trilyon
-tüm
-üç
-üzere
-var
-vardı
-ve
-veya
-ya
-yani
-yapacak
-yapılan
-yapılması
-yapıyor
-yapmak
-yaptı
-yaptığı
-yaptığını
-yaptıkları
-yedi
-yerine
-yetmiş
-yine
-yirmi
-yoksa
-yüz
-zaten
diff --git a/solr/example/example-DIH/solr/mail/conf/lang/userdict_ja.txt b/solr/example/example-DIH/solr/mail/conf/lang/userdict_ja.txt
deleted file mode 100644
index 6f0368e..0000000
--- a/solr/example/example-DIH/solr/mail/conf/lang/userdict_ja.txt
+++ /dev/null
@@ -1,29 +0,0 @@
-#
-# This is a sample user dictionary for Kuromoji (JapaneseTokenizer)
-#
-# Add entries to this file in order to override the statistical model in terms
-# of segmentation, readings and part-of-speech tags.  Notice that entries do
-# not have weights since they are always used when found.  This is by-design
-# in order to maximize ease-of-use.
-#
-# Entries are defined using the following CSV format:
-#  <text>,<token 1> ... <token n>,<reading 1> ... <reading n>,<part-of-speech tag>
-#
-# Notice that a single half-width space separates tokens and readings, and
-# that the number tokens and readings must match exactly.
-#
-# Also notice that multiple entries with the same <text> is undefined.
-#
-# Whitespace only lines are ignored.  Comments are not allowed on entry lines.
-#
-
-# Custom segmentation for kanji compounds
-日本経済新聞,日本 経済 新聞,ニホン ケイザイ シンブン,カスタム名詞
-関西国際空港,関西 国際 空港,カンサイ コクサイ クウコウ,カスタム名詞
-
-# Custom segmentation for compound katakana
-トートバッグ,トート バッグ,トート バッグ,かずカナ名詞
-ショルダーバッグ,ショルダー バッグ,ショルダー バッグ,かずカナ名詞
-
-# Custom reading for former sumo wrestler
-朝青龍,朝青龍,アサショウリュウ,カスタム人名
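Entries in this dictionary only take effect once the tokenizer is pointed at the file; a minimal sketch (the path mirrors this config's layout and is otherwise an assumption):

  <tokenizer class="solr.JapaneseTokenizerFactory" mode="search"
             userDictionary="lang/userdict_ja.txt" userDictionaryEncoding="UTF-8"/>

With the entries above loaded, 関西国際空港 is segmented as 関西 / 国際 / 空港 rather than being left to the statistical model.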
diff --git a/solr/example/example-DIH/solr/mail/conf/mail-data-config.xml b/solr/example/example-DIH/solr/mail/conf/mail-data-config.xml
deleted file mode 100644
index 736aea7..0000000
--- a/solr/example/example-DIH/solr/mail/conf/mail-data-config.xml
+++ /dev/null
@@ -1,12 +0,0 @@
-<dataConfig>
-  <document>
-      <!--
-        Note - In order to index attachments, set processAttachement="true" and drop
-        Tika and its dependencies to example-DIH/solr/mail/lib directory
-       -->
-      <entity processor="MailEntityProcessor" user="email@gmail.com"
-            password="password" host="imap.gmail.com" protocol="gimaps"
-            fetchMailsSince="2014-06-30 00:00:00" batchSize="20" folders="inbox" processAttachement="false"
-            name="mail_entity"/>
-  </document>
-</dataConfig>
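A data config on its own does nothing; it has to be registered with the DataImportHandler in solrconfig.xml, roughly as follows (a sketch of the conventional wiring, not copied from this patch):

  <requestHandler name="/dataimport" class="org.apache.solr.handler.dataimport.DataImportHandler">
    <lst name="defaults">
      <str name="config">mail-data-config.xml</str>
    </lst>
  </requestHandler>

An import is then kicked off with a request such as /dataimport?command=full-import, and the placeholder user, password, and host values above naturally need real IMAP credentials before anything can be fetched.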
diff --git a/solr/example/example-DIH/solr/mail/conf/managed-schema b/solr/example/example-DIH/solr/mail/conf/managed-schema
deleted file mode 100644
index d450212..0000000
--- a/solr/example/example-DIH/solr/mail/conf/managed-schema
+++ /dev/null
@@ -1,1062 +0,0 @@
-<?xml version="1.0" encoding="UTF-8" ?>
-<!--
- Licensed to the Apache Software Foundation (ASF) under one or more
- contributor license agreements.  See the NOTICE file distributed with
- this work for additional information regarding copyright ownership.
- The ASF licenses this file to You under the Apache License, Version 2.0
- (the "License"); you may not use this file except in compliance with
- the License.  You may obtain a copy of the License at
-
-     http://www.apache.org/licenses/LICENSE-2.0
-
- Unless required by applicable law or agreed to in writing, software
- distributed under the License is distributed on an "AS IS" BASIS,
- WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- See the License for the specific language governing permissions and
- limitations under the License.
--->
-
-<!--  
- This is the Solr schema file. This file should be named "schema.xml" and
- should be in the conf directory under the solr home
- (i.e. ./solr/conf/schema.xml by default) 
- or located where the classloader for the Solr webapp can find it.
-
- This example schema is the recommended starting point for users.
- It should be kept correct and concise, usable out-of-the-box.
-
- For more information, on how to customize this file, please see
- http://wiki.apache.org/solr/SchemaXml
-
- PERFORMANCE NOTE: this schema includes many optional features and should not
- be used for benchmarking.  To improve performance one could
-  - set stored="false" for all fields possible (esp large fields) when you
-    only need to search on the field but don't need to return the original
-    value.
-  - set indexed="false" if you don't need to search on the field, but only
-    return the field as a result of searching on other indexed fields.
-  - remove all unneeded copyField statements
-  - for best index size and searching performance, set "indexed" to false
-    for all general text fields, use copyField to copy them to the
-    catchall "text" field, and use that for searching.
-  - For maximum indexing performance, use the ConcurrentUpdateSolrServer
-    java client.
-  - Remember to run the JVM in server mode, and use a higher logging level
-    that avoids logging every request
--->
-
-<schema name="example-DIH-mail" version="1.6">
-  <!-- attribute "name" is the name of this schema and is only used for display purposes.
-       version="x.y" is Solr's version number for the schema syntax and 
-       semantics.  It should not normally be changed by applications.
-
-       1.0: multiValued attribute did not exist, all fields were multiValued 
-            by nature
-       1.1: multiValued attribute introduced, false by default 
-       1.2: omitTermFreqAndPositions attribute introduced, true by default 
-            except for text fields.
-       1.3: removed optional field compress feature
-       1.4: autoGeneratePhraseQueries attribute introduced to drive QueryParser
-            behavior when a single string produces multiple tokens.  Defaults 
-            to off for version >= 1.4
-       1.5: omitNorms defaults to true for primitive field types 
-            (int, float, boolean, string...)
-       1.6: useDocValuesAsStored defaults to true.            
-     -->
-
-
-    <!-- Valid attributes for fields:
-     name: mandatory - the name for the field
-     type: mandatory - the name of a field type from the 
-       fieldTypes section
-     indexed: true if this field should be indexed (searchable or sortable)
-     stored: true if this field should be retrievable
-     docValues: true if this field should have doc values. Doc values are
-       useful (required, if you are using *Point fields) for faceting, 
-       grouping, sorting and function queries. Doc values will make the index 
-       faster to load, more NRT-friendly and more memory-efficient. 
-       They however come with some limitations: they are currently only 
-       supported by StrField, UUIDField, all *PointFields, and depending
-       on the field type, they might require the field to be single-valued,
-       be required or have a default value (check the documentation
-       of the field type you're interested in for more information)
-     multiValued: true if this field may contain multiple values per document
-     omitNorms: (expert) set to true to omit the norms associated with
-       this field (this disables length normalization and index-time
-       boosting for the field, and saves some memory).  Only full-text
-       fields or fields that need an index-time boost need norms.
-       Norms are omitted for primitive (non-analyzed) types by default.
-     termVectors: [false] set to true to store the term vector for a
-       given field.
-       When using MoreLikeThis, fields used for similarity should be
-       stored for best performance.
-     termPositions: Store position information with the term vector.  
-       This will increase storage costs.
-     termOffsets: Store offset information with the term vector. This 
-       will increase storage costs.
-     required: The field is required.  It will throw an error if the
-       value does not exist
-     default: a value that should be used if no value is specified
-       when adding a document.
-    -->
-
-   <!-- field names should consist of alphanumeric or underscore characters only and
-      not start with a digit.  This is not currently strictly enforced,
-      but other field names will not have first class support from all components
-      and back compatibility is not guaranteed.  Names with both leading and
-      trailing underscores (e.g. _version_) are reserved.
-   -->
-
-   <!-- If you remove this field, you must _also_ disable the update log in solrconfig.xml
-      or Solr won't start. _version_ and update log are required for SolrCloud
-   --> 
-   <field name="_version_" type="plong" indexed="true" stored="true"/>
-   
-   <field name="content" type="text_general" indexed="true" stored="true" multiValued="true"/>
-
-   <!-- catchall field, containing all other searchable text fields (implemented
-        via copyField further on in this schema)  -->
-   <field name="text" type="text_general" indexed="true" stored="false" multiValued="true"/>
-
-   <field name="messageId" type="string" indexed="true" stored="true" required="true" multiValued="false"/>
-   <field name="subject" type="text_general" indexed="true" stored="true"/>
-   <field name="from" type="string" indexed="true" stored="true" omitNorms="true"/>
-   <field name="sentDate" type="pdate" indexed="true" stored="true"/>
-   <field name="xMailer" type="string" indexed="true" stored="true" omitNorms="true"/>
-
-   <field name="allTo" type="string" indexed="true" stored="true" omitNorms="true" multiValued="true"/>
-   <field name="flags" type="string" indexed="true" stored="true" omitNorms="true" multiValued="true"/>
-   <field name="attachment" type="text_general" indexed="true" stored="true" multiValued="true"/>
-   <field name="attachmentNames" type="string" indexed="true" stored="true" omitNorms="true" multiValued="true"/>
-
-   <!-- Dynamic field definitions allow using convention over configuration
-       for fields via the specification of patterns to match field names.
-       EXAMPLE:  name="*_i" will match any field ending in _i (like myid_i, z_i)
-       RESTRICTION: the glob-like pattern in the name attribute must have
-       a "*" only at the start or the end.  -->
-   
-   <dynamicField name="*_i"  type="pint"    indexed="true"  stored="true"/>
-   <dynamicField name="*_is" type="pint"    indexed="true"  stored="true"  multiValued="true"/>
-   <dynamicField name="*_s"  type="string"  indexed="true"  stored="true" />
-   <dynamicField name="*_s_ns"  type="string"  indexed="true"  stored="false" />
-   <dynamicField name="*_ss" type="string"  indexed="true"  stored="true" multiValued="true"/>
-   <dynamicField name="*_l"  type="plong"   indexed="true"  stored="true"/>
-   <dynamicField name="*_l_ns"  type="plong"   indexed="true"  stored="false"/>
-   <dynamicField name="*_ls" type="plong"   indexed="true"  stored="true"  multiValued="true"/>
-   <dynamicField name="*_t"  type="text_general"    indexed="true"  stored="true"/>
-   <dynamicField name="*_txt" type="text_general"   indexed="true"  stored="true" multiValued="true"/>
-   <dynamicField name="*_en"  type="text_en"    indexed="true"  stored="true" multiValued="true"/>
-   <dynamicField name="*_b"  type="boolean" indexed="true" stored="true"/>
-   <dynamicField name="*_bs" type="boolean" indexed="true" stored="true"  multiValued="true"/>
-   <dynamicField name="*_f"  type="pfloat"  indexed="true"  stored="true"/>
-   <dynamicField name="*_fs" type="pfloat"  indexed="true"  stored="true"  multiValued="true"/>
-   <dynamicField name="*_d"  type="pdouble" indexed="true"  stored="true"/>
-   <dynamicField name="*_ds" type="pdouble" indexed="true"  stored="true"  multiValued="true"/>
-
-   <!-- Type used to index the lat and lon components for the "location" FieldType -->
-   <dynamicField name="*_coordinate"  type="pdouble" indexed="true"  stored="false" />
-
-   <dynamicField name="*_dt"  type="pdate"    indexed="true"  stored="true"/>
-   <dynamicField name="*_dts" type="pdate"    indexed="true"  stored="true" multiValued="true"/>
-   <dynamicField name="*_p"  type="location" indexed="true" stored="true"/>
-
-   <dynamicField name="*_c"   type="currency" indexed="true"  stored="true"/>
-
-   <dynamicField name="ignored_*" type="ignored" multiValued="true"/>
-   <dynamicField name="attr_*" type="text_general" indexed="true" stored="true" multiValued="true"/>
-
-   <dynamicField name="random_*" type="random" />
-
-   <!-- uncomment the following to ignore any fields that don't already match an existing 
-        field name or dynamic field, rather than reporting them as an error. 
-        Alternatively, change the type="ignored" to some other type, e.g. "text", if you want 
-        unknown fields indexed and/or stored by default --> 
-   <!--dynamicField name="*" type="ignored" multiValued="true" /-->
-   
-
-
-
- <!-- Field to use to determine and enforce document uniqueness. 
-      Unless this field is marked with required="false", it will be a required field
-   -->
- <uniqueKey>messageId</uniqueKey>
-
-  <!-- copyField commands copy one field to another at the time a document
-        is added to the index.  It's used either to index the same field differently,
-        or to add multiple fields to the same field for easier/faster searching.  -->
-
-    <copyField source="content" dest="text"/>
-    <copyField source="attachmentNames" dest="text"/>
-    <copyField source="attachment" dest="text"/>
-    <copyField source="subject" dest="text"/>
-    <copyField source="allTo" dest="text"/>
-
-   <!-- Above, multiple source fields are copied to the [text] field. 
-    Another way to map multiple source fields to the same 
-    destination field is to use the dynamic field syntax. 
-    copyField also supports a maxChars to copy setting.  -->
-     
-   <!-- <copyField source="*_t" dest="text" maxChars="3000"/> -->
-
-   <!-- copy name to alphaNameSort, a field designed for sorting by name -->
-   <!-- <copyField source="name" dest="alphaNameSort"/> -->
- 
-  
-    <!-- field type definitions. The "name" attribute is
-       just a label to be used by field definitions.  The "class"
-       attribute and any other attributes determine the real
-       behavior of the fieldType.
-         Class names starting with "solr" refer to java classes in a
-       standard package such as org.apache.solr.analysis
-    -->
-
-    <!-- The StrField type is not analyzed, but indexed/stored verbatim. -->
-    <fieldType name="string" class="solr.StrField" sortMissingLast="true" />
-
-    <!-- boolean type: "true" or "false" -->
-    <fieldType name="boolean" class="solr.BoolField" sortMissingLast="true"/>
-
-    <!-- sortMissingLast and sortMissingFirst are optional attributes
-         currently supported on types that are sorted internally as strings
-         and on numeric types.
-	     This includes "string", "boolean", "pint", "pfloat", "plong", "pdate", "pdouble".
-       - If sortMissingLast="true", then a sort on this field will cause documents
-         without the field to come after documents with the field,
-         regardless of the requested sort order (asc or desc).
-       - If sortMissingFirst="true", then a sort on this field will cause documents
-         without the field to come before documents with the field,
-         regardless of the requested sort order.
-       - If sortMissingLast="false" and sortMissingFirst="false" (the default),
-         then default lucene sorting will be used which places docs without the
-         field first in an ascending sort and last in a descending sort.
-    -->
-
-    <!--
-      Numeric field types that index values using KD-trees.
-      Point fields don't support FieldCache, so they must have docValues="true" if needed for sorting, faceting, functions, etc.
-    -->
-    <fieldType name="pint" class="solr.IntPointField" docValues="true"/>
-    <fieldType name="pfloat" class="solr.FloatPointField" docValues="true"/>
-    <fieldType name="plong" class="solr.LongPointField" docValues="true"/>
-    <fieldType name="pdouble" class="solr.DoublePointField" docValues="true"/>
-    
-    <fieldType name="pints" class="solr.IntPointField" docValues="true" multiValued="true"/>
-    <fieldType name="pfloats" class="solr.FloatPointField" docValues="true" multiValued="true"/>
-    <fieldType name="plongs" class="solr.LongPointField" docValues="true" multiValued="true"/>
-    <fieldType name="pdoubles" class="solr.DoublePointField" docValues="true" multiValued="true"/>
-
-    <!-- The format for this date field is of the form 1995-12-31T23:59:59Z, and
-         is a more restricted form of the canonical representation of dateTime
-         http://www.w3.org/TR/xmlschema-2/#dateTime    
-         The trailing "Z" designates UTC time and is mandatory.
-         Optional fractional seconds are allowed: 1995-12-31T23:59:59.999Z
-         All other components are mandatory.
-
-         Expressions can also be used to denote calculations that should be
-         performed relative to "NOW" to determine the value, ie...
-
-               NOW/HOUR
-                  ... Round to the start of the current hour
-               NOW-1DAY
-                  ... Exactly 1 day prior to now
-               NOW/DAY+6MONTHS+3DAYS
-                  ... 6 months and 3 days in the future from the start of
-                      the current day
-                      
-         Consult the DatePointField javadocs for more information.
-      -->
-    <!-- KD-tree versions of date fields -->
-    <fieldType name="pdate" class="solr.DatePointField" docValues="true"/>
-    <fieldType name="pdates" class="solr.DatePointField" docValues="true" multiValued="true"/>
-    
-    <!--Binary data type. The data should be sent/retrieved as Base64 encoded Strings -->
-    <fieldType name="binary" class="solr.BinaryField"/>
-
-    <!-- The "RandomSortField" is not used to store or search any
-         data.  You can declare fields of this type in your schema
-         to generate pseudo-random orderings of your docs for sorting 
-         or function purposes.  The ordering is generated based on the field
-         name and the version of the index. As long as the index version
-         remains unchanged, and the same field name is reused,
-         the ordering of the docs will be consistent.  
-         If you want different pseudo-random orderings of documents,
-         for the same version of the index, use a dynamicField and
-         change the field name in the request.
-     -->
-    <fieldType name="random" class="solr.RandomSortField" indexed="true" />
-
-    <!-- solr.TextField allows the specification of custom text analyzers
-         specified as a tokenizer and a list of token filters. Different
-         analyzers may be specified for indexing and querying.
-
-         The optional positionIncrementGap puts space between multiple fields of
-         this type on the same document, with the purpose of preventing false phrase
-         matching across fields.
-
-         For more info on customizing your analyzer chain, please see
-         http://wiki.apache.org/solr/AnalyzersTokenizersTokenFilters
-     -->
-
-    <!-- One can also specify an existing Analyzer class that has a
-         default constructor via the class attribute on the analyzer element.
-         Example:
-    <fieldType name="text_greek" class="solr.TextField">
-      <analyzer class="org.apache.lucene.analysis.el.GreekAnalyzer"/>
-    </fieldType>
-    -->
-
-    <!-- A text field that only splits on whitespace for exact matching of words -->
-    <fieldType name="text_ws" class="solr.TextField" positionIncrementGap="100">
-      <analyzer>
-        <tokenizer name="whitespace"/>
-      </analyzer>
-    </fieldType>
-
-    <!-- A general text field that has reasonable, generic
-         cross-language defaults: it tokenizes with StandardTokenizer,
-   removes stop words from case-insensitive "stopwords.txt"
-   (empty by default), and down cases.  At query time only, it
-   also applies synonyms. -->
-    <fieldType name="text_general" class="solr.TextField" positionIncrementGap="100">
-      <analyzer type="index">
-        <tokenizer name="standard"/>
-        <filter name="stop" ignoreCase="true" words="stopwords.txt" />
-        <!-- in this example, we will only use synonyms at query time
-        <filter name="synonymGraph" synonyms="index_synonyms.txt" ignoreCase="true" expand="false"/>
-        <filter name="flattenGraph"/>
-        -->
-        <filter name="lowercase"/>
-      </analyzer>
-      <analyzer type="query">
-        <tokenizer name="standard"/>
-        <filter name="stop" ignoreCase="true" words="stopwords.txt" />
-        <filter name="synonymGraph" synonyms="synonyms.txt" ignoreCase="true" expand="true"/>
-        <filter name="lowercase"/>
-      </analyzer>
-    </fieldType>
-
-    <!-- A text field with defaults appropriate for English: it
-         tokenizes with StandardTokenizer, removes English stop words
-         (lang/stopwords_en.txt), down cases, protects words from protwords.txt, and
-         finally applies Porter's stemming.  The query time analyzer
-         also applies synonyms from synonyms.txt. -->
-    <fieldType name="text_en" class="solr.TextField" positionIncrementGap="100">
-      <analyzer type="index">
-        <tokenizer name="standard"/>
-        <!-- in this example, we will only use synonyms at query time
-        <filter name="synonymGraph" synonyms="index_synonyms.txt" ignoreCase="true" expand="false"/>
-        <filter name="flattenGraph"/>
-        -->
-        <!-- Case insensitive stop word removal.
-        -->
-        <filter name="stop"
-                ignoreCase="true"
-                words="lang/stopwords_en.txt"
-                />
-        <filter name="lowercase"/>
-  <filter name="englishPossessive"/>
-        <filter name="keywordMarker" protected="protwords.txt"/>
-  <!-- Optionally you may want to use this less aggressive stemmer instead of PorterStemFilterFactory:
-        <filter name="englishMinimalStem"/>
-  -->
-        <filter name="porterStem"/>
-      </analyzer>
-      <analyzer type="query">
-        <tokenizer name="standard"/>
-        <filter name="synonymGraph" synonyms="synonyms.txt" ignoreCase="true" expand="true"/>
-        <filter name="stop"
-                ignoreCase="true"
-                words="lang/stopwords_en.txt"
-                />
-        <filter name="lowercase"/>
-  <filter name="englishPossessive"/>
-        <filter name="keywordMarker" protected="protwords.txt"/>
-  <!-- Optionally you may want to use this less aggressive stemmer instead of PorterStemFilterFactory:
-        <filter name="englishMinimalStem"/>
-  -->
-        <filter name="porterStem"/>
-      </analyzer>
-    </fieldType>
-
-    <!-- A text field with defaults appropriate for English, plus
-   aggressive word-splitting and autophrase features enabled.
-   This field is just like text_en, except it adds
-   WordDelimiterGraphFilter to enable splitting and matching of
-   words on case-change, alpha numeric boundaries, and
-   non-alphanumeric chars.  This means certain compound word
-   cases will work, for example query "wi fi" will match
-   document "WiFi" or "wi-fi".
-        -->
-    <fieldType name="text_en_splitting" class="solr.TextField" positionIncrementGap="100" autoGeneratePhraseQueries="true">
-      <analyzer type="index">
-        <tokenizer name="whitespace"/>
-        <!-- in this example, we will only use synonyms at query time
-        <filter name="synonymGraph" synonyms="index_synonyms.txt" ignoreCase="true" expand="false"/>
-        -->
-        <!-- Case insensitive stop word removal.
-        -->
-        <filter name="stop"
-                ignoreCase="true"
-                words="lang/stopwords_en.txt"
-                />
-        <filter name="wordDelimiterGraph" generateWordParts="1" generateNumberParts="1" catenateWords="1" catenateNumbers="1" catenateAll="0" splitOnCaseChange="1"/>
-        <filter name="lowercase"/>
-        <filter name="keywordMarker" protected="protwords.txt"/>
-        <filter name="porterStem"/>
-        <filter name="flattenGraph" />
-      </analyzer>
-      <analyzer type="query">
-        <tokenizer name="whitespace"/>
-        <filter name="synonymGraph" synonyms="synonyms.txt" ignoreCase="true" expand="true"/>
-        <filter name="stop"
-                ignoreCase="true"
-                words="lang/stopwords_en.txt"
-                />
-        <filter name="wordDelimiterGraph" generateWordParts="1" generateNumberParts="1" catenateWords="0" catenateNumbers="0" catenateAll="0" splitOnCaseChange="1"/>
-        <filter name="lowercase"/>
-        <filter name="keywordMarker" protected="protwords.txt"/>
-        <filter name="porterStem"/>
-      </analyzer>
-    </fieldType>
-
-    <!-- Less flexible matching, but fewer false matches.  Probably not ideal for product names,
-         but may be good for SKUs.  Can insert dashes in the wrong place and still match. -->
-    <fieldType name="text_en_splitting_tight" class="solr.TextField" positionIncrementGap="100" autoGeneratePhraseQueries="true">
-      <analyzer type="index">
-        <tokenizer name="whitespace"/>
-        <filter name="synonymGraph" synonyms="synonyms.txt" ignoreCase="true" expand="false"/>
-        <filter name="stop" ignoreCase="true" words="lang/stopwords_en.txt"/>
-        <filter name="wordDelimiterGraph" generateWordParts="0" generateNumberParts="0" catenateWords="1" catenateNumbers="1" catenateAll="0"/>
-        <filter name="lowercase"/>
-        <filter name="keywordMarker" protected="protwords.txt"/>
-        <filter name="englishMinimalStem"/>
-        <!-- this filter can remove any duplicate tokens that appear at the same position - sometimes
-             possible with WordDelimiterGraphFilter in conjunction with stemming. -->
-        <filter name="removeDuplicates"/>
-        <filter name="flattenGraph" />
-      </analyzer>
-      <analyzer type="query">
-        <tokenizer name="whitespace"/>
-        <filter name="synonymGraph" synonyms="synonyms.txt" ignoreCase="true" expand="false"/>
-        <filter name="stop" ignoreCase="true" words="lang/stopwords_en.txt"/>
-        <filter name="wordDelimiterGraph" generateWordParts="0" generateNumberParts="0" catenateWords="1" catenateNumbers="1" catenateAll="0"/>
-        <filter name="lowercase"/>
-        <filter name="keywordMarker" protected="protwords.txt"/>
-        <filter name="englishMinimalStem"/>
-        <!-- this filter can remove any duplicate tokens that appear at the same position - sometimes
-             possible with WordDelimiterGraphFilter in conjunction with stemming. -->
-        <filter name="removeDuplicates"/>
-      </analyzer>
-    </fieldType>
-
-    <!-- Just like text_general except it reverses the characters of
-   each token, to enable more efficient leading wildcard queries. -->
-    <fieldType name="text_general_rev" class="solr.TextField" positionIncrementGap="100">
-      <analyzer type="index">
-        <tokenizer name="standard"/>
-        <filter name="stop" ignoreCase="true" words="stopwords.txt" />
-        <filter name="lowercase"/>
-        <filter name="reversedWildcard" withOriginal="true"
-           maxPosAsterisk="3" maxPosQuestion="2" maxFractionAsterisk="0.33"/>
-      </analyzer>
-      <analyzer type="query">
-        <tokenizer name="standard"/>
-        <filter name="synonymGraph" synonyms="synonyms.txt" ignoreCase="true" expand="true"/>
-        <filter name="stop" ignoreCase="true" words="stopwords.txt" />
-        <filter name="lowercase"/>
-      </analyzer>
-    </fieldType>
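-    <!-- Illustrative effect (a sketch): with the reversedWildcard filter above,
-         a leading-wildcard query such as *book can be executed internally as a
-         cheap prefix query on the reversed token (koob*). -->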
-
-    <!-- charFilter + WhitespaceTokenizer  -->
-    <!--
-    <fieldType name="text_char_norm" class="solr.TextField" positionIncrementGap="100" >
-      <analyzer>
-        <charFilter name="mapping" mapping="mapping-ISOLatin1Accent.txt"/>
-        <tokenizer name="whitespace"/>
-      </analyzer>
-    </fieldType>
-    -->
-
-    <!-- This is an example of using the KeywordTokenizer along
-         with various TokenFilterFactories to produce a sortable field
-         that does not include some properties of the source text
-      -->
-    <fieldType name="alphaOnlySort" class="solr.TextField" sortMissingLast="true" omitNorms="true">
-      <analyzer>
-        <!-- KeywordTokenizer does no actual tokenizing, so the entire
-             input string is preserved as a single token
-          -->
-        <tokenizer name="keyword"/>
-        <!-- The LowerCase TokenFilter does what you expect, which can be
-             useful when you want your sorting to be case insensitive
-          -->
-        <filter name="lowercase" />
-        <!-- The TrimFilter removes any leading or trailing whitespace -->
-        <filter name="trim" />
-        <!-- The PatternReplaceFilter gives you the flexibility to use
-             Java regular expressions to replace any sequence of characters
-             matching a pattern with an arbitrary replacement string, 
-             which may include back references to portions of the original
-             string matched by the pattern.
-             
-             See the Java Regular Expression documentation for more
-             information on pattern and replacement string syntax.
-             
-             http://docs.oracle.com/javase/8/docs/api/java/util/regex/package-summary.html
-          -->
-        <filter name="patternReplace"
-                pattern="([^a-z])" replacement="" replace="all"
-        />
-      </analyzer>
-    </fieldType>
-    
-    <fieldType name="phonetic" stored="false" indexed="true" class="solr.TextField" >
-      <analyzer>
-        <tokenizer name="standard"/>
-        <filter name="doubleMetaphone" inject="false"/>
-      </analyzer>
-    </fieldType>
-
-    <fieldType name="payloads" stored="false" indexed="true" class="solr.TextField" >
-      <analyzer>
-        <tokenizer name="whitespace"/>
-        <!--
-        The DelimitedPayloadTokenFilter can put payloads on tokens... for example,
-        a token of "foo|1.4"  would be indexed as "foo" with a payload of 1.4f
-        Attributes of the DelimitedPayloadTokenFilterFactory:
-         "delimiter" - a one character delimiter. Default is | (pipe)
-         "encoder" - how to encode the following value into a payload:
-            float -> org.apache.lucene.analysis.payloads.FloatEncoder,
-            integer -> o.a.l.a.p.IntegerEncoder
-            identity -> o.a.l.a.p.IdentityEncoder
-            or a fully qualified class name implementing PayloadEncoder; the encoder must have a no-arg constructor.
-         -->
-        <filter name="delimitedPayload" encoder="float"/>
-      </analyzer>
-    </fieldType>
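-    <!-- Illustrative document value (a sketch; the tokens are hypothetical):
-         with the payloads type above, a field value of "solr|2.0 lucene|1.5"
-         indexes the tokens "solr" and "lucene" with float payloads 2.0 and 1.5. -->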
-
-    <!-- lowercases the entire field value, keeping it as a single token.  -->
-    <fieldType name="lowercase" class="solr.TextField" positionIncrementGap="100">
-      <analyzer>
-        <tokenizer name="keyword"/>
-        <filter name="lowercase" />
-      </analyzer>
-    </fieldType>
-
-    <!-- 
-      Example of using PathHierarchyTokenizerFactory at index time, so
-      queries for paths match documents at that path, or in descendant paths
-    -->
-    <fieldType name="descendent_path" class="solr.TextField">
-      <analyzer type="index">
-  <tokenizer name="pathHierarchy" delimiter="/" />
-      </analyzer>
-      <analyzer type="query">
-  <tokenizer name="keyword" />
-      </analyzer>
-    </fieldType>
-    <!-- 
-      Example of using PathHierarchyTokenizerFactory at query time, so
-      queries for paths match documents at that path, or in ancestor paths
-    -->
-    <fieldType name="ancestor_path" class="solr.TextField">
-      <analyzer type="index">
-  <tokenizer name="keyword" />
-      </analyzer>
-      <analyzer type="query">
-  <tokenizer name="pathHierarchy" delimiter="/" />
-      </analyzer>
-    </fieldType>
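-    <!-- Illustrative behavior (a sketch; the path is hypothetical): indexing
-         "/usr/local/bin" as descendent_path produces the tokens "/usr",
-         "/usr/local" and "/usr/local/bin", so a query for "/usr/local" matches
-         that document and everything beneath it.  ancestor_path reverses this:
-         the query is the side that gets tokenized, so a query for
-         "/usr/local/bin" matches documents indexed at "/usr", "/usr/local" or
-         "/usr/local/bin". -->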
-
-    <!-- since fields of this type are by default not stored or indexed,
-         any data added to them will be ignored outright.  --> 
-    <fieldType name="ignored" stored="false" indexed="false" multiValued="true" class="solr.StrField" />
-
-    <!-- This point type indexes the coordinates as separate fields (subFields)
-      If subFieldType is defined, it references a type, and a dynamic field
-      definition is created matching *___<typename>.  Alternately, if 
-      subFieldSuffix is defined, that is used to create the subFields.
-      Example: if subFieldType="double", then the coordinates would be
-        indexed in fields myloc_0___double,myloc_1___double.
-      Example: if subFieldSuffix="_d" then the coordinates would be indexed
-        in fields myloc_0_d,myloc_1_d
-      The subFields are an implementation detail of the fieldType, and end
-      users normally should not need to know about them.
-     -->
-    <fieldType name="point" class="solr.PointType" dimension="2" subFieldSuffix="_d"/>
-
-    <!-- A specialized field for geospatial search. If indexed, this fieldType must not be multivalued. -->
-    <fieldType name="location" class="solr.LatLonType" subFieldSuffix="_coordinate"/>
-
-    <!-- An alternative geospatial field type new to Solr 4.  It supports multiValued and polygon shapes.
-      For more information about this and other Spatial fields new to Solr 4, see:
-      http://wiki.apache.org/solr/SolrAdaptersForLuceneSpatial4
-    -->
-    <fieldType name="location_rpt" class="solr.SpatialRecursivePrefixTreeFieldType"
-        geo="true" distErrPct="0.025" maxDistErr="0.001" distanceUnits="kilometers" />
-
-   <!-- Money/currency field type. See http://wiki.apache.org/solr/MoneyFieldType
-        Parameters:
-          amountLongSuffix: Required. Refers to a dynamic field for the raw amount sub-field. 
-                              The dynamic field must have a field type that extends LongValueFieldType.
-                              Note: If you expect to use Atomic Updates, this dynamic field may not be stored.
-          codeStrSuffix:    Required. Refers to a dynamic field for the currency code sub-field.
-                              The dynamic field must have a field type that extends StrField.
-                              Note: If you expect to use Atomic Updates, this dynamic field may not be stored.
-          defaultCurrency:  Specifies the default currency if none specified. Defaults to "USD"
-          providerClass:    Lets you plug in other exchange provider backend:
-                            solr.FileExchangeRateProvider is the default and takes one parameter:
-                              currencyConfig: name of an xml file holding exchange rates
-                            solr.OpenExchangeRatesOrgProvider uses rates from openexchangerates.org:
-                              ratesFileLocation: URL or path to rates JSON file (default latest.json on the web)
-                              refreshInterval: Number of minutes between each rates fetch (default: 1440, min: 60)
-   -->
-    <fieldType name="currency" class="solr.CurrencyFieldType" amountLongSuffix="_l_ns" codeStrSuffix="_s_ns"
-               defaultCurrency="USD" currencyConfig="currency.xml" />
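-    <!-- Illustrative usage (a sketch; the field name is hypothetical): with the
-         *_c dynamicField above, a document value such as price_c="10.00,USD"
-         stores the raw amount and currency code in the *_l_ns and *_s_ns
-         sub-fields declared earlier. -->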
-
-
-   <!-- some examples for different languages (generally ordered by ISO code) -->
-
-    <!-- Arabic -->
-    <fieldType name="text_ar" class="solr.TextField" positionIncrementGap="100">
-      <analyzer> 
-        <tokenizer name="standard"/>
-        <!-- for any non-arabic -->
-        <filter name="lowercase"/>
-        <filter name="stop" ignoreCase="true" words="lang/stopwords_ar.txt" />
-        <!-- normalizes ﻯ to ﻱ, etc -->
-        <filter name="arabicNormalization"/>
-        <filter name="arabicStem"/>
-      </analyzer>
-    </fieldType>
-
-    <!-- Bulgarian -->
-    <fieldType name="text_bg" class="solr.TextField" positionIncrementGap="100">
-      <analyzer> 
-        <tokenizer name="standard"/> 
-        <filter name="lowercase"/>
-        <filter name="stop" ignoreCase="true" words="lang/stopwords_bg.txt" /> 
-        <filter name="bulgarianStem"/>
-      </analyzer>
-    </fieldType>
-    
-    <!-- Catalan -->
-    <fieldType name="text_ca" class="solr.TextField" positionIncrementGap="100">
-      <analyzer> 
-        <tokenizer name="standard"/>
-        <!-- removes l', etc -->
-        <filter name="elision" ignoreCase="true" articles="lang/contractions_ca.txt"/>
-        <filter name="lowercase"/>
-        <filter name="stop" ignoreCase="true" words="lang/stopwords_ca.txt" />
-        <filter name="snowballPorter" language="Catalan"/>       
-      </analyzer>
-    </fieldType>
-    
-    <!-- CJK bigram (see text_ja for a Japanese configuration using morphological analysis) -->
-    <fieldType name="text_cjk" class="solr.TextField" positionIncrementGap="100">
-      <analyzer>
-        <tokenizer name="standard"/>
-        <!-- normalize width before bigramming, so that e.g. half-width dakuten combine  -->
-        <filter name="cjkWidth"/>
-        <!-- for any non-CJK -->
-        <filter name="lowercase"/>
-        <filter name="cjkBigram"/>
-      </analyzer>
-    </fieldType>
-
-    <!-- Kurdish -->
-    <fieldType name="text_ckb" class="solr.TextField" positionIncrementGap="100">
-      <analyzer>
-        <tokenizer name="standard"/>
-        <filter name="soraniNormalization"/>
-        <!-- for any latin text -->
-        <filter name="lowercase"/>
-        <filter name="stop" ignoreCase="true" words="lang/stopwords_ckb.txt"/>
-        <filter name="soraniStem"/>
-      </analyzer>
-    </fieldType>
-
-    <!-- Czech -->
-    <fieldType name="text_cz" class="solr.TextField" positionIncrementGap="100">
-      <analyzer> 
-        <tokenizer name="standard"/>
-        <filter name="lowercase"/>
-        <filter name="stop" ignoreCase="true" words="lang/stopwords_cz.txt" />
-        <filter name="czechStem"/>
-      </analyzer>
-    </fieldType>
-    
-    <!-- Danish -->
-    <fieldType name="text_da" class="solr.TextField" positionIncrementGap="100">
-      <analyzer> 
-        <tokenizer name="standard"/>
-        <filter name="lowercase"/>
-        <filter name="stop" ignoreCase="true" words="lang/stopwords_da.txt" format="snowball" />
-        <filter name="snowballPorter" language="Danish"/>       
-      </analyzer>
-    </fieldType>
-    
-    <!-- German -->
-    <fieldType name="text_de" class="solr.TextField" positionIncrementGap="100">
-      <analyzer> 
-        <tokenizer name="standard"/>
-        <filter name="lowercase"/>
-        <filter name="stop" ignoreCase="true" words="lang/stopwords_de.txt" format="snowball" />
-        <filter name="germanNormalization"/>
-        <filter name="germanLightStem"/>
-        <!-- less aggressive: <filter name="germanMinimalStem"/> -->
-        <!-- more aggressive: <filter name="snowballPorter" language="German2"/> -->
-      </analyzer>
-    </fieldType>
-    
-    <!-- Greek -->
-    <fieldType name="text_el" class="solr.TextField" positionIncrementGap="100">
-      <analyzer> 
-        <tokenizer name="standard"/>
-        <!-- greek specific lowercase for sigma -->
-        <filter name="greekLowercase"/>
-        <filter name="stop" ignoreCase="false" words="lang/stopwords_el.txt" />
-        <filter name="greekStem"/>
-      </analyzer>
-    </fieldType>
-    
-    <!-- Spanish -->
-    <fieldType name="text_es" class="solr.TextField" positionIncrementGap="100">
-      <analyzer> 
-        <tokenizer name="standard"/>
-        <filter name="lowercase"/>
-        <filter name="stop" ignoreCase="true" words="lang/stopwords_es.txt" format="snowball" />
-        <filter name="spanishLightStem"/>
-        <!-- more aggressive: <filter name="snowballPorter" language="Spanish"/> -->
-      </analyzer>
-    </fieldType>
-    
-    <!-- Basque -->
-    <fieldType name="text_eu" class="solr.TextField" positionIncrementGap="100">
-      <analyzer> 
-        <tokenizer name="standard"/>
-        <filter name="lowercase"/>
-        <filter name="stop" ignoreCase="true" words="lang/stopwords_eu.txt" />
-        <filter name="snowballPorter" language="Basque"/>
-      </analyzer>
-    </fieldType>
-    
-    <!-- Persian -->
-    <fieldType name="text_fa" class="solr.TextField" positionIncrementGap="100">
-      <analyzer>
-        <!-- for ZWNJ -->
-        <charFilter name="persian"/>
-        <tokenizer name="standard"/>
-        <filter name="lowercase"/>
-        <filter name="arabicNormalization"/>
-        <filter name="persianNormalization"/>
-        <filter name="stop" ignoreCase="true" words="lang/stopwords_fa.txt" />
-      </analyzer>
-    </fieldType>
-    
-    <!-- Finnish -->
-    <fieldType name="text_fi" class="solr.TextField" positionIncrementGap="100">
-      <analyzer> 
-        <tokenizer name="standard"/>
-        <filter name="lowercase"/>
-        <filter name="stop" ignoreCase="true" words="lang/stopwords_fi.txt" format="snowball" />
-        <filter name="snowballPorter" language="Finnish"/>
-        <!-- less aggressive: <filter name="finnishLightStem"/> -->
-      </analyzer>
-    </fieldType>
-    
-    <!-- French -->
-    <fieldType name="text_fr" class="solr.TextField" positionIncrementGap="100">
-      <analyzer> 
-        <tokenizer name="standard"/>
-        <!-- removes l', etc -->
-        <filter name="elision" ignoreCase="true" articles="lang/contractions_fr.txt"/>
-        <filter name="lowercase"/>
-        <filter name="stop" ignoreCase="true" words="lang/stopwords_fr.txt" format="snowball" />
-        <filter name="frenchLightStem"/>
-        <!-- less aggressive: <filter name="frenchMinimalStem"/> -->
-        <!-- more aggressive: <filter name="snowballPorter" language="French"/> -->
-      </analyzer>
-    </fieldType>
-    
-    <!-- Irish -->
-    <fieldType name="text_ga" class="solr.TextField" positionIncrementGap="100">
-      <analyzer> 
-        <tokenizer name="standard"/>
-        <!-- removes d', etc -->
-        <filter name="elision" ignoreCase="true" articles="lang/contractions_ga.txt"/>
-        <!-- removes n-, etc. position increments is intentionally false! -->
-        <filter name="stop" ignoreCase="true" words="lang/hyphenations_ga.txt"/>
-        <filter name="irishLowercase"/>
-        <filter name="stop" ignoreCase="true" words="lang/stopwords_ga.txt"/>
-        <filter name="snowballPorter" language="Irish"/>
-      </analyzer>
-    </fieldType>
-    
-    <!-- Galician -->
-    <fieldType name="text_gl" class="solr.TextField" positionIncrementGap="100">
-      <analyzer> 
-        <tokenizer name="standard"/>
-        <filter name="lowercase"/>
-        <filter name="stop" ignoreCase="true" words="lang/stopwords_gl.txt" />
-        <filter name="galicianStem"/>
-        <!-- less aggressive: <filter name="galicianMinimalStem"/> -->
-      </analyzer>
-    </fieldType>
-    
-    <!-- Hindi -->
-    <fieldType name="text_hi" class="solr.TextField" positionIncrementGap="100">
-      <analyzer> 
-        <tokenizer name="standard"/>
-        <filter name="lowercase"/>
-        <!-- normalizes unicode representation -->
-        <filter name="indicNormalization"/>
-        <!-- normalizes variation in spelling -->
-        <filter name="hindiNormalization"/>
-        <filter name="stop" ignoreCase="true" words="lang/stopwords_hi.txt" />
-        <filter name="hindiStem"/>
-      </analyzer>
-    </fieldType>
-    
-    <!-- Hungarian -->
-    <fieldType name="text_hu" class="solr.TextField" positionIncrementGap="100">
-      <analyzer> 
-        <tokenizer name="standard"/>
-        <filter name="lowercase"/>
-        <filter name="stop" ignoreCase="true" words="lang/stopwords_hu.txt" format="snowball" />
-        <filter name="snowballPorter" language="Hungarian"/>
-        <!-- less aggressive: <filter name="hungarianLightStem"/> -->
-      </analyzer>
-    </fieldType>
-    
-    <!-- Armenian -->
-    <fieldType name="text_hy" class="solr.TextField" positionIncrementGap="100">
-      <analyzer> 
-        <tokenizer name="standard"/>
-        <filter name="lowercase"/>
-        <filter name="stop" ignoreCase="true" words="lang/stopwords_hy.txt" />
-        <filter name="snowballPorter" language="Armenian"/>
-      </analyzer>
-    </fieldType>
-    
-    <!-- Indonesian -->
-    <fieldType name="text_id" class="solr.TextField" positionIncrementGap="100">
-      <analyzer> 
-        <tokenizer name="standard"/>
-        <filter name="lowercase"/>
-        <filter name="stop" ignoreCase="true" words="lang/stopwords_id.txt" />
-        <!-- for a less aggressive approach (only inflectional suffixes), set stemDerivational to false -->
-        <filter name="indonesianStem" stemDerivational="true"/>
-      </analyzer>
-    </fieldType>
-    
-    <!-- Italian -->
-    <fieldType name="text_it" class="solr.TextField" positionIncrementGap="100">
-      <analyzer> 
-        <tokenizer name="standard"/>
-        <!-- removes l', etc -->
-        <filter name="elision" ignoreCase="true" articles="lang/contractions_it.txt"/>
-        <filter name="lowercase"/>
-        <filter name="stop" ignoreCase="true" words="lang/stopwords_it.txt" format="snowball" />
-        <filter name="italianLightStem"/>
-        <!-- more aggressive: <filter name="snowballPorter" language="Italian"/> -->
-      </analyzer>
-    </fieldType>
-    
-    <!-- Japanese using morphological analysis (see text_cjk for a configuration using bigramming)
-
-         NOTE: If you want to optimize search for precision, use default operator AND in your request
-         handler config (q.op). Use OR if you would like to optimize for recall (default).
-    -->
-    <fieldType name="text_ja" class="solr.TextField" positionIncrementGap="100" autoGeneratePhraseQueries="false">
-      <analyzer>
-      <!-- Kuromoji Japanese morphological analyzer/tokenizer (JapaneseTokenizer)
-
-           Kuromoji has a search mode (default) that does segmentation useful for search.  A heuristic
-           is used to segment compounds into their parts, and the compound itself is kept as a synonym.
-
-           Valid values for attribute mode are:
-              normal: regular segmentation
-              search: segmentation useful for search, with compounds kept as synonyms (default)
-            extended: same as search mode, but unknown words are split into unigrams (experimental)
-
-           For some applications it might be good to use search mode for indexing and normal mode for
-           queries to reduce recall and prevent parts of compounds from being matched and highlighted.
-           Use <analyzer type="index"> and <analyzer type="query"> for this, with mode="normal" on the query side.
-
-           Kuromoji also has a convenient user dictionary feature that allows overriding the statistical
-           model with your own entries for segmentation, part-of-speech tags and readings without a need
-           to specify weights.  Notice that user dictionaries have not been subject to extensive testing.
-
-           User dictionary attributes are:
-                     userDictionary: user dictionary filename
-             userDictionaryEncoding: user dictionary encoding (default is UTF-8)
-
-           See lang/userdict_ja.txt for a sample user dictionary file.
-
-           Punctuation characters are discarded by default.  Use discardPunctuation="false" to keep them.
-
-           See http://wiki.apache.org/solr/JapaneseLanguageSupport for more on Japanese language support.
-        -->
-        <tokenizer name="japanese" mode="search"/>
-        <!--<tokenizer name="japanese" mode="search" userDictionary="lang/userdict_ja.txt"/>-->
-        <!-- Reduces inflected verbs and adjectives to their base/dictionary forms (辞書形) -->
-        <filter name="japaneseBaseForm"/>
-        <!-- Removes tokens with certain part-of-speech tags -->
-        <filter name="japanesePartOfSpeechStop" tags="lang/stoptags_ja.txt" />
-        <!-- Normalizes full-width romaji to half-width and half-width kana to full-width (Unicode NFKC subset) -->
-        <filter name="cjkWidth"/>
-        <!-- Removes common tokens that are typically not useful for search and can have a negative effect on ranking -->
-        <filter name="stop" ignoreCase="true" words="lang/stopwords_ja.txt" />
-        <!-- Normalizes common katakana spelling variations by removing any last long sound character (U+30FC) -->
-        <filter name="japaneseKatakanaStem" minimumLength="4"/>
-        <!-- Lower-cases romaji characters -->
-        <filter name="lowercase"/>
-      </analyzer>
-    </fieldType>
-    
-    <!-- Korean morphological analysis -->
-    <dynamicField name="*_txt_ko" type="text_ko"  indexed="true"  stored="true"/>
-    <fieldType name="text_ko" class="solr.TextField" positionIncrementGap="100">
-      <analyzer>
-        <!-- Nori Korean morphological analyzer/tokenizer (KoreanTokenizer)
-          The Korean (nori) analyzer integrates the Lucene Nori analysis module into Solr.
-          It uses the mecab-ko-dic dictionary to perform morphological analysis of Korean texts.
-
-          This dictionary was built with MeCab; it defines a feature format adapted
-          for the Korean language.
-          
-          Nori also has a convenient user dictionary feature that allows overriding the statistical
-          model with your own entries for segmentation, part-of-speech tags and readings without a need
-          to specify weights. Notice that user dictionaries have not been subject to extensive testing.
-
-          The tokenizer supports multiple schema attributes:
-            * userDictionary: User dictionary path.
-            * userDictionaryEncoding: User dictionary encoding.
-            * decompoundMode: Decompound mode. Either 'none', 'discard', or 'mixed'. Default is 'discard'.
-            * outputUnknownUnigrams: If true, outputs unigrams for unknown words.
-        -->
-        <tokenizer name="korean" decompoundMode="discard" outputUnknownUnigrams="false"/>
-        <!-- Removes tokens with certain part-of-speech tags, such as EOMI (Pos.E). You can add a
-          'tags' parameter listing the tags to remove. By default it removes: 
-          E, IC, J, MAG, MAJ, MM, SP, SSC, SSO, SC, SE, XPN, XSA, XSN, XSV, UNA, NA, VSV
-          This is roughly equivalent to stemming.
-        -->
-        <filter name="koreanPartOfSpeechStop" />
-        <!-- Replaces term text with the Hangul transcription of Hanja characters, if applicable: -->
-        <filter name="koreanReadingForm" />
-        <filter name="lowercase" />
-      </analyzer>
-    </fieldType>
-
-    <!-- Latvian -->
-    <fieldType name="text_lv" class="solr.TextField" positionIncrementGap="100">
-      <analyzer> 
-        <tokenizer name="standard"/>
-        <filter name="lowercase"/>
-        <filter name="stop" ignoreCase="true" words="lang/stopwords_lv.txt" />
-        <filter name="latvianStem"/>
-      </analyzer>
-    </fieldType>
-    
-    <!-- Dutch -->
-    <fieldType name="text_nl" class="solr.TextField" positionIncrementGap="100">
-      <analyzer> 
-        <tokenizer name="standard"/>
-        <filter name="lowercase"/>
-        <filter name="stop" ignoreCase="true" words="lang/stopwords_nl.txt" format="snowball" />
-        <filter name="stemmerOverride" dictionary="lang/stemdict_nl.txt" ignoreCase="false"/>
-        <filter name="snowballPorter" language="Dutch"/>
-      </analyzer>
-    </fieldType>
-    
-    <!-- Norwegian -->
-    <fieldType name="text_no" class="solr.TextField" positionIncrementGap="100">
-      <analyzer> 
-        <tokenizer name="standard"/>
-        <filter name="lowercase"/>
-        <filter name="stop" ignoreCase="true" words="lang/stopwords_no.txt" format="snowball" />
-        <filter name="snowballPorter" language="Norwegian"/>
-        <!-- less aggressive: <filter name="norwegianLightStem" variant="nb"/> -->
-        <!-- singular/plural: <filter name="norwegianMinimalStem" variant="nb"/> -->
-        <!-- The "light" and "minimal" stemmers support variants: nb=Bokmål, nn=Nynorsk, no=Both -->
-      </analyzer>
-    </fieldType>
-    
-    <!-- Portuguese -->
-    <fieldType name="text_pt" class="solr.TextField" positionIncrementGap="100">
-      <analyzer> 
-        <tokenizer name="standard"/>
-        <filter name="lowercase"/>
-        <filter name="stop" ignoreCase="true" words="lang/stopwords_pt.txt" format="snowball" />
-        <filter name="portugueseLightStem"/>
-        <!-- less aggressive: <filter name="portugueseMinimalStem"/> -->
-        <!-- more aggressive: <filter name="snowballPorter" language="Portuguese"/> -->
-        <!-- most aggressive: <filter name="portugueseStem"/> -->
-      </analyzer>
-    </fieldType>
-    
-    <!-- Romanian -->
-    <fieldType name="text_ro" class="solr.TextField" positionIncrementGap="100">
-      <analyzer> 
-        <tokenizer name="standard"/>
-        <filter name="lowercase"/>
-        <filter name="stop" ignoreCase="true" words="lang/stopwords_ro.txt" />
-        <filter name="snowballPorter" language="Romanian"/>
-      </analyzer>
-    </fieldType>
-    
-    <!-- Russian -->
-    <fieldType name="text_ru" class="solr.TextField" positionIncrementGap="100">
-      <analyzer> 
-        <tokenizer name="standard"/>
-        <filter name="lowercase"/>
-        <filter name="stop" ignoreCase="true" words="lang/stopwords_ru.txt" format="snowball" />
-        <filter name="snowballPorter" language="Russian"/>
-        <!-- less aggressive: <filter name="russianLightStem"/> -->
-      </analyzer>
-    </fieldType>
-    
-    <!-- Swedish -->
-    <fieldType name="text_sv" class="solr.TextField" positionIncrementGap="100">
-      <analyzer> 
-        <tokenizer name="standard"/>
-        <filter name="lowercase"/>
-        <filter name="stop" ignoreCase="true" words="lang/stopwords_sv.txt" format="snowball" />
-        <filter name="snowballPorter" language="Swedish"/>
-        <!-- less aggressive: <filter name="swedishLightStem"/> -->
-      </analyzer>
-    </fieldType>
-    
-    <!-- Thai -->
-    <fieldType name="text_th" class="solr.TextField" positionIncrementGap="100">
-      <analyzer> 
-        <tokenizer name="thai"/>
-        <filter name="lowercase"/>
-        <filter name="stop" ignoreCase="true" words="lang/stopwords_th.txt" />
-      </analyzer>
-    </fieldType>
-    
-    <!-- Turkish -->
-    <fieldType name="text_tr" class="solr.TextField" positionIncrementGap="100">
-      <analyzer> 
-        <tokenizer name="standard"/>
-        <filter name="apostrophe"/>
-        <filter name="turkishLowercase"/>
-        <filter name="stop" ignoreCase="false" words="lang/stopwords_tr.txt" />
-        <filter name="snowballPorter" language="Turkish"/>
-      </analyzer>
-    </fieldType>
-  
-  <!-- Similarity is the scoring routine for each document vs. a query.
-       A custom Similarity or SimilarityFactory may be specified here, but 
-       the default is fine for most applications.  
-       For more info: http://wiki.apache.org/solr/SchemaXml#Similarity
-    -->
-  <!--
-     <similarity class="com.example.solr.CustomSimilarityFactory">
-       <str name="paramkey">param value</str>
-     </similarity>
-    -->
-
-</schema>
diff --git a/solr/example/example-DIH/solr/mail/conf/mapping-FoldToASCII.txt b/solr/example/example-DIH/solr/mail/conf/mapping-FoldToASCII.txt
deleted file mode 100644
index 9a84b6e..0000000
--- a/solr/example/example-DIH/solr/mail/conf/mapping-FoldToASCII.txt
+++ /dev/null
@@ -1,3813 +0,0 @@
-# The ASF licenses this file to You under the Apache License, Version 2.0
-# (the "License"); you may not use this file except in compliance with
-# the License.  You may obtain a copy of the License at
-#
-#     http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-
-# This map converts alphabetic, numeric, and symbolic Unicode characters
-# which are not in the first 127 ASCII characters (the "Basic Latin" Unicode
-# block) into their ASCII equivalents, if one exists.
-#
-# Characters from the following Unicode blocks are converted; however, only
-# those characters with reasonable ASCII alternatives are converted:
-#
-# - C1 Controls and Latin-1 Supplement: http://www.unicode.org/charts/PDF/U0080.pdf
-# - Latin Extended-A: http://www.unicode.org/charts/PDF/U0100.pdf
-# - Latin Extended-B: http://www.unicode.org/charts/PDF/U0180.pdf
-# - Latin Extended Additional: http://www.unicode.org/charts/PDF/U1E00.pdf
-# - Latin Extended-C: http://www.unicode.org/charts/PDF/U2C60.pdf
-# - Latin Extended-D: http://www.unicode.org/charts/PDF/UA720.pdf
-# - IPA Extensions: http://www.unicode.org/charts/PDF/U0250.pdf
-# - Phonetic Extensions: http://www.unicode.org/charts/PDF/U1D00.pdf
-# - Phonetic Extensions Supplement: http://www.unicode.org/charts/PDF/U1D80.pdf
-# - General Punctuation: http://www.unicode.org/charts/PDF/U2000.pdf
-# - Superscripts and Subscripts: http://www.unicode.org/charts/PDF/U2070.pdf
-# - Enclosed Alphanumerics: http://www.unicode.org/charts/PDF/U2460.pdf
-# - Dingbats: http://www.unicode.org/charts/PDF/U2700.pdf
-# - Supplemental Punctuation: http://www.unicode.org/charts/PDF/U2E00.pdf
-# - Alphabetic Presentation Forms: http://www.unicode.org/charts/PDF/UFB00.pdf
-# - Halfwidth and Fullwidth Forms: http://www.unicode.org/charts/PDF/UFF00.pdf
-#  
-# See: http://en.wikipedia.org/wiki/Latin_characters_in_Unicode
-#
-# The set of character conversions supported by this map is a superset of
-# those supported by the map represented by mapping-ISOLatin1Accent.txt.
-#
-# See the bottom of this file for the Perl script used to generate the contents
-# of this file (without this header) from ASCIIFoldingFilter.java.
-
-
-# Syntax:
-#   "source" => "target"
-#     "source".length() > 0 (source cannot be empty.)
-#     "target".length() >= 0 (target can be empty.)
-
-
-# À  [LATIN CAPITAL LETTER A WITH GRAVE]
-"\u00C0" => "A"
-
-# Á  [LATIN CAPITAL LETTER A WITH ACUTE]
-"\u00C1" => "A"
-
-# Â  [LATIN CAPITAL LETTER A WITH CIRCUMFLEX]
-"\u00C2" => "A"
-
-# Ã  [LATIN CAPITAL LETTER A WITH TILDE]
-"\u00C3" => "A"
-
-# Ä  [LATIN CAPITAL LETTER A WITH DIAERESIS]
-"\u00C4" => "A"
-
-# Å  [LATIN CAPITAL LETTER A WITH RING ABOVE]
-"\u00C5" => "A"
-
-# Ā  [LATIN CAPITAL LETTER A WITH MACRON]
-"\u0100" => "A"
-
-# Ă  [LATIN CAPITAL LETTER A WITH BREVE]
-"\u0102" => "A"
-
-# Ą  [LATIN CAPITAL LETTER A WITH OGONEK]
-"\u0104" => "A"
-
-# Ə  http://en.wikipedia.org/wiki/Schwa  [LATIN CAPITAL LETTER SCHWA]
-"\u018F" => "A"
-
-# Ǎ  [LATIN CAPITAL LETTER A WITH CARON]
-"\u01CD" => "A"
-
-# Ǟ  [LATIN CAPITAL LETTER A WITH DIAERESIS AND MACRON]
-"\u01DE" => "A"
-
-# Ǡ  [LATIN CAPITAL LETTER A WITH DOT ABOVE AND MACRON]
-"\u01E0" => "A"
-
-# Ǻ  [LATIN CAPITAL LETTER A WITH RING ABOVE AND ACUTE]
-"\u01FA" => "A"
-
-# Ȁ  [LATIN CAPITAL LETTER A WITH DOUBLE GRAVE]
-"\u0200" => "A"
-
-# Ȃ  [LATIN CAPITAL LETTER A WITH INVERTED BREVE]
-"\u0202" => "A"
-
-# Ȧ  [LATIN CAPITAL LETTER A WITH DOT ABOVE]
-"\u0226" => "A"
-
-# Ⱥ  [LATIN CAPITAL LETTER A WITH STROKE]
-"\u023A" => "A"
-
-# ᴀ  [LATIN LETTER SMALL CAPITAL A]
-"\u1D00" => "A"
-
-# Ḁ  [LATIN CAPITAL LETTER A WITH RING BELOW]
-"\u1E00" => "A"
-
-# Ạ  [LATIN CAPITAL LETTER A WITH DOT BELOW]
-"\u1EA0" => "A"
-
-# Ả  [LATIN CAPITAL LETTER A WITH HOOK ABOVE]
-"\u1EA2" => "A"
-
-# Ấ  [LATIN CAPITAL LETTER A WITH CIRCUMFLEX AND ACUTE]
-"\u1EA4" => "A"
-
-# Ầ  [LATIN CAPITAL LETTER A WITH CIRCUMFLEX AND GRAVE]
-"\u1EA6" => "A"
-
-# Ẩ  [LATIN CAPITAL LETTER A WITH CIRCUMFLEX AND HOOK ABOVE]
-"\u1EA8" => "A"
-
-# Ẫ  [LATIN CAPITAL LETTER A WITH CIRCUMFLEX AND TILDE]
-"\u1EAA" => "A"
-
-# Ậ  [LATIN CAPITAL LETTER A WITH CIRCUMFLEX AND DOT BELOW]
-"\u1EAC" => "A"
-
-# Ắ  [LATIN CAPITAL LETTER A WITH BREVE AND ACUTE]
-"\u1EAE" => "A"
-
-# Ằ  [LATIN CAPITAL LETTER A WITH BREVE AND GRAVE]
-"\u1EB0" => "A"
-
-# Ẳ  [LATIN CAPITAL LETTER A WITH BREVE AND HOOK ABOVE]
-"\u1EB2" => "A"
-
-# Ẵ  [LATIN CAPITAL LETTER A WITH BREVE AND TILDE]
-"\u1EB4" => "A"
-
-# Ặ  [LATIN CAPITAL LETTER A WITH BREVE AND DOT BELOW]
-"\u1EB6" => "A"
-
-# Ⓐ  [CIRCLED LATIN CAPITAL LETTER A]
-"\u24B6" => "A"
-
-# Ａ  [FULLWIDTH LATIN CAPITAL LETTER A]
-"\uFF21" => "A"
-
-# à  [LATIN SMALL LETTER A WITH GRAVE]
-"\u00E0" => "a"
-
-# á  [LATIN SMALL LETTER A WITH ACUTE]
-"\u00E1" => "a"
-
-# â  [LATIN SMALL LETTER A WITH CIRCUMFLEX]
-"\u00E2" => "a"
-
-# ã  [LATIN SMALL LETTER A WITH TILDE]
-"\u00E3" => "a"
-
-# ä  [LATIN SMALL LETTER A WITH DIAERESIS]
-"\u00E4" => "a"
-
-# å  [LATIN SMALL LETTER A WITH RING ABOVE]
-"\u00E5" => "a"
-
-# ā  [LATIN SMALL LETTER A WITH MACRON]
-"\u0101" => "a"
-
-# ă  [LATIN SMALL LETTER A WITH BREVE]
-"\u0103" => "a"
-
-# ą  [LATIN SMALL LETTER A WITH OGONEK]
-"\u0105" => "a"
-
-# ǎ  [LATIN SMALL LETTER A WITH CARON]
-"\u01CE" => "a"
-
-# ǟ  [LATIN SMALL LETTER A WITH DIAERESIS AND MACRON]
-"\u01DF" => "a"
-
-# ǡ  [LATIN SMALL LETTER A WITH DOT ABOVE AND MACRON]
-"\u01E1" => "a"
-
-# ǻ  [LATIN SMALL LETTER A WITH RING ABOVE AND ACUTE]
-"\u01FB" => "a"
-
-# ȁ  [LATIN SMALL LETTER A WITH DOUBLE GRAVE]
-"\u0201" => "a"
-
-# ȃ  [LATIN SMALL LETTER A WITH INVERTED BREVE]
-"\u0203" => "a"
-
-# ȧ  [LATIN SMALL LETTER A WITH DOT ABOVE]
-"\u0227" => "a"
-
-# ɐ  [LATIN SMALL LETTER TURNED A]
-"\u0250" => "a"
-
-# ə  [LATIN SMALL LETTER SCHWA]
-"\u0259" => "a"
-
-# ɚ  [LATIN SMALL LETTER SCHWA WITH HOOK]
-"\u025A" => "a"
-
-# ᶏ  [LATIN SMALL LETTER A WITH RETROFLEX HOOK]
-"\u1D8F" => "a"
-
-# ᶕ  [LATIN SMALL LETTER SCHWA WITH RETROFLEX HOOK]
-"\u1D95" => "a"
-
-# ḁ  [LATIN SMALL LETTER A WITH RING BELOW]
-"\u1E01" => "a"
-
-# ẚ  [LATIN SMALL LETTER A WITH RIGHT HALF RING]
-"\u1E9A" => "a"
-
-# ạ  [LATIN SMALL LETTER A WITH DOT BELOW]
-"\u1EA1" => "a"
-
-# ả  [LATIN SMALL LETTER A WITH HOOK ABOVE]
-"\u1EA3" => "a"
-
-# ấ  [LATIN SMALL LETTER A WITH CIRCUMFLEX AND ACUTE]
-"\u1EA5" => "a"
-
-# ầ  [LATIN SMALL LETTER A WITH CIRCUMFLEX AND GRAVE]
-"\u1EA7" => "a"
-
-# ẩ  [LATIN SMALL LETTER A WITH CIRCUMFLEX AND HOOK ABOVE]
-"\u1EA9" => "a"
-
-# ẫ  [LATIN SMALL LETTER A WITH CIRCUMFLEX AND TILDE]
-"\u1EAB" => "a"
-
-# ậ  [LATIN SMALL LETTER A WITH CIRCUMFLEX AND DOT BELOW]
-"\u1EAD" => "a"
-
-# ắ  [LATIN SMALL LETTER A WITH BREVE AND ACUTE]
-"\u1EAF" => "a"
-
-# ằ  [LATIN SMALL LETTER A WITH BREVE AND GRAVE]
-"\u1EB1" => "a"
-
-# ẳ  [LATIN SMALL LETTER A WITH BREVE AND HOOK ABOVE]
-"\u1EB3" => "a"
-
-# ẵ  [LATIN SMALL LETTER A WITH BREVE AND TILDE]
-"\u1EB5" => "a"
-
-# ặ  [LATIN SMALL LETTER A WITH BREVE AND DOT BELOW]
-"\u1EB7" => "a"
-
-# ₐ  [LATIN SUBSCRIPT SMALL LETTER A]
-"\u2090" => "a"
-
-# ₔ  [LATIN SUBSCRIPT SMALL LETTER SCHWA]
-"\u2094" => "a"
-
-# ⓐ  [CIRCLED LATIN SMALL LETTER A]
-"\u24D0" => "a"
-
-# ⱥ  [LATIN SMALL LETTER A WITH STROKE]
-"\u2C65" => "a"
-
-# Ɐ  [LATIN CAPITAL LETTER TURNED A]
-"\u2C6F" => "a"
-
-# ａ  [FULLWIDTH LATIN SMALL LETTER A]
-"\uFF41" => "a"
-
-# Ꜳ  [LATIN CAPITAL LETTER AA]
-"\uA732" => "AA"
-
-# Æ  [LATIN CAPITAL LETTER AE]
-"\u00C6" => "AE"
-
-# Ǣ  [LATIN CAPITAL LETTER AE WITH MACRON]
-"\u01E2" => "AE"
-
-# Ǽ  [LATIN CAPITAL LETTER AE WITH ACUTE]
-"\u01FC" => "AE"
-
-# ᴁ  [LATIN LETTER SMALL CAPITAL AE]
-"\u1D01" => "AE"
-
-# Ꜵ  [LATIN CAPITAL LETTER AO]
-"\uA734" => "AO"
-
-# Ꜷ  [LATIN CAPITAL LETTER AU]
-"\uA736" => "AU"
-
-# Ꜹ  [LATIN CAPITAL LETTER AV]
-"\uA738" => "AV"
-
-# Ꜻ  [LATIN CAPITAL LETTER AV WITH HORIZONTAL BAR]
-"\uA73A" => "AV"
-
-# Ꜽ  [LATIN CAPITAL LETTER AY]
-"\uA73C" => "AY"
-
-# ⒜  [PARENTHESIZED LATIN SMALL LETTER A]
-"\u249C" => "(a)"
-
-# ꜳ  [LATIN SMALL LETTER AA]
-"\uA733" => "aa"
-
-# æ  [LATIN SMALL LETTER AE]
-"\u00E6" => "ae"
-
-# ǣ  [LATIN SMALL LETTER AE WITH MACRON]
-"\u01E3" => "ae"
-
-# ǽ  [LATIN SMALL LETTER AE WITH ACUTE]
-"\u01FD" => "ae"
-
-# ᴂ  [LATIN SMALL LETTER TURNED AE]
-"\u1D02" => "ae"
-
-# ꜵ  [LATIN SMALL LETTER AO]
-"\uA735" => "ao"
-
-# ꜷ  [LATIN SMALL LETTER AU]
-"\uA737" => "au"
-
-# ꜹ  [LATIN SMALL LETTER AV]
-"\uA739" => "av"
-
-# ꜻ  [LATIN SMALL LETTER AV WITH HORIZONTAL BAR]
-"\uA73B" => "av"
-
-# ꜽ  [LATIN SMALL LETTER AY]
-"\uA73D" => "ay"
-
-# Ɓ  [LATIN CAPITAL LETTER B WITH HOOK]
-"\u0181" => "B"
-
-# Ƃ  [LATIN CAPITAL LETTER B WITH TOPBAR]
-"\u0182" => "B"
-
-# Ƀ  [LATIN CAPITAL LETTER B WITH STROKE]
-"\u0243" => "B"
-
-# ʙ  [LATIN LETTER SMALL CAPITAL B]
-"\u0299" => "B"
-
-# ᴃ  [LATIN LETTER SMALL CAPITAL BARRED B]
-"\u1D03" => "B"
-
-# Ḃ  [LATIN CAPITAL LETTER B WITH DOT ABOVE]
-"\u1E02" => "B"
-
-# Ḅ  [LATIN CAPITAL LETTER B WITH DOT BELOW]
-"\u1E04" => "B"
-
-# Ḇ  [LATIN CAPITAL LETTER B WITH LINE BELOW]
-"\u1E06" => "B"
-
-# Ⓑ  [CIRCLED LATIN CAPITAL LETTER B]
-"\u24B7" => "B"
-
-# Ｂ  [FULLWIDTH LATIN CAPITAL LETTER B]
-"\uFF22" => "B"
-
-# ƀ  [LATIN SMALL LETTER B WITH STROKE]
-"\u0180" => "b"
-
-# ƃ  [LATIN SMALL LETTER B WITH TOPBAR]
-"\u0183" => "b"
-
-# ɓ  [LATIN SMALL LETTER B WITH HOOK]
-"\u0253" => "b"
-
-# ᵬ  [LATIN SMALL LETTER B WITH MIDDLE TILDE]
-"\u1D6C" => "b"
-
-# ᶀ  [LATIN SMALL LETTER B WITH PALATAL HOOK]
-"\u1D80" => "b"
-
-# ḃ  [LATIN SMALL LETTER B WITH DOT ABOVE]
-"\u1E03" => "b"
-
-# ḅ  [LATIN SMALL LETTER B WITH DOT BELOW]
-"\u1E05" => "b"
-
-# ḇ  [LATIN SMALL LETTER B WITH LINE BELOW]
-"\u1E07" => "b"
-
-# ⓑ  [CIRCLED LATIN SMALL LETTER B]
-"\u24D1" => "b"
-
-# ｂ  [FULLWIDTH LATIN SMALL LETTER B]
-"\uFF42" => "b"
-
-# ⒝  [PARENTHESIZED LATIN SMALL LETTER B]
-"\u249D" => "(b)"
-
-# Ç  [LATIN CAPITAL LETTER C WITH CEDILLA]
-"\u00C7" => "C"
-
-# Ć  [LATIN CAPITAL LETTER C WITH ACUTE]
-"\u0106" => "C"
-
-# Ĉ  [LATIN CAPITAL LETTER C WITH CIRCUMFLEX]
-"\u0108" => "C"
-
-# Ċ  [LATIN CAPITAL LETTER C WITH DOT ABOVE]
-"\u010A" => "C"
-
-# Č  [LATIN CAPITAL LETTER C WITH CARON]
-"\u010C" => "C"
-
-# Ƈ  [LATIN CAPITAL LETTER C WITH HOOK]
-"\u0187" => "C"
-
-# Ȼ  [LATIN CAPITAL LETTER C WITH STROKE]
-"\u023B" => "C"
-
-# ʗ  [LATIN LETTER STRETCHED C]
-"\u0297" => "C"
-
-# ᴄ  [LATIN LETTER SMALL CAPITAL C]
-"\u1D04" => "C"
-
-# Ḉ  [LATIN CAPITAL LETTER C WITH CEDILLA AND ACUTE]
-"\u1E08" => "C"
-
-# Ⓒ  [CIRCLED LATIN CAPITAL LETTER C]
-"\u24B8" => "C"
-
-# Ｃ  [FULLWIDTH LATIN CAPITAL LETTER C]
-"\uFF23" => "C"
-
-# ç  [LATIN SMALL LETTER C WITH CEDILLA]
-"\u00E7" => "c"
-
-# ć  [LATIN SMALL LETTER C WITH ACUTE]
-"\u0107" => "c"
-
-# ĉ  [LATIN SMALL LETTER C WITH CIRCUMFLEX]
-"\u0109" => "c"
-
-# ċ  [LATIN SMALL LETTER C WITH DOT ABOVE]
-"\u010B" => "c"
-
-# č  [LATIN SMALL LETTER C WITH CARON]
-"\u010D" => "c"
-
-# ƈ  [LATIN SMALL LETTER C WITH HOOK]
-"\u0188" => "c"
-
-# ȼ  [LATIN SMALL LETTER C WITH STROKE]
-"\u023C" => "c"
-
-# ɕ  [LATIN SMALL LETTER C WITH CURL]
-"\u0255" => "c"
-
-# ḉ  [LATIN SMALL LETTER C WITH CEDILLA AND ACUTE]
-"\u1E09" => "c"
-
-# ↄ  [LATIN SMALL LETTER REVERSED C]
-"\u2184" => "c"
-
-# ⓒ  [CIRCLED LATIN SMALL LETTER C]
-"\u24D2" => "c"
-
-# Ꜿ  [LATIN CAPITAL LETTER REVERSED C WITH DOT]
-"\uA73E" => "c"
-
-# ꜿ  [LATIN SMALL LETTER REVERSED C WITH DOT]
-"\uA73F" => "c"
-
-# ｃ  [FULLWIDTH LATIN SMALL LETTER C]
-"\uFF43" => "c"
-
-# ⒞  [PARENTHESIZED LATIN SMALL LETTER C]
-"\u249E" => "(c)"
-
-# Ð  [LATIN CAPITAL LETTER ETH]
-"\u00D0" => "D"
-
-# Ď  [LATIN CAPITAL LETTER D WITH CARON]
-"\u010E" => "D"
-
-# Đ  [LATIN CAPITAL LETTER D WITH STROKE]
-"\u0110" => "D"
-
-# Ɖ  [LATIN CAPITAL LETTER AFRICAN D]
-"\u0189" => "D"
-
-# Ɗ  [LATIN CAPITAL LETTER D WITH HOOK]
-"\u018A" => "D"
-
-# Ƌ  [LATIN CAPITAL LETTER D WITH TOPBAR]
-"\u018B" => "D"
-
-# ᴅ  [LATIN LETTER SMALL CAPITAL D]
-"\u1D05" => "D"
-
-# ᴆ  [LATIN LETTER SMALL CAPITAL ETH]
-"\u1D06" => "D"
-
-# Ḋ  [LATIN CAPITAL LETTER D WITH DOT ABOVE]
-"\u1E0A" => "D"
-
-# Ḍ  [LATIN CAPITAL LETTER D WITH DOT BELOW]
-"\u1E0C" => "D"
-
-# Ḏ  [LATIN CAPITAL LETTER D WITH LINE BELOW]
-"\u1E0E" => "D"
-
-# Ḑ  [LATIN CAPITAL LETTER D WITH CEDILLA]
-"\u1E10" => "D"
-
-# Ḓ  [LATIN CAPITAL LETTER D WITH CIRCUMFLEX BELOW]
-"\u1E12" => "D"
-
-# Ⓓ  [CIRCLED LATIN CAPITAL LETTER D]
-"\u24B9" => "D"
-
-# Ꝺ  [LATIN CAPITAL LETTER INSULAR D]
-"\uA779" => "D"
-
-# Ｄ  [FULLWIDTH LATIN CAPITAL LETTER D]
-"\uFF24" => "D"
-
-# ð  [LATIN SMALL LETTER ETH]
-"\u00F0" => "d"
-
-# ď  [LATIN SMALL LETTER D WITH CARON]
-"\u010F" => "d"
-
-# đ  [LATIN SMALL LETTER D WITH STROKE]
-"\u0111" => "d"
-
-# ƌ  [LATIN SMALL LETTER D WITH TOPBAR]
-"\u018C" => "d"
-
-# ȡ  [LATIN SMALL LETTER D WITH CURL]
-"\u0221" => "d"
-
-# ɖ  [LATIN SMALL LETTER D WITH TAIL]
-"\u0256" => "d"
-
-# ɗ  [LATIN SMALL LETTER D WITH HOOK]
-"\u0257" => "d"
-
-# ᵭ  [LATIN SMALL LETTER D WITH MIDDLE TILDE]
-"\u1D6D" => "d"
-
-# ᶁ  [LATIN SMALL LETTER D WITH PALATAL HOOK]
-"\u1D81" => "d"
-
-# ᶑ  [LATIN SMALL LETTER D WITH HOOK AND TAIL]
-"\u1D91" => "d"
-
-# ḋ  [LATIN SMALL LETTER D WITH DOT ABOVE]
-"\u1E0B" => "d"
-
-# ḍ  [LATIN SMALL LETTER D WITH DOT BELOW]
-"\u1E0D" => "d"
-
-# ḏ  [LATIN SMALL LETTER D WITH LINE BELOW]
-"\u1E0F" => "d"
-
-# ḑ  [LATIN SMALL LETTER D WITH CEDILLA]
-"\u1E11" => "d"
-
-# ḓ  [LATIN SMALL LETTER D WITH CIRCUMFLEX BELOW]
-"\u1E13" => "d"
-
-# ⓓ  [CIRCLED LATIN SMALL LETTER D]
-"\u24D3" => "d"
-
-# ꝺ  [LATIN SMALL LETTER INSULAR D]
-"\uA77A" => "d"
-
-# ｄ  [FULLWIDTH LATIN SMALL LETTER D]
-"\uFF44" => "d"
-
-# Ǆ  [LATIN CAPITAL LETTER DZ WITH CARON]
-"\u01C4" => "DZ"
-
-# Ǳ  [LATIN CAPITAL LETTER DZ]
-"\u01F1" => "DZ"
-
-# ǅ  [LATIN CAPITAL LETTER D WITH SMALL LETTER Z WITH CARON]
-"\u01C5" => "Dz"
-
-# ǲ  [LATIN CAPITAL LETTER D WITH SMALL LETTER Z]
-"\u01F2" => "Dz"
-
-# ⒟  [PARENTHESIZED LATIN SMALL LETTER D]
-"\u249F" => "(d)"
-
-# ȸ  [LATIN SMALL LETTER DB DIGRAPH]
-"\u0238" => "db"
-
-# ǆ  [LATIN SMALL LETTER DZ WITH CARON]
-"\u01C6" => "dz"
-
-# ǳ  [LATIN SMALL LETTER DZ]
-"\u01F3" => "dz"
-
-# ʣ  [LATIN SMALL LETTER DZ DIGRAPH]
-"\u02A3" => "dz"
-
-# ʥ  [LATIN SMALL LETTER DZ DIGRAPH WITH CURL]
-"\u02A5" => "dz"
-
-# È  [LATIN CAPITAL LETTER E WITH GRAVE]
-"\u00C8" => "E"
-
-# É  [LATIN CAPITAL LETTER E WITH ACUTE]
-"\u00C9" => "E"
-
-# Ê  [LATIN CAPITAL LETTER E WITH CIRCUMFLEX]
-"\u00CA" => "E"
-
-# Ë  [LATIN CAPITAL LETTER E WITH DIAERESIS]
-"\u00CB" => "E"
-
-# Ē  [LATIN CAPITAL LETTER E WITH MACRON]
-"\u0112" => "E"
-
-# Ĕ  [LATIN CAPITAL LETTER E WITH BREVE]
-"\u0114" => "E"
-
-# Ė  [LATIN CAPITAL LETTER E WITH DOT ABOVE]
-"\u0116" => "E"
-
-# Ę  [LATIN CAPITAL LETTER E WITH OGONEK]
-"\u0118" => "E"
-
-# Ě  [LATIN CAPITAL LETTER E WITH CARON]
-"\u011A" => "E"
-
-# Ǝ  [LATIN CAPITAL LETTER REVERSED E]
-"\u018E" => "E"
-
-# Ɛ  [LATIN CAPITAL LETTER OPEN E]
-"\u0190" => "E"
-
-# Ȅ  [LATIN CAPITAL LETTER E WITH DOUBLE GRAVE]
-"\u0204" => "E"
-
-# Ȇ  [LATIN CAPITAL LETTER E WITH INVERTED BREVE]
-"\u0206" => "E"
-
-# Ȩ  [LATIN CAPITAL LETTER E WITH CEDILLA]
-"\u0228" => "E"
-
-# Ɇ  [LATIN CAPITAL LETTER E WITH STROKE]
-"\u0246" => "E"
-
-# ᴇ  [LATIN LETTER SMALL CAPITAL E]
-"\u1D07" => "E"
-
-# Ḕ  [LATIN CAPITAL LETTER E WITH MACRON AND GRAVE]
-"\u1E14" => "E"
-
-# Ḗ  [LATIN CAPITAL LETTER E WITH MACRON AND ACUTE]
-"\u1E16" => "E"
-
-# Ḙ  [LATIN CAPITAL LETTER E WITH CIRCUMFLEX BELOW]
-"\u1E18" => "E"
-
-# Ḛ  [LATIN CAPITAL LETTER E WITH TILDE BELOW]
-"\u1E1A" => "E"
-
-# Ḝ  [LATIN CAPITAL LETTER E WITH CEDILLA AND BREVE]
-"\u1E1C" => "E"
-
-# Ẹ  [LATIN CAPITAL LETTER E WITH DOT BELOW]
-"\u1EB8" => "E"
-
-# Ẻ  [LATIN CAPITAL LETTER E WITH HOOK ABOVE]
-"\u1EBA" => "E"
-
-# Ẽ  [LATIN CAPITAL LETTER E WITH TILDE]
-"\u1EBC" => "E"
-
-# Ế  [LATIN CAPITAL LETTER E WITH CIRCUMFLEX AND ACUTE]
-"\u1EBE" => "E"
-
-# Ề  [LATIN CAPITAL LETTER E WITH CIRCUMFLEX AND GRAVE]
-"\u1EC0" => "E"
-
-# Ể  [LATIN CAPITAL LETTER E WITH CIRCUMFLEX AND HOOK ABOVE]
-"\u1EC2" => "E"
-
-# Ễ  [LATIN CAPITAL LETTER E WITH CIRCUMFLEX AND TILDE]
-"\u1EC4" => "E"
-
-# Ệ  [LATIN CAPITAL LETTER E WITH CIRCUMFLEX AND DOT BELOW]
-"\u1EC6" => "E"
-
-# Ⓔ  [CIRCLED LATIN CAPITAL LETTER E]
-"\u24BA" => "E"
-
-# ⱻ  [LATIN LETTER SMALL CAPITAL TURNED E]
-"\u2C7B" => "E"
-
-# Ｅ  [FULLWIDTH LATIN CAPITAL LETTER E]
-"\uFF25" => "E"
-
-# è  [LATIN SMALL LETTER E WITH GRAVE]
-"\u00E8" => "e"
-
-# é  [LATIN SMALL LETTER E WITH ACUTE]
-"\u00E9" => "e"
-
-# ê  [LATIN SMALL LETTER E WITH CIRCUMFLEX]
-"\u00EA" => "e"
-
-# ë  [LATIN SMALL LETTER E WITH DIAERESIS]
-"\u00EB" => "e"
-
-# ē  [LATIN SMALL LETTER E WITH MACRON]
-"\u0113" => "e"
-
-# ĕ  [LATIN SMALL LETTER E WITH BREVE]
-"\u0115" => "e"
-
-# ė  [LATIN SMALL LETTER E WITH DOT ABOVE]
-"\u0117" => "e"
-
-# ę  [LATIN SMALL LETTER E WITH OGONEK]
-"\u0119" => "e"
-
-# ě  [LATIN SMALL LETTER E WITH CARON]
-"\u011B" => "e"
-
-# ǝ  [LATIN SMALL LETTER TURNED E]
-"\u01DD" => "e"
-
-# ȅ  [LATIN SMALL LETTER E WITH DOUBLE GRAVE]
-"\u0205" => "e"
-
-# ȇ  [LATIN SMALL LETTER E WITH INVERTED BREVE]
-"\u0207" => "e"
-
-# ȩ  [LATIN SMALL LETTER E WITH CEDILLA]
-"\u0229" => "e"
-
-# ɇ  [LATIN SMALL LETTER E WITH STROKE]
-"\u0247" => "e"
-
-# ɘ  [LATIN SMALL LETTER REVERSED E]
-"\u0258" => "e"
-
-# ɛ  [LATIN SMALL LETTER OPEN E]
-"\u025B" => "e"
-
-# ɜ  [LATIN SMALL LETTER REVERSED OPEN E]
-"\u025C" => "e"
-
-# ɝ  [LATIN SMALL LETTER REVERSED OPEN E WITH HOOK]
-"\u025D" => "e"
-
-# ɞ  [LATIN SMALL LETTER CLOSED REVERSED OPEN E]
-"\u025E" => "e"
-
-# ʚ  [LATIN SMALL LETTER CLOSED OPEN E]
-"\u029A" => "e"
-
-# ᴈ  [LATIN SMALL LETTER TURNED OPEN E]
-"\u1D08" => "e"
-
-# ᶒ  [LATIN SMALL LETTER E WITH RETROFLEX HOOK]
-"\u1D92" => "e"
-
-# ᶓ  [LATIN SMALL LETTER OPEN E WITH RETROFLEX HOOK]
-"\u1D93" => "e"
-
-# ᶔ  [LATIN SMALL LETTER REVERSED OPEN E WITH RETROFLEX HOOK]
-"\u1D94" => "e"
-
-# ḕ  [LATIN SMALL LETTER E WITH MACRON AND GRAVE]
-"\u1E15" => "e"
-
-# ḗ  [LATIN SMALL LETTER E WITH MACRON AND ACUTE]
-"\u1E17" => "e"
-
-# ḙ  [LATIN SMALL LETTER E WITH CIRCUMFLEX BELOW]
-"\u1E19" => "e"
-
-# ḛ  [LATIN SMALL LETTER E WITH TILDE BELOW]
-"\u1E1B" => "e"
-
-# ḝ  [LATIN SMALL LETTER E WITH CEDILLA AND BREVE]
-"\u1E1D" => "e"
-
-# ẹ  [LATIN SMALL LETTER E WITH DOT BELOW]
-"\u1EB9" => "e"
-
-# ẻ  [LATIN SMALL LETTER E WITH HOOK ABOVE]
-"\u1EBB" => "e"
-
-# ẽ  [LATIN SMALL LETTER E WITH TILDE]
-"\u1EBD" => "e"
-
-# ế  [LATIN SMALL LETTER E WITH CIRCUMFLEX AND ACUTE]
-"\u1EBF" => "e"
-
-# ề  [LATIN SMALL LETTER E WITH CIRCUMFLEX AND GRAVE]
-"\u1EC1" => "e"
-
-# ể  [LATIN SMALL LETTER E WITH CIRCUMFLEX AND HOOK ABOVE]
-"\u1EC3" => "e"
-
-# ễ  [LATIN SMALL LETTER E WITH CIRCUMFLEX AND TILDE]
-"\u1EC5" => "e"
-
-# ệ  [LATIN SMALL LETTER E WITH CIRCUMFLEX AND DOT BELOW]
-"\u1EC7" => "e"
-
-# ₑ  [LATIN SUBSCRIPT SMALL LETTER E]
-"\u2091" => "e"
-
-# ⓔ  [CIRCLED LATIN SMALL LETTER E]
-"\u24D4" => "e"
-
-# ⱸ  [LATIN SMALL LETTER E WITH NOTCH]
-"\u2C78" => "e"
-
-# ｅ  [FULLWIDTH LATIN SMALL LETTER E]
-"\uFF45" => "e"
-
-# ⒠  [PARENTHESIZED LATIN SMALL LETTER E]
-"\u24A0" => "(e)"
-
-# Ƒ  [LATIN CAPITAL LETTER F WITH HOOK]
-"\u0191" => "F"
-
-# Ḟ  [LATIN CAPITAL LETTER F WITH DOT ABOVE]
-"\u1E1E" => "F"
-
-# Ⓕ  [CIRCLED LATIN CAPITAL LETTER F]
-"\u24BB" => "F"
-
-# ꜰ  [LATIN LETTER SMALL CAPITAL F]
-"\uA730" => "F"
-
-# Ꝼ  [LATIN CAPITAL LETTER INSULAR F]
-"\uA77B" => "F"
-
-# ꟻ  [LATIN EPIGRAPHIC LETTER REVERSED F]
-"\uA7FB" => "F"
-
-# Ｆ  [FULLWIDTH LATIN CAPITAL LETTER F]
-"\uFF26" => "F"
-
-# ƒ  [LATIN SMALL LETTER F WITH HOOK]
-"\u0192" => "f"
-
-# ᵮ  [LATIN SMALL LETTER F WITH MIDDLE TILDE]
-"\u1D6E" => "f"
-
-# ᶂ  [LATIN SMALL LETTER F WITH PALATAL HOOK]
-"\u1D82" => "f"
-
-# ḟ  [LATIN SMALL LETTER F WITH DOT ABOVE]
-"\u1E1F" => "f"
-
-# ẛ  [LATIN SMALL LETTER LONG S WITH DOT ABOVE]
-"\u1E9B" => "f"
-
-# ⓕ  [CIRCLED LATIN SMALL LETTER F]
-"\u24D5" => "f"
-
-# ꝼ  [LATIN SMALL LETTER INSULAR F]
-"\uA77C" => "f"
-
-# ｆ  [FULLWIDTH LATIN SMALL LETTER F]
-"\uFF46" => "f"
-
-# ⒡  [PARENTHESIZED LATIN SMALL LETTER F]
-"\u24A1" => "(f)"
-
-# ﬀ  [LATIN SMALL LIGATURE FF]
-"\uFB00" => "ff"
-
-# ﬃ  [LATIN SMALL LIGATURE FFI]
-"\uFB03" => "ffi"
-
-# ﬄ  [LATIN SMALL LIGATURE FFL]
-"\uFB04" => "ffl"
-
-# ﬁ  [LATIN SMALL LIGATURE FI]
-"\uFB01" => "fi"
-
-# ﬂ  [LATIN SMALL LIGATURE FL]
-"\uFB02" => "fl"
-
-# Ĝ  [LATIN CAPITAL LETTER G WITH CIRCUMFLEX]
-"\u011C" => "G"
-
-# Ğ  [LATIN CAPITAL LETTER G WITH BREVE]
-"\u011E" => "G"
-
-# Ġ  [LATIN CAPITAL LETTER G WITH DOT ABOVE]
-"\u0120" => "G"
-
-# Ģ  [LATIN CAPITAL LETTER G WITH CEDILLA]
-"\u0122" => "G"
-
-# Ɠ  [LATIN CAPITAL LETTER G WITH HOOK]
-"\u0193" => "G"
-
-# Ǥ  [LATIN CAPITAL LETTER G WITH STROKE]
-"\u01E4" => "G"
-
-# ǥ  [LATIN SMALL LETTER G WITH STROKE]
-"\u01E5" => "G"
-
-# Ǧ  [LATIN CAPITAL LETTER G WITH CARON]
-"\u01E6" => "G"
-
-# ǧ  [LATIN SMALL LETTER G WITH CARON]
-"\u01E7" => "G"
-
-# Ǵ  [LATIN CAPITAL LETTER G WITH ACUTE]
-"\u01F4" => "G"
-
-# ɢ  [LATIN LETTER SMALL CAPITAL G]
-"\u0262" => "G"
-
-# ʛ  [LATIN LETTER SMALL CAPITAL G WITH HOOK]
-"\u029B" => "G"
-
-# Ḡ  [LATIN CAPITAL LETTER G WITH MACRON]
-"\u1E20" => "G"
-
-# Ⓖ  [CIRCLED LATIN CAPITAL LETTER G]
-"\u24BC" => "G"
-
-# Ᵹ  [LATIN CAPITAL LETTER INSULAR G]
-"\uA77D" => "G"
-
-# Ꝿ  [LATIN CAPITAL LETTER TURNED INSULAR G]
-"\uA77E" => "G"
-
-# Ｇ  [FULLWIDTH LATIN CAPITAL LETTER G]
-"\uFF27" => "G"
-
-# ĝ  [LATIN SMALL LETTER G WITH CIRCUMFLEX]
-"\u011D" => "g"
-
-# ğ  [LATIN SMALL LETTER G WITH BREVE]
-"\u011F" => "g"
-
-# ġ  [LATIN SMALL LETTER G WITH DOT ABOVE]
-"\u0121" => "g"
-
-# ģ  [LATIN SMALL LETTER G WITH CEDILLA]
-"\u0123" => "g"
-
-# ǵ  [LATIN SMALL LETTER G WITH ACUTE]
-"\u01F5" => "g"
-
-# ɠ  [LATIN SMALL LETTER G WITH HOOK]
-"\u0260" => "g"
-
-# ɡ  [LATIN SMALL LETTER SCRIPT G]
-"\u0261" => "g"
-
-# ᵷ  [LATIN SMALL LETTER TURNED G]
-"\u1D77" => "g"
-
-# ᵹ  [LATIN SMALL LETTER INSULAR G]
-"\u1D79" => "g"
-
-# ᶃ  [LATIN SMALL LETTER G WITH PALATAL HOOK]
-"\u1D83" => "g"
-
-# ḡ  [LATIN SMALL LETTER G WITH MACRON]
-"\u1E21" => "g"
-
-# ⓖ  [CIRCLED LATIN SMALL LETTER G]
-"\u24D6" => "g"
-
-# ꝿ  [LATIN SMALL LETTER TURNED INSULAR G]
-"\uA77F" => "g"
-
-# ｇ  [FULLWIDTH LATIN SMALL LETTER G]
-"\uFF47" => "g"
-
-# ⒢  [PARENTHESIZED LATIN SMALL LETTER G]
-"\u24A2" => "(g)"
-
-# Ĥ  [LATIN CAPITAL LETTER H WITH CIRCUMFLEX]
-"\u0124" => "H"
-
-# Ħ  [LATIN CAPITAL LETTER H WITH STROKE]
-"\u0126" => "H"
-
-# Ȟ  [LATIN CAPITAL LETTER H WITH CARON]
-"\u021E" => "H"
-
-# ʜ  [LATIN LETTER SMALL CAPITAL H]
-"\u029C" => "H"
-
-# Ḣ  [LATIN CAPITAL LETTER H WITH DOT ABOVE]
-"\u1E22" => "H"
-
-# Ḥ  [LATIN CAPITAL LETTER H WITH DOT BELOW]
-"\u1E24" => "H"
-
-# Ḧ  [LATIN CAPITAL LETTER H WITH DIAERESIS]
-"\u1E26" => "H"
-
-# Ḩ  [LATIN CAPITAL LETTER H WITH CEDILLA]
-"\u1E28" => "H"
-
-# Ḫ  [LATIN CAPITAL LETTER H WITH BREVE BELOW]
-"\u1E2A" => "H"
-
-# Ⓗ  [CIRCLED LATIN CAPITAL LETTER H]
-"\u24BD" => "H"
-
-# Ⱨ  [LATIN CAPITAL LETTER H WITH DESCENDER]
-"\u2C67" => "H"
-
-# Ⱶ  [LATIN CAPITAL LETTER HALF H]
-"\u2C75" => "H"
-
-# Ｈ  [FULLWIDTH LATIN CAPITAL LETTER H]
-"\uFF28" => "H"
-
-# ĥ  [LATIN SMALL LETTER H WITH CIRCUMFLEX]
-"\u0125" => "h"
-
-# ħ  [LATIN SMALL LETTER H WITH STROKE]
-"\u0127" => "h"
-
-# ȟ  [LATIN SMALL LETTER H WITH CARON]
-"\u021F" => "h"
-
-# ɥ  [LATIN SMALL LETTER TURNED H]
-"\u0265" => "h"
-
-# ɦ  [LATIN SMALL LETTER H WITH HOOK]
-"\u0266" => "h"
-
-# ʮ  [LATIN SMALL LETTER TURNED H WITH FISHHOOK]
-"\u02AE" => "h"
-
-# ʯ  [LATIN SMALL LETTER TURNED H WITH FISHHOOK AND TAIL]
-"\u02AF" => "h"
-
-# ḣ  [LATIN SMALL LETTER H WITH DOT ABOVE]
-"\u1E23" => "h"
-
-# ḥ  [LATIN SMALL LETTER H WITH DOT BELOW]
-"\u1E25" => "h"
-
-# ḧ  [LATIN SMALL LETTER H WITH DIAERESIS]
-"\u1E27" => "h"
-
-# ḩ  [LATIN SMALL LETTER H WITH CEDILLA]
-"\u1E29" => "h"
-
-# ḫ  [LATIN SMALL LETTER H WITH BREVE BELOW]
-"\u1E2B" => "h"
-
-# ẖ  [LATIN SMALL LETTER H WITH LINE BELOW]
-"\u1E96" => "h"
-
-# ⓗ  [CIRCLED LATIN SMALL LETTER H]
-"\u24D7" => "h"
-
-# ⱨ  [LATIN SMALL LETTER H WITH DESCENDER]
-"\u2C68" => "h"
-
-# ⱶ  [LATIN SMALL LETTER HALF H]
-"\u2C76" => "h"
-
-# ｈ  [FULLWIDTH LATIN SMALL LETTER H]
-"\uFF48" => "h"
-
-# Ƕ  http://en.wikipedia.org/wiki/Hwair  [LATIN CAPITAL LETTER HWAIR]
-"\u01F6" => "HV"
-
-# ⒣  [PARENTHESIZED LATIN SMALL LETTER H]
-"\u24A3" => "(h)"
-
-# ƕ  [LATIN SMALL LETTER HV]
-"\u0195" => "hv"
-
-# Ì  [LATIN CAPITAL LETTER I WITH GRAVE]
-"\u00CC" => "I"
-
-# Í  [LATIN CAPITAL LETTER I WITH ACUTE]
-"\u00CD" => "I"
-
-# Î  [LATIN CAPITAL LETTER I WITH CIRCUMFLEX]
-"\u00CE" => "I"
-
-# Ï  [LATIN CAPITAL LETTER I WITH DIAERESIS]
-"\u00CF" => "I"
-
-# Ĩ  [LATIN CAPITAL LETTER I WITH TILDE]
-"\u0128" => "I"
-
-# Ī  [LATIN CAPITAL LETTER I WITH MACRON]
-"\u012A" => "I"
-
-# Ĭ  [LATIN CAPITAL LETTER I WITH BREVE]
-"\u012C" => "I"
-
-# Į  [LATIN CAPITAL LETTER I WITH OGONEK]
-"\u012E" => "I"
-
-# İ  [LATIN CAPITAL LETTER I WITH DOT ABOVE]
-"\u0130" => "I"
-
-# Ɩ  [LATIN CAPITAL LETTER IOTA]
-"\u0196" => "I"
-
-# Ɨ  [LATIN CAPITAL LETTER I WITH STROKE]
-"\u0197" => "I"
-
-# Ǐ  [LATIN CAPITAL LETTER I WITH CARON]
-"\u01CF" => "I"
-
-# Ȉ  [LATIN CAPITAL LETTER I WITH DOUBLE GRAVE]
-"\u0208" => "I"
-
-# Ȋ  [LATIN CAPITAL LETTER I WITH INVERTED BREVE]
-"\u020A" => "I"
-
-# ɪ  [LATIN LETTER SMALL CAPITAL I]
-"\u026A" => "I"
-
-# ᵻ  [LATIN SMALL CAPITAL LETTER I WITH STROKE]
-"\u1D7B" => "I"
-
-# Ḭ  [LATIN CAPITAL LETTER I WITH TILDE BELOW]
-"\u1E2C" => "I"
-
-# Ḯ  [LATIN CAPITAL LETTER I WITH DIAERESIS AND ACUTE]
-"\u1E2E" => "I"
-
-# Ỉ  [LATIN CAPITAL LETTER I WITH HOOK ABOVE]
-"\u1EC8" => "I"
-
-# Ị  [LATIN CAPITAL LETTER I WITH DOT BELOW]
-"\u1ECA" => "I"
-
-# Ⓘ  [CIRCLED LATIN CAPITAL LETTER I]
-"\u24BE" => "I"
-
-# ꟾ  [LATIN EPIGRAPHIC LETTER I LONGA]
-"\uA7FE" => "I"
-
-# Ｉ  [FULLWIDTH LATIN CAPITAL LETTER I]
-"\uFF29" => "I"
-
-# ì  [LATIN SMALL LETTER I WITH GRAVE]
-"\u00EC" => "i"
-
-# í  [LATIN SMALL LETTER I WITH ACUTE]
-"\u00ED" => "i"
-
-# î  [LATIN SMALL LETTER I WITH CIRCUMFLEX]
-"\u00EE" => "i"
-
-# ï  [LATIN SMALL LETTER I WITH DIAERESIS]
-"\u00EF" => "i"
-
-# ĩ  [LATIN SMALL LETTER I WITH TILDE]
-"\u0129" => "i"
-
-# ī  [LATIN SMALL LETTER I WITH MACRON]
-"\u012B" => "i"
-
-# ĭ  [LATIN SMALL LETTER I WITH BREVE]
-"\u012D" => "i"
-
-# į  [LATIN SMALL LETTER I WITH OGONEK]
-"\u012F" => "i"
-
-# ı  [LATIN SMALL LETTER DOTLESS I]
-"\u0131" => "i"
-
-# ǐ  [LATIN SMALL LETTER I WITH CARON]
-"\u01D0" => "i"
-
-# ȉ  [LATIN SMALL LETTER I WITH DOUBLE GRAVE]
-"\u0209" => "i"
-
-# ȋ  [LATIN SMALL LETTER I WITH INVERTED BREVE]
-"\u020B" => "i"
-
-# ɨ  [LATIN SMALL LETTER I WITH STROKE]
-"\u0268" => "i"
-
-# ᴉ  [LATIN SMALL LETTER TURNED I]
-"\u1D09" => "i"
-
-# ᵢ  [LATIN SUBSCRIPT SMALL LETTER I]
-"\u1D62" => "i"
-
-# ᵼ  [LATIN SMALL LETTER IOTA WITH STROKE]
-"\u1D7C" => "i"
-
-# ᶖ  [LATIN SMALL LETTER I WITH RETROFLEX HOOK]
-"\u1D96" => "i"
-
-# ḭ  [LATIN SMALL LETTER I WITH TILDE BELOW]
-"\u1E2D" => "i"
-
-# ḯ  [LATIN SMALL LETTER I WITH DIAERESIS AND ACUTE]
-"\u1E2F" => "i"
-
-# ỉ  [LATIN SMALL LETTER I WITH HOOK ABOVE]
-"\u1EC9" => "i"
-
-# ị  [LATIN SMALL LETTER I WITH DOT BELOW]
-"\u1ECB" => "i"
-
-# ⁱ  [SUPERSCRIPT LATIN SMALL LETTER I]
-"\u2071" => "i"
-
-# ⓘ  [CIRCLED LATIN SMALL LETTER I]
-"\u24D8" => "i"
-
-# ｉ  [FULLWIDTH LATIN SMALL LETTER I]
-"\uFF49" => "i"
-
-# Ĳ  [LATIN CAPITAL LIGATURE IJ]
-"\u0132" => "IJ"
-
-# ⒤  [PARENTHESIZED LATIN SMALL LETTER I]
-"\u24A4" => "(i)"
-
-# ĳ  [LATIN SMALL LIGATURE IJ]
-"\u0133" => "ij"
-
-# Ĵ  [LATIN CAPITAL LETTER J WITH CIRCUMFLEX]
-"\u0134" => "J"
-
-# Ɉ  [LATIN CAPITAL LETTER J WITH STROKE]
-"\u0248" => "J"
-
-# ᴊ  [LATIN LETTER SMALL CAPITAL J]
-"\u1D0A" => "J"
-
-# Ⓙ  [CIRCLED LATIN CAPITAL LETTER J]
-"\u24BF" => "J"
-
-# Ｊ  [FULLWIDTH LATIN CAPITAL LETTER J]
-"\uFF2A" => "J"
-
-# ĵ  [LATIN SMALL LETTER J WITH CIRCUMFLEX]
-"\u0135" => "j"
-
-# ǰ  [LATIN SMALL LETTER J WITH CARON]
-"\u01F0" => "j"
-
-# ȷ  [LATIN SMALL LETTER DOTLESS J]
-"\u0237" => "j"
-
-# ɉ  [LATIN SMALL LETTER J WITH STROKE]
-"\u0249" => "j"
-
-# ɟ  [LATIN SMALL LETTER DOTLESS J WITH STROKE]
-"\u025F" => "j"
-
-# ʄ  [LATIN SMALL LETTER DOTLESS J WITH STROKE AND HOOK]
-"\u0284" => "j"
-
-# ʝ  [LATIN SMALL LETTER J WITH CROSSED-TAIL]
-"\u029D" => "j"
-
-# ⓙ  [CIRCLED LATIN SMALL LETTER J]
-"\u24D9" => "j"
-
-# ⱼ  [LATIN SUBSCRIPT SMALL LETTER J]
-"\u2C7C" => "j"
-
-# ｊ  [FULLWIDTH LATIN SMALL LETTER J]
-"\uFF4A" => "j"
-
-# ⒥  [PARENTHESIZED LATIN SMALL LETTER J]
-"\u24A5" => "(j)"
-
-# Ķ  [LATIN CAPITAL LETTER K WITH CEDILLA]
-"\u0136" => "K"
-
-# Ƙ  [LATIN CAPITAL LETTER K WITH HOOK]
-"\u0198" => "K"
-
-# Ǩ  [LATIN CAPITAL LETTER K WITH CARON]
-"\u01E8" => "K"
-
-# ᴋ  [LATIN LETTER SMALL CAPITAL K]
-"\u1D0B" => "K"
-
-# Ḱ  [LATIN CAPITAL LETTER K WITH ACUTE]
-"\u1E30" => "K"
-
-# Ḳ  [LATIN CAPITAL LETTER K WITH DOT BELOW]
-"\u1E32" => "K"
-
-# Ḵ  [LATIN CAPITAL LETTER K WITH LINE BELOW]
-"\u1E34" => "K"
-
-# Ⓚ  [CIRCLED LATIN CAPITAL LETTER K]
-"\u24C0" => "K"
-
-# Ⱪ  [LATIN CAPITAL LETTER K WITH DESCENDER]
-"\u2C69" => "K"
-
-# Ꝁ  [LATIN CAPITAL LETTER K WITH STROKE]
-"\uA740" => "K"
-
-# Ꝃ  [LATIN CAPITAL LETTER K WITH DIAGONAL STROKE]
-"\uA742" => "K"
-
-# Ꝅ  [LATIN CAPITAL LETTER K WITH STROKE AND DIAGONAL STROKE]
-"\uA744" => "K"
-
-# Ｋ  [FULLWIDTH LATIN CAPITAL LETTER K]
-"\uFF2B" => "K"
-
-# ķ  [LATIN SMALL LETTER K WITH CEDILLA]
-"\u0137" => "k"
-
-# ƙ  [LATIN SMALL LETTER K WITH HOOK]
-"\u0199" => "k"
-
-# ǩ  [LATIN SMALL LETTER K WITH CARON]
-"\u01E9" => "k"
-
-# ʞ  [LATIN SMALL LETTER TURNED K]
-"\u029E" => "k"
-
-# ᶄ  [LATIN SMALL LETTER K WITH PALATAL HOOK]
-"\u1D84" => "k"
-
-# ḱ  [LATIN SMALL LETTER K WITH ACUTE]
-"\u1E31" => "k"
-
-# ḳ  [LATIN SMALL LETTER K WITH DOT BELOW]
-"\u1E33" => "k"
-
-# ḵ  [LATIN SMALL LETTER K WITH LINE BELOW]
-"\u1E35" => "k"
-
-# ⓚ  [CIRCLED LATIN SMALL LETTER K]
-"\u24DA" => "k"
-
-# ⱪ  [LATIN SMALL LETTER K WITH DESCENDER]
-"\u2C6A" => "k"
-
-# ꝁ  [LATIN SMALL LETTER K WITH STROKE]
-"\uA741" => "k"
-
-# ꝃ  [LATIN SMALL LETTER K WITH DIAGONAL STROKE]
-"\uA743" => "k"
-
-# ꝅ  [LATIN SMALL LETTER K WITH STROKE AND DIAGONAL STROKE]
-"\uA745" => "k"
-
-# ｋ  [FULLWIDTH LATIN SMALL LETTER K]
-"\uFF4B" => "k"
-
-# ⒦  [PARENTHESIZED LATIN SMALL LETTER K]
-"\u24A6" => "(k)"
-
-# Ĺ  [LATIN CAPITAL LETTER L WITH ACUTE]
-"\u0139" => "L"
-
-# Ļ  [LATIN CAPITAL LETTER L WITH CEDILLA]
-"\u013B" => "L"
-
-# Ľ  [LATIN CAPITAL LETTER L WITH CARON]
-"\u013D" => "L"
-
-# Ŀ  [LATIN CAPITAL LETTER L WITH MIDDLE DOT]
-"\u013F" => "L"
-
-# Ł  [LATIN CAPITAL LETTER L WITH STROKE]
-"\u0141" => "L"
-
-# Ƚ  [LATIN CAPITAL LETTER L WITH BAR]
-"\u023D" => "L"
-
-# ʟ  [LATIN LETTER SMALL CAPITAL L]
-"\u029F" => "L"
-
-# ᴌ  [LATIN LETTER SMALL CAPITAL L WITH STROKE]
-"\u1D0C" => "L"
-
-# Ḷ  [LATIN CAPITAL LETTER L WITH DOT BELOW]
-"\u1E36" => "L"
-
-# Ḹ  [LATIN CAPITAL LETTER L WITH DOT BELOW AND MACRON]
-"\u1E38" => "L"
-
-# Ḻ  [LATIN CAPITAL LETTER L WITH LINE BELOW]
-"\u1E3A" => "L"
-
-# Ḽ  [LATIN CAPITAL LETTER L WITH CIRCUMFLEX BELOW]
-"\u1E3C" => "L"
-
-# Ⓛ  [CIRCLED LATIN CAPITAL LETTER L]
-"\u24C1" => "L"
-
-# Ⱡ  [LATIN CAPITAL LETTER L WITH DOUBLE BAR]
-"\u2C60" => "L"
-
-# Ɫ  [LATIN CAPITAL LETTER L WITH MIDDLE TILDE]
-"\u2C62" => "L"
-
-# Ꝇ  [LATIN CAPITAL LETTER BROKEN L]
-"\uA746" => "L"
-
-# Ꝉ  [LATIN CAPITAL LETTER L WITH HIGH STROKE]
-"\uA748" => "L"
-
-# Ꞁ  [LATIN CAPITAL LETTER TURNED L]
-"\uA780" => "L"
-
-# Ｌ  [FULLWIDTH LATIN CAPITAL LETTER L]
-"\uFF2C" => "L"
-
-# ĺ  [LATIN SMALL LETTER L WITH ACUTE]
-"\u013A" => "l"
-
-# ļ  [LATIN SMALL LETTER L WITH CEDILLA]
-"\u013C" => "l"
-
-# ľ  [LATIN SMALL LETTER L WITH CARON]
-"\u013E" => "l"
-
-# ŀ  [LATIN SMALL LETTER L WITH MIDDLE DOT]
-"\u0140" => "l"
-
-# ł  [LATIN SMALL LETTER L WITH STROKE]
-"\u0142" => "l"
-
-# ƚ  [LATIN SMALL LETTER L WITH BAR]
-"\u019A" => "l"
-
-# ȴ  [LATIN SMALL LETTER L WITH CURL]
-"\u0234" => "l"
-
-# ɫ  [LATIN SMALL LETTER L WITH MIDDLE TILDE]
-"\u026B" => "l"
-
-# ɬ  [LATIN SMALL LETTER L WITH BELT]
-"\u026C" => "l"
-
-# ɭ  [LATIN SMALL LETTER L WITH RETROFLEX HOOK]
-"\u026D" => "l"
-
-# ᶅ  [LATIN SMALL LETTER L WITH PALATAL HOOK]
-"\u1D85" => "l"
-
-# ḷ  [LATIN SMALL LETTER L WITH DOT BELOW]
-"\u1E37" => "l"
-
-# ḹ  [LATIN SMALL LETTER L WITH DOT BELOW AND MACRON]
-"\u1E39" => "l"
-
-# ḻ  [LATIN SMALL LETTER L WITH LINE BELOW]
-"\u1E3B" => "l"
-
-# ḽ  [LATIN SMALL LETTER L WITH CIRCUMFLEX BELOW]
-"\u1E3D" => "l"
-
-# ⓛ  [CIRCLED LATIN SMALL LETTER L]
-"\u24DB" => "l"
-
-# ⱡ  [LATIN SMALL LETTER L WITH DOUBLE BAR]
-"\u2C61" => "l"
-
-# ꝇ  [LATIN SMALL LETTER BROKEN L]
-"\uA747" => "l"
-
-# ꝉ  [LATIN SMALL LETTER L WITH HIGH STROKE]
-"\uA749" => "l"
-
-# ꞁ  [LATIN SMALL LETTER TURNED L]
-"\uA781" => "l"
-
-# ｌ  [FULLWIDTH LATIN SMALL LETTER L]
-"\uFF4C" => "l"
-
-# Ǉ  [LATIN CAPITAL LETTER LJ]
-"\u01C7" => "LJ"
-
-# Ỻ  [LATIN CAPITAL LETTER MIDDLE-WELSH LL]
-"\u1EFA" => "LL"
-
-# ǈ  [LATIN CAPITAL LETTER L WITH SMALL LETTER J]
-"\u01C8" => "Lj"
-
-# ⒧  [PARENTHESIZED LATIN SMALL LETTER L]
-"\u24A7" => "(l)"
-
-# ǉ  [LATIN SMALL LETTER LJ]
-"\u01C9" => "lj"
-
-# ỻ  [LATIN SMALL LETTER MIDDLE-WELSH LL]
-"\u1EFB" => "ll"
-
-# ʪ  [LATIN SMALL LETTER LS DIGRAPH]
-"\u02AA" => "ls"
-
-# ʫ  [LATIN SMALL LETTER LZ DIGRAPH]
-"\u02AB" => "lz"
-
-# Ɯ  [LATIN CAPITAL LETTER TURNED M]
-"\u019C" => "M"
-
-# ᴍ  [LATIN LETTER SMALL CAPITAL M]
-"\u1D0D" => "M"
-
-# Ḿ  [LATIN CAPITAL LETTER M WITH ACUTE]
-"\u1E3E" => "M"
-
-# Ṁ  [LATIN CAPITAL LETTER M WITH DOT ABOVE]
-"\u1E40" => "M"
-
-# Ṃ  [LATIN CAPITAL LETTER M WITH DOT BELOW]
-"\u1E42" => "M"
-
-# Ⓜ  [CIRCLED LATIN CAPITAL LETTER M]
-"\u24C2" => "M"
-
-# Ɱ  [LATIN CAPITAL LETTER M WITH HOOK]
-"\u2C6E" => "M"
-
-# ꟽ  [LATIN EPIGRAPHIC LETTER INVERTED M]
-"\uA7FD" => "M"
-
-# ꟿ  [LATIN EPIGRAPHIC LETTER ARCHAIC M]
-"\uA7FF" => "M"
-
-# Ｍ  [FULLWIDTH LATIN CAPITAL LETTER M]
-"\uFF2D" => "M"
-
-# ɯ  [LATIN SMALL LETTER TURNED M]
-"\u026F" => "m"
-
-# ɰ  [LATIN SMALL LETTER TURNED M WITH LONG LEG]
-"\u0270" => "m"
-
-# ɱ  [LATIN SMALL LETTER M WITH HOOK]
-"\u0271" => "m"
-
-# ᵯ  [LATIN SMALL LETTER M WITH MIDDLE TILDE]
-"\u1D6F" => "m"
-
-# ᶆ  [LATIN SMALL LETTER M WITH PALATAL HOOK]
-"\u1D86" => "m"
-
-# ḿ  [LATIN SMALL LETTER M WITH ACUTE]
-"\u1E3F" => "m"
-
-# ṁ  [LATIN SMALL LETTER M WITH DOT ABOVE]
-"\u1E41" => "m"
-
-# ṃ  [LATIN SMALL LETTER M WITH DOT BELOW]
-"\u1E43" => "m"
-
-# ⓜ  [CIRCLED LATIN SMALL LETTER M]
-"\u24DC" => "m"
-
-# ｍ  [FULLWIDTH LATIN SMALL LETTER M]
-"\uFF4D" => "m"
-
-# ⒨  [PARENTHESIZED LATIN SMALL LETTER M]
-"\u24A8" => "(m)"
-
-# Ñ  [LATIN CAPITAL LETTER N WITH TILDE]
-"\u00D1" => "N"
-
-# Ń  [LATIN CAPITAL LETTER N WITH ACUTE]
-"\u0143" => "N"
-
-# Ņ  [LATIN CAPITAL LETTER N WITH CEDILLA]
-"\u0145" => "N"
-
-# Ň  [LATIN CAPITAL LETTER N WITH CARON]
-"\u0147" => "N"
-
-# Ŋ  http://en.wikipedia.org/wiki/Eng_(letter)  [LATIN CAPITAL LETTER ENG]
-"\u014A" => "N"
-
-# Ɲ  [LATIN CAPITAL LETTER N WITH LEFT HOOK]
-"\u019D" => "N"
-
-# Ǹ  [LATIN CAPITAL LETTER N WITH GRAVE]
-"\u01F8" => "N"
-
-# Ƞ  [LATIN CAPITAL LETTER N WITH LONG RIGHT LEG]
-"\u0220" => "N"
-
-# ɴ  [LATIN LETTER SMALL CAPITAL N]
-"\u0274" => "N"
-
-# ᴎ  [LATIN LETTER SMALL CAPITAL REVERSED N]
-"\u1D0E" => "N"
-
-# Ṅ  [LATIN CAPITAL LETTER N WITH DOT ABOVE]
-"\u1E44" => "N"
-
-# Ṇ  [LATIN CAPITAL LETTER N WITH DOT BELOW]
-"\u1E46" => "N"
-
-# Ṉ  [LATIN CAPITAL LETTER N WITH LINE BELOW]
-"\u1E48" => "N"
-
-# Ṋ  [LATIN CAPITAL LETTER N WITH CIRCUMFLEX BELOW]
-"\u1E4A" => "N"
-
-# Ⓝ  [CIRCLED LATIN CAPITAL LETTER N]
-"\u24C3" => "N"
-
-# Ｎ  [FULLWIDTH LATIN CAPITAL LETTER N]
-"\uFF2E" => "N"
-
-# ñ  [LATIN SMALL LETTER N WITH TILDE]
-"\u00F1" => "n"
-
-# ń  [LATIN SMALL LETTER N WITH ACUTE]
-"\u0144" => "n"
-
-# ņ  [LATIN SMALL LETTER N WITH CEDILLA]
-"\u0146" => "n"
-
-# ň  [LATIN SMALL LETTER N WITH CARON]
-"\u0148" => "n"
-
-# ʼn  [LATIN SMALL LETTER N PRECEDED BY APOSTROPHE]
-"\u0149" => "n"
-
-# ŋ  http://en.wikipedia.org/wiki/Eng_(letter)  [LATIN SMALL LETTER ENG]
-"\u014B" => "n"
-
-# ƞ  [LATIN SMALL LETTER N WITH LONG RIGHT LEG]
-"\u019E" => "n"
-
-# ǹ  [LATIN SMALL LETTER N WITH GRAVE]
-"\u01F9" => "n"
-
-# ȵ  [LATIN SMALL LETTER N WITH CURL]
-"\u0235" => "n"
-
-# ɲ  [LATIN SMALL LETTER N WITH LEFT HOOK]
-"\u0272" => "n"
-
-# ɳ  [LATIN SMALL LETTER N WITH RETROFLEX HOOK]
-"\u0273" => "n"
-
-# ᵰ  [LATIN SMALL LETTER N WITH MIDDLE TILDE]
-"\u1D70" => "n"
-
-# ᶇ  [LATIN SMALL LETTER N WITH PALATAL HOOK]
-"\u1D87" => "n"
-
-# ṅ  [LATIN SMALL LETTER N WITH DOT ABOVE]
-"\u1E45" => "n"
-
-# ṇ  [LATIN SMALL LETTER N WITH DOT BELOW]
-"\u1E47" => "n"
-
-# ṉ  [LATIN SMALL LETTER N WITH LINE BELOW]
-"\u1E49" => "n"
-
-# ṋ  [LATIN SMALL LETTER N WITH CIRCUMFLEX BELOW]
-"\u1E4B" => "n"
-
-# ⁿ  [SUPERSCRIPT LATIN SMALL LETTER N]
-"\u207F" => "n"
-
-# ⓝ  [CIRCLED LATIN SMALL LETTER N]
-"\u24DD" => "n"
-
-# ｎ  [FULLWIDTH LATIN SMALL LETTER N]
-"\uFF4E" => "n"
-
-# Ǌ  [LATIN CAPITAL LETTER NJ]
-"\u01CA" => "NJ"
-
-# ǋ  [LATIN CAPITAL LETTER N WITH SMALL LETTER J]
-"\u01CB" => "Nj"
-
-# ⒩  [PARENTHESIZED LATIN SMALL LETTER N]
-"\u24A9" => "(n)"
-
-# ǌ  [LATIN SMALL LETTER NJ]
-"\u01CC" => "nj"
-
-# Ò  [LATIN CAPITAL LETTER O WITH GRAVE]
-"\u00D2" => "O"
-
-# Ó  [LATIN CAPITAL LETTER O WITH ACUTE]
-"\u00D3" => "O"
-
-# Ô  [LATIN CAPITAL LETTER O WITH CIRCUMFLEX]
-"\u00D4" => "O"
-
-# Õ  [LATIN CAPITAL LETTER O WITH TILDE]
-"\u00D5" => "O"
-
-# Ö  [LATIN CAPITAL LETTER O WITH DIAERESIS]
-"\u00D6" => "O"
-
-# Ø  [LATIN CAPITAL LETTER O WITH STROKE]
-"\u00D8" => "O"
-
-# Ō  [LATIN CAPITAL LETTER O WITH MACRON]
-"\u014C" => "O"
-
-# Ŏ  [LATIN CAPITAL LETTER O WITH BREVE]
-"\u014E" => "O"
-
-# Ő  [LATIN CAPITAL LETTER O WITH DOUBLE ACUTE]
-"\u0150" => "O"
-
-# Ɔ  [LATIN CAPITAL LETTER OPEN O]
-"\u0186" => "O"
-
-# Ɵ  [LATIN CAPITAL LETTER O WITH MIDDLE TILDE]
-"\u019F" => "O"
-
-# Ơ  [LATIN CAPITAL LETTER O WITH HORN]
-"\u01A0" => "O"
-
-# Ǒ  [LATIN CAPITAL LETTER O WITH CARON]
-"\u01D1" => "O"
-
-# Ǫ  [LATIN CAPITAL LETTER O WITH OGONEK]
-"\u01EA" => "O"
-
-# Ǭ  [LATIN CAPITAL LETTER O WITH OGONEK AND MACRON]
-"\u01EC" => "O"
-
-# Ǿ  [LATIN CAPITAL LETTER O WITH STROKE AND ACUTE]
-"\u01FE" => "O"
-
-# Ȍ  [LATIN CAPITAL LETTER O WITH DOUBLE GRAVE]
-"\u020C" => "O"
-
-# Ȏ  [LATIN CAPITAL LETTER O WITH INVERTED BREVE]
-"\u020E" => "O"
-
-# Ȫ  [LATIN CAPITAL LETTER O WITH DIAERESIS AND MACRON]
-"\u022A" => "O"
-
-# Ȭ  [LATIN CAPITAL LETTER O WITH TILDE AND MACRON]
-"\u022C" => "O"
-
-# Ȯ  [LATIN CAPITAL LETTER O WITH DOT ABOVE]
-"\u022E" => "O"
-
-# Ȱ  [LATIN CAPITAL LETTER O WITH DOT ABOVE AND MACRON]
-"\u0230" => "O"
-
-# ᴏ  [LATIN LETTER SMALL CAPITAL O]
-"\u1D0F" => "O"
-
-# ᴐ  [LATIN LETTER SMALL CAPITAL OPEN O]
-"\u1D10" => "O"
-
-# Ṍ  [LATIN CAPITAL LETTER O WITH TILDE AND ACUTE]
-"\u1E4C" => "O"
-
-# Ṏ  [LATIN CAPITAL LETTER O WITH TILDE AND DIAERESIS]
-"\u1E4E" => "O"
-
-# Ṑ  [LATIN CAPITAL LETTER O WITH MACRON AND GRAVE]
-"\u1E50" => "O"
-
-# Ṓ  [LATIN CAPITAL LETTER O WITH MACRON AND ACUTE]
-"\u1E52" => "O"
-
-# Ọ  [LATIN CAPITAL LETTER O WITH DOT BELOW]
-"\u1ECC" => "O"
-
-# Ỏ  [LATIN CAPITAL LETTER O WITH HOOK ABOVE]
-"\u1ECE" => "O"
-
-# Ố  [LATIN CAPITAL LETTER O WITH CIRCUMFLEX AND ACUTE]
-"\u1ED0" => "O"
-
-# Ồ  [LATIN CAPITAL LETTER O WITH CIRCUMFLEX AND GRAVE]
-"\u1ED2" => "O"
-
-# Ổ  [LATIN CAPITAL LETTER O WITH CIRCUMFLEX AND HOOK ABOVE]
-"\u1ED4" => "O"
-
-# Ỗ  [LATIN CAPITAL LETTER O WITH CIRCUMFLEX AND TILDE]
-"\u1ED6" => "O"
-
-# Ộ  [LATIN CAPITAL LETTER O WITH CIRCUMFLEX AND DOT BELOW]
-"\u1ED8" => "O"
-
-# Ớ  [LATIN CAPITAL LETTER O WITH HORN AND ACUTE]
-"\u1EDA" => "O"
-
-# Ờ  [LATIN CAPITAL LETTER O WITH HORN AND GRAVE]
-"\u1EDC" => "O"
-
-# Ở  [LATIN CAPITAL LETTER O WITH HORN AND HOOK ABOVE]
-"\u1EDE" => "O"
-
-# Ỡ  [LATIN CAPITAL LETTER O WITH HORN AND TILDE]
-"\u1EE0" => "O"
-
-# Ợ  [LATIN CAPITAL LETTER O WITH HORN AND DOT BELOW]
-"\u1EE2" => "O"
-
-# Ⓞ  [CIRCLED LATIN CAPITAL LETTER O]
-"\u24C4" => "O"
-
-# Ꝋ  [LATIN CAPITAL LETTER O WITH LONG STROKE OVERLAY]
-"\uA74A" => "O"
-
-# Ꝍ  [LATIN CAPITAL LETTER O WITH LOOP]
-"\uA74C" => "O"
-
-# Ｏ  [FULLWIDTH LATIN CAPITAL LETTER O]
-"\uFF2F" => "O"
-
-# ò  [LATIN SMALL LETTER O WITH GRAVE]
-"\u00F2" => "o"
-
-# ó  [LATIN SMALL LETTER O WITH ACUTE]
-"\u00F3" => "o"
-
-# ô  [LATIN SMALL LETTER O WITH CIRCUMFLEX]
-"\u00F4" => "o"
-
-# õ  [LATIN SMALL LETTER O WITH TILDE]
-"\u00F5" => "o"
-
-# ö  [LATIN SMALL LETTER O WITH DIAERESIS]
-"\u00F6" => "o"
-
-# ø  [LATIN SMALL LETTER O WITH STROKE]
-"\u00F8" => "o"
-
-# ō  [LATIN SMALL LETTER O WITH MACRON]
-"\u014D" => "o"
-
-# ŏ  [LATIN SMALL LETTER O WITH BREVE]
-"\u014F" => "o"
-
-# ő  [LATIN SMALL LETTER O WITH DOUBLE ACUTE]
-"\u0151" => "o"
-
-# ơ  [LATIN SMALL LETTER O WITH HORN]
-"\u01A1" => "o"
-
-# ǒ  [LATIN SMALL LETTER O WITH CARON]
-"\u01D2" => "o"
-
-# ǫ  [LATIN SMALL LETTER O WITH OGONEK]
-"\u01EB" => "o"
-
-# ǭ  [LATIN SMALL LETTER O WITH OGONEK AND MACRON]
-"\u01ED" => "o"
-
-# ǿ  [LATIN SMALL LETTER O WITH STROKE AND ACUTE]
-"\u01FF" => "o"
-
-# ȍ  [LATIN SMALL LETTER O WITH DOUBLE GRAVE]
-"\u020D" => "o"
-
-# ȏ  [LATIN SMALL LETTER O WITH INVERTED BREVE]
-"\u020F" => "o"
-
-# ȫ  [LATIN SMALL LETTER O WITH DIAERESIS AND MACRON]
-"\u022B" => "o"
-
-# ȭ  [LATIN SMALL LETTER O WITH TILDE AND MACRON]
-"\u022D" => "o"
-
-# ȯ  [LATIN SMALL LETTER O WITH DOT ABOVE]
-"\u022F" => "o"
-
-# ȱ  [LATIN SMALL LETTER O WITH DOT ABOVE AND MACRON]
-"\u0231" => "o"
-
-# ɔ  [LATIN SMALL LETTER OPEN O]
-"\u0254" => "o"
-
-# ɵ  [LATIN SMALL LETTER BARRED O]
-"\u0275" => "o"
-
-# ᴖ  [LATIN SMALL LETTER TOP HALF O]
-"\u1D16" => "o"
-
-# ᴗ  [LATIN SMALL LETTER BOTTOM HALF O]
-"\u1D17" => "o"
-
-# ᶗ  [LATIN SMALL LETTER OPEN O WITH RETROFLEX HOOK]
-"\u1D97" => "o"
-
-# ṍ  [LATIN SMALL LETTER O WITH TILDE AND ACUTE]
-"\u1E4D" => "o"
-
-# ṏ  [LATIN SMALL LETTER O WITH TILDE AND DIAERESIS]
-"\u1E4F" => "o"
-
-# ṑ  [LATIN SMALL LETTER O WITH MACRON AND GRAVE]
-"\u1E51" => "o"
-
-# ṓ  [LATIN SMALL LETTER O WITH MACRON AND ACUTE]
-"\u1E53" => "o"
-
-# ọ  [LATIN SMALL LETTER O WITH DOT BELOW]
-"\u1ECD" => "o"
-
-# ỏ  [LATIN SMALL LETTER O WITH HOOK ABOVE]
-"\u1ECF" => "o"
-
-# ố  [LATIN SMALL LETTER O WITH CIRCUMFLEX AND ACUTE]
-"\u1ED1" => "o"
-
-# ồ  [LATIN SMALL LETTER O WITH CIRCUMFLEX AND GRAVE]
-"\u1ED3" => "o"
-
-# ổ  [LATIN SMALL LETTER O WITH CIRCUMFLEX AND HOOK ABOVE]
-"\u1ED5" => "o"
-
-# ỗ  [LATIN SMALL LETTER O WITH CIRCUMFLEX AND TILDE]
-"\u1ED7" => "o"
-
-# ộ  [LATIN SMALL LETTER O WITH CIRCUMFLEX AND DOT BELOW]
-"\u1ED9" => "o"
-
-# ớ  [LATIN SMALL LETTER O WITH HORN AND ACUTE]
-"\u1EDB" => "o"
-
-# ờ  [LATIN SMALL LETTER O WITH HORN AND GRAVE]
-"\u1EDD" => "o"
-
-# ở  [LATIN SMALL LETTER O WITH HORN AND HOOK ABOVE]
-"\u1EDF" => "o"
-
-# ỡ  [LATIN SMALL LETTER O WITH HORN AND TILDE]
-"\u1EE1" => "o"
-
-# ợ  [LATIN SMALL LETTER O WITH HORN AND DOT BELOW]
-"\u1EE3" => "o"
-
-# ₒ  [LATIN SUBSCRIPT SMALL LETTER O]
-"\u2092" => "o"
-
-# ⓞ  [CIRCLED LATIN SMALL LETTER O]
-"\u24DE" => "o"
-
-# ⱺ  [LATIN SMALL LETTER O WITH LOW RING INSIDE]
-"\u2C7A" => "o"
-
-# ꝋ  [LATIN SMALL LETTER O WITH LONG STROKE OVERLAY]
-"\uA74B" => "o"
-
-# ꝍ  [LATIN SMALL LETTER O WITH LOOP]
-"\uA74D" => "o"
-
-# ｏ  [FULLWIDTH LATIN SMALL LETTER O]
-"\uFF4F" => "o"
-
-# Œ  [LATIN CAPITAL LIGATURE OE]
-"\u0152" => "OE"
-
-# ɶ  [LATIN LETTER SMALL CAPITAL OE]
-"\u0276" => "OE"
-
-# Ꝏ  [LATIN CAPITAL LETTER OO]
-"\uA74E" => "OO"
-
-# Ȣ  http://en.wikipedia.org/wiki/OU  [LATIN CAPITAL LETTER OU]
-"\u0222" => "OU"
-
-# ᴕ  [LATIN LETTER SMALL CAPITAL OU]
-"\u1D15" => "OU"
-
-# ⒪  [PARENTHESIZED LATIN SMALL LETTER O]
-"\u24AA" => "(o)"
-
-# œ  [LATIN SMALL LIGATURE OE]
-"\u0153" => "oe"
-
-# ᴔ  [LATIN SMALL LETTER TURNED OE]
-"\u1D14" => "oe"
-
-# ꝏ  [LATIN SMALL LETTER OO]
-"\uA74F" => "oo"
-
-# ȣ  http://en.wikipedia.org/wiki/OU  [LATIN SMALL LETTER OU]
-"\u0223" => "ou"
-
-# Ƥ  [LATIN CAPITAL LETTER P WITH HOOK]
-"\u01A4" => "P"
-
-# ᴘ  [LATIN LETTER SMALL CAPITAL P]
-"\u1D18" => "P"
-
-# Ṕ  [LATIN CAPITAL LETTER P WITH ACUTE]
-"\u1E54" => "P"
-
-# Ṗ  [LATIN CAPITAL LETTER P WITH DOT ABOVE]
-"\u1E56" => "P"
-
-# Ⓟ  [CIRCLED LATIN CAPITAL LETTER P]
-"\u24C5" => "P"
-
-# Ᵽ  [LATIN CAPITAL LETTER P WITH STROKE]
-"\u2C63" => "P"
-
-# Ꝑ  [LATIN CAPITAL LETTER P WITH STROKE THROUGH DESCENDER]
-"\uA750" => "P"
-
-# Ꝓ  [LATIN CAPITAL LETTER P WITH FLOURISH]
-"\uA752" => "P"
-
-# Ꝕ  [LATIN CAPITAL LETTER P WITH SQUIRREL TAIL]
-"\uA754" => "P"
-
-# Ｐ  [FULLWIDTH LATIN CAPITAL LETTER P]
-"\uFF30" => "P"
-
-# ƥ  [LATIN SMALL LETTER P WITH HOOK]
-"\u01A5" => "p"
-
-# ᵱ  [LATIN SMALL LETTER P WITH MIDDLE TILDE]
-"\u1D71" => "p"
-
-# ᵽ  [LATIN SMALL LETTER P WITH STROKE]
-"\u1D7D" => "p"
-
-# ᶈ  [LATIN SMALL LETTER P WITH PALATAL HOOK]
-"\u1D88" => "p"
-
-# ṕ  [LATIN SMALL LETTER P WITH ACUTE]
-"\u1E55" => "p"
-
-# ṗ  [LATIN SMALL LETTER P WITH DOT ABOVE]
-"\u1E57" => "p"
-
-# ⓟ  [CIRCLED LATIN SMALL LETTER P]
-"\u24DF" => "p"
-
-# ꝑ  [LATIN SMALL LETTER P WITH STROKE THROUGH DESCENDER]
-"\uA751" => "p"
-
-# ꝓ  [LATIN SMALL LETTER P WITH FLOURISH]
-"\uA753" => "p"
-
-# ꝕ  [LATIN SMALL LETTER P WITH SQUIRREL TAIL]
-"\uA755" => "p"
-
-# ꟼ  [LATIN EPIGRAPHIC LETTER REVERSED P]
-"\uA7FC" => "p"
-
-# ｐ  [FULLWIDTH LATIN SMALL LETTER P]
-"\uFF50" => "p"
-
-# ⒫  [PARENTHESIZED LATIN SMALL LETTER P]
-"\u24AB" => "(p)"
-
-# Ɋ  [LATIN CAPITAL LETTER SMALL Q WITH HOOK TAIL]
-"\u024A" => "Q"
-
-# Ⓠ  [CIRCLED LATIN CAPITAL LETTER Q]
-"\u24C6" => "Q"
-
-# Ꝗ  [LATIN CAPITAL LETTER Q WITH STROKE THROUGH DESCENDER]
-"\uA756" => "Q"
-
-# Ꝙ  [LATIN CAPITAL LETTER Q WITH DIAGONAL STROKE]
-"\uA758" => "Q"
-
-# Ｑ  [FULLWIDTH LATIN CAPITAL LETTER Q]
-"\uFF31" => "Q"
-
-# ĸ  http://en.wikipedia.org/wiki/Kra_(letter)  [LATIN SMALL LETTER KRA]
-"\u0138" => "q"
-
-# ɋ  [LATIN SMALL LETTER Q WITH HOOK TAIL]
-"\u024B" => "q"
-
-# ʠ  [LATIN SMALL LETTER Q WITH HOOK]
-"\u02A0" => "q"
-
-# ⓠ  [CIRCLED LATIN SMALL LETTER Q]
-"\u24E0" => "q"
-
-# ꝗ  [LATIN SMALL LETTER Q WITH STROKE THROUGH DESCENDER]
-"\uA757" => "q"
-
-# ꝙ  [LATIN SMALL LETTER Q WITH DIAGONAL STROKE]
-"\uA759" => "q"
-
-# ｑ  [FULLWIDTH LATIN SMALL LETTER Q]
-"\uFF51" => "q"
-
-# ⒬  [PARENTHESIZED LATIN SMALL LETTER Q]
-"\u24AC" => "(q)"
-
-# ȹ  [LATIN SMALL LETTER QP DIGRAPH]
-"\u0239" => "qp"
-
-# Ŕ  [LATIN CAPITAL LETTER R WITH ACUTE]
-"\u0154" => "R"
-
-# Ŗ  [LATIN CAPITAL LETTER R WITH CEDILLA]
-"\u0156" => "R"
-
-# Ř  [LATIN CAPITAL LETTER R WITH CARON]
-"\u0158" => "R"
-
-# Ȑ  [LATIN CAPITAL LETTER R WITH DOUBLE GRAVE]
-"\u0210" => "R"
-
-# Ȓ  [LATIN CAPITAL LETTER R WITH INVERTED BREVE]
-"\u0212" => "R"
-
-# Ɍ  [LATIN CAPITAL LETTER R WITH STROKE]
-"\u024C" => "R"
-
-# ʀ  [LATIN LETTER SMALL CAPITAL R]
-"\u0280" => "R"
-
-# ʁ  [LATIN LETTER SMALL CAPITAL INVERTED R]
-"\u0281" => "R"
-
-# ᴙ  [LATIN LETTER SMALL CAPITAL REVERSED R]
-"\u1D19" => "R"
-
-# ᴚ  [LATIN LETTER SMALL CAPITAL TURNED R]
-"\u1D1A" => "R"
-
-# Ṙ  [LATIN CAPITAL LETTER R WITH DOT ABOVE]
-"\u1E58" => "R"
-
-# Ṛ  [LATIN CAPITAL LETTER R WITH DOT BELOW]
-"\u1E5A" => "R"
-
-# Ṝ  [LATIN CAPITAL LETTER R WITH DOT BELOW AND MACRON]
-"\u1E5C" => "R"
-
-# Ṟ  [LATIN CAPITAL LETTER R WITH LINE BELOW]
-"\u1E5E" => "R"
-
-# Ⓡ  [CIRCLED LATIN CAPITAL LETTER R]
-"\u24C7" => "R"
-
-# Ɽ  [LATIN CAPITAL LETTER R WITH TAIL]
-"\u2C64" => "R"
-
-# Ꝛ  [LATIN CAPITAL LETTER R ROTUNDA]
-"\uA75A" => "R"
-
-# Ꞃ  [LATIN CAPITAL LETTER INSULAR R]
-"\uA782" => "R"
-
-# Ｒ  [FULLWIDTH LATIN CAPITAL LETTER R]
-"\uFF32" => "R"
-
-# ŕ  [LATIN SMALL LETTER R WITH ACUTE]
-"\u0155" => "r"
-
-# ŗ  [LATIN SMALL LETTER R WITH CEDILLA]
-"\u0157" => "r"
-
-# ř  [LATIN SMALL LETTER R WITH CARON]
-"\u0159" => "r"
-
-# ȑ  [LATIN SMALL LETTER R WITH DOUBLE GRAVE]
-"\u0211" => "r"
-
-# ȓ  [LATIN SMALL LETTER R WITH INVERTED BREVE]
-"\u0213" => "r"
-
-# ɍ  [LATIN SMALL LETTER R WITH STROKE]
-"\u024D" => "r"
-
-# ɼ  [LATIN SMALL LETTER R WITH LONG LEG]
-"\u027C" => "r"
-
-# ɽ  [LATIN SMALL LETTER R WITH TAIL]
-"\u027D" => "r"
-
-# ɾ  [LATIN SMALL LETTER R WITH FISHHOOK]
-"\u027E" => "r"
-
-# ɿ  [LATIN SMALL LETTER REVERSED R WITH FISHHOOK]
-"\u027F" => "r"
-
-# ᵣ  [LATIN SUBSCRIPT SMALL LETTER R]
-"\u1D63" => "r"
-
-# ᵲ  [LATIN SMALL LETTER R WITH MIDDLE TILDE]
-"\u1D72" => "r"
-
-# ᵳ  [LATIN SMALL LETTER R WITH FISHHOOK AND MIDDLE TILDE]
-"\u1D73" => "r"
-
-# ᶉ  [LATIN SMALL LETTER R WITH PALATAL HOOK]
-"\u1D89" => "r"
-
-# ṙ  [LATIN SMALL LETTER R WITH DOT ABOVE]
-"\u1E59" => "r"
-
-# ṛ  [LATIN SMALL LETTER R WITH DOT BELOW]
-"\u1E5B" => "r"
-
-# ṝ  [LATIN SMALL LETTER R WITH DOT BELOW AND MACRON]
-"\u1E5D" => "r"
-
-# ṟ  [LATIN SMALL LETTER R WITH LINE BELOW]
-"\u1E5F" => "r"
-
-# ⓡ  [CIRCLED LATIN SMALL LETTER R]
-"\u24E1" => "r"
-
-# ꝛ  [LATIN SMALL LETTER R ROTUNDA]
-"\uA75B" => "r"
-
-# ꞃ  [LATIN SMALL LETTER INSULAR R]
-"\uA783" => "r"
-
-# ｒ  [FULLWIDTH LATIN SMALL LETTER R]
-"\uFF52" => "r"
-
-# ⒭  [PARENTHESIZED LATIN SMALL LETTER R]
-"\u24AD" => "(r)"
-
-# Ś  [LATIN CAPITAL LETTER S WITH ACUTE]
-"\u015A" => "S"
-
-# Ŝ  [LATIN CAPITAL LETTER S WITH CIRCUMFLEX]
-"\u015C" => "S"
-
-# Ş  [LATIN CAPITAL LETTER S WITH CEDILLA]
-"\u015E" => "S"
-
-# Š  [LATIN CAPITAL LETTER S WITH CARON]
-"\u0160" => "S"
-
-# Ș  [LATIN CAPITAL LETTER S WITH COMMA BELOW]
-"\u0218" => "S"
-
-# Ṡ  [LATIN CAPITAL LETTER S WITH DOT ABOVE]
-"\u1E60" => "S"
-
-# Ṣ  [LATIN CAPITAL LETTER S WITH DOT BELOW]
-"\u1E62" => "S"
-
-# Ṥ  [LATIN CAPITAL LETTER S WITH ACUTE AND DOT ABOVE]
-"\u1E64" => "S"
-
-# Ṧ  [LATIN CAPITAL LETTER S WITH CARON AND DOT ABOVE]
-"\u1E66" => "S"
-
-# Ṩ  [LATIN CAPITAL LETTER S WITH DOT BELOW AND DOT ABOVE]
-"\u1E68" => "S"
-
-# Ⓢ  [CIRCLED LATIN CAPITAL LETTER S]
-"\u24C8" => "S"
-
-# ꜱ  [LATIN LETTER SMALL CAPITAL S]
-"\uA731" => "S"
-
-# ꞅ  [LATIN SMALL LETTER INSULAR S]
-"\uA785" => "S"
-
-# Ｓ  [FULLWIDTH LATIN CAPITAL LETTER S]
-"\uFF33" => "S"
-
-# ś  [LATIN SMALL LETTER S WITH ACUTE]
-"\u015B" => "s"
-
-# ŝ  [LATIN SMALL LETTER S WITH CIRCUMFLEX]
-"\u015D" => "s"
-
-# ş  [LATIN SMALL LETTER S WITH CEDILLA]
-"\u015F" => "s"
-
-# š  [LATIN SMALL LETTER S WITH CARON]
-"\u0161" => "s"
-
-# ſ  http://en.wikipedia.org/wiki/Long_S  [LATIN SMALL LETTER LONG S]
-"\u017F" => "s"
-
-# ș  [LATIN SMALL LETTER S WITH COMMA BELOW]
-"\u0219" => "s"
-
-# ȿ  [LATIN SMALL LETTER S WITH SWASH TAIL]
-"\u023F" => "s"
-
-# ʂ  [LATIN SMALL LETTER S WITH HOOK]
-"\u0282" => "s"
-
-# ᵴ  [LATIN SMALL LETTER S WITH MIDDLE TILDE]
-"\u1D74" => "s"
-
-# ᶊ  [LATIN SMALL LETTER S WITH PALATAL HOOK]
-"\u1D8A" => "s"
-
-# ṡ  [LATIN SMALL LETTER S WITH DOT ABOVE]
-"\u1E61" => "s"
-
-# ṣ  [LATIN SMALL LETTER S WITH DOT BELOW]
-"\u1E63" => "s"
-
-# ṥ  [LATIN SMALL LETTER S WITH ACUTE AND DOT ABOVE]
-"\u1E65" => "s"
-
-# ṧ  [LATIN SMALL LETTER S WITH CARON AND DOT ABOVE]
-"\u1E67" => "s"
-
-# ṩ  [LATIN SMALL LETTER S WITH DOT BELOW AND DOT ABOVE]
-"\u1E69" => "s"
-
-# ẜ  [LATIN SMALL LETTER LONG S WITH DIAGONAL STROKE]
-"\u1E9C" => "s"
-
-# ẝ  [LATIN SMALL LETTER LONG S WITH HIGH STROKE]
-"\u1E9D" => "s"
-
-# ⓢ  [CIRCLED LATIN SMALL LETTER S]
-"\u24E2" => "s"
-
-# Ꞅ  [LATIN CAPITAL LETTER INSULAR S]
-"\uA784" => "s"
-
-# ｓ  [FULLWIDTH LATIN SMALL LETTER S]
-"\uFF53" => "s"
-
-# ẞ  [LATIN CAPITAL LETTER SHARP S]
-"\u1E9E" => "SS"
-
-# ⒮  [PARENTHESIZED LATIN SMALL LETTER S]
-"\u24AE" => "(s)"
-
-# ß  [LATIN SMALL LETTER SHARP S]
-"\u00DF" => "ss"
-
-# ﬆ  [LATIN SMALL LIGATURE ST]
-"\uFB06" => "st"
-
-# Ţ  [LATIN CAPITAL LETTER T WITH CEDILLA]
-"\u0162" => "T"
-
-# Ť  [LATIN CAPITAL LETTER T WITH CARON]
-"\u0164" => "T"
-
-# Ŧ  [LATIN CAPITAL LETTER T WITH STROKE]
-"\u0166" => "T"
-
-# Ƭ  [LATIN CAPITAL LETTER T WITH HOOK]
-"\u01AC" => "T"
-
-# Ʈ  [LATIN CAPITAL LETTER T WITH RETROFLEX HOOK]
-"\u01AE" => "T"
-
-# Ț  [LATIN CAPITAL LETTER T WITH COMMA BELOW]
-"\u021A" => "T"
-
-# Ⱦ  [LATIN CAPITAL LETTER T WITH DIAGONAL STROKE]
-"\u023E" => "T"
-
-# ᴛ  [LATIN LETTER SMALL CAPITAL T]
-"\u1D1B" => "T"
-
-# Ṫ  [LATIN CAPITAL LETTER T WITH DOT ABOVE]
-"\u1E6A" => "T"
-
-# Ṭ  [LATIN CAPITAL LETTER T WITH DOT BELOW]
-"\u1E6C" => "T"
-
-# Ṯ  [LATIN CAPITAL LETTER T WITH LINE BELOW]
-"\u1E6E" => "T"
-
-# Ṱ  [LATIN CAPITAL LETTER T WITH CIRCUMFLEX BELOW]
-"\u1E70" => "T"
-
-# Ⓣ  [CIRCLED LATIN CAPITAL LETTER T]
-"\u24C9" => "T"
-
-# Ꞇ  [LATIN CAPITAL LETTER INSULAR T]
-"\uA786" => "T"
-
-# Ｔ  [FULLWIDTH LATIN CAPITAL LETTER T]
-"\uFF34" => "T"
-
-# ţ  [LATIN SMALL LETTER T WITH CEDILLA]
-"\u0163" => "t"
-
-# ť  [LATIN SMALL LETTER T WITH CARON]
-"\u0165" => "t"
-
-# ŧ  [LATIN SMALL LETTER T WITH STROKE]
-"\u0167" => "t"
-
-# ƫ  [LATIN SMALL LETTER T WITH PALATAL HOOK]
-"\u01AB" => "t"
-
-# ƭ  [LATIN SMALL LETTER T WITH HOOK]
-"\u01AD" => "t"
-
-# ț  [LATIN SMALL LETTER T WITH COMMA BELOW]
-"\u021B" => "t"
-
-# ȶ  [LATIN SMALL LETTER T WITH CURL]
-"\u0236" => "t"
-
-# ʇ  [LATIN SMALL LETTER TURNED T]
-"\u0287" => "t"
-
-# ʈ  [LATIN SMALL LETTER T WITH RETROFLEX HOOK]
-"\u0288" => "t"
-
-# ᵵ  [LATIN SMALL LETTER T WITH MIDDLE TILDE]
-"\u1D75" => "t"
-
-# ṫ  [LATIN SMALL LETTER T WITH DOT ABOVE]
-"\u1E6B" => "t"
-
-# ṭ  [LATIN SMALL LETTER T WITH DOT BELOW]
-"\u1E6D" => "t"
-
-# ṯ  [LATIN SMALL LETTER T WITH LINE BELOW]
-"\u1E6F" => "t"
-
-# ṱ  [LATIN SMALL LETTER T WITH CIRCUMFLEX BELOW]
-"\u1E71" => "t"
-
-# ẗ  [LATIN SMALL LETTER T WITH DIAERESIS]
-"\u1E97" => "t"
-
-# ⓣ  [CIRCLED LATIN SMALL LETTER T]
-"\u24E3" => "t"
-
-# ⱦ  [LATIN SMALL LETTER T WITH DIAGONAL STROKE]
-"\u2C66" => "t"
-
-# ｔ  [FULLWIDTH LATIN SMALL LETTER T]
-"\uFF54" => "t"
-
-# Þ  [LATIN CAPITAL LETTER THORN]
-"\u00DE" => "TH"
-
-# Ꝧ  [LATIN CAPITAL LETTER THORN WITH STROKE THROUGH DESCENDER]
-"\uA766" => "TH"
-
-# Ꜩ  [LATIN CAPITAL LETTER TZ]
-"\uA728" => "TZ"
-
-# ⒯  [PARENTHESIZED LATIN SMALL LETTER T]
-"\u24AF" => "(t)"
-
-# ʨ  [LATIN SMALL LETTER TC DIGRAPH WITH CURL]
-"\u02A8" => "tc"
-
-# þ  [LATIN SMALL LETTER THORN]
-"\u00FE" => "th"
-
-# ᵺ  [LATIN SMALL LETTER TH WITH STRIKETHROUGH]
-"\u1D7A" => "th"
-
-# ꝧ  [LATIN SMALL LETTER THORN WITH STROKE THROUGH DESCENDER]
-"\uA767" => "th"
-
-# ʦ  [LATIN SMALL LETTER TS DIGRAPH]
-"\u02A6" => "ts"
-
-# ꜩ  [LATIN SMALL LETTER TZ]
-"\uA729" => "tz"
-
-# Ù  [LATIN CAPITAL LETTER U WITH GRAVE]
-"\u00D9" => "U"
-
-# Ú  [LATIN CAPITAL LETTER U WITH ACUTE]
-"\u00DA" => "U"
-
-# Û  [LATIN CAPITAL LETTER U WITH CIRCUMFLEX]
-"\u00DB" => "U"
-
-# Ü  [LATIN CAPITAL LETTER U WITH DIAERESIS]
-"\u00DC" => "U"
-
-# Ũ  [LATIN CAPITAL LETTER U WITH TILDE]
-"\u0168" => "U"
-
-# Ū  [LATIN CAPITAL LETTER U WITH MACRON]
-"\u016A" => "U"
-
-# Ŭ  [LATIN CAPITAL LETTER U WITH BREVE]
-"\u016C" => "U"
-
-# Ů  [LATIN CAPITAL LETTER U WITH RING ABOVE]
-"\u016E" => "U"
-
-# Ű  [LATIN CAPITAL LETTER U WITH DOUBLE ACUTE]
-"\u0170" => "U"
-
-# Ų  [LATIN CAPITAL LETTER U WITH OGONEK]
-"\u0172" => "U"
-
-# Ư  [LATIN CAPITAL LETTER U WITH HORN]
-"\u01AF" => "U"
-
-# Ǔ  [LATIN CAPITAL LETTER U WITH CARON]
-"\u01D3" => "U"
-
-# Ǖ  [LATIN CAPITAL LETTER U WITH DIAERESIS AND MACRON]
-"\u01D5" => "U"
-
-# Ǘ  [LATIN CAPITAL LETTER U WITH DIAERESIS AND ACUTE]
-"\u01D7" => "U"
-
-# Ǚ  [LATIN CAPITAL LETTER U WITH DIAERESIS AND CARON]
-"\u01D9" => "U"
-
-# Ǜ  [LATIN CAPITAL LETTER U WITH DIAERESIS AND GRAVE]
-"\u01DB" => "U"
-
-# Ȕ  [LATIN CAPITAL LETTER U WITH DOUBLE GRAVE]
-"\u0214" => "U"
-
-# Ȗ  [LATIN CAPITAL LETTER U WITH INVERTED BREVE]
-"\u0216" => "U"
-
-# Ʉ  [LATIN CAPITAL LETTER U BAR]
-"\u0244" => "U"
-
-# ᴜ  [LATIN LETTER SMALL CAPITAL U]
-"\u1D1C" => "U"
-
-# ᵾ  [LATIN SMALL CAPITAL LETTER U WITH STROKE]
-"\u1D7E" => "U"
-
-# Ṳ  [LATIN CAPITAL LETTER U WITH DIAERESIS BELOW]
-"\u1E72" => "U"
-
-# Ṵ  [LATIN CAPITAL LETTER U WITH TILDE BELOW]
-"\u1E74" => "U"
-
-# Ṷ  [LATIN CAPITAL LETTER U WITH CIRCUMFLEX BELOW]
-"\u1E76" => "U"
-
-# Ṹ  [LATIN CAPITAL LETTER U WITH TILDE AND ACUTE]
-"\u1E78" => "U"
-
-# Ṻ  [LATIN CAPITAL LETTER U WITH MACRON AND DIAERESIS]
-"\u1E7A" => "U"
-
-# Ụ  [LATIN CAPITAL LETTER U WITH DOT BELOW]
-"\u1EE4" => "U"
-
-# Ủ  [LATIN CAPITAL LETTER U WITH HOOK ABOVE]
-"\u1EE6" => "U"
-
-# Ứ  [LATIN CAPITAL LETTER U WITH HORN AND ACUTE]
-"\u1EE8" => "U"
-
-# Ừ  [LATIN CAPITAL LETTER U WITH HORN AND GRAVE]
-"\u1EEA" => "U"
-
-# Ử  [LATIN CAPITAL LETTER U WITH HORN AND HOOK ABOVE]
-"\u1EEC" => "U"
-
-# Ữ  [LATIN CAPITAL LETTER U WITH HORN AND TILDE]
-"\u1EEE" => "U"
-
-# Ự  [LATIN CAPITAL LETTER U WITH HORN AND DOT BELOW]
-"\u1EF0" => "U"
-
-# Ⓤ  [CIRCLED LATIN CAPITAL LETTER U]
-"\u24CA" => "U"
-
-# Ｕ  [FULLWIDTH LATIN CAPITAL LETTER U]
-"\uFF35" => "U"
-
-# ù  [LATIN SMALL LETTER U WITH GRAVE]
-"\u00F9" => "u"
-
-# ú  [LATIN SMALL LETTER U WITH ACUTE]
-"\u00FA" => "u"
-
-# û  [LATIN SMALL LETTER U WITH CIRCUMFLEX]
-"\u00FB" => "u"
-
-# ü  [LATIN SMALL LETTER U WITH DIAERESIS]
-"\u00FC" => "u"
-
-# ũ  [LATIN SMALL LETTER U WITH TILDE]
-"\u0169" => "u"
-
-# ū  [LATIN SMALL LETTER U WITH MACRON]
-"\u016B" => "u"
-
-# ŭ  [LATIN SMALL LETTER U WITH BREVE]
-"\u016D" => "u"
-
-# ů  [LATIN SMALL LETTER U WITH RING ABOVE]
-"\u016F" => "u"
-
-# ű  [LATIN SMALL LETTER U WITH DOUBLE ACUTE]
-"\u0171" => "u"
-
-# ų  [LATIN SMALL LETTER U WITH OGONEK]
-"\u0173" => "u"
-
-# ư  [LATIN SMALL LETTER U WITH HORN]
-"\u01B0" => "u"
-
-# ǔ  [LATIN SMALL LETTER U WITH CARON]
-"\u01D4" => "u"
-
-# ǖ  [LATIN SMALL LETTER U WITH DIAERESIS AND MACRON]
-"\u01D6" => "u"
-
-# ǘ  [LATIN SMALL LETTER U WITH DIAERESIS AND ACUTE]
-"\u01D8" => "u"
-
-# ǚ  [LATIN SMALL LETTER U WITH DIAERESIS AND CARON]
-"\u01DA" => "u"
-
-# ǜ  [LATIN SMALL LETTER U WITH DIAERESIS AND GRAVE]
-"\u01DC" => "u"
-
-# ȕ  [LATIN SMALL LETTER U WITH DOUBLE GRAVE]
-"\u0215" => "u"
-
-# ȗ  [LATIN SMALL LETTER U WITH INVERTED BREVE]
-"\u0217" => "u"
-
-# ʉ  [LATIN SMALL LETTER U BAR]
-"\u0289" => "u"
-
-# ᵤ  [LATIN SUBSCRIPT SMALL LETTER U]
-"\u1D64" => "u"
-
-# ᶙ  [LATIN SMALL LETTER U WITH RETROFLEX HOOK]
-"\u1D99" => "u"
-
-# ṳ  [LATIN SMALL LETTER U WITH DIAERESIS BELOW]
-"\u1E73" => "u"
-
-# ṵ  [LATIN SMALL LETTER U WITH TILDE BELOW]
-"\u1E75" => "u"
-
-# ṷ  [LATIN SMALL LETTER U WITH CIRCUMFLEX BELOW]
-"\u1E77" => "u"
-
-# ṹ  [LATIN SMALL LETTER U WITH TILDE AND ACUTE]
-"\u1E79" => "u"
-
-# ṻ  [LATIN SMALL LETTER U WITH MACRON AND DIAERESIS]
-"\u1E7B" => "u"
-
-# ụ  [LATIN SMALL LETTER U WITH DOT BELOW]
-"\u1EE5" => "u"
-
-# ủ  [LATIN SMALL LETTER U WITH HOOK ABOVE]
-"\u1EE7" => "u"
-
-# ứ  [LATIN SMALL LETTER U WITH HORN AND ACUTE]
-"\u1EE9" => "u"
-
-# ừ  [LATIN SMALL LETTER U WITH HORN AND GRAVE]
-"\u1EEB" => "u"
-
-# ử  [LATIN SMALL LETTER U WITH HORN AND HOOK ABOVE]
-"\u1EED" => "u"
-
-# ữ  [LATIN SMALL LETTER U WITH HORN AND TILDE]
-"\u1EEF" => "u"
-
-# ự  [LATIN SMALL LETTER U WITH HORN AND DOT BELOW]
-"\u1EF1" => "u"
-
-# ⓤ  [CIRCLED LATIN SMALL LETTER U]
-"\u24E4" => "u"
-
-# ｕ  [FULLWIDTH LATIN SMALL LETTER U]
-"\uFF55" => "u"
-
-# ⒰  [PARENTHESIZED LATIN SMALL LETTER U]
-"\u24B0" => "(u)"
-
-# ᵫ  [LATIN SMALL LETTER UE]
-"\u1D6B" => "ue"
-
-# Ʋ  [LATIN CAPITAL LETTER V WITH HOOK]
-"\u01B2" => "V"
-
-# Ʌ  [LATIN CAPITAL LETTER TURNED V]
-"\u0245" => "V"
-
-# ᴠ  [LATIN LETTER SMALL CAPITAL V]
-"\u1D20" => "V"
-
-# Ṽ  [LATIN CAPITAL LETTER V WITH TILDE]
-"\u1E7C" => "V"
-
-# Ṿ  [LATIN CAPITAL LETTER V WITH DOT BELOW]
-"\u1E7E" => "V"
-
-# Ỽ  [LATIN CAPITAL LETTER MIDDLE-WELSH V]
-"\u1EFC" => "V"
-
-# Ⓥ  [CIRCLED LATIN CAPITAL LETTER V]
-"\u24CB" => "V"
-
-# Ꝟ  [LATIN CAPITAL LETTER V WITH DIAGONAL STROKE]
-"\uA75E" => "V"
-
-# Ꝩ  [LATIN CAPITAL LETTER VEND]
-"\uA768" => "V"
-
-# Ｖ  [FULLWIDTH LATIN CAPITAL LETTER V]
-"\uFF36" => "V"
-
-# ʋ  [LATIN SMALL LETTER V WITH HOOK]
-"\u028B" => "v"
-
-# ʌ  [LATIN SMALL LETTER TURNED V]
-"\u028C" => "v"
-
-# ᵥ  [LATIN SUBSCRIPT SMALL LETTER V]
-"\u1D65" => "v"
-
-# ᶌ  [LATIN SMALL LETTER V WITH PALATAL HOOK]
-"\u1D8C" => "v"
-
-# ṽ  [LATIN SMALL LETTER V WITH TILDE]
-"\u1E7D" => "v"
-
-# ṿ  [LATIN SMALL LETTER V WITH DOT BELOW]
-"\u1E7F" => "v"
-
-# ⓥ  [CIRCLED LATIN SMALL LETTER V]
-"\u24E5" => "v"
-
-# ⱱ  [LATIN SMALL LETTER V WITH RIGHT HOOK]
-"\u2C71" => "v"
-
-# ⱴ  [LATIN SMALL LETTER V WITH CURL]
-"\u2C74" => "v"
-
-# ꝟ  [LATIN SMALL LETTER V WITH DIAGONAL STROKE]
-"\uA75F" => "v"
-
-# ｖ  [FULLWIDTH LATIN SMALL LETTER V]
-"\uFF56" => "v"
-
-# Ꝡ  [LATIN CAPITAL LETTER VY]
-"\uA760" => "VY"
-
-# ⒱  [PARENTHESIZED LATIN SMALL LETTER V]
-"\u24B1" => "(v)"
-
-# ꝡ  [LATIN SMALL LETTER VY]
-"\uA761" => "vy"
-
-# Ŵ  [LATIN CAPITAL LETTER W WITH CIRCUMFLEX]
-"\u0174" => "W"
-
-# Ƿ  http://en.wikipedia.org/wiki/Wynn  [LATIN CAPITAL LETTER WYNN]
-"\u01F7" => "W"
-
-# ᴡ  [LATIN LETTER SMALL CAPITAL W]
-"\u1D21" => "W"
-
-# Ẁ  [LATIN CAPITAL LETTER W WITH GRAVE]
-"\u1E80" => "W"
-
-# Ẃ  [LATIN CAPITAL LETTER W WITH ACUTE]
-"\u1E82" => "W"
-
-# Ẅ  [LATIN CAPITAL LETTER W WITH DIAERESIS]
-"\u1E84" => "W"
-
-# Ẇ  [LATIN CAPITAL LETTER W WITH DOT ABOVE]
-"\u1E86" => "W"
-
-# Ẉ  [LATIN CAPITAL LETTER W WITH DOT BELOW]
-"\u1E88" => "W"
-
-# Ⓦ  [CIRCLED LATIN CAPITAL LETTER W]
-"\u24CC" => "W"
-
-# Ⱳ  [LATIN CAPITAL LETTER W WITH HOOK]
-"\u2C72" => "W"
-
-# Ｗ  [FULLWIDTH LATIN CAPITAL LETTER W]
-"\uFF37" => "W"
-
-# ŵ  [LATIN SMALL LETTER W WITH CIRCUMFLEX]
-"\u0175" => "w"
-
-# ƿ  http://en.wikipedia.org/wiki/Wynn  [LATIN LETTER WYNN]
-"\u01BF" => "w"
-
-# ʍ  [LATIN SMALL LETTER TURNED W]
-"\u028D" => "w"
-
-# ẁ  [LATIN SMALL LETTER W WITH GRAVE]
-"\u1E81" => "w"
-
-# ẃ  [LATIN SMALL LETTER W WITH ACUTE]
-"\u1E83" => "w"
-
-# ẅ  [LATIN SMALL LETTER W WITH DIAERESIS]
-"\u1E85" => "w"
-
-# ẇ  [LATIN SMALL LETTER W WITH DOT ABOVE]
-"\u1E87" => "w"
-
-# ẉ  [LATIN SMALL LETTER W WITH DOT BELOW]
-"\u1E89" => "w"
-
-# ẘ  [LATIN SMALL LETTER W WITH RING ABOVE]
-"\u1E98" => "w"
-
-# ⓦ  [CIRCLED LATIN SMALL LETTER W]
-"\u24E6" => "w"
-
-# ⱳ  [LATIN SMALL LETTER W WITH HOOK]
-"\u2C73" => "w"
-
-# ｗ  [FULLWIDTH LATIN SMALL LETTER W]
-"\uFF57" => "w"
-
-# ⒲  [PARENTHESIZED LATIN SMALL LETTER W]
-"\u24B2" => "(w)"
-
-# Ẋ  [LATIN CAPITAL LETTER X WITH DOT ABOVE]
-"\u1E8A" => "X"
-
-# Ẍ  [LATIN CAPITAL LETTER X WITH DIAERESIS]
-"\u1E8C" => "X"
-
-# Ⓧ  [CIRCLED LATIN CAPITAL LETTER X]
-"\u24CD" => "X"
-
-# Ｘ  [FULLWIDTH LATIN CAPITAL LETTER X]
-"\uFF38" => "X"
-
-# ᶍ  [LATIN SMALL LETTER X WITH PALATAL HOOK]
-"\u1D8D" => "x"
-
-# ẋ  [LATIN SMALL LETTER X WITH DOT ABOVE]
-"\u1E8B" => "x"
-
-# ẍ  [LATIN SMALL LETTER X WITH DIAERESIS]
-"\u1E8D" => "x"
-
-# ₓ  [LATIN SUBSCRIPT SMALL LETTER X]
-"\u2093" => "x"
-
-# ⓧ  [CIRCLED LATIN SMALL LETTER X]
-"\u24E7" => "x"
-
-# x  [FULLWIDTH LATIN SMALL LETTER X]
-"\uFF58" => "x"
-
-# ⒳  [PARENTHESIZED LATIN SMALL LETTER X]
-"\u24B3" => "(x)"
-
-# Ý  [LATIN CAPITAL LETTER Y WITH ACUTE]
-"\u00DD" => "Y"
-
-# Ŷ  [LATIN CAPITAL LETTER Y WITH CIRCUMFLEX]
-"\u0176" => "Y"
-
-# Ÿ  [LATIN CAPITAL LETTER Y WITH DIAERESIS]
-"\u0178" => "Y"
-
-# Ƴ  [LATIN CAPITAL LETTER Y WITH HOOK]
-"\u01B3" => "Y"
-
-# Ȳ  [LATIN CAPITAL LETTER Y WITH MACRON]
-"\u0232" => "Y"
-
-# Ɏ  [LATIN CAPITAL LETTER Y WITH STROKE]
-"\u024E" => "Y"
-
-# ʏ  [LATIN LETTER SMALL CAPITAL Y]
-"\u028F" => "Y"
-
-# Ẏ  [LATIN CAPITAL LETTER Y WITH DOT ABOVE]
-"\u1E8E" => "Y"
-
-# Ỳ  [LATIN CAPITAL LETTER Y WITH GRAVE]
-"\u1EF2" => "Y"
-
-# Ỵ  [LATIN CAPITAL LETTER Y WITH DOT BELOW]
-"\u1EF4" => "Y"
-
-# Ỷ  [LATIN CAPITAL LETTER Y WITH HOOK ABOVE]
-"\u1EF6" => "Y"
-
-# Ỹ  [LATIN CAPITAL LETTER Y WITH TILDE]
-"\u1EF8" => "Y"
-
-# Ỿ  [LATIN CAPITAL LETTER Y WITH LOOP]
-"\u1EFE" => "Y"
-
-# Ⓨ  [CIRCLED LATIN CAPITAL LETTER Y]
-"\u24CE" => "Y"
-
-# Y  [FULLWIDTH LATIN CAPITAL LETTER Y]
-"\uFF39" => "Y"
-
-# ý  [LATIN SMALL LETTER Y WITH ACUTE]
-"\u00FD" => "y"
-
-# ÿ  [LATIN SMALL LETTER Y WITH DIAERESIS]
-"\u00FF" => "y"
-
-# ŷ  [LATIN SMALL LETTER Y WITH CIRCUMFLEX]
-"\u0177" => "y"
-
-# ƴ  [LATIN SMALL LETTER Y WITH HOOK]
-"\u01B4" => "y"
-
-# ȳ  [LATIN SMALL LETTER Y WITH MACRON]
-"\u0233" => "y"
-
-# ɏ  [LATIN SMALL LETTER Y WITH STROKE]
-"\u024F" => "y"
-
-# ʎ  [LATIN SMALL LETTER TURNED Y]
-"\u028E" => "y"
-
-# ẏ  [LATIN SMALL LETTER Y WITH DOT ABOVE]
-"\u1E8F" => "y"
-
-# ẙ  [LATIN SMALL LETTER Y WITH RING ABOVE]
-"\u1E99" => "y"
-
-# ỳ  [LATIN SMALL LETTER Y WITH GRAVE]
-"\u1EF3" => "y"
-
-# ỵ  [LATIN SMALL LETTER Y WITH DOT BELOW]
-"\u1EF5" => "y"
-
-# ỷ  [LATIN SMALL LETTER Y WITH HOOK ABOVE]
-"\u1EF7" => "y"
-
-# ỹ  [LATIN SMALL LETTER Y WITH TILDE]
-"\u1EF9" => "y"
-
-# ỿ  [LATIN SMALL LETTER Y WITH LOOP]
-"\u1EFF" => "y"
-
-# ⓨ  [CIRCLED LATIN SMALL LETTER Y]
-"\u24E8" => "y"
-
-# y  [FULLWIDTH LATIN SMALL LETTER Y]
-"\uFF59" => "y"
-
-# ⒴  [PARENTHESIZED LATIN SMALL LETTER Y]
-"\u24B4" => "(y)"
-
-# Ź  [LATIN CAPITAL LETTER Z WITH ACUTE]
-"\u0179" => "Z"
-
-# Ż  [LATIN CAPITAL LETTER Z WITH DOT ABOVE]
-"\u017B" => "Z"
-
-# Ž  [LATIN CAPITAL LETTER Z WITH CARON]
-"\u017D" => "Z"
-
-# Ƶ  [LATIN CAPITAL LETTER Z WITH STROKE]
-"\u01B5" => "Z"
-
-# Ȝ  http://en.wikipedia.org/wiki/Yogh  [LATIN CAPITAL LETTER YOGH]
-"\u021C" => "Z"
-
-# Ȥ  [LATIN CAPITAL LETTER Z WITH HOOK]
-"\u0224" => "Z"
-
-# ᴢ  [LATIN LETTER SMALL CAPITAL Z]
-"\u1D22" => "Z"
-
-# Ẑ  [LATIN CAPITAL LETTER Z WITH CIRCUMFLEX]
-"\u1E90" => "Z"
-
-# Ẓ  [LATIN CAPITAL LETTER Z WITH DOT BELOW]
-"\u1E92" => "Z"
-
-# Ẕ  [LATIN CAPITAL LETTER Z WITH LINE BELOW]
-"\u1E94" => "Z"
-
-# Ⓩ  [CIRCLED LATIN CAPITAL LETTER Z]
-"\u24CF" => "Z"
-
-# Ⱬ  [LATIN CAPITAL LETTER Z WITH DESCENDER]
-"\u2C6B" => "Z"
-
-# Ꝣ  [LATIN CAPITAL LETTER VISIGOTHIC Z]
-"\uA762" => "Z"
-
-# Z  [FULLWIDTH LATIN CAPITAL LETTER Z]
-"\uFF3A" => "Z"
-
-# ź  [LATIN SMALL LETTER Z WITH ACUTE]
-"\u017A" => "z"
-
-# ż  [LATIN SMALL LETTER Z WITH DOT ABOVE]
-"\u017C" => "z"
-
-# ž  [LATIN SMALL LETTER Z WITH CARON]
-"\u017E" => "z"
-
-# ƶ  [LATIN SMALL LETTER Z WITH STROKE]
-"\u01B6" => "z"
-
-# ȝ  http://en.wikipedia.org/wiki/Yogh  [LATIN SMALL LETTER YOGH]
-"\u021D" => "z"
-
-# ȥ  [LATIN SMALL LETTER Z WITH HOOK]
-"\u0225" => "z"
-
-# ɀ  [LATIN SMALL LETTER Z WITH SWASH TAIL]
-"\u0240" => "z"
-
-# ʐ  [LATIN SMALL LETTER Z WITH RETROFLEX HOOK]
-"\u0290" => "z"
-
-# ʑ  [LATIN SMALL LETTER Z WITH CURL]
-"\u0291" => "z"
-
-# ᵶ  [LATIN SMALL LETTER Z WITH MIDDLE TILDE]
-"\u1D76" => "z"
-
-# ᶎ  [LATIN SMALL LETTER Z WITH PALATAL HOOK]
-"\u1D8E" => "z"
-
-# ẑ  [LATIN SMALL LETTER Z WITH CIRCUMFLEX]
-"\u1E91" => "z"
-
-# ẓ  [LATIN SMALL LETTER Z WITH DOT BELOW]
-"\u1E93" => "z"
-
-# ẕ  [LATIN SMALL LETTER Z WITH LINE BELOW]
-"\u1E95" => "z"
-
-# ⓩ  [CIRCLED LATIN SMALL LETTER Z]
-"\u24E9" => "z"
-
-# ⱬ  [LATIN SMALL LETTER Z WITH DESCENDER]
-"\u2C6C" => "z"
-
-# ꝣ  [LATIN SMALL LETTER VISIGOTHIC Z]
-"\uA763" => "z"
-
-# z  [FULLWIDTH LATIN SMALL LETTER Z]
-"\uFF5A" => "z"
-
-# ⒵  [PARENTHESIZED LATIN SMALL LETTER Z]
-"\u24B5" => "(z)"
-
-# ⁰  [SUPERSCRIPT ZERO]
-"\u2070" => "0"
-
-# ₀  [SUBSCRIPT ZERO]
-"\u2080" => "0"
-
-# ⓪  [CIRCLED DIGIT ZERO]
-"\u24EA" => "0"
-
-# ⓿  [NEGATIVE CIRCLED DIGIT ZERO]
-"\u24FF" => "0"
-
-# 0  [FULLWIDTH DIGIT ZERO]
-"\uFF10" => "0"
-
-# ¹  [SUPERSCRIPT ONE]
-"\u00B9" => "1"
-
-# ₁  [SUBSCRIPT ONE]
-"\u2081" => "1"
-
-# ①  [CIRCLED DIGIT ONE]
-"\u2460" => "1"
-
-# ⓵  [DOUBLE CIRCLED DIGIT ONE]
-"\u24F5" => "1"
-
-# ❶  [DINGBAT NEGATIVE CIRCLED DIGIT ONE]
-"\u2776" => "1"
-
-# ➀  [DINGBAT CIRCLED SANS-SERIF DIGIT ONE]
-"\u2780" => "1"
-
-# ➊  [DINGBAT NEGATIVE CIRCLED SANS-SERIF DIGIT ONE]
-"\u278A" => "1"
-
-# 1  [FULLWIDTH DIGIT ONE]
-"\uFF11" => "1"
-
-# ⒈  [DIGIT ONE FULL STOP]
-"\u2488" => "1."
-
-# ⑴  [PARENTHESIZED DIGIT ONE]
-"\u2474" => "(1)"
-
-# ²  [SUPERSCRIPT TWO]
-"\u00B2" => "2"
-
-# ₂  [SUBSCRIPT TWO]
-"\u2082" => "2"
-
-# ②  [CIRCLED DIGIT TWO]
-"\u2461" => "2"
-
-# ⓶  [DOUBLE CIRCLED DIGIT TWO]
-"\u24F6" => "2"
-
-# ❷  [DINGBAT NEGATIVE CIRCLED DIGIT TWO]
-"\u2777" => "2"
-
-# ➁  [DINGBAT CIRCLED SANS-SERIF DIGIT TWO]
-"\u2781" => "2"
-
-# ➋  [DINGBAT NEGATIVE CIRCLED SANS-SERIF DIGIT TWO]
-"\u278B" => "2"
-
-# 2  [FULLWIDTH DIGIT TWO]
-"\uFF12" => "2"
-
-# ⒉  [DIGIT TWO FULL STOP]
-"\u2489" => "2."
-
-# ⑵  [PARENTHESIZED DIGIT TWO]
-"\u2475" => "(2)"
-
-# ³  [SUPERSCRIPT THREE]
-"\u00B3" => "3"
-
-# ₃  [SUBSCRIPT THREE]
-"\u2083" => "3"
-
-# ③  [CIRCLED DIGIT THREE]
-"\u2462" => "3"
-
-# ⓷  [DOUBLE CIRCLED DIGIT THREE]
-"\u24F7" => "3"
-
-# ❸  [DINGBAT NEGATIVE CIRCLED DIGIT THREE]
-"\u2778" => "3"
-
-# ➂  [DINGBAT CIRCLED SANS-SERIF DIGIT THREE]
-"\u2782" => "3"
-
-# ➌  [DINGBAT NEGATIVE CIRCLED SANS-SERIF DIGIT THREE]
-"\u278C" => "3"
-
-# 3  [FULLWIDTH DIGIT THREE]
-"\uFF13" => "3"
-
-# ⒊  [DIGIT THREE FULL STOP]
-"\u248A" => "3."
-
-# ⑶  [PARENTHESIZED DIGIT THREE]
-"\u2476" => "(3)"
-
-# ⁴  [SUPERSCRIPT FOUR]
-"\u2074" => "4"
-
-# ₄  [SUBSCRIPT FOUR]
-"\u2084" => "4"
-
-# ④  [CIRCLED DIGIT FOUR]
-"\u2463" => "4"
-
-# ⓸  [DOUBLE CIRCLED DIGIT FOUR]
-"\u24F8" => "4"
-
-# ❹  [DINGBAT NEGATIVE CIRCLED DIGIT FOUR]
-"\u2779" => "4"
-
-# ➃  [DINGBAT CIRCLED SANS-SERIF DIGIT FOUR]
-"\u2783" => "4"
-
-# ➍  [DINGBAT NEGATIVE CIRCLED SANS-SERIF DIGIT FOUR]
-"\u278D" => "4"
-
-# 4  [FULLWIDTH DIGIT FOUR]
-"\uFF14" => "4"
-
-# ⒋  [DIGIT FOUR FULL STOP]
-"\u248B" => "4."
-
-# ⑷  [PARENTHESIZED DIGIT FOUR]
-"\u2477" => "(4)"
-
-# ⁵  [SUPERSCRIPT FIVE]
-"\u2075" => "5"
-
-# ₅  [SUBSCRIPT FIVE]
-"\u2085" => "5"
-
-# ⑤  [CIRCLED DIGIT FIVE]
-"\u2464" => "5"
-
-# ⓹  [DOUBLE CIRCLED DIGIT FIVE]
-"\u24F9" => "5"
-
-# ❺  [DINGBAT NEGATIVE CIRCLED DIGIT FIVE]
-"\u277A" => "5"
-
-# ➄  [DINGBAT CIRCLED SANS-SERIF DIGIT FIVE]
-"\u2784" => "5"
-
-# ➎  [DINGBAT NEGATIVE CIRCLED SANS-SERIF DIGIT FIVE]
-"\u278E" => "5"
-
-# 5  [FULLWIDTH DIGIT FIVE]
-"\uFF15" => "5"
-
-# ⒌  [DIGIT FIVE FULL STOP]
-"\u248C" => "5."
-
-# ⑸  [PARENTHESIZED DIGIT FIVE]
-"\u2478" => "(5)"
-
-# ⁶  [SUPERSCRIPT SIX]
-"\u2076" => "6"
-
-# ₆  [SUBSCRIPT SIX]
-"\u2086" => "6"
-
-# ⑥  [CIRCLED DIGIT SIX]
-"\u2465" => "6"
-
-# ⓺  [DOUBLE CIRCLED DIGIT SIX]
-"\u24FA" => "6"
-
-# ❻  [DINGBAT NEGATIVE CIRCLED DIGIT SIX]
-"\u277B" => "6"
-
-# ➅  [DINGBAT CIRCLED SANS-SERIF DIGIT SIX]
-"\u2785" => "6"
-
-# ➏  [DINGBAT NEGATIVE CIRCLED SANS-SERIF DIGIT SIX]
-"\u278F" => "6"
-
-# 6  [FULLWIDTH DIGIT SIX]
-"\uFF16" => "6"
-
-# ⒍  [DIGIT SIX FULL STOP]
-"\u248D" => "6."
-
-# ⑹  [PARENTHESIZED DIGIT SIX]
-"\u2479" => "(6)"
-
-# ⁷  [SUPERSCRIPT SEVEN]
-"\u2077" => "7"
-
-# ₇  [SUBSCRIPT SEVEN]
-"\u2087" => "7"
-
-# ⑦  [CIRCLED DIGIT SEVEN]
-"\u2466" => "7"
-
-# ⓻  [DOUBLE CIRCLED DIGIT SEVEN]
-"\u24FB" => "7"
-
-# ❼  [DINGBAT NEGATIVE CIRCLED DIGIT SEVEN]
-"\u277C" => "7"
-
-# ➆  [DINGBAT CIRCLED SANS-SERIF DIGIT SEVEN]
-"\u2786" => "7"
-
-# ➐  [DINGBAT NEGATIVE CIRCLED SANS-SERIF DIGIT SEVEN]
-"\u2790" => "7"
-
-# 7  [FULLWIDTH DIGIT SEVEN]
-"\uFF17" => "7"
-
-# ⒎  [DIGIT SEVEN FULL STOP]
-"\u248E" => "7."
-
-# ⑺  [PARENTHESIZED DIGIT SEVEN]
-"\u247A" => "(7)"
-
-# ⁸  [SUPERSCRIPT EIGHT]
-"\u2078" => "8"
-
-# ₈  [SUBSCRIPT EIGHT]
-"\u2088" => "8"
-
-# ⑧  [CIRCLED DIGIT EIGHT]
-"\u2467" => "8"
-
-# ⓼  [DOUBLE CIRCLED DIGIT EIGHT]
-"\u24FC" => "8"
-
-# ❽  [DINGBAT NEGATIVE CIRCLED DIGIT EIGHT]
-"\u277D" => "8"
-
-# ➇  [DINGBAT CIRCLED SANS-SERIF DIGIT EIGHT]
-"\u2787" => "8"
-
-# ➑  [DINGBAT NEGATIVE CIRCLED SANS-SERIF DIGIT EIGHT]
-"\u2791" => "8"
-
-# 8  [FULLWIDTH DIGIT EIGHT]
-"\uFF18" => "8"
-
-# ⒏  [DIGIT EIGHT FULL STOP]
-"\u248F" => "8."
-
-# ⑻  [PARENTHESIZED DIGIT EIGHT]
-"\u247B" => "(8)"
-
-# ⁹  [SUPERSCRIPT NINE]
-"\u2079" => "9"
-
-# ₉  [SUBSCRIPT NINE]
-"\u2089" => "9"
-
-# ⑨  [CIRCLED DIGIT NINE]
-"\u2468" => "9"
-
-# ⓽  [DOUBLE CIRCLED DIGIT NINE]
-"\u24FD" => "9"
-
-# ❾  [DINGBAT NEGATIVE CIRCLED DIGIT NINE]
-"\u277E" => "9"
-
-# ➈  [DINGBAT CIRCLED SANS-SERIF DIGIT NINE]
-"\u2788" => "9"
-
-# ➒  [DINGBAT NEGATIVE CIRCLED SANS-SERIF DIGIT NINE]
-"\u2792" => "9"
-
-# 9  [FULLWIDTH DIGIT NINE]
-"\uFF19" => "9"
-
-# ⒐  [DIGIT NINE FULL STOP]
-"\u2490" => "9."
-
-# ⑼  [PARENTHESIZED DIGIT NINE]
-"\u247C" => "(9)"
-
-# ⑩  [CIRCLED NUMBER TEN]
-"\u2469" => "10"
-
-# ⓾  [DOUBLE CIRCLED NUMBER TEN]
-"\u24FE" => "10"
-
-# ❿  [DINGBAT NEGATIVE CIRCLED NUMBER TEN]
-"\u277F" => "10"
-
-# ➉  [DINGBAT CIRCLED SANS-SERIF NUMBER TEN]
-"\u2789" => "10"
-
-# ➓  [DINGBAT NEGATIVE CIRCLED SANS-SERIF NUMBER TEN]
-"\u2793" => "10"
-
-# ⒑  [NUMBER TEN FULL STOP]
-"\u2491" => "10."
-
-# ⑽  [PARENTHESIZED NUMBER TEN]
-"\u247D" => "(10)"
-
-# ⑪  [CIRCLED NUMBER ELEVEN]
-"\u246A" => "11"
-
-# ⓫  [NEGATIVE CIRCLED NUMBER ELEVEN]
-"\u24EB" => "11"
-
-# ⒒  [NUMBER ELEVEN FULL STOP]
-"\u2492" => "11."
-
-# ⑾  [PARENTHESIZED NUMBER ELEVEN]
-"\u247E" => "(11)"
-
-# ⑫  [CIRCLED NUMBER TWELVE]
-"\u246B" => "12"
-
-# ⓬  [NEGATIVE CIRCLED NUMBER TWELVE]
-"\u24EC" => "12"
-
-# ⒓  [NUMBER TWELVE FULL STOP]
-"\u2493" => "12."
-
-# ⑿  [PARENTHESIZED NUMBER TWELVE]
-"\u247F" => "(12)"
-
-# ⑬  [CIRCLED NUMBER THIRTEEN]
-"\u246C" => "13"
-
-# ⓭  [NEGATIVE CIRCLED NUMBER THIRTEEN]
-"\u24ED" => "13"
-
-# ⒔  [NUMBER THIRTEEN FULL STOP]
-"\u2494" => "13."
-
-# ⒀  [PARENTHESIZED NUMBER THIRTEEN]
-"\u2480" => "(13)"
-
-# ⑭  [CIRCLED NUMBER FOURTEEN]
-"\u246D" => "14"
-
-# ⓮  [NEGATIVE CIRCLED NUMBER FOURTEEN]
-"\u24EE" => "14"
-
-# ⒕  [NUMBER FOURTEEN FULL STOP]
-"\u2495" => "14."
-
-# ⒁  [PARENTHESIZED NUMBER FOURTEEN]
-"\u2481" => "(14)"
-
-# ⑮  [CIRCLED NUMBER FIFTEEN]
-"\u246E" => "15"
-
-# ⓯  [NEGATIVE CIRCLED NUMBER FIFTEEN]
-"\u24EF" => "15"
-
-# ⒖  [NUMBER FIFTEEN FULL STOP]
-"\u2496" => "15."
-
-# ⒂  [PARENTHESIZED NUMBER FIFTEEN]
-"\u2482" => "(15)"
-
-# ⑯  [CIRCLED NUMBER SIXTEEN]
-"\u246F" => "16"
-
-# ⓰  [NEGATIVE CIRCLED NUMBER SIXTEEN]
-"\u24F0" => "16"
-
-# ⒗  [NUMBER SIXTEEN FULL STOP]
-"\u2497" => "16."
-
-# ⒃  [PARENTHESIZED NUMBER SIXTEEN]
-"\u2483" => "(16)"
-
-# ⑰  [CIRCLED NUMBER SEVENTEEN]
-"\u2470" => "17"
-
-# ⓱  [NEGATIVE CIRCLED NUMBER SEVENTEEN]
-"\u24F1" => "17"
-
-# ⒘  [NUMBER SEVENTEEN FULL STOP]
-"\u2498" => "17."
-
-# ⒄  [PARENTHESIZED NUMBER SEVENTEEN]
-"\u2484" => "(17)"
-
-# ⑱  [CIRCLED NUMBER EIGHTEEN]
-"\u2471" => "18"
-
-# ⓲  [NEGATIVE CIRCLED NUMBER EIGHTEEN]
-"\u24F2" => "18"
-
-# ⒙  [NUMBER EIGHTEEN FULL STOP]
-"\u2499" => "18."
-
-# ⒅  [PARENTHESIZED NUMBER EIGHTEEN]
-"\u2485" => "(18)"
-
-# ⑲  [CIRCLED NUMBER NINETEEN]
-"\u2472" => "19"
-
-# ⓳  [NEGATIVE CIRCLED NUMBER NINETEEN]
-"\u24F3" => "19"
-
-# ⒚  [NUMBER NINETEEN FULL STOP]
-"\u249A" => "19."
-
-# ⒆  [PARENTHESIZED NUMBER NINETEEN]
-"\u2486" => "(19)"
-
-# ⑳  [CIRCLED NUMBER TWENTY]
-"\u2473" => "20"
-
-# ⓴  [NEGATIVE CIRCLED NUMBER TWENTY]
-"\u24F4" => "20"
-
-# ⒛  [NUMBER TWENTY FULL STOP]
-"\u249B" => "20."
-
-# ⒇  [PARENTHESIZED NUMBER TWENTY]
-"\u2487" => "(20)"
-
-# «  [LEFT-POINTING DOUBLE ANGLE QUOTATION MARK]
-"\u00AB" => "\""
-
-# »  [RIGHT-POINTING DOUBLE ANGLE QUOTATION MARK]
-"\u00BB" => "\""
-
-# “  [LEFT DOUBLE QUOTATION MARK]
-"\u201C" => "\""
-
-# ”  [RIGHT DOUBLE QUOTATION MARK]
-"\u201D" => "\""
-
-# „  [DOUBLE LOW-9 QUOTATION MARK]
-"\u201E" => "\""
-
-# ″  [DOUBLE PRIME]
-"\u2033" => "\""
-
-# ‶  [REVERSED DOUBLE PRIME]
-"\u2036" => "\""
-
-# ❝  [HEAVY DOUBLE TURNED COMMA QUOTATION MARK ORNAMENT]
-"\u275D" => "\""
-
-# ❞  [HEAVY DOUBLE COMMA QUOTATION MARK ORNAMENT]
-"\u275E" => "\""
-
-# ❮  [HEAVY LEFT-POINTING ANGLE QUOTATION MARK ORNAMENT]
-"\u276E" => "\""
-
-# ❯  [HEAVY RIGHT-POINTING ANGLE QUOTATION MARK ORNAMENT]
-"\u276F" => "\""
-
-# "  [FULLWIDTH QUOTATION MARK]
-"\uFF02" => "\""
-
-# ‘  [LEFT SINGLE QUOTATION MARK]
-"\u2018" => "\'"
-
-# ’  [RIGHT SINGLE QUOTATION MARK]
-"\u2019" => "\'"
-
-# ‚  [SINGLE LOW-9 QUOTATION MARK]
-"\u201A" => "\'"
-
-# ‛  [SINGLE HIGH-REVERSED-9 QUOTATION MARK]
-"\u201B" => "\'"
-
-# ′  [PRIME]
-"\u2032" => "\'"
-
-# ‵  [REVERSED PRIME]
-"\u2035" => "\'"
-
-# ‹  [SINGLE LEFT-POINTING ANGLE QUOTATION MARK]
-"\u2039" => "\'"
-
-# ›  [SINGLE RIGHT-POINTING ANGLE QUOTATION MARK]
-"\u203A" => "\'"
-
-# ❛  [HEAVY SINGLE TURNED COMMA QUOTATION MARK ORNAMENT]
-"\u275B" => "\'"
-
-# ❜  [HEAVY SINGLE COMMA QUOTATION MARK ORNAMENT]
-"\u275C" => "\'"
-
-# '  [FULLWIDTH APOSTROPHE]
-"\uFF07" => "\'"
-
-# ‐  [HYPHEN]
-"\u2010" => "-"
-
-# ‑  [NON-BREAKING HYPHEN]
-"\u2011" => "-"
-
-# ‒  [FIGURE DASH]
-"\u2012" => "-"
-
-# –  [EN DASH]
-"\u2013" => "-"
-
-# —  [EM DASH]
-"\u2014" => "-"
-
-# ⁻  [SUPERSCRIPT MINUS]
-"\u207B" => "-"
-
-# ₋  [SUBSCRIPT MINUS]
-"\u208B" => "-"
-
-# -  [FULLWIDTH HYPHEN-MINUS]
-"\uFF0D" => "-"
-
-# ⁅  [LEFT SQUARE BRACKET WITH QUILL]
-"\u2045" => "["
-
-# ❲  [LIGHT LEFT TORTOISE SHELL BRACKET ORNAMENT]
-"\u2772" => "["
-
-# [  [FULLWIDTH LEFT SQUARE BRACKET]
-"\uFF3B" => "["
-
-# ⁆  [RIGHT SQUARE BRACKET WITH QUILL]
-"\u2046" => "]"
-
-# ❳  [LIGHT RIGHT TORTOISE SHELL BRACKET ORNAMENT]
-"\u2773" => "]"
-
-# ]  [FULLWIDTH RIGHT SQUARE BRACKET]
-"\uFF3D" => "]"
-
-# ⁽  [SUPERSCRIPT LEFT PARENTHESIS]
-"\u207D" => "("
-
-# ₍  [SUBSCRIPT LEFT PARENTHESIS]
-"\u208D" => "("
-
-# ❨  [MEDIUM LEFT PARENTHESIS ORNAMENT]
-"\u2768" => "("
-
-# ❪  [MEDIUM FLATTENED LEFT PARENTHESIS ORNAMENT]
-"\u276A" => "("
-
-# (  [FULLWIDTH LEFT PARENTHESIS]
-"\uFF08" => "("
-
-# ⸨  [LEFT DOUBLE PARENTHESIS]
-"\u2E28" => "(("
-
-# ⁾  [SUPERSCRIPT RIGHT PARENTHESIS]
-"\u207E" => ")"
-
-# ₎  [SUBSCRIPT RIGHT PARENTHESIS]
-"\u208E" => ")"
-
-# ❩  [MEDIUM RIGHT PARENTHESIS ORNAMENT]
-"\u2769" => ")"
-
-# ❫  [MEDIUM FLATTENED RIGHT PARENTHESIS ORNAMENT]
-"\u276B" => ")"
-
-# )  [FULLWIDTH RIGHT PARENTHESIS]
-"\uFF09" => ")"
-
-# ⸩  [RIGHT DOUBLE PARENTHESIS]
-"\u2E29" => "))"
-
-# ❬  [MEDIUM LEFT-POINTING ANGLE BRACKET ORNAMENT]
-"\u276C" => "<"
-
-# ❰  [HEAVY LEFT-POINTING ANGLE BRACKET ORNAMENT]
-"\u2770" => "<"
-
-# <  [FULLWIDTH LESS-THAN SIGN]
-"\uFF1C" => "<"
-
-# ❭  [MEDIUM RIGHT-POINTING ANGLE BRACKET ORNAMENT]
-"\u276D" => ">"
-
-# ❱  [HEAVY RIGHT-POINTING ANGLE BRACKET ORNAMENT]
-"\u2771" => ">"
-
-# >  [FULLWIDTH GREATER-THAN SIGN]
-"\uFF1E" => ">"
-
-# ❴  [MEDIUM LEFT CURLY BRACKET ORNAMENT]
-"\u2774" => "{"
-
-# {  [FULLWIDTH LEFT CURLY BRACKET]
-"\uFF5B" => "{"
-
-# ❵  [MEDIUM RIGHT CURLY BRACKET ORNAMENT]
-"\u2775" => "}"
-
-# }  [FULLWIDTH RIGHT CURLY BRACKET]
-"\uFF5D" => "}"
-
-# ⁺  [SUPERSCRIPT PLUS SIGN]
-"\u207A" => "+"
-
-# ₊  [SUBSCRIPT PLUS SIGN]
-"\u208A" => "+"
-
-# +  [FULLWIDTH PLUS SIGN]
-"\uFF0B" => "+"
-
-# ⁼  [SUPERSCRIPT EQUALS SIGN]
-"\u207C" => "="
-
-# ₌  [SUBSCRIPT EQUALS SIGN]
-"\u208C" => "="
-
-# =  [FULLWIDTH EQUALS SIGN]
-"\uFF1D" => "="
-
-# !  [FULLWIDTH EXCLAMATION MARK]
-"\uFF01" => "!"
-
-# ‼  [DOUBLE EXCLAMATION MARK]
-"\u203C" => "!!"
-
-# ⁉  [EXCLAMATION QUESTION MARK]
-"\u2049" => "!?"
-
-# #  [FULLWIDTH NUMBER SIGN]
-"\uFF03" => "#"
-
-# $  [FULLWIDTH DOLLAR SIGN]
-"\uFF04" => "$"
-
-# ⁒  [COMMERCIAL MINUS SIGN]
-"\u2052" => "%"
-
-# %  [FULLWIDTH PERCENT SIGN]
-"\uFF05" => "%"
-
-# &  [FULLWIDTH AMPERSAND]
-"\uFF06" => "&"
-
-# ⁎  [LOW ASTERISK]
-"\u204E" => "*"
-
-# *  [FULLWIDTH ASTERISK]
-"\uFF0A" => "*"
-
-# ,  [FULLWIDTH COMMA]
-"\uFF0C" => ","
-
-# .  [FULLWIDTH FULL STOP]
-"\uFF0E" => "."
-
-# ⁄  [FRACTION SLASH]
-"\u2044" => "/"
-
-# /  [FULLWIDTH SOLIDUS]
-"\uFF0F" => "/"
-
-# :  [FULLWIDTH COLON]
-"\uFF1A" => ":"
-
-# ⁏  [REVERSED SEMICOLON]
-"\u204F" => ";"
-
-# ;  [FULLWIDTH SEMICOLON]
-"\uFF1B" => ";"
-
-# ?  [FULLWIDTH QUESTION MARK]
-"\uFF1F" => "?"
-
-# ⁇  [DOUBLE QUESTION MARK]
-"\u2047" => "??"
-
-# ⁈  [QUESTION EXCLAMATION MARK]
-"\u2048" => "?!"
-
-# @  [FULLWIDTH COMMERCIAL AT]
-"\uFF20" => "@"
-
-# \  [FULLWIDTH REVERSE SOLIDUS]
-"\uFF3C" => "\\"
-
-# ‸  [CARET]
-"\u2038" => "^"
-
-# ^  [FULLWIDTH CIRCUMFLEX ACCENT]
-"\uFF3E" => "^"
-
-# _  [FULLWIDTH LOW LINE]
-"\uFF3F" => "_"
-
-# ⁓  [SWUNG DASH]
-"\u2053" => "~"
-
-# ~  [FULLWIDTH TILDE]
-"\uFF5E" => "~"
-
-################################################################
-# Below is the Perl script used to generate the above mappings #
-# from ASCIIFoldingFilter.java:                                #
-################################################################
-#
-# #!/usr/bin/perl
-#
-# use warnings;
-# use strict;
-# 
-# my @source_chars = ();
-# my @source_char_descriptions = ();
-# my $target = '';
-# 
-# while (<>) {
-#   if (/case\s+'(\\u[A-F0-9]+)':\s*\/\/\s*(.*)/i) {
-#     push @source_chars, $1;
-#     push @source_char_descriptions, $2;
-#     next;
-#   }
-#   if (/output\[[^\]]+\]\s*=\s*'(\\'|\\\\|.)'/) {
-#     $target .= $1;
-#     next;
-#   }
-#   if (/break;/) {
-#     $target = "\\\"" if ($target eq '"');
-#     for my $source_char_num (0..$#source_chars) {
-#       print "# $source_char_descriptions[$source_char_num]\n";
-#       print "\"$source_chars[$source_char_num]\" => \"$target\"\n\n";
-#     }
-#     @source_chars = ();
-#     @source_char_descriptions = ();
-#     $target = '';
-#   }
-# }
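-#
-# Usage (an illustrative sketch; the script and output file names below are
-# assumptions, not part of the original source):
-#
-#   perl generate_fold_mappings.pl ASCIIFoldingFilter.java > mapping-FoldToASCII.txt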
diff --git a/solr/example/example-DIH/solr/mail/conf/mapping-ISOLatin1Accent.txt b/solr/example/example-DIH/solr/mail/conf/mapping-ISOLatin1Accent.txt
deleted file mode 100644
index ede7742..0000000
--- a/solr/example/example-DIH/solr/mail/conf/mapping-ISOLatin1Accent.txt
+++ /dev/null
@@ -1,246 +0,0 @@
-# The ASF licenses this file to You under the Apache License, Version 2.0
-# (the "License"); you may not use this file except in compliance with
-# the License.  You may obtain a copy of the License at
-#
-#     http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-# Syntax:
-#   "source" => "target"
-#     "source".length() > 0 (source cannot be empty.)
-#     "target".length() >= 0 (target can be empty.)
-
-# example:
-#   "À" => "A"
-#   "\u00C0" => "A"
-#   "\u00C0" => "\u0041"
-#   "ß" => "ss"
-#   "\t" => " "
-#   "\n" => ""
-
-# À => A
-"\u00C0" => "A"
-
-# Á => A
-"\u00C1" => "A"
-
-# Â => A
-"\u00C2" => "A"
-
-# Ã => A
-"\u00C3" => "A"
-
-# Ä => A
-"\u00C4" => "A"
-
-# Å => A
-"\u00C5" => "A"
-
-# Æ => AE
-"\u00C6" => "AE"
-
-# Ç => C
-"\u00C7" => "C"
-
-# È => E
-"\u00C8" => "E"
-
-# É => E
-"\u00C9" => "E"
-
-# Ê => E
-"\u00CA" => "E"
-
-# Ë => E
-"\u00CB" => "E"
-
-# Ì => I
-"\u00CC" => "I"
-
-# Í => I
-"\u00CD" => "I"
-
-# Î => I
-"\u00CE" => "I"
-
-# Ï => I
-"\u00CF" => "I"
-
-# IJ => IJ
-"\u0132" => "IJ"
-
-# Ð => D
-"\u00D0" => "D"
-
-# Ñ => N
-"\u00D1" => "N"
-
-# Ò => O
-"\u00D2" => "O"
-
-# Ó => O
-"\u00D3" => "O"
-
-# Ô => O
-"\u00D4" => "O"
-
-# Õ => O
-"\u00D5" => "O"
-
-# Ö => O
-"\u00D6" => "O"
-
-# Ø => O
-"\u00D8" => "O"
-
-# Œ => OE
-"\u0152" => "OE"
-
-# Þ => TH
-"\u00DE" => "TH"
-
-# Ù => U
-"\u00D9" => "U"
-
-# Ú => U
-"\u00DA" => "U"
-
-# Û => U
-"\u00DB" => "U"
-
-# Ü => U
-"\u00DC" => "U"
-
-# Ý => Y
-"\u00DD" => "Y"
-
-# Ÿ => Y
-"\u0178" => "Y"
-
-# à => a
-"\u00E0" => "a"
-
-# á => a
-"\u00E1" => "a"
-
-# â => a
-"\u00E2" => "a"
-
-# ã => a
-"\u00E3" => "a"
-
-# ä => a
-"\u00E4" => "a"
-
-# å => a
-"\u00E5" => "a"
-
-# æ => ae
-"\u00E6" => "ae"
-
-# ç => c
-"\u00E7" => "c"
-
-# è => e
-"\u00E8" => "e"
-
-# é => e
-"\u00E9" => "e"
-
-# ê => e
-"\u00EA" => "e"
-
-# ë => e
-"\u00EB" => "e"
-
-# ì => i
-"\u00EC" => "i"
-
-# í => i
-"\u00ED" => "i"
-
-# î => i
-"\u00EE" => "i"
-
-# ï => i
-"\u00EF" => "i"
-
-# ij => ij
-"\u0133" => "ij"
-
-# ð => d
-"\u00F0" => "d"
-
-# ñ => n
-"\u00F1" => "n"
-
-# ò => o
-"\u00F2" => "o"
-
-# ó => o
-"\u00F3" => "o"
-
-# ô => o
-"\u00F4" => "o"
-
-# õ => o
-"\u00F5" => "o"
-
-# ö => o
-"\u00F6" => "o"
-
-# ø => o
-"\u00F8" => "o"
-
-# œ => oe
-"\u0153" => "oe"
-
-# ß => ss
-"\u00DF" => "ss"
-
-# þ => th
-"\u00FE" => "th"
-
-# ù => u
-"\u00F9" => "u"
-
-# ú => u
-"\u00FA" => "u"
-
-# û => u
-"\u00FB" => "u"
-
-# ü => u
-"\u00FC" => "u"
-
-# ý => y
-"\u00FD" => "y"
-
-# ÿ => y
-"\u00FF" => "y"
-
-# ff => ff
-"\uFB00" => "ff"
-
-# fi => fi
-"\uFB01" => "fi"
-
-# fl => fl
-"\uFB02" => "fl"
-
-# ffi => ffi
-"\uFB03" => "ffi"
-
-# ffl => ffl
-"\uFB04" => "ffl"
-
-# ſt => ft
-"\uFB05" => "ft"
-
-# st => st
-"\uFB06" => "st"
diff --git a/solr/example/example-DIH/solr/mail/conf/protwords.txt b/solr/example/example-DIH/solr/mail/conf/protwords.txt
deleted file mode 100644
index 1dfc0ab..0000000
--- a/solr/example/example-DIH/solr/mail/conf/protwords.txt
+++ /dev/null
@@ -1,21 +0,0 @@
-# The ASF licenses this file to You under the Apache License, Version 2.0
-# (the "License"); you may not use this file except in compliance with
-# the License.  You may obtain a copy of the License at
-#
-#     http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-#-----------------------------------------------------------------------
-# Use a protected word file to protect against the stemmer reducing two
-# unrelated words to the same base word.
-
-# Some non-words that normally won't be encountered,
-# just to test that they won't be stemmed.
-dontstems
-zwhacky
-
diff --git a/solr/example/example-DIH/solr/mail/conf/solrconfig.xml b/solr/example/example-DIH/solr/mail/conf/solrconfig.xml
deleted file mode 100644
index 91b9957..0000000
--- a/solr/example/example-DIH/solr/mail/conf/solrconfig.xml
+++ /dev/null
@@ -1,1345 +0,0 @@
-<?xml version="1.0" encoding="UTF-8" ?>
-<!--
- Licensed to the Apache Software Foundation (ASF) under one or more
- contributor license agreements.  See the NOTICE file distributed with
- this work for additional information regarding copyright ownership.
- The ASF licenses this file to You under the Apache License, Version 2.0
- (the "License"); you may not use this file except in compliance with
- the License.  You may obtain a copy of the License at
-
-     http://www.apache.org/licenses/LICENSE-2.0
-
- Unless required by applicable law or agreed to in writing, software
- distributed under the License is distributed on an "AS IS" BASIS,
- WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- See the License for the specific language governing permissions and
- limitations under the License.
--->
-
-<!--
-     For more details about configuration options that may appear in
-     this file, see http://wiki.apache.org/solr/SolrConfigXml.
--->
-<config>
-  <!-- In all configuration below, a prefix of "solr." for class names
-       is an alias that causes solr to search appropriate packages,
-       including org.apache.solr.(search|update|request|core|analysis)
-
-       You may also specify a fully qualified Java classname if you
-       have your own custom plugins.
-    -->
-
-  <!-- Controls what version of Lucene various components of Solr
-       adhere to.  Generally, you want to use the latest version to
-       get all bug fixes and improvements. It is highly recommended
-       that you fully re-index after changing this setting as it can
-       affect both how text is indexed and queried.
-  -->
-  <luceneMatchVersion>9.0.0</luceneMatchVersion>
-
-  <!-- <lib/> directives can be used to instruct Solr to load any Jars
-       identified and use them to resolve any "plugins" specified in
-       your solrconfig.xml or schema.xml (ie: Analyzers, Request
-       Handlers, etc...).
-
-       All directories and paths are resolved relative to the
-       instanceDir.
-
-       Please note that <lib/> directives are processed in the order
-       that they appear in your solrconfig.xml file, and are "stacked"
-       on top of each other when building a ClassLoader - so if you have
-       plugin jars with dependencies on other jars, the "lower level"
-       dependency jars should be loaded first.
-
-       If a "./lib" directory exists in your instanceDir, all files
-       found in it are included as if you had used the following
-       syntax...
-
-              <lib dir="./lib" />
-    -->
-
-  <!-- A 'dir' option by itself adds any files found in the directory
-       to the classpath; this is useful for including all jars in a
-       directory.
-
-       When a 'regex' is specified in addition to a 'dir', only the
-       files in that directory which completely match the regex
-       (anchored on both ends) will be included.
-
-       If a 'dir' option (with or without a regex) is used and nothing
-       is found that matches, a warning will be logged.
-
-       The examples below can be used to load some solr-contribs along
-       with their external dependencies.
-    -->
-  <lib dir="${solr.install.dir:../../../..}/contrib/dataimporthandler/lib/" regex=".*\.jar" />
-  <lib dir="${solr.install.dir:../../../..}/dist/" regex="solr-dataimporthandler-.*\.jar" />
-
-  <lib dir="${solr.install.dir:../../../..}/contrib/dataimporthandler-extras/lib/" regex=".*\.jar" />
-
-  <lib dir="${solr.install.dir:../../../..}/contrib/extraction/lib" regex=".*\.jar" />
-  <lib dir="${solr.install.dir:../../../..}/dist/" regex="solr-cell-\d.*\.jar" />
-
-  <lib dir="${solr.install.dir:../../../..}/contrib/langid/lib/" regex=".*\.jar" />
-  <lib dir="${solr.install.dir:../../../..}/dist/" regex="solr-langid-\d.*\.jar" />
-
-  <lib dir="${solr.install.dir:../../../..}/contrib/velocity/lib" regex=".*\.jar" />
-  <lib dir="${solr.install.dir:../../../..}/dist/" regex="solr-velocity-\d.*\.jar" />
-
-  <!-- an exact 'path' can be used instead of a 'dir' to specify a
-       specific jar file.  This will cause a serious error to be logged
-       if it can't be loaded.
-    -->
-  <!--
-     <lib path="../a-jar-that-does-not-exist.jar" />
-  -->
-
-  <!-- Data Directory
-
-       Used to specify an alternate directory to hold all index data
-       other than the default ./data under the Solr home.  If
-       replication is in use, this should match the replication
-       configuration.
-    -->
-  <dataDir>${solr.data.dir:}</dataDir>
-
-
-  <!-- The DirectoryFactory to use for indexes.
-
-       solr.StandardDirectoryFactory is filesystem
-       based and tries to pick the best implementation for the current
-       JVM and platform.  solr.NRTCachingDirectoryFactory, the default,
-       wraps solr.StandardDirectoryFactory and caches small files in memory
-       for better NRT performance.
-
-       One can force a particular implementation via solr.MMapDirectoryFactory
-       or solr.NIOFSDirectoryFactory.
-
-       solr.RAMDirectoryFactory is memory based and not persistent.
-    -->
-  <directoryFactory name="DirectoryFactory"
-                    class="${solr.directoryFactory:solr.NRTCachingDirectoryFactory}"/>
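-
-  <!-- Illustrative note: because the class above is resolved from the
-       "solr.directoryFactory" system property, a specific implementation
-       can be forced by passing that property to the JVM at startup, e.g.
-
-         -Dsolr.directoryFactory=solr.MMapDirectoryFactory
-    -->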
-
-  <!-- The CodecFactory for defining the format of the inverted index.
-       The default implementation is SchemaCodecFactory, which is the official Lucene
-       index format, but hooks into the schema to provide per-field customization of
-       the postings lists and per-document values in the fieldType element
-       (postingsFormat/docValuesFormat). Note that most of the alternative implementations
-       are experimental, so if you choose to customize the index format, it's a good
-       idea to convert back to the official format e.g. via IndexWriter.addIndexes(IndexReader)
-       before upgrading to a newer version to avoid unnecessary reindexing.
-  -->
-  <codecFactory class="solr.SchemaCodecFactory"/>
-
-  <!-- ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-       Index Config - These settings control low-level behavior of indexing
-       Most example settings here show the default value, but are commented
-       out, to more easily see where customizations have been made.
-
-       Note: This replaces <indexDefaults> and <mainIndex> from older versions
-       ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ -->
-  <indexConfig>
-    <!-- maxFieldLength was removed in 4.0. To get similar behavior, include a
-         LimitTokenCountFilterFactory in your fieldType definition. E.g.
-     <filter class="solr.LimitTokenCountFilterFactory" maxTokenCount="10000"/>
-    -->
-    <!-- Maximum time to wait for a write lock (ms) for an IndexWriter. Default: 1000 -->
-    <!-- <writeLockTimeout>1000</writeLockTimeout>  -->
-
-    <!-- Expert: Enabling the compound file format will use fewer files for
-         the index, using fewer file descriptors at the expense of decreased
-         performance. Default in Lucene is "true". Default in Solr is "false" (since 3.6) -->
-    <!-- <useCompoundFile>false</useCompoundFile> -->
-
-    <!-- ramBufferSizeMB sets the amount of RAM that may be used by Lucene
-         indexing for buffering added documents and deletions before they are
-         flushed to the Directory.
-         maxBufferedDocs sets a limit on the number of documents buffered
-         before flushing.
-         If both ramBufferSizeMB and maxBufferedDocs are set, then
-         Lucene will flush based on whichever limit is hit first.
-         The default is 100 MB.  -->
-    <!-- <ramBufferSizeMB>100</ramBufferSizeMB> -->
-    <!-- <maxBufferedDocs>1000</maxBufferedDocs> -->
-
-    <!-- Expert: Merge Policy
-         The Merge Policy in Lucene controls how merging of segments is done.
-         The default since Solr/Lucene 3.3 is TieredMergePolicy.
-         The default since Lucene 2.3 was the LogByteSizeMergePolicy,
-         Even older versions of Lucene used LogDocMergePolicy.
-     -->
-    <!--
-        <mergePolicyFactory class="solr.TieredMergePolicyFactory">
-          <int name="maxMergeAtOnce">10</int>
-          <int name="segmentsPerTier">10</int>
-        </mergePolicyFactory>
-     -->
-
-    <!-- Expert: Merge Scheduler
-         The Merge Scheduler in Lucene controls how merges are
-         performed.  The ConcurrentMergeScheduler (Lucene 2.3 default)
-         can perform merges in the background using separate threads.
-         The SerialMergeScheduler (Lucene 2.2 default) does not.
-     -->
-    <!--
-       <mergeScheduler class="org.apache.lucene.index.ConcurrentMergeScheduler"/>
-       -->
-
-    <!-- LockFactory
-
-         This option specifies which Lucene LockFactory implementation
-         to use.
-
-         single = SingleInstanceLockFactory - suggested for a
-                  read-only index or when there is no possibility of
-                  another process trying to modify the index.
-         native = NativeFSLockFactory - uses OS native file locking.
-                  Do not use when multiple solr webapps in the same
-                  JVM are attempting to share a single index.
-         simple = SimpleFSLockFactory  - uses a plain file for locking
-
-         Defaults: 'native' is default for Solr3.6 and later, otherwise
-                   'simple' is the default
-
-         More details on the nuances of each LockFactory...
-         http://wiki.apache.org/lucene-java/AvailableLockFactories
-    -->
-    <lockType>${solr.lock.type:native}</lockType>
-
-    <!-- Commit Deletion Policy
-         Custom deletion policies can be specified here. The class must
-         implement org.apache.lucene.index.IndexDeletionPolicy.
-
-         The default Solr IndexDeletionPolicy implementation supports
-         deleting index commit points based on number of commits, age of
-         commit point, and optimized status.
-
-         The latest commit point should always be preserved regardless
-         of the criteria.
-    -->
-    <!--
-    <deletionPolicy class="solr.SolrDeletionPolicy">
-    -->
-      <!-- The number of commit points to be kept -->
-      <!-- <str name="maxCommitsToKeep">1</str> -->
-      <!-- The number of optimized commit points to be kept -->
-      <!-- <str name="maxOptimizedCommitsToKeep">0</str> -->
-      <!--
-          Delete all commit points once they have reached the given age.
-          Supports DateMathParser syntax e.g.
-        -->
-      <!--
-         <str name="maxCommitAge">30MINUTES</str>
-         <str name="maxCommitAge">1DAY</str>
-      -->
-    <!--
-    </deletionPolicy>
-    -->
-
-    <!-- Lucene Infostream
-
-         To aid in advanced debugging, Lucene provides an "InfoStream"
-         of detailed information when indexing.
-
-         Setting the value to true will instruct the underlying Lucene
-         IndexWriter to write its info stream to solr's log. By default,
-         this is enabled here, and controlled through log4j2.xml
-      -->
-     <infoStream>true</infoStream>
-  </indexConfig>
-
-
-  <!-- JMX
-
-       This example enables JMX if and only if an existing MBeanServer
-       is found, use this if you want to configure JMX through JVM
-       parameters. Remove this to disable exposing Solr configuration
-       and statistics to JMX.
-
-       For more details see http://wiki.apache.org/solr/SolrJmx
-    -->
-  <jmx />
-  <!-- If you want to connect to a particular server, specify the
-       agentId
-    -->
-  <!-- <jmx agentId="myAgent" /> -->
-  <!-- If you want to start a new MBeanServer, specify the serviceUrl -->
-  <!-- <jmx serviceUrl="service:jmx:rmi:///jndi/rmi://localhost:9999/solr"/>
-    -->
-
-  <!-- The default high-performance update handler -->
-  <updateHandler class="solr.DirectUpdateHandler2">
-
-    <!-- Enables a transaction log, used for real-time get, durability,
-         and SolrCloud replica recovery.  The log can grow as big as
-         uncommitted changes to the index, so use of a hard autoCommit
-         is recommended (see below).
-         "dir" - the target directory for transaction logs, defaults to the
-                solr data directory.  -->
-    <updateLog>
-      <str name="dir">${solr.ulog.dir:}</str>
-    </updateLog>
-
-    <!-- AutoCommit
-
-         Perform a hard commit automatically under certain conditions.
-         Instead of enabling autoCommit, consider using "commitWithin"
-         when adding documents.
-
-         http://wiki.apache.org/solr/UpdateXmlMessages
-
-         maxDocs - Maximum number of documents to add since the last
-                   commit before automatically triggering a new commit.
-
-         maxTime - Maximum amount of time in ms that is allowed to pass
-                   since a document was added before automatically
-                   triggering a new commit.
-         openSearcher - if false, the commit causes recent index changes
-           to be flushed to stable storage, but does not cause a new
-           searcher to be opened to make those changes visible.
-
-         If the updateLog is enabled, then it's highly recommended to
-         have some sort of hard autoCommit to limit the log size.
-      -->
-     <autoCommit>
-       <maxTime>${solr.autoCommit.maxTime:15000}</maxTime>
-       <openSearcher>false</openSearcher>
-     </autoCommit>
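-
-     <!-- For reference, the "commitWithin" alternative mentioned above is
-          expressed on the update message itself; a minimal sketch with
-          illustrative values:
-
-            <add commitWithin="10000">
-              <doc> ... </doc>
-            </add>
-
-          asks Solr to commit the added documents within 10 seconds.
-       -->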
-
-    <!-- softAutoCommit is like autoCommit except it causes a
-         'soft' commit which only ensures that changes are visible
-         but does not ensure that data is synced to disk.  This is
-         faster and more near-realtime friendly than a hard commit.
-      -->
-
-     <autoSoftCommit>
-       <maxTime>${solr.autoSoftCommit.maxTime:-1}</maxTime>
-     </autoSoftCommit>
-
-    <!-- Update Related Event Listeners
-
-         Various IndexWriter related events can trigger Listeners to
-         take actions.
-
-         postCommit - fired after every commit or optimize command
-         postOptimize - fired after every optimize command
-      -->
-
-  </updateHandler>
-
-  <!-- IndexReaderFactory
-
-       Use the following format to specify a custom IndexReaderFactory,
-       which allows for alternate IndexReader implementations.
-
-       ** Experimental Feature **
-
-       Please note - Using a custom IndexReaderFactory may prevent
-       certain other features from working. The API to
-       IndexReaderFactory may change without warning or may even be
-       removed from future releases if the problems cannot be
-       resolved.
-
-
-       ** Features that may not work with custom IndexReaderFactory **
-
-       The ReplicationHandler assumes a disk-resident index. Using a
-       custom IndexReader implementation may cause incompatibility
-       with ReplicationHandler and may cause replication to not work
-       correctly. See SOLR-1366 for details.
-
-    -->
-  <!--
-  <indexReaderFactory name="IndexReaderFactory" class="package.class">
-    <str name="someArg">Some Value</str>
-  </indexReaderFactory >
-  -->
-
-  <!-- ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-       Query section - these settings control query time things like caches
-       ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ -->
-  <query>
-    <!-- Max Boolean Clauses
-
-         Maximum number of clauses in each BooleanQuery; an exception
-         is thrown if this limit is exceeded.
-
-         ** WARNING **
-
-         This option actually modifies a global Lucene property that
-         will affect all SolrCores.  If multiple solrconfig.xml files
-         disagree on this property, the value at any given moment will
-         be based on the last SolrCore to be initialized.
-
-      -->
-    <maxBooleanClauses>${solr.max.booleanClauses:1024}</maxBooleanClauses>
-
-
-    <!-- Solr Internal Query Caches
-        Starting with Solr 9.0 the default cache implementation used is CaffeineCache.
-    -->
-
-    <!-- Filter Cache
-
-         Cache used by SolrIndexSearcher for filters (DocSets),
-         unordered sets of *all* documents that match a query.  When a
-         new searcher is opened, its caches may be prepopulated or
-         "autowarmed" using data from caches in the old searcher.
-         autowarmCount is the number of items to prepopulate.
-
-         Parameters:
-           class - the SolrCache implementation
-           size - the maximum number of entries in the cache
-           initialSize - the initial capacity (number of entries) of
-               the cache.  (see java.util.HashMap)
-           autowarmCount - the number of entries to prepopulate from
-               an old cache.
-      -->
-    <filterCache class="solr.CaffeineCache"
-                 size="512"
-                 initialSize="512"
-                 autowarmCount="0"/>
-
-    <!-- Query Result Cache
-
-         Caches results of searches - ordered lists of document ids
-         (DocList) based on a query, a sort, and the range of documents requested.
-      -->
-    <queryResultCache class="solr.CaffeineCache"
-                     size="512"
-                     initialSize="512"
-                     autowarmCount="0"/>
-
-    <!-- Document Cache
-
-         Caches Lucene Document objects (the stored fields for each
-         document).  Since Lucene internal document ids are transient,
-         this cache will not be autowarmed.
-      -->
-    <documentCache class="solr.CaffeineCache"
-                   size="512"
-                   initialSize="512"
-                   autowarmCount="0"/>
-
-    <!-- custom cache currently used by block join -->
-    <cache name="perSegFilter"
-      class="solr.search.CaffeineCache"
-      size="10"
-      initialSize="0"
-      autowarmCount="10"
-      regenerator="solr.NoOpRegenerator" />
-
-    <!-- Field Value Cache
-
-         Cache used to hold field values that are quickly accessible
-         by document id.  The fieldValueCache is created by default
-         even if not configured here.
-      -->
-    <!--
-       <fieldValueCache class="solr.CaffeineCache"
-                        size="512"
-                        autowarmCount="128"
-                        showItems="32" />
-      -->
-
-    <!-- Custom Cache
-
-         Example of a generic cache.  These caches may be accessed by
-         name through SolrIndexSearcher.getCache(), cacheLookup(), and
-         cacheInsert().  The purpose is to enable easy caching of
-         user/application level data.  The regenerator argument should
-         be specified as an implementation of solr.CacheRegenerator
-         if autowarming is desired.
-      -->
-    <!--
-       <cache name="myUserCache"
-              class="solr.CaffeineCache"
-              size="4096"
-              initialSize="1024"
-              autowarmCount="1024"
-              regenerator="com.mycompany.MyRegenerator"
-              />
-      -->
-
-
-    <!-- Lazy Field Loading
-
-         If true, stored fields that are not requested will be loaded
-         lazily.  This can result in a significant speed improvement
-         if the usual case is to not load all stored fields,
-         especially if the skipped fields are large compressed text
-         fields.
-    -->
-    <enableLazyFieldLoading>true</enableLazyFieldLoading>
-
-   <!-- Use Filter For Sorted Query
-
-        A possible optimization that attempts to use a filter to
-        satisfy a search.  If the requested sort does not include
-        score, then the filterCache will be checked for a filter
-        matching the query. If found, the filter will be used as the
-        source of document ids, and then the sort will be applied to
-        that.
-
-        For most situations, this will not be useful unless you
-        frequently get the same search repeatedly with different sort
-        options, and none of them ever uses "score".
-     -->
-   <!--
-      <useFilterForSortedQuery>true</useFilterForSortedQuery>
-     -->
-
-   <!-- Result Window Size
-
-        An optimization for use with the queryResultCache.  When a search
-        is requested, a superset of the requested number of document ids
-        are collected.  For example, if a search for a particular query
-        requests matching documents 10 through 19, and queryResultWindowSize is 50,
-        then documents 0 through 49 will be collected and cached.  Any further
-        requests in that range can be satisfied via the cache.
-     -->
-   <queryResultWindowSize>20</queryResultWindowSize>
-
-   <!-- Maximum number of documents to cache for any entry in the
-        queryResultCache.
-     -->
-   <queryResultMaxDocsCached>200</queryResultMaxDocsCached>
-
-   <!-- Query Related Event Listeners
-
-        Various IndexSearcher related events can trigger Listeners to
-        take actions.
-
-        newSearcher - fired whenever a new searcher is being prepared
-        and there is a current searcher handling requests (aka
-        registered).  It can be used to prime certain caches to
-        prevent long request times for certain requests.
-
-        firstSearcher - fired whenever a new searcher is being
-        prepared but there is no current registered searcher to handle
-        requests or to gain autowarming data from.
-
-
-     -->
-    <!-- QuerySenderListener takes an array of NamedList and executes a
-         local query request for each NamedList in sequence.
-      -->
-    <listener event="newSearcher" class="solr.QuerySenderListener">
-      <arr name="queries">
-        <!--
-           <lst><str name="q">solr</str><str name="sort">price asc</str></lst>
-           <lst><str name="q">rocks</str><str name="sort">weight asc</str></lst>
-          -->
-      </arr>
-    </listener>
-    <listener event="firstSearcher" class="solr.QuerySenderListener">
-      <arr name="queries">
-        <lst>
-          <str name="q">static firstSearcher warming in solrconfig.xml</str>
-        </lst>
-      </arr>
-    </listener>
-
-    <!-- Use Cold Searcher
-
-         If a search request comes in and there is no current
-         registered searcher, then immediately register the still
-         warming searcher and use it.  If "false" then all requests
-         will block until the first searcher is done warming.
-      -->
-    <useColdSearcher>false</useColdSearcher>
-
-  </query>
-
-
-  <!-- Request Dispatcher
-
-       This section contains instructions for how the SolrDispatchFilter
-       should behave when processing requests for this SolrCore.
-    -->
-  <requestDispatcher>
-    <!-- Request Parsing
-
-         These settings indicate how Solr Requests may be parsed, and
-         what restrictions may be placed on the ContentStreams from
-         those requests
-
-         enableRemoteStreaming - enables use of the stream.file
-         and stream.url parameters for specifying remote streams.
-
-         multipartUploadLimitInKB - specifies the max size (in KiB) of
-         Multipart File Uploads that Solr will allow in a Request.
-
-         formdataUploadLimitInKB - specifies the max size (in KiB) of
-         form data (application/x-www-form-urlencoded) sent via
-         POST. You can use POST to pass request parameters not
-         fitting into the URL.
-
-         addHttpRequestToContext - if set to true, it will instruct
-         the requestParsers to include the original HttpServletRequest
-         object in the context map of the SolrQueryRequest under the
-         key "httpRequest". It will not be used by any of the existing
-         Solr components, but may be useful when developing custom
-         plugins.
-
-         *** WARNING ***
-         Before enabling remote streaming, you should make sure your
-         system has authentication enabled.
-
-    <requestParsers enableRemoteStreaming="false"
-                    multipartUploadLimitInKB="-1"
-                    formdataUploadLimitInKB="-1"
-                    addHttpRequestToContext="false"/>
-      -->
-
-    <!-- HTTP Caching
-
-         Set HTTP caching related parameters (for proxy caches and clients).
-
-         The options below instruct Solr not to output any HTTP Caching
-         related headers
-      -->
-    <httpCaching never304="true" />
-    <!-- If you include a <cacheControl> directive, it will be used to
-         generate a Cache-Control header (as well as an Expires header
-         if the value contains "max-age=")
-
-         By default, no Cache-Control header is generated.
-
-         You can use the <cacheControl> option even if you have set
-         never304="true"
-      -->
-    <!--
-       <httpCaching never304="true" >
-         <cacheControl>max-age=30, public</cacheControl>
-       </httpCaching>
-      -->
-    <!-- To enable Solr to respond with automatically generated HTTP
-         Caching headers, and to respond to Cache Validation requests
-         correctly, set the value of never304="false"
-
-         This will cause Solr to generate Last-Modified and ETag
-         headers based on the properties of the Index.
-
-         The following options can also be specified to affect the
-         values of these headers...
-
-         lastModFrom - the default value is "openTime" which means the
-         Last-Modified value (and validation against If-Modified-Since
-         requests) will all be relative to when the current Searcher
-         was opened.  You can change it to lastModFrom="dirLastMod" if
-         you want the value to exactly correspond to when the physical
-         index was last modified.
-
-         etagSeed="..." is an option you can change to force the ETag
-         header (and validation against If-None-Match requests) to be
-         different even if the index has not changed (ie: when making
-         significant changes to your config file)
-
-         (lastModFrom and etagSeed are both ignored if you use
-         the never304="true" option)
-      -->
-    <!--
-       <httpCaching lastModFrom="openTime"
-                    etagSeed="Solr">
-         <cacheControl>max-age=30, public</cacheControl>
-       </httpCaching>
-      -->
-  </requestDispatcher>
-
-  <!-- Request Handlers
-
-       http://wiki.apache.org/solr/SolrRequestHandler
-
-       Incoming queries will be dispatched to a specific handler by name
-       based on the path specified in the request.
-
-       If a Request Handler is declared with startup="lazy", then it will
-       not be initialized until the first request that uses it.
-
-    -->
-
-  <requestHandler name="/dataimport" class="solr.DataImportHandler">
-    <lst name="defaults">
-      <str name="config">mail-data-config.xml</str>
-    </lst>
-  </requestHandler>
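-
-  <!-- A minimal sketch of the kind of configuration the referenced
-       mail-data-config.xml contains (values are placeholders, not working
-       credentials):
-
-         <dataConfig>
-           <document>
-             <entity processor="MailEntityProcessor"
-                     user="someone@example.com"
-                     password="secret"
-                     host="imap.example.com"
-                     protocol="imaps"/>
-           </document>
-         </dataConfig>
-    -->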
-
-  <!-- SearchHandler
-
-       http://wiki.apache.org/solr/SearchHandler
-
-       For processing Search Queries, the primary Request Handler
-       provided with Solr is "SearchHandler". It delegates to a sequence
-       of SearchComponents (see below) and supports distributed
-       queries across multiple shards
-    -->
-  <requestHandler name="/select" class="solr.SearchHandler">
-    <!-- default values for query parameters can be specified, these
-         will be overridden by parameters in the request
-      -->
-     <lst name="defaults">
-       <str name="echoParams">explicit</str>
-       <int name="rows">10</int>
-       <str name="df">text</str>
-       <!-- Change from JSON to XML format (the default prior to Solr 7.0)
-          <str name="wt">xml</str> 
-         -->
-     </lst>
-    <!-- In addition to defaults, "appends" params can be specified
-         to identify values which should be appended to the list of
-         multi-val params from the query (or the existing "defaults").
-      -->
-    <!-- In this example, the param "fq=instock:true" would be appended to
-         any query time fq params the user may specify, as a mechanism for
-         partitioning the index, independent of any user selected filtering
-         that may also be desired (perhaps as a result of faceted searching).
-
-         NOTE: there is *absolutely* nothing a client can do to prevent these
-         "appends" values from being used, so don't use this mechanism
-         unless you are sure you always want it.
-      -->
-    <!--
-       <lst name="appends">
-         <str name="fq">inStock:true</str>
-       </lst>
-      -->
-    <!-- "invariants" are a way of letting the Solr maintainer lock down
-         the options available to Solr clients.  Any params values
-         specified here are used regardless of what values may be specified
-         in either the query, the "defaults", or the "appends" params.
-
-         In this example, the facet.field and facet.query params would
-         be fixed, limiting the facets clients can use.  Faceting is
-         not turned on by default - but if the client does specify
-         facet=true in the request, these are the only facets they
-         will be able to see counts for, regardless of what other
-         facet.field or facet.query params they may specify.
-
-         NOTE: there is *absolutely* nothing a client can do to prevent these
-         "invariants" values from being used, so don't use this mechanism
-         unless you are sure you always want it.
-      -->
-    <!--
-       <lst name="invariants">
-         <str name="facet.field">cat</str>
-         <str name="facet.field">manu_exact</str>
-         <str name="facet.query">price:[* TO 500]</str>
-         <str name="facet.query">price:[500 TO *]</str>
-       </lst>
-      -->
-    <!-- If the default list of SearchComponents is not desired, that
-         list can either be overridden completely, or components can be
-         prepended or appended to the default list.  (see below)
-      -->
-    <!--
-       <arr name="components">
-         <str>nameOfCustomComponent1</str>
-         <str>nameOfCustomComponent2</str>
-       </arr>
-      -->
-  </requestHandler>
-
-  <!-- A request handler that returns indented JSON by default -->
-  <requestHandler name="/query" class="solr.SearchHandler">
-     <lst name="defaults">
-       <str name="echoParams">explicit</str>
-       <str name="wt">json</str>
-       <str name="indent">true</str>
-       <str name="df">text</str>
-     </lst>
-  </requestHandler>
-
-
-  <!-- A Robust Example
-
-       This example SearchHandler declaration shows off usage of the
-       SearchHandler with many defaults declared
-
-       Note that multiple instances of the same Request Handler
-       (SearchHandler) can be registered multiple times with different
-       names (and different init parameters)
-    -->
-  <requestHandler name="/browse" class="solr.SearchHandler">
-    <lst name="defaults">
-      <str name="echoParams">explicit</str>
-
-      <!-- VelocityResponseWriter settings -->
-      <str name="wt">velocity</str>
-      <str name="v.template">browse</str>
-      <str name="v.layout">layout</str>
-
-      <!-- Query settings -->
-      <str name="defType">edismax</str>
-      <str name="q.alt">*:*</str>
-      <str name="rows">10</str>
-      <str name="fl">*,score</str>
-
-      <!-- Faceting defaults -->
-      <str name="facet">on</str>
-      <str name="facet.mincount">1</str>
-    </lst>
-  </requestHandler>
-
-  <initParams path="/update/**,/query,/select,/tvrh,/elevate,/spell,/browse">
-    <lst name="defaults">
-      <str name="df">content</str>
-    </lst>
-  </initParams>
-
-  <!-- Solr Cell Update Request Handler
-
-       http://wiki.apache.org/solr/ExtractingRequestHandler
-
-    -->
-  <requestHandler name="/update/extract"
-                  startup="lazy"
-                  class="solr.extraction.ExtractingRequestHandler" >
-    <lst name="defaults">
-      <str name="lowernames">true</str>
-      <str name="uprefix">ignored_</str>
-
-      <!-- capture link hrefs but ignore div attributes -->
-      <str name="captureAttr">true</str>
-      <str name="fmap.a">links</str>
-      <str name="fmap.div">ignored_</str>
-    </lst>
-  </requestHandler>
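-
-  <!-- An illustrative invocation of the extract handler above (the core
-       name and file are placeholders):
-
-         curl "http://localhost:8983/solr/mail/update/extract?literal.id=msg1&commit=true" \
-              -F "myfile=@attachment.pdf"
-    -->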
-
-  <!-- Search Components
-
-       Search components are registered to SolrCore and used by
-       instances of SearchHandler (which can access them by name)
-
-       By default, the following components are available:
-
-       <searchComponent name="query"     class="solr.QueryComponent" />
-       <searchComponent name="facet"     class="solr.FacetComponent" />
-       <searchComponent name="mlt"       class="solr.MoreLikeThisComponent" />
-       <searchComponent name="highlight" class="solr.HighlightComponent" />
-       <searchComponent name="stats"     class="solr.StatsComponent" />
-       <searchComponent name="debug"     class="solr.DebugComponent" />
-
-       Default configuration in a requestHandler would look like:
-
-       <arr name="components">
-         <str>query</str>
-         <str>facet</str>
-         <str>mlt</str>
-         <str>highlight</str>
-         <str>stats</str>
-         <str>debug</str>
-       </arr>
-
-       If you register a searchComponent to one of the standard names,
-       that will be used instead of the default.
-
-       To insert components before or after the 'standard' components, use:
-
-       <arr name="first-components">
-         <str>myFirstComponentName</str>
-       </arr>
-
-       <arr name="last-components">
-         <str>myLastComponentName</str>
-       </arr>
-
-       NOTE: The component registered with the name "debug" will
-       always be executed after the "last-components"
-
-     -->
-
-   <!-- Spell Check
-
-        The spell check component can return a list of alternative spelling
-        suggestions.
-
-        http://wiki.apache.org/solr/SpellCheckComponent
-     -->
-  <searchComponent name="spellcheck" class="solr.SpellCheckComponent">
-
-    <str name="queryAnalyzerFieldType">text_general</str>
-
-    <!-- Multiple "Spell Checkers" can be declared and used by this
-         component
-      -->
-
-    <!-- a spellchecker built from a field of the main index -->
-    <lst name="spellchecker">
-      <str name="name">default</str>
-      <str name="field">text</str>
-      <str name="classname">solr.DirectSolrSpellChecker</str>
-      <!-- the spellcheck distance measure to use; the default is the internal Levenshtein -->
-      <str name="distanceMeasure">internal</str>
-      <!-- minimum accuracy needed to be considered a valid spellcheck suggestion -->
-      <float name="accuracy">0.5</float>
-      <!-- the maximum #edits we consider when enumerating terms: can be 1 or 2 -->
-      <int name="maxEdits">2</int>
-      <!-- the minimum shared prefix when enumerating terms -->
-      <int name="minPrefix">1</int>
-      <!-- maximum number of inspections per result. -->
-      <int name="maxInspections">5</int>
-      <!-- minimum length of a query term to be considered for correction -->
-      <int name="minQueryLength">4</int>
-      <!-- maximum fraction of documents in which a query term may appear and still be considered for correction -->
-      <float name="maxQueryFrequency">0.01</float>
-      <!-- uncomment this to require suggestions to occur in 1% of the documents
-        <float name="thresholdTokenFrequency">.01</float>
-      -->
-    </lst>
-
-    <!-- a spellchecker that can break or combine words.  See "/spell" handler below for usage -->
-    <lst name="spellchecker">
-      <str name="name">wordbreak</str>
-      <str name="classname">solr.WordBreakSolrSpellChecker</str>
-      <str name="field">name</str>
-      <str name="combineWords">true</str>
-      <str name="breakWords">true</str>
-      <int name="maxChanges">10</int>
-    </lst>
-
-    <!-- a spellchecker that uses a different distance measure -->
-    <!--
-       <lst name="spellchecker">
-         <str name="name">jarowinkler</str>
-         <str name="field">spell</str>
-         <str name="classname">solr.DirectSolrSpellChecker</str>
-         <str name="distanceMeasure">
-           org.apache.lucene.search.spell.JaroWinklerDistance
-         </str>
-       </lst>
-     -->
-
-    <!-- a spellchecker that uses an alternate comparator
-
-         comparatorClass can be one of:
-          1. score (default)
-          2. freq (Frequency first, then score)
-          3. A fully qualified class name
-      -->
-    <!--
-       <lst name="spellchecker">
-         <str name="name">freq</str>
-         <str name="field">lowerfilt</str>
-         <str name="classname">solr.DirectSolrSpellChecker</str>
-         <str name="comparatorClass">freq</str>
-       </lst>
-      -->
-
-    <!-- A spellchecker that reads the list of words from a file -->
-    <!--
-       <lst name="spellchecker">
-         <str name="classname">solr.FileBasedSpellChecker</str>
-         <str name="name">file</str>
-         <str name="sourceLocation">spellings.txt</str>
-         <str name="characterEncoding">UTF-8</str>
-         <str name="spellcheckIndexDir">spellcheckerFile</str>
-       </lst>
-      -->
-  </searchComponent>
-
-  <!-- A request handler for demonstrating the spellcheck component.
-
-       NOTE: This is purely an example.  The whole purpose of the
-       SpellCheckComponent is to hook it into the request handler that
-       handles your normal user queries so that a separate request is
-       not needed to get suggestions.
-
-       IN OTHER WORDS, THERE IS A REALLY GOOD CHANCE THE SETUP BELOW IS
-       NOT WHAT YOU WANT FOR YOUR PRODUCTION SYSTEM!
-
-       See http://wiki.apache.org/solr/SpellCheckComponent for details
-       on the request parameters.
-    -->
-  <requestHandler name="/spell" class="solr.SearchHandler" startup="lazy">
-    <lst name="defaults">
-      <str name="df">text</str>
-      <!-- Solr will use suggestions from both the 'default' spellchecker
-           and from the 'wordbreak' spellchecker and combine them.
-           collations (re-written queries) can include a combination of
-           corrections from both spellcheckers -->
-      <str name="spellcheck.dictionary">default</str>
-      <str name="spellcheck.dictionary">wordbreak</str>
-      <str name="spellcheck">on</str>
-      <str name="spellcheck.extendedResults">true</str>
-      <str name="spellcheck.count">10</str>
-      <str name="spellcheck.alternativeTermCount">5</str>
-      <str name="spellcheck.maxResultsForSuggest">5</str>
-      <str name="spellcheck.collate">true</str>
-      <str name="spellcheck.collateExtendedResults">true</str>
-      <str name="spellcheck.maxCollationTries">10</str>
-      <str name="spellcheck.maxCollations">5</str>
-    </lst>
-    <arr name="last-components">
-      <str>spellcheck</str>
-    </arr>
-  </requestHandler>
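-
-  <!-- An illustrative query against this handler (the core name "mycore"
-       is a placeholder); spellcheck is already on via the defaults above:
-
-       http://localhost:8983/solr/mycore/spell?q=delll+ultrashar&wt=json
-    -->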
-
-  <searchComponent name="suggest" class="solr.SuggestComponent">
-    <lst name="suggester">
-      <str name="name">mySuggester</str>
-      <str name="lookupImpl">FuzzyLookupFactory</str>      <!-- org.apache.solr.spelling.suggest.fst -->
-      <str name="dictionaryImpl">DocumentDictionaryFactory</str>     <!-- org.apache.solr.spelling.suggest.HighFrequencyDictionaryFactory -->
-      <str name="field">cat</str>
-      <str name="weightField">price</str>
-      <str name="suggestAnalyzerFieldType">string</str>
-    </lst>
-  </searchComponent>
-
-  <requestHandler name="/suggest" class="solr.SearchHandler" startup="lazy">
-    <lst name="defaults">
-      <str name="suggest">true</str>
-      <str name="suggest.count">10</str>
-    </lst>
-    <arr name="components">
-      <str>suggest</str>
-    </arr>
-  </requestHandler>
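-
-  <!-- An illustrative request (the core name is a placeholder); the
-       dictionary must be named explicitly since the defaults above do
-       not set one:
-
-       http://localhost:8983/solr/mycore/suggest?suggest.q=elec&suggest.dictionary=mySuggester
-    -->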
-  <!-- Term Vector Component
-
-       http://wiki.apache.org/solr/TermVectorComponent
-    -->
-  <searchComponent name="tvComponent" class="solr.TermVectorComponent"/>
-
-  <!-- A request handler for demonstrating the term vector component
-
-       This is purely an example.
-
-       In reality you will likely want to add the component to your
-       already specified request handlers.
-    -->
-  <requestHandler name="/tvrh" class="solr.SearchHandler" startup="lazy">
-    <lst name="defaults">
-      <str name="df">text</str>
-      <bool name="tv">true</bool>
-    </lst>
-    <arr name="last-components">
-      <str>tvComponent</str>
-    </arr>
-  </requestHandler>
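-
-  <!-- An illustrative request (the core name is a placeholder); tv=true is
-       already set above, and extra tv.* params request term/doc frequencies:
-
-       http://localhost:8983/solr/mycore/tvrh?q=*:*&fl=id&tv.tf=true&tv.df=true
-    -->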
-
-  <!-- Terms Component
-
-       http://wiki.apache.org/solr/TermsComponent
-
-       A component to return terms and document frequency of those
-       terms
-    -->
-  <searchComponent name="terms" class="solr.TermsComponent"/>
-
-  <!-- A request handler for demonstrating the terms component -->
-  <requestHandler name="/terms" class="solr.SearchHandler" startup="lazy">
-     <lst name="defaults">
-      <bool name="terms">true</bool>
-      <bool name="distrib">false</bool>
-    </lst>
-    <arr name="components">
-      <str>terms</str>
-    </arr>
-  </requestHandler>
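-
-  <!-- An illustrative request (the core name is a placeholder) returning up
-       to 10 terms from the "name" field that start with "a":
-
-       http://localhost:8983/solr/mycore/terms?terms.fl=name&terms.prefix=a&terms.limit=10
-    -->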
-
-
-  <!-- Query Elevation Component
-
-       http://wiki.apache.org/solr/QueryElevationComponent
-
-       a search component that enables you to configure the top
-       results for a given query regardless of the normal lucene
-       scoring.
-    -->
-  <searchComponent name="elevator" class="solr.QueryElevationComponent" >
-    <!-- pick a fieldType to analyze queries -->
-    <str name="queryFieldType">string</str>
-    <str name="config-file">elevate.xml</str>
-  </searchComponent>
-
-  <!-- A request handler for demonstrating the elevator component -->
-  <requestHandler name="/elevate" class="solr.SearchHandler" startup="lazy">
-    <lst name="defaults">
-      <str name="echoParams">explicit</str>
-      <str name="df">text</str>
-    </lst>
-    <arr name="last-components">
-      <str>elevator</str>
-    </arr>
-  </requestHandler>
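-
-  <!-- An illustrative request (the core name is a placeholder); the
-       [elevated] transformer marks which returned documents were boosted:
-
-       http://localhost:8983/solr/mycore/elevate?q=foo+bar&fl=id,[elevated]
-    -->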
-
-  <!-- Highlighting Component
-
-       http://wiki.apache.org/solr/HighlightingParameters
-    -->
-  <searchComponent class="solr.HighlightComponent" name="highlight">
-    <highlighting>
-      <!-- Configure the standard fragmenter -->
-      <!-- This could most likely be commented out in the "default" case -->
-      <fragmenter name="gap"
-                  default="true"
-                  class="solr.highlight.GapFragmenter">
-        <lst name="defaults">
-          <int name="hl.fragsize">100</int>
-        </lst>
-      </fragmenter>
-
-      <!-- A regular-expression-based fragmenter
-           (for sentence extraction)
-        -->
-      <fragmenter name="regex"
-                  class="solr.highlight.RegexFragmenter">
-        <lst name="defaults">
-          <!-- slightly smaller fragsizes work better because of slop -->
-          <int name="hl.fragsize">70</int>
-          <!-- allow 50% slop on fragment sizes -->
-          <float name="hl.regex.slop">0.5</float>
-          <!-- a basic sentence pattern -->
-          <str name="hl.regex.pattern">[-\w ,/\n\&quot;&apos;]{20,200}</str>
-        </lst>
-      </fragmenter>
-
-      <!-- Configure the standard formatter -->
-      <formatter name="html"
-                 default="true"
-                 class="solr.highlight.HtmlFormatter">
-        <lst name="defaults">
-          <str name="hl.simple.pre"><![CDATA[<em>]]></str>
-          <str name="hl.simple.post"><![CDATA[</em>]]></str>
-        </lst>
-      </formatter>
-
-      <!-- Configure the standard encoder -->
-      <encoder name="html"
-               class="solr.highlight.HtmlEncoder" />
-
-      <!-- Configure the standard fragListBuilder -->
-      <fragListBuilder name="simple"
-                       class="solr.highlight.SimpleFragListBuilder"/>
-
-      <!-- Configure the single fragListBuilder -->
-      <fragListBuilder name="single"
-                       class="solr.highlight.SingleFragListBuilder"/>
-
-      <!-- Configure the weighted fragListBuilder -->
-      <fragListBuilder name="weighted"
-                       default="true"
-                       class="solr.highlight.WeightedFragListBuilder"/>
-
-      <!-- default tag FragmentsBuilder -->
-      <fragmentsBuilder name="default"
-                        default="true"
-                        class="solr.highlight.ScoreOrderFragmentsBuilder">
-        <!--
-        <lst name="defaults">
-          <str name="hl.multiValuedSeparatorChar">/</str>
-        </lst>
-        -->
-      </fragmentsBuilder>
-
-      <!-- multi-colored tag FragmentsBuilder -->
-      <fragmentsBuilder name="colored"
-                        class="solr.highlight.ScoreOrderFragmentsBuilder">
-        <lst name="defaults">
-          <str name="hl.tag.pre"><![CDATA[
-               <b style="background:yellow">,<b style="background:lawngreen">,
-               <b style="background:aquamarine">,<b style="background:magenta">,
-               <b style="background:palegreen">,<b style="background:coral">,
-               <b style="background:wheat">,<b style="background:khaki">,
-               <b style="background:lime">,<b style="background:deepskyblue">]]></str>
-          <str name="hl.tag.post"><![CDATA[</b>]]></str>
-        </lst>
-      </fragmentsBuilder>
-
-      <boundaryScanner name="default"
-                       default="true"
-                       class="solr.highlight.SimpleBoundaryScanner">
-        <lst name="defaults">
-          <str name="hl.bs.maxScan">10</str>
-          <str name="hl.bs.chars">.,!? &#9;&#10;&#13;</str>
-        </lst>
-      </boundaryScanner>
-
-      <boundaryScanner name="breakIterator"
-                       class="solr.highlight.BreakIteratorBoundaryScanner">
-        <lst name="defaults">
-          <!-- type should be one of CHARACTER, WORD (default), LINE, or SENTENCE -->
-          <str name="hl.bs.type">WORD</str>
-          <!-- language and country are used to construct a Locale object,  -->
-          <!-- which in turn is used to obtain a BreakIterator instance -->
-          <str name="hl.bs.language">en</str>
-          <str name="hl.bs.country">US</str>
-        </lst>
-      </boundaryScanner>
-    </highlighting>
-  </searchComponent>
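-
-  <!-- An illustrative highlighting request against a regular search handler
-       (core and field names are placeholders); hl.fragmenter selects one of
-       the fragmenters declared above by name:
-
-       http://localhost:8983/solr/mycore/select?q=text:solr&hl=true&hl.fl=text&hl.fragmenter=regex
-    -->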
-
-  <!-- Update Processors
-
-       Chains of Update Processor Factories for dealing with Update
-       Requests can be declared, and then used by name in Update
-       Request Processors
-
-       http://wiki.apache.org/solr/UpdateRequestProcessor
-
-    -->
-  <!-- Deduplication
-
-       An example dedup update processor that creates the "id" field
-       on the fly based on the hash code of some other fields.  This
-       example has overwriteDupes set to false since we are using the
-       id field as the signatureField and Solr will maintain
-       uniqueness based on that anyway.
-
-    -->
-  <!--
-     <updateRequestProcessorChain name="dedupe">
-       <processor class="solr.processor.SignatureUpdateProcessorFactory">
-         <bool name="enabled">true</bool>
-         <str name="signatureField">id</str>
-         <bool name="overwriteDupes">false</bool>
-         <str name="fields">name,features,cat</str>
-         <str name="signatureClass">solr.processor.Lookup3Signature</str>
-       </processor>
-       <processor class="solr.LogUpdateProcessorFactory" />
-       <processor class="solr.RunUpdateProcessorFactory" />
-     </updateRequestProcessorChain>
-    -->
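-
-  <!-- A named chain such as "dedupe" is not active until it is invoked; one
-       illustrative way to do so (the core name is a placeholder) is the
-       update.chain request parameter:
-
-       http://localhost:8983/solr/mycore/update?update.chain=dedupe&commit=true
-    -->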
-
-  <!-- Language identification
-
-       This example update chain identifies the language of the incoming
-       documents using the langid contrib. The detected language is
-       written to field language_s. No field name mapping is done.
-       The fields used for detection are text, title, subject and description,
-       making this example suitable for detecting languages from full-text
-       rich documents injected via ExtractingRequestHandler.
-       See more about langId at http://wiki.apache.org/solr/LanguageDetection
-    -->
-    <!--
-     <updateRequestProcessorChain name="langid">
-       <processor class="org.apache.solr.update.processor.TikaLanguageIdentifierUpdateProcessorFactory">
-         <str name="langid.fl">text,title,subject,description</str>
-         <str name="langid.langField">language_s</str>
-         <str name="langid.fallback">en</str>
-       </processor>
-       <processor class="solr.LogUpdateProcessorFactory" />
-       <processor class="solr.RunUpdateProcessorFactory" />
-     </updateRequestProcessorChain>
-    -->
-
-  <!-- Script update processor
-
-    This example hooks in an update processor implemented using JavaScript.
-
-    See more about the script update processor at http://wiki.apache.org/solr/ScriptUpdateProcessor
-  -->
-  <!--
-    <updateRequestProcessorChain name="script">
-      <processor class="solr.StatelessScriptUpdateProcessorFactory">
-        <str name="script">update-script.js</str>
-        <lst name="params">
-          <str name="config_param">example config parameter</str>
-        </lst>
-      </processor>
-      <processor class="solr.RunUpdateProcessorFactory" />
-    </updateRequestProcessorChain>
-  -->
-
-  <!-- Response Writers
-
-       http://wiki.apache.org/solr/QueryResponseWriter
-
-       Request responses will be written using the writer specified by
-       the 'wt' request parameter matching the name of a registered
-       writer.
-
-       The writer marked as "default" will be used if 'wt' is
-       not specified in the request.
-    -->
-  <!-- The following response writers are implicitly configured unless
-       overridden...
-    -->
-  <!--
-     <queryResponseWriter name="xml"
-                          default="true"
-                          class="solr.XMLResponseWriter" />
-     <queryResponseWriter name="json" class="solr.JSONResponseWriter"/>
-     <queryResponseWriter name="python" class="solr.PythonResponseWriter"/>
-     <queryResponseWriter name="ruby" class="solr.RubyResponseWriter"/>
-     <queryResponseWriter name="php" class="solr.PHPResponseWriter"/>
-     <queryResponseWriter name="phps" class="solr.PHPSerializedResponseWriter"/>
-     <queryResponseWriter name="csv" class="solr.CSVResponseWriter"/>
-     <queryResponseWriter name="schema.xml" class="solr.SchemaXmlResponseWriter"/>
-    -->
-
-  <queryResponseWriter name="json" class="solr.JSONResponseWriter">
-     <!-- For the purposes of the tutorial, JSON responses are written as
-      plain text so that they are easy to read in *any* browser.
-      If you expect a MIME type of "application/json", just remove this override.
-     -->
-    <str name="content-type">text/plain; charset=UTF-8</str>
-  </queryResponseWriter>
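-
-  <!-- An illustrative request selecting this writer by name (the core name
-       is a placeholder):
-
-       http://localhost:8983/solr/mycore/select?q=*:*&wt=json&indent=true
-    -->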
-
-  <!--
-     Custom response writers can be declared as needed...
-    -->
-  <queryResponseWriter name="velocity" class="solr.VelocityResponseWriter" startup="lazy">
-    <str name="template.base.dir">${velocity.template.base.dir:}</str>
-  </queryResponseWriter>
-
-  <!-- The XSLT response writer transforms the XML output using any XSLT file
-       found in Solr's conf/xslt directory.  Changes to XSLT files are checked
-       for every xsltCacheLifetimeSeconds seconds.
-    -->
-  <queryResponseWriter name="xslt" class="solr.XSLTResponseWriter">
-    <int name="xsltCacheLifetimeSeconds">5</int>
-  </queryResponseWriter>
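-
-  <!-- An illustrative request (the core name is a placeholder); "tr" names a
-       stylesheet in conf/xslt, e.g. the example.xsl shipped with this config:
-
-       http://localhost:8983/solr/mycore/select?q=*:*&wt=xslt&tr=example.xsl
-    -->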
-
-  <!-- Query Parsers
-
-       https://lucene.apache.org/solr/guide/query-syntax-and-parsing.html
-
-       Multiple QParserPlugins can be registered by name, and then
-       used in either the "defType" param for the QueryComponent (used
-       by SearchHandler) or in LocalParams
-    -->
-  <!-- example of registering a query parser -->
-  <!--
-     <queryParser name="myparser" class="com.mycompany.MyQParserPlugin"/>
-    -->
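-  <!-- Once registered, such a parser could be selected per request either
-       globally or via LocalParams, e.g. (both illustrative):
-
-       ...&defType=myparser&q=hello
-       ...&q={!myparser}hello
-    -->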
-
-  <!-- Function Parsers
-
-       http://wiki.apache.org/solr/FunctionQuery
-
-       Multiple ValueSourceParsers can be registered by name, and then
-       used as function names when using the "func" QParser.
-    -->
-  <!-- example of registering a custom function parser  -->
-  <!--
-     <valueSourceParser name="myfunc"
-                        class="com.mycompany.MyValueSourceParser" />
-    -->
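-  <!-- Once registered, such a function could be used wherever function
-       queries are accepted, e.g. with the "func" QParser (illustrative;
-       "popularity" is a placeholder field):
-
-       ...&q={!func}myfunc(popularity)
-    -->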
-
-
-  <!-- Document Transformers
-       http://wiki.apache.org/solr/DocTransformers
-    -->
-  <!--
-     Could be something like:
-     <transformer name="db" class="com.mycompany.LoadFromDatabaseTransformer" >
-       <int name="connection">jdbc://....</int>
-     </transformer>
-
-     To add a constant value to all docs, use:
-     <transformer name="mytrans2" class="org.apache.solr.response.transform.ValueAugmenterFactory" >
-       <int name="value">5</int>
-     </transformer>
-
-     If you want the user to still be able to change it with _value:something_ use this:
-     <transformer name="mytrans3" class="org.apache.solr.response.transform.ValueAugmenterFactory" >
-       <double name="defaultValue">5</double>
-     </transformer>
-
-      If you are using the QueryElevationComponent, you may wish to mark documents that get boosted.  The
-      EditorialMarkerFactory will do exactly that:
-     <transformer name="qecBooster" class="org.apache.solr.response.transform.EditorialMarkerFactory" />
-    -->
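-
-  <!-- A registered transformer is invoked by name in the "fl" param, e.g.
-       (illustrative): fl=id,name,[mytrans2]
-    -->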
-
-</config>
diff --git a/solr/example/example-DIH/solr/mail/conf/spellings.txt b/solr/example/example-DIH/solr/mail/conf/spellings.txt
deleted file mode 100644
index d7ede6f..0000000
--- a/solr/example/example-DIH/solr/mail/conf/spellings.txt
+++ /dev/null
@@ -1,2 +0,0 @@
-pizza
-history
\ No newline at end of file
diff --git a/solr/example/example-DIH/solr/mail/conf/stopwords.txt b/solr/example/example-DIH/solr/mail/conf/stopwords.txt
deleted file mode 100644
index ae1e83e..0000000
--- a/solr/example/example-DIH/solr/mail/conf/stopwords.txt
+++ /dev/null
@@ -1,14 +0,0 @@
-# Licensed to the Apache Software Foundation (ASF) under one or more
-# contributor license agreements.  See the NOTICE file distributed with
-# this work for additional information regarding copyright ownership.
-# The ASF licenses this file to You under the Apache License, Version 2.0
-# (the "License"); you may not use this file except in compliance with
-# the License.  You may obtain a copy of the License at
-#
-#     http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
diff --git a/solr/example/example-DIH/solr/mail/conf/synonyms.txt b/solr/example/example-DIH/solr/mail/conf/synonyms.txt
deleted file mode 100644
index eab4ee8..0000000
--- a/solr/example/example-DIH/solr/mail/conf/synonyms.txt
+++ /dev/null
@@ -1,29 +0,0 @@
-# The ASF licenses this file to You under the Apache License, Version 2.0
-# (the "License"); you may not use this file except in compliance with
-# the License.  You may obtain a copy of the License at
-#
-#     http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-#-----------------------------------------------------------------------
-#some test synonym mappings unlikely to appear in real input text
-aaafoo => aaabar
-bbbfoo => bbbfoo bbbbar
-cccfoo => cccbar cccbaz
-fooaaa,baraaa,bazaaa
-
-# Some synonym groups specific to this example
-GB,gib,gigabyte,gigabytes
-MB,mib,megabyte,megabytes
-Television, Televisions, TV, TVs
-#notice we use "gib" instead of "GiB" so any WordDelimiterGraphFilter coming
-#after us won't split it into two words.
-
-# Synonym mappings can be used for spelling correction too
-pixima => pixma
-
diff --git a/solr/example/example-DIH/solr/mail/conf/update-script.js b/solr/example/example-DIH/solr/mail/conf/update-script.js
deleted file mode 100644
index 49b07f9..0000000
--- a/solr/example/example-DIH/solr/mail/conf/update-script.js
+++ /dev/null
@@ -1,53 +0,0 @@
-/*
-  This is a basic skeleton JavaScript update processor.
-
-  In order for this to be executed, it must be properly wired into solrconfig.xml; by default it is commented out in
-  the example solrconfig.xml and must be uncommented to be enabled.
-
-  See http://wiki.apache.org/solr/ScriptUpdateProcessor for more details.
-*/
-
-function processAdd(cmd) {
-
-  doc = cmd.solrDoc;  // org.apache.solr.common.SolrInputDocument
-  id = doc.getFieldValue("id");
-  logger.info("update-script#processAdd: id=" + id);
-
-// Set a field value:
-//  doc.setField("foo_s", "whatever");
-
-// Get a configuration parameter:
-//  config_param = params.get('config_param');  // "params" only exists if processor configured with <lst name="params">
-
-// Get a request parameter:
-// some_param = req.getParams().get("some_param")
-
-// Add a field of field names that match a pattern:
-//   - Potentially useful to determine the fields/attributes represented in a result set, via faceting on field_name_ss
-//  field_names = doc.getFieldNames().toArray();
-//  for(i=0; i < field_names.length; i++) {
-//    field_name = field_names[i];
-//    if (/attr_.*/.test(field_name)) { doc.addField("attribute_ss", field_names[i]); }
-//  }
-
-}
-
-function processDelete(cmd) {
-  // no-op
-}
-
-function processMergeIndexes(cmd) {
-  // no-op
-}
-
-function processCommit(cmd) {
-  // no-op
-}
-
-function processRollback(cmd) {
-  // no-op
-}
-
-function finish() {
-  // no-op
-}
diff --git a/solr/example/example-DIH/solr/mail/conf/xslt/example.xsl b/solr/example/example-DIH/solr/mail/conf/xslt/example.xsl
deleted file mode 100644
index b899270..0000000
--- a/solr/example/example-DIH/solr/mail/conf/xslt/example.xsl
+++ /dev/null
@@ -1,132 +0,0 @@
-<?xml version='1.0' encoding='UTF-8'?>
-
-<!-- 
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- -->
-
-<!-- 
-  Simple transform of Solr query results to HTML
- -->
-<xsl:stylesheet version='1.0'
-    xmlns:xsl='http://www.w3.org/1999/XSL/Transform'
->
-
-  <xsl:output media-type="text/html" encoding="UTF-8"/> 
-  
-  <xsl:variable name="title" select="concat('Solr search results (',response/result/@numFound,' documents)')"/>
-  
-  <xsl:template match='/'>
-    <html>
-      <head>
-        <title><xsl:value-of select="$title"/></title>
-        <xsl:call-template name="css"/>
-      </head>
-      <body>
-        <h1><xsl:value-of select="$title"/></h1>
-        <div class="note">
-          This has been formatted by the sample "example.xsl" transform -
-          use your own XSLT to get a nicer page
-        </div>
-        <xsl:apply-templates select="response/result/doc"/>
-      </body>
-    </html>
-  </xsl:template>
-  
-  <xsl:template match="doc">
-    <xsl:variable name="pos" select="position()"/>
-    <div class="doc">
-      <table width="100%">
-        <xsl:apply-templates>
-          <xsl:with-param name="pos"><xsl:value-of select="$pos"/></xsl:with-param>
-        </xsl:apply-templates>
-      </table>
-    </div>
-  </xsl:template>
-
-  <xsl:template match="doc/*[@name='score']" priority="100">
-    <xsl:param name="pos"></xsl:param>
-    <tr>
-      <td class="name">
-        <xsl:value-of select="@name"/>
-      </td>
-      <td class="value">
-        <xsl:value-of select="."/>
-
-        <xsl:if test="boolean(//lst[@name='explain'])">
-          <xsl:element name="a">
-            <!-- can't allow whitespace here -->
-            <xsl:attribute name="href">javascript:toggle("<xsl:value-of select="concat('exp-',$pos)" />");</xsl:attribute>?</xsl:element>
-          <br/>
-          <xsl:element name="div">
-            <xsl:attribute name="class">exp</xsl:attribute>
-            <xsl:attribute name="id">
-              <xsl:value-of select="concat('exp-',$pos)" />
-            </xsl:attribute>
-            <xsl:value-of select="//lst[@name='explain']/str[position()=$pos]"/>
-          </xsl:element>
-        </xsl:if>
-      </td>
-    </tr>
-  </xsl:template>
-
-  <xsl:template match="doc/arr" priority="100">
-    <tr>
-      <td class="name">
-        <xsl:value-of select="@name"/>
-      </td>
-      <td class="value">
-        <ul>
-        <xsl:for-each select="*">
-          <li><xsl:value-of select="."/></li>
-        </xsl:for-each>
-        </ul>
-      </td>
-    </tr>
-  </xsl:template>
-
-
-  <xsl:template match="doc/*">
-    <tr>
-      <td class="name">
-        <xsl:value-of select="@name"/>
-      </td>
-      <td class="value">
-        <xsl:value-of select="."/>
-      </td>
-    </tr>
-  </xsl:template>
-
-  <xsl:template match="*"/>
-  
-  <xsl:template name="css">
-    <script>
-      function toggle(id) {
-        var obj = document.getElementById(id);
-        obj.style.display = (obj.style.display != 'block') ? 'block' : 'none';
-      }
-    </script>
-    <style type="text/css">
-      body { font-family: "Lucida Grande", sans-serif }
-      td.name { font-style: italic; font-size:80%; }
-      td { vertical-align: top; }
-      ul { margin: 0px; margin-left: 1em; padding: 0px; }
-      .note { font-size:80%; }
-      .doc { margin-top: 1em; border-top: solid grey 1px; }
-      .exp { display: none; font-family: monospace; white-space: pre; }
-    </style>
-  </xsl:template>
-
-</xsl:stylesheet>
diff --git a/solr/example/example-DIH/solr/mail/conf/xslt/example_atom.xsl b/solr/example/example-DIH/solr/mail/conf/xslt/example_atom.xsl
deleted file mode 100644
index b6c2315..0000000
--- a/solr/example/example-DIH/solr/mail/conf/xslt/example_atom.xsl
+++ /dev/null
@@ -1,67 +0,0 @@
-<?xml version='1.0' encoding='UTF-8'?>
-
-<!-- 
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- -->
-
-<!-- 
-  Simple transform of Solr query results to Atom
- -->
-
-<xsl:stylesheet version='1.0'
-    xmlns:xsl='http://www.w3.org/1999/XSL/Transform'>
-
-  <xsl:output
-       method="xml"
-       encoding="utf-8"
-       media-type="application/xml"
-  />
-
-  <xsl:template match='/'>
-    <xsl:variable name="query" select="response/lst[@name='responseHeader']/lst[@name='params']/str[@name='q']"/>
-    <feed xmlns="http://www.w3.org/2005/Atom">
-      <title>Example Solr Atom 1.0 Feed</title>
-      <subtitle>
-       This has been formatted by the sample "example_atom.xsl" transform -
-       use your own XSLT to get a nicer Atom feed.
-      </subtitle>
-      <author>
-        <name>Apache Solr</name>
-        <email>solr-user@lucene.apache.org</email>
-      </author>
-      <link rel="self" type="application/atom+xml" 
-            href="http://localhost:8983/solr/q={$query}&amp;wt=xslt&amp;tr=atom.xsl"/>
-      <updated>
-        <xsl:value-of select="response/result/doc[position()=1]/date[@name='timestamp']"/>
-      </updated>
-      <id>tag:localhost,2007:example</id>
-      <xsl:apply-templates select="response/result/doc"/>
-    </feed>
-  </xsl:template>
-    
-  <!-- search results xslt -->
-  <xsl:template match="doc">
-    <xsl:variable name="id" select="str[@name='id']"/>
-    <entry>
-      <title><xsl:value-of select="str[@name='name']"/></title>
-      <link href="http://localhost:8983/solr/select?q={$id}"/>
-      <id>tag:localhost,2007:<xsl:value-of select="$id"/></id>
-      <summary><xsl:value-of select="arr[@name='features']"/></summary>
-      <updated><xsl:value-of select="date[@name='timestamp']"/></updated>
-    </entry>
-  </xsl:template>
-
-</xsl:stylesheet>
diff --git a/solr/example/example-DIH/solr/mail/conf/xslt/example_rss.xsl b/solr/example/example-DIH/solr/mail/conf/xslt/example_rss.xsl
deleted file mode 100644
index c8ab5bf..0000000
--- a/solr/example/example-DIH/solr/mail/conf/xslt/example_rss.xsl
+++ /dev/null
@@ -1,66 +0,0 @@
-<?xml version='1.0' encoding='UTF-8'?>
-
-<!-- 
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- -->
-
-<!-- 
-  Simple transform of Solr query results to RSS
- -->
-
-<xsl:stylesheet version='1.0'
-    xmlns:xsl='http://www.w3.org/1999/XSL/Transform'>
-
-  <xsl:output
-       method="xml"
-       encoding="utf-8"
-       media-type="application/xml"
-  />
-  <xsl:template match='/'>
-    <rss version="2.0">
-       <channel>
-         <title>Example Solr RSS 2.0 Feed</title>
-         <link>http://localhost:8983/solr</link>
-         <description>
-          This has been formatted by the sample "example_rss.xsl" transform -
-          use your own XSLT to get a nicer RSS feed.
-         </description>
-         <language>en-us</language>
-         <docs>http://localhost:8983/solr</docs>
-         <xsl:apply-templates select="response/result/doc"/>
-       </channel>
-    </rss>
-  </xsl:template>
-  
-  <!-- search results xslt -->
-  <xsl:template match="doc">
-    <xsl:variable name="id" select="str[@name='id']"/>
-    <xsl:variable name="timestamp" select="date[@name='timestamp']"/>
-    <item>
-      <title><xsl:value-of select="str[@name='name']"/></title>
-      <link>
-        http://localhost:8983/solr/select?q=id:<xsl:value-of select="$id"/>
-      </link>
-      <description>
-        <xsl:value-of select="arr[@name='features']"/>
-      </description>
-      <pubDate><xsl:value-of select="$timestamp"/></pubDate>
-      <guid>
-        http://localhost:8983/solr/select?q=id:<xsl:value-of select="$id"/>
-      </guid>
-    </item>
-  </xsl:template>
-</xsl:stylesheet>
diff --git a/solr/example/example-DIH/solr/mail/conf/xslt/luke.xsl b/solr/example/example-DIH/solr/mail/conf/xslt/luke.xsl
deleted file mode 100644
index 05fb5bf..0000000
--- a/solr/example/example-DIH/solr/mail/conf/xslt/luke.xsl
+++ /dev/null
@@ -1,337 +0,0 @@
-<?xml version="1.0" encoding="UTF-8"?>
-<!--
-    Licensed to the Apache Software Foundation (ASF) under one or more
-    contributor license agreements.  See the NOTICE file distributed with
-    this work for additional information regarding copyright ownership.
-    The ASF licenses this file to You under the Apache License, Version 2.0
-    (the "License"); you may not use this file except in compliance with
-    the License.  You may obtain a copy of the License at
-    
-    http://www.apache.org/licenses/LICENSE-2.0
-    
-    Unless required by applicable law or agreed to in writing, software
-    distributed under the License is distributed on an "AS IS" BASIS,
-    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-    See the License for the specific language governing permissions and
-    limitations under the License.
--->
-
-
-<!-- 
-  Display the luke request handler with graphs
- -->
-<xsl:stylesheet
-    xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
-    xmlns="http://www.w3.org/1999/xhtml"
-    version="1.0"
-    >
-    <xsl:output
-        method="html"
-        encoding="UTF-8"
-        media-type="text/html"
-        doctype-public="-//W3C//DTD XHTML 1.0 Strict//EN"
-        doctype-system="http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd"
-    />
-
-    <xsl:variable name="title">Solr Luke Request Handler Response</xsl:variable>
-
-    <xsl:template match="/">
-        <html xmlns="http://www.w3.org/1999/xhtml">
-            <head>
-                <link rel="stylesheet" type="text/css" href="solr-admin.css"/>
-                <link rel="icon" href="favicon.ico" type="image/x-icon"/>
-                <link rel="shortcut icon" href="favicon.ico" type="image/x-icon"/>
-                <title>
-                    <xsl:value-of select="$title"/>
-                </title>
-                <xsl:call-template name="css"/>
-
-            </head>
-            <body>
-                <h1>
-                    <xsl:value-of select="$title"/>
-                </h1>
-                <div class="doc">
-                    <ul>
-                        <xsl:if test="response/lst[@name='index']">
-                            <li>
-                                <a href="#index">Index Statistics</a>
-                            </li>
-                        </xsl:if>
-                        <xsl:if test="response/lst[@name='fields']">
-                            <li>
-                                <a href="#fields">Field Statistics</a>
-                                <ul>
-                                    <xsl:for-each select="response/lst[@name='fields']/lst">
-                                        <li>
-                                            <a href="#{@name}">
-                                                <xsl:value-of select="@name"/>
-                                            </a>
-                                        </li>
-                                    </xsl:for-each>
-                                </ul>
-                            </li>
-                        </xsl:if>
-                        <xsl:if test="response/lst[@name='doc']">
-                            <li>
-                                <a href="#doc">Document statistics</a>
-                            </li>
-                        </xsl:if>
-                    </ul>
-                </div>
-                <xsl:if test="response/lst[@name='index']">
-                    <h2><a name="index"/>Index Statistics</h2>
-                    <xsl:apply-templates select="response/lst[@name='index']"/>
-                </xsl:if>
-                <xsl:if test="response/lst[@name='fields']">
-                    <h2><a name="fields"/>Field Statistics</h2>
-                    <xsl:apply-templates select="response/lst[@name='fields']"/>
-                </xsl:if>
-                <xsl:if test="response/lst[@name='doc']">
-                    <h2><a name="doc"/>Document statistics</h2>
-                    <xsl:apply-templates select="response/lst[@name='doc']"/>
-                </xsl:if>
-            </body>
-        </html>
-    </xsl:template>
-
-    <xsl:template match="lst">
-        <xsl:if test="parent::lst">
-            <tr>
-                <td colspan="2">
-                    <div class="doc">
-                        <xsl:call-template name="list"/>
-                    </div>
-                </td>
-            </tr>
-        </xsl:if>
-        <xsl:if test="not(parent::lst)">
-            <div class="doc">
-                <xsl:call-template name="list"/>
-            </div>
-        </xsl:if>
-    </xsl:template>
-
-    <xsl:template name="list">
-        <xsl:if test="count(child::*)>0">
-            <table>
-                <thead>
-                    <tr>
-                        <th colspan="2">
-                            <p>
-                                <a name="{@name}"/>
-                            </p>
-                            <xsl:value-of select="@name"/>
-                        </th>
-                    </tr>
-                </thead>
-                <tbody>
-                    <xsl:choose>
-                        <xsl:when
-                            test="@name='histogram'">
-                            <tr>
-                                <td colspan="2">
-                                    <xsl:call-template name="histogram"/>
-                                </td>
-                            </tr>
-                        </xsl:when>
-                        <xsl:otherwise>
-                            <xsl:apply-templates/>
-                        </xsl:otherwise>
-                    </xsl:choose>
-                </tbody>
-            </table>
-        </xsl:if>
-    </xsl:template>
-
-    <xsl:template name="histogram">
-        <div class="doc">
-            <xsl:call-template name="barchart">
-                <xsl:with-param name="max_bar_width">50</xsl:with-param>
-                <xsl:with-param name="iwidth">800</xsl:with-param>
-                <xsl:with-param name="iheight">160</xsl:with-param>
-                <xsl:with-param name="fill">blue</xsl:with-param>
-            </xsl:call-template>
-        </div>
-    </xsl:template>
-
-    <xsl:template name="barchart">
-        <xsl:param name="max_bar_width"/>
-        <xsl:param name="iwidth"/>
-        <xsl:param name="iheight"/>
-        <xsl:param name="fill"/>
-        <xsl:variable name="max">
-            <xsl:for-each select="int">
-                <xsl:sort data-type="number" order="descending"/>
-                <xsl:if test="position()=1">
-                    <xsl:value-of select="."/>
-                </xsl:if>
-            </xsl:for-each>
-        </xsl:variable>
-        <xsl:variable name="bars">
-           <xsl:value-of select="count(int)"/>
-        </xsl:variable>
-        <xsl:variable name="bar_width">
-           <xsl:choose>
-             <xsl:when test="$max_bar_width &lt; ($iwidth div $bars)">
-               <xsl:value-of select="$max_bar_width"/>
-             </xsl:when>
-             <xsl:otherwise>
-               <xsl:value-of select="$iwidth div $bars"/>
-             </xsl:otherwise>
-           </xsl:choose>
-        </xsl:variable>
-        <table class="histogram">
-           <tbody>
-              <tr>
-                <xsl:for-each select="int">
-                   <td>
-                 <xsl:value-of select="."/>
-                 <div class="histogram">
-                  <xsl:attribute name="style">background-color: <xsl:value-of select="$fill"/>; width: <xsl:value-of select="$bar_width"/>px; height: <xsl:value-of select="($iheight*number(.)) div $max"/>px;</xsl:attribute>
-                 </div>
-                   </td> 
-                </xsl:for-each>
-              </tr>
-              <tr>
-                <xsl:for-each select="int">
-                   <td>
-                       <xsl:value-of select="@name"/>
-                   </td>
-                </xsl:for-each>
-              </tr>
-           </tbody>
-        </table>
-    </xsl:template>
-
-    <xsl:template name="keyvalue">
-        <xsl:choose>
-            <xsl:when test="@name">
-                <tr>
-                    <td class="name">
-                        <xsl:value-of select="@name"/>
-                    </td>
-                    <td class="value">
-                        <xsl:value-of select="."/>
-                    </td>
-                </tr>
-            </xsl:when>
-            <xsl:otherwise>
-                <xsl:value-of select="."/>
-            </xsl:otherwise>
-        </xsl:choose>
-    </xsl:template>
-
-    <xsl:template match="int|bool|long|float|double|uuid|date">
-        <xsl:call-template name="keyvalue"/>
-    </xsl:template>
-
-    <xsl:template match="arr">
-        <tr>
-            <td class="name">
-                <xsl:value-of select="@name"/>
-            </td>
-            <td class="value">
-                <ul>
-                    <xsl:for-each select="child::*">
-                        <li>
-                            <xsl:apply-templates/>
-                        </li>
-                    </xsl:for-each>
-                </ul>
-            </td>
-        </tr>
-    </xsl:template>
-
-    <xsl:template match="str">
-        <xsl:choose>
-            <xsl:when test="@name='schema' or @name='index' or @name='flags'">
-                <xsl:call-template name="schema"/>
-            </xsl:when>
-            <xsl:otherwise>
-                <xsl:call-template name="keyvalue"/>
-            </xsl:otherwise>
-        </xsl:choose>
-    </xsl:template>
-
-    <xsl:template name="schema">
-        <tr>
-            <td class="name">
-                <xsl:value-of select="@name"/>
-            </td>
-            <td class="value">
-                <xsl:if test="contains(.,'unstored')">
-                    <xsl:value-of select="."/>
-                </xsl:if>
-                <xsl:if test="not(contains(.,'unstored'))">
-                    <xsl:call-template name="infochar2string">
-                        <xsl:with-param name="charList">
-                            <xsl:value-of select="."/>
-                        </xsl:with-param>
-                    </xsl:call-template>
-                </xsl:if>
-            </td>
-        </tr>
-    </xsl:template>
-
-    <xsl:template name="infochar2string">
-        <xsl:param name="i">1</xsl:param>
-        <xsl:param name="charList"/>
-
-        <xsl:variable name="char">
-            <xsl:value-of select="substring($charList,$i,1)"/>
-        </xsl:variable>
-        <xsl:choose>
-            <xsl:when test="$char='I'">
-                <xsl:value-of select="/response/lst[@name='info']/lst/str[@name='I']"/> - </xsl:when>
-            <xsl:when test="$char='T'">
-                <xsl:value-of select="/response/lst[@name='info']/lst/str[@name='T']"/> - </xsl:when>
-            <xsl:when test="$char='S'">
-                <xsl:value-of select="/response/lst[@name='info']/lst/str[@name='S']"/> - </xsl:when>
-            <xsl:when test="$char='M'">
-                <xsl:value-of select="/response/lst[@name='info']/lst/str[@name='M']"/> - </xsl:when>
-            <xsl:when test="$char='V'">
-                <xsl:value-of select="/response/lst[@name='info']/lst/str[@name='V']"/> - </xsl:when>
-            <xsl:when test="$char='o'">
-                <xsl:value-of select="/response/lst[@name='info']/lst/str[@name='o']"/> - </xsl:when>
-            <xsl:when test="$char='p'">
-                <xsl:value-of select="/response/lst[@name='info']/lst/str[@name='p']"/> - </xsl:when>
-            <xsl:when test="$char='O'">
-                <xsl:value-of select="/response/lst[@name='info']/lst/str[@name='O']"/> - </xsl:when>
-            <xsl:when test="$char='L'">
-                <xsl:value-of select="/response/lst[@name='info']/lst/str[@name='L']"/> - </xsl:when>
-            <xsl:when test="$char='B'">
-                <xsl:value-of select="/response/lst[@name='info']/lst/str[@name='B']"/> - </xsl:when>
-            <xsl:when test="$char='C'">
-                <xsl:value-of select="/response/lst[@name='info']/lst/str[@name='C']"/> - </xsl:when>
-            <xsl:when test="$char='f'">
-                <xsl:value-of select="/response/lst[@name='info']/lst/str[@name='f']"/> - </xsl:when>
-            <xsl:when test="$char='l'">
-                <xsl:value-of select="/response/lst[@name='info']/lst/str[@name='l']"/> -
-            </xsl:when>
-        </xsl:choose>
-
-        <xsl:if test="not($i>=string-length($charList))">
-            <xsl:call-template name="infochar2string">
-                <xsl:with-param name="i">
-                    <xsl:value-of select="$i+1"/>
-                </xsl:with-param>
-                <xsl:with-param name="charList">
-                    <xsl:value-of select="$charList"/>
-                </xsl:with-param>
-            </xsl:call-template>
-        </xsl:if>
-    </xsl:template>
-    <xsl:template name="css">
-        <style type="text/css">
-            <![CDATA[
-            td.name {font-style: italic; font-size:80%; }
-            .doc { margin: 0.5em; border: solid grey 1px; }
-            .exp { display: none; font-family: monospace; white-space: pre; }
-            div.histogram { background: none repeat scroll 0%; -moz-background-clip: -moz-initial; -moz-background-origin: -moz-initial; -moz-background-inline-policy: -moz-initial;}
-            table.histogram { width: auto; vertical-align: bottom; }
-            table.histogram td, table.histogram th { text-align: center; vertical-align: bottom; border-bottom: 1px solid #ff9933; width: auto; }
-            ]]>
-        </style>
-    </xsl:template>
-</xsl:stylesheet>
diff --git a/solr/example/example-DIH/solr/mail/conf/xslt/updateXml.xsl b/solr/example/example-DIH/solr/mail/conf/xslt/updateXml.xsl
deleted file mode 100644
index a96e1d0..0000000
--- a/solr/example/example-DIH/solr/mail/conf/xslt/updateXml.xsl
+++ /dev/null
@@ -1,70 +0,0 @@
-<!-- 
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- -->
-
-<!--
-  Simple transform of a Solr query response into Solr Update XML.
-  When used in the xslt response writer you will get Update XML as output.
-  But you can also store a query response XML to disk and feed this XML to
-  the XSLTUpdateRequestHandler to index the content. Provided as an example only.
-  See http://wiki.apache.org/solr/XsltUpdateRequestHandler for more info
- -->
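-
-<!--
-  Illustrative round trip (the core name is a placeholder): the transformed
-  response can be fetched with
-
-      http://localhost:8983/solr/mycore/select?q=*:*&wt=xslt&tr=updateXml.xsl
-
-  and the resulting Update XML can then be posted back to /update.
- -->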
-<xsl:stylesheet version='1.0' xmlns:xsl='http://www.w3.org/1999/XSL/Transform'>
-  <xsl:output media-type="text/xml" method="xml" indent="yes"/>
-
-  <xsl:template match='/'>
-    <add>
-        <xsl:apply-templates select="response/result/doc"/>
-    </add>
-  </xsl:template>
-  
-  <!-- Ignore score (makes no sense to index) -->
-  <xsl:template match="doc/*[@name='score']" priority="100">
-  </xsl:template>
-
-  <xsl:template match="doc">
-    <xsl:variable name="pos" select="position()"/>
-    <doc>
-        <xsl:apply-templates>
-          <xsl:with-param name="pos"><xsl:value-of select="$pos"/></xsl:with-param>
-        </xsl:apply-templates>
-    </doc>
-  </xsl:template>
-
-  <!-- Flatten arrays to duplicate field lines -->
-  <xsl:template match="doc/arr" priority="100">
-      <xsl:variable name="fn" select="@name"/>
-      
-      <xsl:for-each select="*">
-        <xsl:element name="field">
-          <xsl:attribute name="name"><xsl:value-of select="$fn"/></xsl:attribute>
-          <xsl:value-of select="."/>
-        </xsl:element>
-      </xsl:for-each>
-  </xsl:template>
-
-
-  <xsl:template match="doc/*">
-      <xsl:variable name="fn" select="@name"/>
-
-      <xsl:element name="field">
-        <xsl:attribute name="name"><xsl:value-of select="$fn"/></xsl:attribute>
-        <xsl:value-of select="."/>
-      </xsl:element>
-  </xsl:template>
-
-  <xsl:template match="*"/>
-</xsl:stylesheet>
diff --git a/solr/example/example-DIH/solr/mail/core.properties b/solr/example/example-DIH/solr/mail/core.properties
deleted file mode 100644
index e69de29..0000000
--- a/solr/example/example-DIH/solr/mail/core.properties
+++ /dev/null
diff --git a/solr/example/example-DIH/solr/solr.xml b/solr/example/example-DIH/solr/solr.xml
deleted file mode 100644
index 191e51f..0000000
--- a/solr/example/example-DIH/solr/solr.xml
+++ /dev/null
@@ -1,2 +0,0 @@
-<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
-<solr></solr>
diff --git a/solr/example/example-DIH/solr/solr/conf/clustering/carrot2/kmeans-attributes.xml b/solr/example/example-DIH/solr/solr/conf/clustering/carrot2/kmeans-attributes.xml
deleted file mode 100644
index d802465..0000000
--- a/solr/example/example-DIH/solr/solr/conf/clustering/carrot2/kmeans-attributes.xml
+++ /dev/null
@@ -1,19 +0,0 @@
-<!-- 
-  Default configuration for the bisecting k-means clustering algorithm.
-  
-  This file can be loaded (and saved) by Carrot2 Workbench.
-  http://project.carrot2.org/download.html
--->
-<attribute-sets default="attributes">
-    <attribute-set id="attributes">
-      <value-set>
-        <label>attributes</label>
-          <attribute key="MultilingualClustering.defaultLanguage">
-            <value type="org.carrot2.core.LanguageCode" value="ENGLISH"/>
-          </attribute>
-          <attribute key="MultilingualClustering.languageAggregationStrategy">
-            <value type="org.carrot2.text.clustering.MultilingualClustering$LanguageAggregationStrategy" value="FLATTEN_MAJOR_LANGUAGE"/>
-          </attribute>
-      </value-set>
-  </attribute-set>
-</attribute-sets>
diff --git a/solr/example/example-DIH/solr/solr/conf/clustering/carrot2/lingo-attributes.xml b/solr/example/example-DIH/solr/solr/conf/clustering/carrot2/lingo-attributes.xml
deleted file mode 100644
index 4bf1360..0000000
--- a/solr/example/example-DIH/solr/solr/conf/clustering/carrot2/lingo-attributes.xml
+++ /dev/null
@@ -1,24 +0,0 @@
-<!-- 
-  Default configuration for the Lingo clustering algorithm.
-
-  This file can be loaded (and saved) by Carrot2 Workbench.
-  http://project.carrot2.org/download.html
--->
-<attribute-sets default="attributes">
-    <attribute-set id="attributes">
-      <value-set>
-        <label>attributes</label>
-          <!-- 
-          The language to assume for clustered documents.
-          For a list of allowed values, see: 
-          http://download.carrot2.org/stable/manual/#section.attribute.lingo.MultilingualClustering.defaultLanguage
-          -->
-          <attribute key="MultilingualClustering.defaultLanguage">
-            <value type="org.carrot2.core.LanguageCode" value="ENGLISH"/>
-          </attribute>
-          <attribute key="LingoClusteringAlgorithm.desiredClusterCountBase">
-            <value type="java.lang.Integer" value="20"/>
-          </attribute>
-      </value-set>
-  </attribute-set>
-</attribute-sets>
\ No newline at end of file
diff --git a/solr/example/example-DIH/solr/solr/conf/clustering/carrot2/stc-attributes.xml b/solr/example/example-DIH/solr/solr/conf/clustering/carrot2/stc-attributes.xml
deleted file mode 100644
index c1bf110..0000000
--- a/solr/example/example-DIH/solr/solr/conf/clustering/carrot2/stc-attributes.xml
+++ /dev/null
@@ -1,19 +0,0 @@
-<!-- 
-  Default configuration for the STC clustering algorithm.
-
-  This file can be loaded (and saved) by Carrot2 Workbench.
-  http://project.carrot2.org/download.html
--->
-<attribute-sets default="attributes">
-    <attribute-set id="attributes">
-      <value-set>
-        <label>attributes</label>
-          <attribute key="MultilingualClustering.defaultLanguage">
-            <value type="org.carrot2.core.LanguageCode" value="ENGLISH"/>
-          </attribute>
-          <attribute key="MultilingualClustering.languageAggregationStrategy">
-            <value type="org.carrot2.text.clustering.MultilingualClustering$LanguageAggregationStrategy" value="FLATTEN_MAJOR_LANGUAGE"/>
-          </attribute>
-      </value-set>
-  </attribute-set>
-</attribute-sets>
diff --git a/solr/example/example-DIH/solr/solr/conf/currency.xml b/solr/example/example-DIH/solr/solr/conf/currency.xml
deleted file mode 100644
index 532221a..0000000
--- a/solr/example/example-DIH/solr/solr/conf/currency.xml
+++ /dev/null
@@ -1,67 +0,0 @@
-<?xml version="1.0" ?>
-<!--
- Licensed to the Apache Software Foundation (ASF) under one or more
- contributor license agreements.  See the NOTICE file distributed with
- this work for additional information regarding copyright ownership.
- The ASF licenses this file to You under the Apache License, Version 2.0
- (the "License"); you may not use this file except in compliance with
- the License.  You may obtain a copy of the License at
-
-     http://www.apache.org/licenses/LICENSE-2.0
-
- Unless required by applicable law or agreed to in writing, software
- distributed under the License is distributed on an "AS IS" BASIS,
- WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- See the License for the specific language governing permissions and
- limitations under the License.
--->
-
-<!-- Example exchange rates file for CurrencyFieldType named "currency" in example schema -->
-
-<currencyConfig version="1.0">
-  <rates>
-    <!-- Updated from http://www.exchangerate.com/ at 2011-09-27 -->
-    <rate from="USD" to="ARS" rate="4.333871" comment="ARGENTINA Peso" />
-    <rate from="USD" to="AUD" rate="1.025768" comment="AUSTRALIA Dollar" />
-    <rate from="USD" to="EUR" rate="0.743676" comment="European Euro" />
-    <rate from="USD" to="BRL" rate="1.881093" comment="BRAZIL Real" />
-    <rate from="USD" to="CAD" rate="1.030815" comment="CANADA Dollar" />
-    <rate from="USD" to="CLP" rate="519.0996" comment="CHILE Peso" />
-    <rate from="USD" to="CNY" rate="6.387310" comment="CHINA Yuan" />
-    <rate from="USD" to="CZK" rate="18.47134" comment="CZECH REP. Koruna" />
-    <rate from="USD" to="DKK" rate="5.515436" comment="DENMARK Krone" />
-    <rate from="USD" to="HKD" rate="7.801922" comment="HONG KONG Dollar" />
-    <rate from="USD" to="HUF" rate="215.6169" comment="HUNGARY Forint" />
-    <rate from="USD" to="ISK" rate="118.1280" comment="ICELAND Krona" />
-    <rate from="USD" to="INR" rate="49.49088" comment="INDIA Rupee" />
-    <rate from="USD" to="XDR" rate="0.641358" comment="INTNL MON. FUND SDR" />
-    <rate from="USD" to="ILS" rate="3.709739" comment="ISRAEL Sheqel" />
-    <rate from="USD" to="JPY" rate="76.32419" comment="JAPAN Yen" />
-    <rate from="USD" to="KRW" rate="1169.173" comment="KOREA (SOUTH) Won" />
-    <rate from="USD" to="KWD" rate="0.275142" comment="KUWAIT Dinar" />
-    <rate from="USD" to="MXN" rate="13.85895" comment="MEXICO Peso" />
-    <rate from="USD" to="NZD" rate="1.285159" comment="NEW ZEALAND Dollar" />
-    <rate from="USD" to="NOK" rate="5.859035" comment="NORWAY Krone" />
-    <rate from="USD" to="PKR" rate="87.57007" comment="PAKISTAN Rupee" />
-    <rate from="USD" to="PEN" rate="2.730683" comment="PERU Sol" />
-    <rate from="USD" to="PHP" rate="43.62039" comment="PHILIPPINES Peso" />
-    <rate from="USD" to="PLN" rate="3.310139" comment="POLAND Zloty" />
-    <rate from="USD" to="RON" rate="3.100932" comment="ROMANIA Leu" />
-    <rate from="USD" to="RUB" rate="32.14663" comment="RUSSIA Ruble" />
-    <rate from="USD" to="SAR" rate="3.750465" comment="SAUDI ARABIA Riyal" />
-    <rate from="USD" to="SGD" rate="1.299352" comment="SINGAPORE Dollar" />
-    <rate from="USD" to="ZAR" rate="8.329761" comment="SOUTH AFRICA Rand" />
-    <rate from="USD" to="SEK" rate="6.883442" comment="SWEDEN Krona" />
-    <rate from="USD" to="CHF" rate="0.906035" comment="SWITZERLAND Franc" />
-    <rate from="USD" to="TWD" rate="30.40283" comment="TAIWAN Dollar" />
-    <rate from="USD" to="THB" rate="30.89487" comment="THAILAND Baht" />
-    <rate from="USD" to="AED" rate="3.672955" comment="U.A.E. Dirham" />
-    <rate from="USD" to="UAH" rate="7.988582" comment="UKRAINE Hryvnia" />
-    <rate from="USD" to="GBP" rate="0.647910" comment="UNITED KINGDOM Pound" />
-    
-    <!-- Cross-rates for some common currencies -->
-    <rate from="EUR" to="GBP" rate="0.869914" />  
-    <rate from="EUR" to="NOK" rate="7.800095" />  
-    <rate from="GBP" to="NOK" rate="8.966508" />  
-  </rates>
-</currencyConfig>
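(For context: a rates file like the one deleted above is referenced from the schema by Solr's CurrencyFieldType. A minimal sketch; the amountLongSuffix/codeStrSuffix values are illustrative conventions, only currencyConfig and defaultCurrency relate to this file:

    <fieldType name="currency" class="solr.CurrencyFieldType"
               amountLongSuffix="_l_ns" codeStrSuffix="_s_ns"
               defaultCurrency="USD" currencyConfig="currency.xml"/>
)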
diff --git a/solr/example/example-DIH/solr/solr/conf/elevate.xml b/solr/example/example-DIH/solr/solr/conf/elevate.xml
deleted file mode 100644
index 2c09ebe..0000000
--- a/solr/example/example-DIH/solr/solr/conf/elevate.xml
+++ /dev/null
@@ -1,42 +0,0 @@
-<?xml version="1.0" encoding="UTF-8" ?>
-<!--
- Licensed to the Apache Software Foundation (ASF) under one or more
- contributor license agreements.  See the NOTICE file distributed with
- this work for additional information regarding copyright ownership.
- The ASF licenses this file to You under the Apache License, Version 2.0
- (the "License"); you may not use this file except in compliance with
- the License.  You may obtain a copy of the License at
-
-     http://www.apache.org/licenses/LICENSE-2.0
-
- Unless required by applicable law or agreed to in writing, software
- distributed under the License is distributed on an "AS IS" BASIS,
- WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- See the License for the specific language governing permissions and
- limitations under the License.
--->
-
-<!-- If this file is found in the config directory, it will only be
-     loaded once at startup.  If it is found in Solr's data
-     directory, it will be re-loaded on every commit.
-
-   See http://wiki.apache.org/solr/QueryElevationComponent for more info
-
--->
-<elevate>
- <!-- Query elevation examples
-  <query text="foo bar">
-    <doc id="1" />
-    <doc id="2" />
-    <doc id="3" />
-  </query>
-
-for use with techproducts example
- 
-  <query text="ipod">
-    <doc id="MA147LL/A" />  put the actual ipod at the top 
-    <doc id="IW-02" exclude="true" /> exclude this cable
-  </query>
--->
-
-</elevate>
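(For context: elevate.xml is read by the QueryElevationComponent declared in solrconfig.xml. A minimal sketch of that wiring, using the conventional component name and queryFieldType, assumed here rather than taken from this change:

    <searchComponent name="elevator" class="solr.QueryElevationComponent">
      <str name="queryFieldType">string</str>
      <str name="config-file">elevate.xml</str>
    </searchComponent>
)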
diff --git a/solr/example/example-DIH/solr/solr/conf/lang/contractions_ca.txt b/solr/example/example-DIH/solr/solr/conf/lang/contractions_ca.txt
deleted file mode 100644
index 307a85f..0000000
--- a/solr/example/example-DIH/solr/solr/conf/lang/contractions_ca.txt
+++ /dev/null
@@ -1,8 +0,0 @@
-# Set of Catalan contractions for ElisionFilter
-# TODO: load this as a resource from the analyzer and sync it in build.xml
-d
-l
-m
-n
-s
-t
diff --git a/solr/example/example-DIH/solr/solr/conf/lang/contractions_fr.txt b/solr/example/example-DIH/solr/solr/conf/lang/contractions_fr.txt
deleted file mode 100644
index f1bba51..0000000
--- a/solr/example/example-DIH/solr/solr/conf/lang/contractions_fr.txt
+++ /dev/null
@@ -1,15 +0,0 @@
-# Set of French contractions for ElisionFilter
-# TODO: load this as a resource from the analyzer and sync it in build.xml
-l
-m
-t
-qu
-n
-s
-j
-d
-c
-jusqu
-quoiqu
-lorsqu
-puisqu
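(These contraction lists feed ElisionFilterFactory, which strips the elided article from tokens such as l'avion before further analysis. A sketch of the usual analyzer entry, assuming the conf/lang layout above:

    <filter class="solr.ElisionFilterFactory" ignoreCase="true" articles="lang/contractions_fr.txt"/>
)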
diff --git a/solr/example/example-DIH/solr/solr/conf/lang/contractions_ga.txt b/solr/example/example-DIH/solr/solr/conf/lang/contractions_ga.txt
deleted file mode 100644
index 9ebe7fa..0000000
--- a/solr/example/example-DIH/solr/solr/conf/lang/contractions_ga.txt
+++ /dev/null
@@ -1,5 +0,0 @@
-# Set of Irish contractions for ElisionFilter
-# TODO: load this as a resource from the analyzer and sync it in build.xml
-d
-m
-b
diff --git a/solr/example/example-DIH/solr/solr/conf/lang/contractions_it.txt b/solr/example/example-DIH/solr/solr/conf/lang/contractions_it.txt
deleted file mode 100644
index cac0409..0000000
--- a/solr/example/example-DIH/solr/solr/conf/lang/contractions_it.txt
+++ /dev/null
@@ -1,23 +0,0 @@
-# Set of Italian contractions for ElisionFilter
-# TODO: load this as a resource from the analyzer and sync it in build.xml
-c
-l 
-all 
-dall 
-dell 
-nell 
-sull 
-coll 
-pell 
-gl 
-agl 
-dagl 
-degl 
-negl 
-sugl 
-un 
-m 
-t 
-s 
-v 
-d
diff --git a/solr/example/example-DIH/solr/solr/conf/lang/hyphenations_ga.txt b/solr/example/example-DIH/solr/solr/conf/lang/hyphenations_ga.txt
deleted file mode 100644
index 4d2642c..0000000
--- a/solr/example/example-DIH/solr/solr/conf/lang/hyphenations_ga.txt
+++ /dev/null
@@ -1,5 +0,0 @@
-# Set of Irish hyphenations for StopFilter
-# TODO: load this as a resource from the analyzer and sync it in build.xml
-h
-n
-t
diff --git a/solr/example/example-DIH/solr/solr/conf/lang/stemdict_nl.txt b/solr/example/example-DIH/solr/solr/conf/lang/stemdict_nl.txt
deleted file mode 100644
index 4410729..0000000
--- a/solr/example/example-DIH/solr/solr/conf/lang/stemdict_nl.txt
+++ /dev/null
@@ -1,6 +0,0 @@
-# Set of overrides for the Dutch stemmer
-# TODO: load this as a resource from the analyzer and sync it in build.xml
-fiets	fiets
-bromfiets	bromfiets
-ei	eier
-kind	kinder
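(A stemmer-override dictionary like this is normally applied ahead of the Dutch stemmer via StemmerOverrideFilterFactory, protecting the listed forms from being stemmed. A sketch:

    <filter class="solr.StemmerOverrideFilterFactory" dictionary="lang/stemdict_nl.txt" ignoreCase="false"/>
)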
diff --git a/solr/example/example-DIH/solr/solr/conf/lang/stoptags_ja.txt b/solr/example/example-DIH/solr/solr/conf/lang/stoptags_ja.txt
deleted file mode 100644
index 71b7508..0000000
--- a/solr/example/example-DIH/solr/solr/conf/lang/stoptags_ja.txt
+++ /dev/null
@@ -1,420 +0,0 @@
-#
-# This file defines a Japanese stoptag set for JapanesePartOfSpeechStopFilter.
-#
-# Any token with a part-of-speech tag that exactly matches those defined in this
-# file is removed from the token stream.
-#
-# Set your own stoptags by uncommenting the lines below.  Note that comments are
-# not allowed on the same line as a stoptag.  See LUCENE-3745 for frequency lists,
-# etc. that can be useful for building your own stoptag set.
-#
-# The entire possible tagset is provided below for convenience.
-#
-#####
-#  noun: unclassified nouns
-#名詞
-#
-#  noun-common: Common nouns or nouns where the sub-classification is undefined
-#名詞-一般
-#
-#  noun-proper: Proper nouns where the sub-classification is undefined 
-#名詞-固有名詞
-#
-#  noun-proper-misc: miscellaneous proper nouns
-#名詞-固有名詞-一般
-#
-#  noun-proper-person: Personal names where the sub-classification is undefined
-#名詞-固有名詞-人名
-#
-#  noun-proper-person-misc: names that cannot be divided into surname and 
-#  given name; foreign names; names where the surname or given name is unknown.
-#  e.g. お市の方
-#名詞-固有名詞-人名-一般
-#
-#  noun-proper-person-surname: Mainly Japanese surnames.
-#  e.g. 山田
-#名詞-固有名詞-人名-姓
-#
-#  noun-proper-person-given_name: Mainly Japanese given names.
-#  e.g. 太郎
-#名詞-固有名詞-人名-名
-#
-#  noun-proper-organization: Names representing organizations.
-#  e.g. 通産省, NHK
-#名詞-固有名詞-組織
-#
-#  noun-proper-place: Place names where the sub-classification is undefined
-#名詞-固有名詞-地域
-#
-#  noun-proper-place-misc: Place names excluding countries.
-#  e.g. アジア, バルセロナ, 京都
-#名詞-固有名詞-地域-一般
-#
-#  noun-proper-place-country: Country names. 
-#  e.g. 日本, オーストラリア
-#名詞-固有名詞-地域-国
-#
-#  noun-pronoun: Pronouns where the sub-classification is undefined
-#名詞-代名詞
-#
-#  noun-pronoun-misc: miscellaneous pronouns: 
-#  e.g. それ, ここ, あいつ, あなた, あちこち, いくつ, どこか, なに, みなさん, みんな, わたくし, われわれ
-#名詞-代名詞-一般
-#
-#  noun-pronoun-contraction: Spoken language contraction made by combining a 
-#  pronoun and the particle 'wa'.
-#  e.g. ありゃ, こりゃ, こりゃあ, そりゃ, そりゃあ 
-#名詞-代名詞-縮約
-#
-#  noun-adverbial: Temporal nouns such as names of days or months that behave 
-#  like adverbs. Nouns that represent amount or ratios and can be used adverbially,
-#  e.g. 金曜, 一月, 午後, 少量
-#名詞-副詞可能
-#
-#  noun-verbal: Nouns that take arguments with case and can appear followed by 
-#  'suru' and related verbs (する, できる, なさる, くださる)
-#  e.g. インプット, 愛着, 悪化, 悪戦苦闘, 一安心, 下取り
-#名詞-サ変接続
-#
-#  noun-adjective-base: The base form of adjectives, words that appear before な ("na")
-#  e.g. 健康, 安易, 駄目, だめ
-#名詞-形容動詞語幹
-#
-#  noun-numeric: Arabic numbers, Chinese numerals, and counters like 何 (回), 数.
-#  e.g. 0, 1, 2, 何, 数, 幾
-#名詞-数
-#
-#  noun-affix: noun affixes where the sub-classification is undefined
-#名詞-非自立
-#
-#  noun-affix-misc: Of adnominalizers, the case-marker の ("no"), and words that 
-#  attach to the base form of inflectional words, words that cannot be classified 
-#  into any of the other categories below. This category includes indefinite nouns.
-#  e.g. あかつき, 暁, かい, 甲斐, 気, きらい, 嫌い, くせ, 癖, こと, 事, ごと, 毎, しだい, 次第, 
-#       順, せい, 所為, ついで, 序で, つもり, 積もり, 点, どころ, の, はず, 筈, はずみ, 弾み, 
-#       拍子, ふう, ふり, 振り, ほう, 方, 旨, もの, 物, 者, ゆえ, 故, ゆえん, 所以, わけ, 訳,
-#       わり, 割り, 割, ん-口語/, もん-口語/
-#名詞-非自立-一般
-#
-#  noun-affix-adverbial: noun affixes that can behave as adverbs.
-#  e.g. あいだ, 間, あげく, 挙げ句, あと, 後, 余り, 以外, 以降, 以後, 以上, 以前, 一方, うえ, 
-#       上, うち, 内, おり, 折り, かぎり, 限り, きり, っきり, 結果, ころ, 頃, さい, 際, 最中, さなか, 
-#       最中, じたい, 自体, たび, 度, ため, 為, つど, 都度, とおり, 通り, とき, 時, ところ, 所, 
-#       とたん, 途端, なか, 中, のち, 後, ばあい, 場合, 日, ぶん, 分, ほか, 他, まえ, 前, まま, 
-#       儘, 侭, みぎり, 矢先
-#名詞-非自立-副詞可能
-#
-#  noun-affix-aux: noun affixes treated as 助動詞 ("auxiliary verb") in school grammars 
-#  with the stem よう(だ) ("you(da)").
-#  e.g.  よう, やう, 様 (よう)
-#名詞-非自立-助動詞語幹
-#  
-#  noun-affix-adjective-base: noun affixes that can connect to the indeclinable
-#  connection form な (aux "da").
-#  e.g. みたい, ふう
-#名詞-非自立-形容動詞語幹
-#
-#  noun-special: special nouns where the sub-classification is undefined.
-#名詞-特殊
-#
-#  noun-special-aux: The そうだ ("souda") stem form that is used for reporting news, is 
-#  treated as 助動詞 ("auxiliary verb") in school grammars, and attaches to the base 
-#  form of inflectional words.
-#  e.g. そう
-#名詞-特殊-助動詞語幹
-#
-#  noun-suffix: noun suffixes where the sub-classification is undefined.
-#名詞-接尾
-#
-#  noun-suffix-misc: Of the nouns or stem forms of other parts of speech that connect 
-#  to ガル or タイ and can combine into compound nouns, words that cannot be classified into
-#  any of the other categories below. In general, this category is more inclusive than 
-#  接尾語 ("suffix") and is usually the last element in a compound noun.
-#  e.g. おき, かた, 方, 甲斐 (がい), がかり, ぎみ, 気味, ぐるみ, (~した) さ, 次第, 済 (ず) み,
-#       よう, (でき)っこ, 感, 観, 性, 学, 類, 面, 用
-#名詞-接尾-一般
-#
-#  noun-suffix-person: Suffixes that form nouns and attach to person names more often
-#  than other nouns.
-#  e.g. 君, 様, 著
-#名詞-接尾-人名
-#
-#  noun-suffix-place: Suffixes that form nouns and attach to place names more often 
-#  than other nouns.
-#  e.g. 町, 市, 県
-#名詞-接尾-地域
-#
-#  noun-suffix-verbal: Of the suffixes that attach to nouns and form nouns, those that 
-#  can appear before スル ("suru").
-#  e.g. 化, 視, 分け, 入り, 落ち, 買い
-#名詞-接尾-サ変接続
-#
-#  noun-suffix-aux: The stem form of そうだ (様態) that is used to indicate conditions, 
-#  is treated as 助動詞 ("auxiliary verb") in school grammars, and attaches to the 
-#  conjunctive form of inflectional words.
-#  e.g. そう
-#名詞-接尾-助動詞語幹
-#
-#  noun-suffix-adjective-base: Suffixes that attach to other nouns or the conjunctive 
-#  form of inflectional words and appear before the copula だ ("da").
-#  e.g. 的, げ, がち
-#名詞-接尾-形容動詞語幹
-#
-#  noun-suffix-adverbial: Suffixes that attach to other nouns and can behave as adverbs.
-#  e.g. 後 (ご), 以後, 以降, 以前, 前後, 中, 末, 上, 時 (じ)
-#名詞-接尾-副詞可能
-#
-#  noun-suffix-classifier: Suffixes that attach to numbers and form nouns. This category 
-#  is more inclusive than 助数詞 ("classifier") and includes common nouns that attach 
-#  to numbers.
-#  e.g. 個, つ, 本, 冊, パーセント, cm, kg, カ月, か国, 区画, 時間, 時半
-#名詞-接尾-助数詞
-#
-#  noun-suffix-special: Special suffixes that mainly attach to inflecting words.
-#  e.g. (楽し) さ, (考え) 方
-#名詞-接尾-特殊
-#
-#  noun-suffix-conjunctive: Nouns that behave like conjunctions and join two words 
-#  together.
-#  e.g. (日本) 対 (アメリカ), 対 (アメリカ), (3) 対 (5), (女優) 兼 (主婦)
-#名詞-接続詞的
-#
-#  noun-verbal_aux: Nouns that attach to the conjunctive particle て ("te") and are 
-#  semantically verb-like.
-#  e.g. ごらん, ご覧, 御覧, 頂戴
-#名詞-動詞非自立的
-#
-#  noun-quotation: text that cannot be segmented into words, proverbs, Chinese poetry, 
-#  dialects, English, etc. Currently, the only entry for 名詞 引用文字列 ("noun quotation") 
-#  is いわく ("iwaku").
-#名詞-引用文字列
-#
-#  noun-nai_adjective: Words that appear before the auxiliary verb ない ("nai") and
-#  behave like an adjective.
-#  e.g. 申し訳, 仕方, とんでも, 違い
-#名詞-ナイ形容詞語幹
-#
-#####
-#  prefix: unclassified prefixes
-#接頭詞
-#
-#  prefix-nominal: Prefixes that attach to nouns (including adjective stem forms) 
-#  excluding numerical expressions.
-#  e.g. お (水), 某 (氏), 同 (社), 故 (~氏), 高 (品質), お (見事), ご (立派)
-#接頭詞-名詞接続
-#
-#  prefix-verbal: Prefixes that attach to the imperative form of a verb or a verb
-#  in conjunctive form followed by なる/なさる/くださる.
-#  e.g. お (読みなさい), お (座り)
-#接頭詞-動詞接続
-#
-#  prefix-adjectival: Prefixes that attach to adjectives.
-#  e.g. お (寒いですねえ), バカ (でかい)
-#接頭詞-形容詞接続
-#
-#  prefix-numerical: Prefixes that attach to numerical expressions.
-#  e.g. 約, およそ, 毎時
-#接頭詞-数接続
-#
-#####
-#  verb: unclassified verbs
-#動詞
-#
-#  verb-main:
-#動詞-自立
-#
-#  verb-auxiliary:
-#動詞-非自立
-#
-#  verb-suffix:
-#動詞-接尾
-#
-#####
-#  adjective: unclassified adjectives
-#形容詞
-#
-#  adjective-main:
-#形容詞-自立
-#
-#  adjective-auxiliary:
-#形容詞-非自立
-#
-#  adjective-suffix:
-#形容詞-接尾
-#
-#####
-#  adverb: unclassified adverbs
-#副詞
-#
-#  adverb-misc: Words that can be segmented into one unit and where adnominal 
-#  modification is not possible.
-#  e.g. あいかわらず, 多分
-#副詞-一般
-#
-#  adverb-particle_conjunction: Adverbs that can be followed by の, は, に, 
-#  な, する, だ, etc.
-#  e.g. こんなに, そんなに, あんなに, なにか, なんでも
-#副詞-助詞類接続
-#
-#####
-#  adnominal: Words that only have noun-modifying forms.
-#  e.g. この, その, あの, どの, いわゆる, なんらかの, 何らかの, いろんな, こういう, そういう, ああいう, 
-#       どういう, こんな, そんな, あんな, どんな, 大きな, 小さな, おかしな, ほんの, たいした, 
-#       「(, も) さる (ことながら)」, 微々たる, 堂々たる, 単なる, いかなる, 我が」「同じ, 亡き
-#連体詞
-#
-#####
-#  conjunction: Conjunctions that can occur independently.
-#  e.g. が, けれども, そして, じゃあ, それどころか
-接続詞
-#
-#####
-#  particle: unclassified particles.
-助詞
-#
-#  particle-case: case particles where the subclassification is undefined.
-助詞-格助詞
-#
-#  particle-case-misc: Case particles.
-#  e.g. から, が, で, と, に, へ, より, を, の, にて
-助詞-格助詞-一般
-#
-#  particle-case-quote: the "to" that appears after nouns, a person’s speech, 
-#  quotation marks, expressions of decisions from a meeting, reasons, judgements,
-#  conjectures, etc.
-#  e.g. ( だ) と (述べた.), ( である) と (して執行猶予...)
-助詞-格助詞-引用
-#
-#  particle-case-compound: Compounds of particles and verbs that mainly behave 
-#  like case particles.
-#  e.g. という, といった, とかいう, として, とともに, と共に, でもって, にあたって, に当たって, に当って,
-#       にあたり, に当たり, に当り, に当たる, にあたる, において, に於いて,に於て, における, に於ける, 
-#       にかけ, にかけて, にかんし, に関し, にかんして, に関して, にかんする, に関する, に際し, 
-#       に際して, にしたがい, に従い, に従う, にしたがって, に従って, にたいし, に対し, にたいして, 
-#       に対して, にたいする, に対する, について, につき, につけ, につけて, につれ, につれて, にとって,
-#       にとり, にまつわる, によって, に依って, に因って, により, に依り, に因り, による, に依る, に因る, 
-#       にわたって, にわたる, をもって, を以って, を通じ, を通じて, を通して, をめぐって, をめぐり, をめぐる,
-#       って-口語/, ちゅう-関西弁「という」/, (何) ていう (人)-口語/, っていう-口語/, といふ, とかいふ
-助詞-格助詞-連語
-#
-#  particle-conjunctive:
-#  e.g. から, からには, が, けれど, けれども, けど, し, つつ, て, で, と, ところが, どころか, とも, ども, 
-#       ながら, なり, ので, のに, ば, ものの, や ( した), やいなや, (ころん) じゃ(いけない)-口語/, 
-#       (行っ) ちゃ(いけない)-口語/, (言っ) たって (しかたがない)-口語/, (それがなく)ったって (平気)-口語/
-助詞-接続助詞
-#
-#  particle-dependency:
-#  e.g. こそ, さえ, しか, すら, は, も, ぞ
-助詞-係助詞
-#
-#  particle-adverbial:
-#  e.g. がてら, かも, くらい, 位, ぐらい, しも, (学校) じゃ(これが流行っている)-口語/, 
-#       (それ)じゃあ (よくない)-口語/, ずつ, (私) なぞ, など, (私) なり (に), (先生) なんか (大嫌い)-口語/,
-#       (私) なんぞ, (先生) なんて (大嫌い)-口語/, のみ, だけ, (私) だって-口語/, だに, 
-#       (彼)ったら-口語/, (お茶) でも (いかが), 等 (とう), (今後) とも, ばかり, ばっか-口語/, ばっかり-口語/,
-#       ほど, 程, まで, 迄, (誰) も (が)([助詞-格助詞] および [助詞-係助詞] の前に位置する「も」)
-助詞-副助詞
-#
-#  particle-interjective: particles with interjective grammatical roles.
-#  e.g. (松島) や
-助詞-間投助詞
-#
-#  particle-coordinate:
-#  e.g. と, たり, だの, だり, とか, なり, や, やら
-助詞-並立助詞
-#
-#  particle-final:
-#  e.g. かい, かしら, さ, ぜ, (だ)っけ-口語/, (とまってる) で-方言/, な, ナ, なあ-口語/, ぞ, ね, ネ, 
-#       ねぇ-口語/, ねえ-口語/, ねん-方言/, の, のう-口語/, や, よ, ヨ, よぉ-口語/, わ, わい-口語/
-助詞-終助詞
-#
-#  particle-adverbial/conjunctive/final: The particle "ka" when unknown whether it is 
-#  adverbial, conjunctive, or sentence final. For example:
-#       (a) 「A か B か」. Ex:「(国内で運用する) か,(海外で運用する) か (.)」
-#       (b) Inside an adverb phrase. Ex:「(幸いという) か (, 死者はいなかった.)」
-#           「(祈りが届いたせい) か (, 試験に合格した.)」
-#       (c) 「かのように」. Ex:「(何もなかった) か (のように振る舞った.)」
-#  e.g. か
-助詞-副助詞/並立助詞/終助詞
-#
-#  particle-adnominalizer: The "no" that attaches to nouns and modifies 
-#  non-inflectional words.
-助詞-連体化
-#
-#  particle-adnominalizer: The "ni" and "to" that appear following nouns and adverbs 
-#  that are giongo, giseigo, or gitaigo.
-#  e.g. に, と
-助詞-副詞化
-#
-#  particle-special: A particle that does not fit into one of the above classifications. 
-#  This includes particles that are used in Tanka, Haiku, and other poetry.
-#  e.g. かな, けむ, ( しただろう) に, (あんた) にゃ(わからん), (俺) ん (家)
-助詞-特殊
-#
-#####
-#  auxiliary-verb:
-助動詞
-#
-#####
-#  interjection: Greetings and other exclamations.
-#  e.g. おはよう, おはようございます, こんにちは, こんばんは, ありがとう, どうもありがとう, ありがとうございます, 
-#       いただきます, ごちそうさま, さよなら, さようなら, はい, いいえ, ごめん, ごめんなさい
-#感動詞
-#
-#####
-#  symbol: unclassified Symbols.
-記号
-#
-#  symbol-misc: A general symbol not in one of the categories below.
-#  e.g. [○◎@$〒→+]
-記号-一般
-#
-#  symbol-comma: Commas
-#  e.g. [,、]
-記号-読点
-#
-#  symbol-period: Periods and full stops.
-#  e.g. [..。]
-記号-句点
-#
-#  symbol-space: Full-width whitespace.
-記号-空白
-#
-#  symbol-open_bracket:
-#  e.g. [({‘“『【]
-記号-括弧開
-#
-#  symbol-close_bracket:
-#  e.g. [)}’”』」】]
-記号-括弧閉
-#
-#  symbol-alphabetic:
-#記号-アルファベット
-#
-#####
-#  other: unclassified other
-#その他
-#
-#  other-interjection: Words that are hard to classify as noun-suffixes or 
-#  sentence-final particles.
-#  e.g. (だ)ァ
-その他-間投
-#
-#####
-#  filler: Aizuchi that occurs during a conversation or sounds inserted as filler.
-#  e.g. あの, うんと, えと
-フィラー
-#
-#####
-#  non-verbal: non-verbal sound.
-非言語音
-#
-#####
-#  fragment:
-#語断片
-#
-#####
-#  unknown: unknown part of speech.
-#未知語
-#
-##### End of file
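(As the header of this deleted stoptags_ja.txt explains, the uncommented tags drive part-of-speech-based stopping. In a Solr analyzer the file is referenced roughly like this, a sketch:

    <filter class="solr.JapanesePartOfSpeechStopFilterFactory" tags="lang/stoptags_ja.txt"/>
)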
diff --git a/solr/example/example-DIH/solr/solr/conf/lang/stopwords_ar.txt b/solr/example/example-DIH/solr/solr/conf/lang/stopwords_ar.txt
deleted file mode 100644
index 046829d..0000000
--- a/solr/example/example-DIH/solr/solr/conf/lang/stopwords_ar.txt
+++ /dev/null
@@ -1,125 +0,0 @@
-# This file was created by Jacques Savoy and is distributed under the BSD license.
-# See http://members.unine.ch/jacques.savoy/clef/index.html.
-# Also see http://www.opensource.org/licenses/bsd-license.html
-# Cleaned on October 11, 2009 (not normalized, so use before normalization)
-# This means that when modifying this list, you might need to add some 
-# redundant entries, for example containing forms with both أ and ا
-من
-ومن
-منها
-منه
-في
-وفي
-فيها
-فيه
-و
-ف
-ثم
-او
-أو
-ب
-بها
-به
-ا
-أ
-اى
-اي
-أي
-أى
-لا
-ولا
-الا
-ألا
-إلا
-لكن
-ما
-وما
-كما
-فما
-عن
-مع
-اذا
-إذا
-ان
-أن
-إن
-انها
-أنها
-إنها
-انه
-أنه
-إنه
-بان
-بأن
-فان
-فأن
-وان
-وأن
-وإن
-التى
-التي
-الذى
-الذي
-الذين
-الى
-الي
-إلى
-إلي
-على
-عليها
-عليه
-اما
-أما
-إما
-ايضا
-أيضا
-كل
-وكل
-لم
-ولم
-لن
-ولن
-هى
-هي
-هو
-وهى
-وهي
-وهو
-فهى
-فهي
-فهو
-انت
-أنت
-لك
-لها
-له
-هذه
-هذا
-تلك
-ذلك
-هناك
-كانت
-كان
-يكون
-تكون
-وكانت
-وكان
-غير
-بعض
-قد
-نحو
-بين
-بينما
-منذ
-ضمن
-حيث
-الان
-الآن
-خلال
-بعد
-قبل
-حتى
-عند
-عندما
-لدى
-جميع
diff --git a/solr/example/example-DIH/solr/solr/conf/lang/stopwords_bg.txt b/solr/example/example-DIH/solr/solr/conf/lang/stopwords_bg.txt
deleted file mode 100644
index 1ae4ba2..0000000
--- a/solr/example/example-DIH/solr/solr/conf/lang/stopwords_bg.txt
+++ /dev/null
@@ -1,193 +0,0 @@
-# This file was created by Jacques Savoy and is distributed under the BSD license.
-# See http://members.unine.ch/jacques.savoy/clef/index.html.
-# Also see http://www.opensource.org/licenses/bsd-license.html
-а
-аз
-ако
-ала
-бе
-без
-беше
-би
-бил
-била
-били
-било
-близо
-бъдат
-бъде
-бяха
-в
-вас
-ваш
-ваша
-вероятно
-вече
-взема
-ви
-вие
-винаги
-все
-всеки
-всички
-всичко
-всяка
-във
-въпреки
-върху
-г
-ги
-главно
-го
-д
-да
-дали
-до
-докато
-докога
-дори
-досега
-доста
-е
-едва
-един
-ето
-за
-зад
-заедно
-заради
-засега
-затова
-защо
-защото
-и
-из
-или
-им
-има
-имат
-иска
-й
-каза
-как
-каква
-какво
-както
-какъв
-като
-кога
-когато
-което
-които
-кой
-който
-колко
-която
-къде
-където
-към
-ли
-м
-ме
-между
-мен
-ми
-мнозина
-мога
-могат
-може
-моля
-момента
-му
-н
-на
-над
-назад
-най
-направи
-напред
-например
-нас
-не
-него
-нея
-ни
-ние
-никой
-нито
-но
-някои
-някой
-няма
-обаче
-около
-освен
-особено
-от
-отгоре
-отново
-още
-пак
-по
-повече
-повечето
-под
-поне
-поради
-после
-почти
-прави
-пред
-преди
-през
-при
-пък
-първо
-с
-са
-само
-се
-сега
-си
-скоро
-след
-сме
-според
-сред
-срещу
-сте
-съм
-със
-също
-т
-тази
-така
-такива
-такъв
-там
-твой
-те
-тези
-ти
-тн
-то
-това
-тогава
-този
-той
-толкова
-точно
-трябва
-тук
-тъй
-тя
-тях
-у
-харесва
-ч
-че
-често
-чрез
-ще
-щом
-я
diff --git a/solr/example/example-DIH/solr/solr/conf/lang/stopwords_ca.txt b/solr/example/example-DIH/solr/solr/conf/lang/stopwords_ca.txt
deleted file mode 100644
index 3da65de..0000000
--- a/solr/example/example-DIH/solr/solr/conf/lang/stopwords_ca.txt
+++ /dev/null
@@ -1,220 +0,0 @@
-# Catalan stopwords from http://github.com/vcl/cue.language (Apache 2 Licensed)
-a
-abans
-ací
-ah
-així
-això
-al
-als
-aleshores
-algun
-alguna
-algunes
-alguns
-alhora
-allà
-allí
-allò
-altra
-altre
-altres
-amb
-ambdós
-ambdues
-apa
-aquell
-aquella
-aquelles
-aquells
-aquest
-aquesta
-aquestes
-aquests
-aquí
-baix
-cada
-cadascú
-cadascuna
-cadascunes
-cadascuns
-com
-contra
-d'un
-d'una
-d'unes
-d'uns
-dalt
-de
-del
-dels
-des
-després
-dins
-dintre
-donat
-doncs
-durant
-e
-eh
-el
-els
-em
-en
-encara
-ens
-entre
-érem
-eren
-éreu
-es
-és
-esta
-està
-estàvem
-estaven
-estàveu
-esteu
-et
-etc
-ets
-fins
-fora
-gairebé
-ha
-han
-has
-havia
-he
-hem
-heu
-hi 
-ho
-i
-igual
-iguals
-ja
-l'hi
-la
-les
-li
-li'n
-llavors
-m'he
-ma
-mal
-malgrat
-mateix
-mateixa
-mateixes
-mateixos
-me
-mentre
-més
-meu
-meus
-meva
-meves
-molt
-molta
-moltes
-molts
-mon
-mons
-n'he
-n'hi
-ne
-ni
-no
-nogensmenys
-només
-nosaltres
-nostra
-nostre
-nostres
-o
-oh
-oi
-on
-pas
-pel
-pels
-per
-però
-perquè
-poc 
-poca
-pocs
-poques
-potser
-propi
-qual
-quals
-quan
-quant 
-que
-què
-quelcom
-qui
-quin
-quina
-quines
-quins
-s'ha
-s'han
-sa
-semblant
-semblants
-ses
-seu 
-seus
-seva
-seva
-seves
-si
-sobre
-sobretot
-sóc
-solament
-sols
-son 
-són
-sons 
-sota
-sou
-t'ha
-t'han
-t'he
-ta
-tal
-també
-tampoc
-tan
-tant
-tanta
-tantes
-teu
-teus
-teva
-teves
-ton
-tons
-tot
-tota
-totes
-tots
-un
-una
-unes
-uns
-us
-va
-vaig
-vam
-van
-vas
-veu
-vosaltres
-vostra
-vostre
-vostres
diff --git a/solr/example/example-DIH/solr/solr/conf/lang/stopwords_ckb.txt b/solr/example/example-DIH/solr/solr/conf/lang/stopwords_ckb.txt
deleted file mode 100644
index 87abf11..0000000
--- a/solr/example/example-DIH/solr/solr/conf/lang/stopwords_ckb.txt
+++ /dev/null
@@ -1,136 +0,0 @@
-# set of Kurdish stopwords
-# note these have been normalized with our scheme (e represented with U+06D5, etc)
-# constructed from:
-# * Fig 5 of "Building A Test Collection For Sorani Kurdish" (Esmaili et al)
-# * "Sorani Kurdish: A Reference Grammar with selected readings" (Thackston)
-# * Corpus-based analysis of 77M word Sorani collection: wikipedia, news, blogs, etc
-
-# and
-و
-# which
-کە
-# of
-ی
-# made/did
-کرد
-# that/which
-ئەوەی
-# on/head
-سەر
-# two
-دوو
-# also
-هەروەها
-# from/that
-لەو
-# makes/does
-دەکات
-# some
-چەند
-# every
-هەر
-
-# demonstratives
-# that
-ئەو
-# this
-ئەم
-
-# personal pronouns
-# I
-من
-# we
-ئێمە
-# you
-تۆ
-# you
-ئێوە
-# he/she/it
-ئەو
-# they
-ئەوان
-
-# prepositions
-# to/with/by
-بە
-پێ
-# without
-بەبێ
-# along with/while/during
-بەدەم
-# in the opinion of
-بەلای
-# according to
-بەپێی
-# before
-بەرلە
-# in the direction of
-بەرەوی
-# in front of/toward
-بەرەوە
-# before/in the face of
-بەردەم
-# without
-بێ
-# except for
-بێجگە
-# for
-بۆ
-# on/in
-دە
-تێ
-# with
-دەگەڵ
-# after
-دوای
-# except for/aside from
-جگە
-# in/from
-لە
-لێ
-# in front of/before/because of
-لەبەر
-# between/among
-لەبەینی
-# concerning/about
-لەبابەت
-# concerning
-لەبارەی
-# instead of
-لەباتی
-# beside
-لەبن
-# instead of
-لەبرێتی
-# behind
-لەدەم
-# with/together with
-لەگەڵ
-# by
-لەلایەن
-# within
-لەناو
-# between/among
-لەنێو
-# for the sake of
-لەپێناوی
-# with respect to
-لەرەوی
-# by means of/for
-لەرێ
-# for the sake of
-لەرێگا
-# on/on top of/according to
-لەسەر
-# under
-لەژێر
-# between/among
-ناو
-# between/among
-نێوان
-# after
-پاش
-# before
-پێش
-# like
-وەک
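(Since the header above notes the entries are pre-normalized, e represented with U+06D5 and so on, the stop filter is expected to run after Sorani normalization. A sketch of that ordering:

    <filter class="solr.SoraniNormalizationFilterFactory"/>
    <filter class="solr.StopFilterFactory" ignoreCase="true" words="lang/stopwords_ckb.txt"/>
)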
diff --git a/solr/example/example-DIH/solr/solr/conf/lang/stopwords_cz.txt b/solr/example/example-DIH/solr/solr/conf/lang/stopwords_cz.txt
deleted file mode 100644
index 53c6097..0000000
--- a/solr/example/example-DIH/solr/solr/conf/lang/stopwords_cz.txt
+++ /dev/null
@@ -1,172 +0,0 @@
-a
-s
-k
-o
-i
-u
-v
-z
-dnes
-cz
-tímto
-budeš
-budem
-byli
-jseš
-můj
-svým
-ta
-tomto
-tohle
-tuto
-tyto
-jej
-zda
-proč
-máte
-tato
-kam
-tohoto
-kdo
-kteří
-mi
-nám
-tom
-tomuto
-mít
-nic
-proto
-kterou
-byla
-toho
-protože
-asi
-ho
-naši
-napište
-re
-což
-tím
-takže
-svých
-její
-svými
-jste
-aj
-tu
-tedy
-teto
-bylo
-kde
-ke
-pravé
-ji
-nad
-nejsou
-či
-pod
-téma
-mezi
-přes
-ty
-pak
-vám
-ani
-když
-však
-neg
-jsem
-tento
-článku
-články
-aby
-jsme
-před
-pta
-jejich
-byl
-ještě
-až
-bez
-také
-pouze
-první
-vaše
-která
-nás
-nový
-tipy
-pokud
-může
-strana
-jeho
-své
-jiné
-zprávy
-nové
-není
-vás
-jen
-podle
-zde
-už
-být
-více
-bude
-již
-než
-který
-by
-které
-co
-nebo
-ten
-tak
-má
-při
-od
-po
-jsou
-jak
-další
-ale
-si
-se
-ve
-to
-jako
-za
-zpět
-ze
-do
-pro
-je
-na
-atd
-atp
-jakmile
-přičemž
-já
-on
-ona
-ono
-oni
-ony
-my
-vy
-jí
-ji
-mě
-mne
-jemu
-tomu
-těm
-těmu
-němu
-němuž
-jehož
-jíž
-jelikož
-jež
-jakož
-načež
diff --git a/solr/example/example-DIH/solr/solr/conf/lang/stopwords_da.txt b/solr/example/example-DIH/solr/solr/conf/lang/stopwords_da.txt
deleted file mode 100644
index 42e6145..0000000
--- a/solr/example/example-DIH/solr/solr/conf/lang/stopwords_da.txt
+++ /dev/null
@@ -1,110 +0,0 @@
- | From svn.tartarus.org/snowball/trunk/website/algorithms/danish/stop.txt
- | This file is distributed under the BSD License.
- | See http://snowball.tartarus.org/license.php
- | Also see http://www.opensource.org/licenses/bsd-license.html
- |  - Encoding was converted to UTF-8.
- |  - This notice was added.
- |
- | NOTE: To use this file with StopFilterFactory, you must specify format="snowball"
-
- | A Danish stop word list. Comments begin with vertical bar. Each stop
- | word is at the start of a line.
-
- | This is a ranked list (commonest to rarest) of stopwords derived from
- | a large text sample.
-
-
-og           | and
-i            | in
-jeg          | I
-det          | that (dem. pronoun)/it (pers. pronoun)
-at           | that (in front of a sentence)/to (with infinitive)
-en           | a/an
-den          | it (pers. pronoun)/that (dem. pronoun)
-til          | to/at/for/until/against/by/of/into, more
-er           | present tense of "to be"
-som          | who, as
-på           | on/upon/in/on/at/to/after/of/with/for, on
-de           | they
-med          | with/by/in, along
-han          | he
-af           | of/by/from/off/for/in/with/on, off
-for          | at/for/to/from/by/of/ago, in front/before, because
-ikke         | not
-der          | who/which, there/those
-var          | past tense of "to be"
-mig          | me/myself
-sig          | oneself/himself/herself/itself/themselves
-men          | but
-et           | a/an/one, one (number), someone/somebody/one
-har          | present tense of "to have"
-om           | round/about/for/in/a, about/around/down, if
-vi           | we
-min          | my
-havde        | past tense of "to have"
-ham          | him
-hun          | she
-nu           | now
-over         | over/above/across/by/beyond/past/on/about, over/past
-da           | then, when/as/since
-fra          | from/off/since, off, since
-du           | you
-ud           | out
-sin          | his/her/its/one's
-dem          | them
-os           | us/ourselves
-op           | up
-man          | you/one
-hans         | his
-hvor         | where
-eller        | or
-hvad         | what
-skal         | must/shall etc.
-selv         | myself/yourself/herself/ourselves etc., even
-her          | here
-alle         | all/everyone/everybody etc.
-vil          | will (verb)
-blev         | past tense of "to stay/to remain/to get/to become"
-kunne        | could
-ind          | in
-når          | when
-være         | present tense of "to be"
-dog          | however/yet/after all
-noget        | something
-ville        | would
-jo           | you know/you see (adv), yes
-deres        | their/theirs
-efter        | after/behind/according to/for/by/from, later/afterwards
-ned          | down
-skulle       | should
-denne        | this
-end          | than
-dette        | this
-mit          | my/mine
-også         | also
-under        | under/beneath/below/during, below/underneath
-have         | have
-dig          | you
-anden        | other
-hende        | her
-mine         | my
-alt          | everything
-meget        | much/very, plenty of
-sit          | his, her, its, one's
-sine         | his, her, its, one's
-vor          | our
-mod          | against
-disse        | these
-hvis         | if
-din          | your/yours
-nogle        | some
-hos          | by/at
-blive        | be/become
-mange        | many
-ad           | by/through
-bliver       | present tense of "to be/to become"
-hendes       | her/hers
-været        | be
-thi          | for (conj)
-jer          | you
-sådan        | such, like this/like that
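(Because this list keeps Snowball's vertical-bar comments, the NOTE in its header applies: StopFilterFactory must be told the format. A sketch, which applies equally to the other Snowball-derived lists in this change:

    <filter class="solr.StopFilterFactory" ignoreCase="true" words="lang/stopwords_da.txt" format="snowball"/>
)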
diff --git a/solr/example/example-DIH/solr/solr/conf/lang/stopwords_de.txt b/solr/example/example-DIH/solr/solr/conf/lang/stopwords_de.txt
deleted file mode 100644
index 86525e7..0000000
--- a/solr/example/example-DIH/solr/solr/conf/lang/stopwords_de.txt
+++ /dev/null
@@ -1,294 +0,0 @@
- | From svn.tartarus.org/snowball/trunk/website/algorithms/german/stop.txt
- | This file is distributed under the BSD License.
- | See http://snowball.tartarus.org/license.php
- | Also see http://www.opensource.org/licenses/bsd-license.html
- |  - Encoding was converted to UTF-8.
- |  - This notice was added.
- |
- | NOTE: To use this file with StopFilterFactory, you must specify format="snowball"
-
- | A German stop word list. Comments begin with vertical bar. Each stop
- | word is at the start of a line.
-
- | The number of forms in this list is reduced significantly by passing it
- | through the German stemmer.
-
-
-aber           |  but
-
-alle           |  all
-allem
-allen
-aller
-alles
-
-als            |  than, as
-also           |  so
-am             |  an + dem
-an             |  at
-
-ander          |  other
-andere
-anderem
-anderen
-anderer
-anderes
-anderm
-andern
-anderr
-anders
-
-auch           |  also
-auf            |  on
-aus            |  out of
-bei            |  by
-bin            |  am
-bis            |  until
-bist           |  art
-da             |  there
-damit          |  with it
-dann           |  then
-
-der            |  the
-den
-des
-dem
-die
-das
-
-daß            |  that
-
-derselbe       |  the same
-derselben
-denselben
-desselben
-demselben
-dieselbe
-dieselben
-dasselbe
-
-dazu           |  to that
-
-dein           |  thy
-deine
-deinem
-deinen
-deiner
-deines
-
-denn           |  because
-
-derer          |  of those
-dessen         |  of him
-
-dich           |  thee
-dir            |  to thee
-du             |  thou
-
-dies           |  this
-diese
-diesem
-diesen
-dieser
-dieses
-
-
-doch           |  (several meanings)
-dort           |  (over) there
-
-
-durch          |  through
-
-ein            |  a
-eine
-einem
-einen
-einer
-eines
-
-einig          |  some
-einige
-einigem
-einigen
-einiger
-einiges
-
-einmal         |  once
-
-er             |  he
-ihn            |  him
-ihm            |  to him
-
-es             |  it
-etwas          |  something
-
-euer           |  your
-eure
-eurem
-euren
-eurer
-eures
-
-für            |  for
-gegen          |  towards
-gewesen        |  p.p. of sein
-hab            |  have
-habe           |  have
-haben          |  have
-hat            |  has
-hatte          |  had
-hatten         |  had
-hier           |  here
-hin            |  there
-hinter         |  behind
-
-ich            |  I
-mich           |  me
-mir            |  to me
-
-
-ihr            |  you, to her
-ihre
-ihrem
-ihren
-ihrer
-ihres
-euch           |  to you
-
-im             |  in + dem
-in             |  in
-indem          |  while
-ins            |  in + das
-ist            |  is
-
-jede           |  each, every
-jedem
-jeden
-jeder
-jedes
-
-jene           |  that
-jenem
-jenen
-jener
-jenes
-
-jetzt          |  now
-kann           |  can
-
-kein           |  no
-keine
-keinem
-keinen
-keiner
-keines
-
-können         |  can
-könnte         |  could
-machen         |  do
-man            |  one
-
-manche         |  some, many a
-manchem
-manchen
-mancher
-manches
-
-mein           |  my
-meine
-meinem
-meinen
-meiner
-meines
-
-mit            |  with
-muss           |  must
-musste         |  had to
-nach           |  to(wards)
-nicht          |  not
-nichts         |  nothing
-noch           |  still, yet
-nun            |  now
-nur            |  only
-ob             |  whether
-oder           |  or
-ohne           |  without
-sehr           |  very
-
-sein           |  his
-seine
-seinem
-seinen
-seiner
-seines
-
-selbst         |  self
-sich           |  herself
-
-sie            |  they, she
-ihnen          |  to them
-
-sind           |  are
-so             |  so
-
-solche         |  such
-solchem
-solchen
-solcher
-solches
-
-soll           |  shall
-sollte         |  should
-sondern        |  but
-sonst          |  else
-über           |  over
-um             |  about, around
-und            |  and
-
-uns            |  us
-unse
-unsem
-unsen
-unser
-unses
-
-unter          |  under
-viel           |  much
-vom            |  von + dem
-von            |  from
-vor            |  before
-während        |  while
-war            |  was
-waren          |  were
-warst          |  wast
-was            |  what
-weg            |  away, off
-weil           |  because
-weiter         |  further
-
-welche         |  which
-welchem
-welchen
-welcher
-welches
-
-wenn           |  when
-werde          |  will
-werden         |  will
-wie            |  how
-wieder         |  again
-will           |  want
-wir            |  we
-wird           |  will
-wirst          |  willst
-wo             |  where
-wollen         |  want
-wollte         |  wanted
-würde          |  would
-würden         |  would
-zu             |  to
-zum            |  zu + dem
-zur            |  zu + der
-zwar           |  indeed
-zwischen       |  between
-
diff --git a/solr/example/example-DIH/solr/solr/conf/lang/stopwords_el.txt b/solr/example/example-DIH/solr/solr/conf/lang/stopwords_el.txt
deleted file mode 100644
index 232681f..0000000
--- a/solr/example/example-DIH/solr/solr/conf/lang/stopwords_el.txt
+++ /dev/null
@@ -1,78 +0,0 @@
-# Lucene Greek Stopwords list
-# Note: by default this file is used after GreekLowerCaseFilter,
-# so when modifying this file use 'σ' instead of 'ς' 
-ο
-η
-το
-οι
-τα
-του
-τησ
-των
-τον
-την
-και 
-κι
-κ
-ειμαι
-εισαι
-ειναι
-ειμαστε
-ειστε
-στο
-στον
-στη
-στην
-μα
-αλλα
-απο
-για
-προσ
-με
-σε
-ωσ
-παρα
-αντι
-κατα
-μετα
-θα
-να
-δε
-δεν
-μη
-μην
-επι
-ενω
-εαν
-αν
-τοτε
-που
-πωσ
-ποιοσ
-ποια
-ποιο
-ποιοι
-ποιεσ
-ποιων
-ποιουσ
-αυτοσ
-αυτη
-αυτο
-αυτοι
-αυτων
-αυτουσ
-αυτεσ
-αυτα
-εκεινοσ
-εκεινη
-εκεινο
-εκεινοι
-εκεινεσ
-εκεινα
-εκεινων
-εκεινουσ
-οπωσ
-ομωσ
-ισωσ
-οσο
-οτι
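(Matching the header's note that this list is applied after GreekLowerCaseFilter, hence the forms written with σ rather than final ς, the analyzer order would be roughly, as a sketch:

    <filter class="solr.GreekLowerCaseFilterFactory"/>
    <filter class="solr.StopFilterFactory" ignoreCase="false" words="lang/stopwords_el.txt"/>
)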
diff --git a/solr/example/example-DIH/solr/solr/conf/lang/stopwords_en.txt b/solr/example/example-DIH/solr/solr/conf/lang/stopwords_en.txt
deleted file mode 100644
index 2c164c0..0000000
--- a/solr/example/example-DIH/solr/solr/conf/lang/stopwords_en.txt
+++ /dev/null
@@ -1,54 +0,0 @@
-# Licensed to the Apache Software Foundation (ASF) under one or more
-# contributor license agreements.  See the NOTICE file distributed with
-# this work for additional information regarding copyright ownership.
-# The ASF licenses this file to You under the Apache License, Version 2.0
-# (the "License"); you may not use this file except in compliance with
-# the License.  You may obtain a copy of the License at
-#
-#     http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-# a couple of test stopwords to test that the words are really being
-# configured from this file:
-stopworda
-stopwordb
-
-# Standard english stop words taken from Lucene's StopAnalyzer
-a
-an
-and
-are
-as
-at
-be
-but
-by
-for
-if
-in
-into
-is
-it
-no
-not
-of
-on
-or
-such
-that
-the
-their
-then
-there
-these
-they
-this
-to
-was
-will
-with
diff --git a/solr/example/example-DIH/solr/solr/conf/lang/stopwords_es.txt b/solr/example/example-DIH/solr/solr/conf/lang/stopwords_es.txt
deleted file mode 100644
index 487d78c..0000000
--- a/solr/example/example-DIH/solr/solr/conf/lang/stopwords_es.txt
+++ /dev/null
@@ -1,356 +0,0 @@
- | From svn.tartarus.org/snowball/trunk/website/algorithms/spanish/stop.txt
- | This file is distributed under the BSD License.
- | See http://snowball.tartarus.org/license.php
- | Also see http://www.opensource.org/licenses/bsd-license.html
- |  - Encoding was converted to UTF-8.
- |  - This notice was added.
- |
- | NOTE: To use this file with StopFilterFactory, you must specify format="snowball"
-
- | A Spanish stop word list. Comments begin with vertical bar. Each stop
- | word is at the start of a line.
-
-
- | The following is a ranked list (commonest to rarest) of stopwords
- | deriving from a large sample of text.
-
- | Extra words have been added at the end.
-
-de             |  from, of
-la             |  the, her
-que            |  who, that
-el             |  the
-en             |  in
-y              |  and
-a              |  to
-los            |  the, them
-del            |  de + el
-se             |  himself, from him etc
-las            |  the, them
-por            |  for, by, etc
-un             |  a
-para           |  for
-con            |  with
-no             |  no
-una            |  a
-su             |  his, her
-al             |  a + el
-  | es         from SER
-lo             |  him
-como           |  how
-más            |  more
-pero           |  pero
-sus            |  su plural
-le             |  to him, her
-ya             |  already
-o              |  or
-  | fue        from SER
-este           |  this
-  | ha         from HABER
-sí             |  himself etc
-porque         |  because
-esta           |  this
-  | son        from SER
-entre          |  between
-  | está     from ESTAR
-cuando         |  when
-muy            |  very
-sin            |  without
-sobre          |  on
-  | ser        from SER
-  | tiene      from TENER
-también        |  also
-me             |  me
-hasta          |  until
-hay            |  there is/are
-donde          |  where
-  | han        from HABER
-quien          |  whom, that
-  | están      from ESTAR
-  | estado     from ESTAR
-desde          |  from
-todo           |  all
-nos            |  us
-durante        |  during
-  | estados    from ESTAR
-todos          |  all
-uno            |  a
-les            |  to them
-ni             |  nor
-contra         |  against
-otros          |  other
-  | fueron     from SER
-ese            |  that
-eso            |  that
-  | había      from HABER
-ante           |  before
-ellos          |  they
-e              |  and (variant of y)
-esto           |  this
-mí             |  me
-antes          |  before
-algunos        |  some
-qué            |  what?
-unos           |  a
-yo             |  I
-otro           |  other
-otras          |  other
-otra           |  other
-él             |  he
-tanto          |  so much, many
-esa            |  that
-estos          |  these
-mucho          |  much, many
-quienes        |  who
-nada           |  nothing
-muchos         |  many
-cual           |  who
-  | sea        from SER
-poco           |  few
-ella           |  she
-estar          |  to be
-  | haber      from HABER
-estas          |  these
-  | estaba     from ESTAR
-  | estamos    from ESTAR
-algunas        |  some
-algo           |  something
-nosotros       |  we
-
-      | other forms
-
-mi             |  me
-mis            |  mi plural
-tú             |  thou
-te             |  thee
-ti             |  thee
-tu             |  thy
-tus            |  tu plural
-ellas          |  they
-nosotras       |  we
-vosotros       |  you
-vosotras       |  you
-os             |  you
-mío            |  mine
-mía            |
-míos           |
-mías           |
-tuyo           |  thine
-tuya           |
-tuyos          |
-tuyas          |
-suyo           |  his, hers, theirs
-suya           |
-suyos          |
-suyas          |
-nuestro        |  ours
-nuestra        |
-nuestros       |
-nuestras       |
-vuestro        |  yours
-vuestra        |
-vuestros       |
-vuestras       |
-esos           |  those
-esas           |  those
-
-               | forms of estar, to be (not including the infinitive):
-estoy
-estás
-está
-estamos
-estáis
-están
-esté
-estés
-estemos
-estéis
-estén
-estaré
-estarás
-estará
-estaremos
-estaréis
-estarán
-estaría
-estarías
-estaríamos
-estaríais
-estarían
-estaba
-estabas
-estábamos
-estabais
-estaban
-estuve
-estuviste
-estuvo
-estuvimos
-estuvisteis
-estuvieron
-estuviera
-estuvieras
-estuviéramos
-estuvierais
-estuvieran
-estuviese
-estuvieses
-estuviésemos
-estuvieseis
-estuviesen
-estando
-estado
-estada
-estados
-estadas
-estad
-
-               | forms of haber, to have (not including the infinitive):
-he
-has
-ha
-hemos
-habéis
-han
-haya
-hayas
-hayamos
-hayáis
-hayan
-habré
-habrás
-habrá
-habremos
-habréis
-habrán
-habría
-habrías
-habríamos
-habríais
-habrían
-había
-habías
-habíamos
-habíais
-habían
-hube
-hubiste
-hubo
-hubimos
-hubisteis
-hubieron
-hubiera
-hubieras
-hubiéramos
-hubierais
-hubieran
-hubiese
-hubieses
-hubiésemos
-hubieseis
-hubiesen
-habiendo
-habido
-habida
-habidos
-habidas
-
-               | forms of ser, to be (not including the infinitive):
-soy
-eres
-es
-somos
-sois
-son
-sea
-seas
-seamos
-seáis
-sean
-seré
-serás
-será
-seremos
-seréis
-serán
-sería
-serías
-seríamos
-seríais
-serían
-era
-eras
-éramos
-erais
-eran
-fui
-fuiste
-fue
-fuimos
-fuisteis
-fueron
-fuera
-fueras
-fuéramos
-fuerais
-fueran
-fuese
-fueses
-fuésemos
-fueseis
-fuesen
-siendo
-sido
-  |  sed also means 'thirst'
-
-               | forms of tener, to have (not including the infinitive):
-tengo
-tienes
-tiene
-tenemos
-tenéis
-tienen
-tenga
-tengas
-tengamos
-tengáis
-tengan
-tendré
-tendrás
-tendrá
-tendremos
-tendréis
-tendrán
-tendría
-tendrías
-tendríamos
-tendríais
-tendrían
-tenía
-tenías
-teníamos
-teníais
-tenían
-tuve
-tuviste
-tuvo
-tuvimos
-tuvisteis
-tuvieron
-tuviera
-tuvieras
-tuviéramos
-tuvierais
-tuvieran
-tuviese
-tuvieses
-tuviésemos
-tuvieseis
-tuviesen
-teniendo
-tenido
-tenida
-tenidos
-tenidas
-tened
-
diff --git a/solr/example/example-DIH/solr/solr/conf/lang/stopwords_eu.txt b/solr/example/example-DIH/solr/solr/conf/lang/stopwords_eu.txt
deleted file mode 100644
index 25f1db9..0000000
--- a/solr/example/example-DIH/solr/solr/conf/lang/stopwords_eu.txt
+++ /dev/null
@@ -1,99 +0,0 @@
-# example set of Basque stopwords
-al
-anitz
-arabera
-asko
-baina
-bat
-batean
-batek
-bati
-batzuei
-batzuek
-batzuetan
-batzuk
-bera
-beraiek
-berau
-berauek
-bere
-berori
-beroriek
-beste
-bezala
-da
-dago
-dira
-ditu
-du
-dute
-edo
-egin
-ere
-eta
-eurak
-ez
-gainera
-gu
-gutxi
-guzti
-haiei
-haiek
-haietan
-hainbeste
-hala
-han
-handik
-hango
-hara
-hari
-hark
-hartan
-hau
-hauei
-hauek
-hauetan
-hemen
-hemendik
-hemengo
-hi
-hona
-honek
-honela
-honetan
-honi
-hor
-hori
-horiei
-horiek
-horietan
-horko
-horra
-horrek
-horrela
-horretan
-horri
-hortik
-hura
-izan
-ni
-noiz
-nola
-non
-nondik
-nongo
-nor
-nora
-ze
-zein
-zen
-zenbait
-zenbat
-zer
-zergatik
-ziren
-zituen
-zu
-zuek
-zuen
-zuten
diff --git a/solr/example/example-DIH/solr/solr/conf/lang/stopwords_fa.txt b/solr/example/example-DIH/solr/solr/conf/lang/stopwords_fa.txt
deleted file mode 100644
index 723641c..0000000
--- a/solr/example/example-DIH/solr/solr/conf/lang/stopwords_fa.txt
+++ /dev/null
@@ -1,313 +0,0 @@
-# This file was created by Jacques Savoy and is distributed under the BSD license.
-# See http://members.unine.ch/jacques.savoy/clef/index.html.
-# Also see http://www.opensource.org/licenses/bsd-license.html
-# Note: by default this file is used after normalization, so when adding entries
-# to this file, use the arabic 'ي' instead of 'ی'
-انان
-نداشته
-سراسر
-خياه
-ايشان
-وي
-تاكنون
-بيشتري
-دوم
-پس
-ناشي
-وگو
-يا
-داشتند
-سپس
-هنگام
-هرگز
-پنج
-نشان
-امسال
-ديگر
-گروهي
-شدند
-چطور
-ده

-دو
-نخستين
-ولي
-چرا
-چه
-وسط

-كدام
-قابل
-يك
-رفت
-هفت
-همچنين
-در
-هزار
-بله
-بلي
-شايد
-اما
-شناسي
-گرفته
-دهد
-داشته
-دانست
-داشتن
-خواهيم
-ميليارد
-وقتيكه
-امد
-خواهد
-جز
-اورده
-شده
-بلكه
-خدمات
-شدن
-برخي
-نبود
-بسياري
-جلوگيري
-حق
-كردند
-نوعي
-بعري
-نكرده
-نظير
-نبايد
-بوده
-بودن
-داد
-اورد
-هست
-جايي
-شود
-دنبال
-داده
-بايد
-سابق
-هيچ
-همان
-انجا
-كمتر
-كجاست
-گردد
-كسي
-تر
-مردم
-تان
-دادن
-بودند
-سري
-جدا
-ندارند
-مگر
-يكديگر
-دارد
-دهند
-بنابراين
-هنگامي
-سمت
-جا
-انچه
-خود
-دادند
-زياد
-دارند
-اثر
-بدون
-بهترين
-بيشتر
-البته
-به
-براساس
-بيرون
-كرد
-بعضي
-گرفت
-توي
-اي
-ميليون
-او
-جريان
-تول
-بر
-مانند
-برابر
-باشيم
-مدتي
-گويند
-اكنون
-تا
-تنها
-جديد
-چند
-بي
-نشده
-كردن
-كردم
-گويد
-كرده
-كنيم
-نمي
-نزد
-روي
-قصد
-فقط
-بالاي
-ديگران
-اين
-ديروز
-توسط
-سوم
-ايم
-دانند
-سوي
-استفاده
-شما
-كنار
-داريم
-ساخته
-طور
-امده
-رفته
-نخست
-بيست
-نزديك
-طي
-كنيد
-از
-انها
-تمامي
-داشت
-يكي
-طريق
-اش
-چيست
-روب
-نمايد
-گفت
-چندين
-چيزي
-تواند
-ام
-ايا
-با
-ان
-ايد
-ترين
-اينكه
-ديگري
-راه
-هايي
-بروز
-همچنان
-پاعين
-كس
-حدود
-مختلف
-مقابل
-چيز
-گيرد
-ندارد
-ضد
-همچون
-سازي
-شان
-مورد
-باره
-مرسي
-خويش
-برخوردار
-چون
-خارج
-شش
-هنوز
-تحت
-ضمن
-هستيم
-گفته
-فكر
-بسيار
-پيش
-براي
-روزهاي
-انكه
-نخواهد
-بالا
-كل
-وقتي
-كي
-چنين
-كه
-گيري
-نيست
-است
-كجا
-كند
-نيز
-يابد
-بندي
-حتي
-توانند
-عقب
-خواست
-كنند
-بين
-تمام
-همه
-ما
-باشند
-مثل
-شد
-اري
-باشد
-اره
-طبق
-بعد
-اگر
-صورت
-غير
-جاي
-بيش
-ريزي
-اند
-زيرا
-چگونه
-بار
-لطفا
-مي
-درباره
-من
-ديده
-همين
-گذاري
-برداري
-علت
-گذاشته
-هم
-فوق
-نه
-ها
-شوند
-اباد
-همواره
-هر
-اول
-خواهند
-چهار
-نام
-امروز
-مان
-هاي
-قبل
-كنم
-سعي
-تازه
-را
-هستند
-زير
-جلوي
-عنوان
-بود
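(As the header says, these Persian entries assume prior normalization, the arabic 'ي' rather than 'ی', so the stop filter follows the normalizers. A sketch of the usual chain:

    <filter class="solr.ArabicNormalizationFilterFactory"/>
    <filter class="solr.PersianNormalizationFilterFactory"/>
    <filter class="solr.StopFilterFactory" ignoreCase="true" words="lang/stopwords_fa.txt"/>
)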
diff --git a/solr/example/example-DIH/solr/solr/conf/lang/stopwords_fi.txt b/solr/example/example-DIH/solr/solr/conf/lang/stopwords_fi.txt
deleted file mode 100644
index 4372c9a..0000000
--- a/solr/example/example-DIH/solr/solr/conf/lang/stopwords_fi.txt
+++ /dev/null
@@ -1,97 +0,0 @@
- | From svn.tartarus.org/snowball/trunk/website/algorithms/finnish/stop.txt
- | This file is distributed under the BSD License.
- | See http://snowball.tartarus.org/license.php
- | Also see http://www.opensource.org/licenses/bsd-license.html
- |  - Encoding was converted to UTF-8.
- |  - This notice was added.
- |
- | NOTE: To use this file with StopFilterFactory, you must specify format="snowball"
- 
-| forms of BE
-
-olla
-olen
-olet
-on
-olemme
-olette
-ovat
-ole        | negative form
-
-oli
-olisi
-olisit
-olisin
-olisimme
-olisitte
-olisivat
-olit
-olin
-olimme
-olitte
-olivat
-ollut
-olleet
-
-en         | negation
-et
-ei
-emme
-ette
-eivät
-
-|Nom   Gen    Acc    Part   Iness   Elat    Illat  Adess   Ablat   Allat   Ess    Trans
-minä   minun  minut  minua  minussa minusta minuun minulla minulta minulle               | I
-sinä   sinun  sinut  sinua  sinussa sinusta sinuun sinulla sinulta sinulle               | you
-hän    hänen  hänet  häntä  hänessä hänestä häneen hänellä häneltä hänelle               | he she
-me     meidän meidät meitä  meissä  meistä  meihin meillä  meiltä  meille                | we
-te     teidän teidät teitä  teissä  teistä  teihin teillä  teiltä  teille                | you
-he     heidän heidät heitä  heissä  heistä  heihin heillä  heiltä  heille                | they
-
-tämä   tämän         tätä   tässä   tästä   tähän  tallä   tältä   tälle   tänä   täksi  | this
-tuo    tuon          tuotä  tuossa  tuosta  tuohon tuolla  tuolta  tuolle  tuona  tuoksi | that
-se     sen           sitä   siinä   siitä   siihen sillä   siltä   sille   sinä   siksi  | it
-nämä   näiden        näitä  näissä  näistä  näihin näillä  näiltä  näille  näinä  näiksi | these
-nuo    noiden        noita  noissa  noista  noihin noilla  noilta  noille  noina  noiksi | those
-ne     niiden        niitä  niissä  niistä  niihin niillä  niiltä  niille  niinä  niiksi | they
-
-kuka   kenen kenet   ketä   kenessä kenestä keneen kenellä keneltä kenelle kenenä keneksi| who
-ketkä  keiden ketkä  keitä  keissä  keistä  keihin keillä  keiltä  keille  keinä  keiksi | (pl)
-mikä   minkä minkä   mitä   missä   mistä   mihin  millä   miltä   mille   minä   miksi  | which what
-mitkä                                                                                    | (pl)
-
-joka   jonka         jota   jossa   josta   johon  jolla   jolta   jolle   jona   joksi  | who which
-jotka  joiden        joita  joissa  joista  joihin joilla  joilta  joille  joina  joiksi | (pl)
-
-| conjunctions
-
-että   | that
-ja     | and
-jos    | if
-koska  | because
-kuin   | than
-mutta  | but
-niin   | so
-sekä   | and
-sillä  | for
-tai    | or
-vaan   | but
-vai    | or
-vaikka | although
-
-
-| prepositions
-
-kanssa  | with
-mukaan  | according to
-noin    | about
-poikki  | across
-yli     | over, across
-
-| other
-
-kun    | when
-niin   | so
-nyt    | now
-itse   | self
-
diff --git a/solr/example/example-DIH/solr/solr/conf/lang/stopwords_fr.txt b/solr/example/example-DIH/solr/solr/conf/lang/stopwords_fr.txt
deleted file mode 100644
index 749abae..0000000
--- a/solr/example/example-DIH/solr/solr/conf/lang/stopwords_fr.txt
+++ /dev/null
@@ -1,186 +0,0 @@
- | From svn.tartarus.org/snowball/trunk/website/algorithms/french/stop.txt
- | This file is distributed under the BSD License.
- | See http://snowball.tartarus.org/license.php
- | Also see http://www.opensource.org/licenses/bsd-license.html
- |  - Encoding was converted to UTF-8.
- |  - This notice was added.
- |
- | NOTE: To use this file with StopFilterFactory, you must specify format="snowball"
-
- | A French stop word list. Comments begin with vertical bar. Each stop
- | word is at the start of a line.
-
-au             |  a + le
-aux            |  a + les
-avec           |  with
-ce             |  this
-ces            |  these
-dans           |  with
-de             |  of
-des            |  de + les
-du             |  de + le
-elle           |  she
-en             |  `of them' etc
-et             |  and
-eux            |  them
-il             |  he
-je             |  I
-la             |  the
-le             |  the
-leur           |  their
-lui            |  him
-ma             |  my (fem)
-mais           |  but
-me             |  me
-même           |  same; as in moi-même (myself) etc
-mes            |  me (pl)
-moi            |  me
-mon            |  my (masc)
-ne             |  not
-nos            |  our (pl)
-notre          |  our
-nous           |  we
-on             |  one
-ou             |  where
-par            |  by
-pas            |  not
-pour           |  for
-qu             |  que before vowel
-que            |  that
-qui            |  who
-sa             |  his, her (fem)
-se             |  oneself
-ses            |  his (pl)
-son            |  his, her (masc)
-sur            |  on
-ta             |  thy (fem)
-te             |  thee
-tes            |  thy (pl)
-toi            |  thee
-ton            |  thy (masc)
-tu             |  thou
-un             |  a
-une            |  a
-vos            |  your (pl)
-votre          |  your
-vous           |  you
-
-               |  single letter forms
-
-c              |  c'
-d              |  d'
-j              |  j'
-l              |  l'
-à              |  to, at
-m              |  m'
-n              |  n'
-s              |  s'
-t              |  t'
-y              |  there
-
-               | forms of être (not including the infinitive):
-été
-étée
-étées
-étés
-étant
-suis
-es
-est
-sommes
-êtes
-sont
-serai
-seras
-sera
-serons
-serez
-seront
-serais
-serait
-serions
-seriez
-seraient
-étais
-était
-étions
-étiez
-étaient
-fus
-fut
-fûmes
-fûtes
-furent
-sois
-soit
-soyons
-soyez
-soient
-fusse
-fusses
-fût
-fussions
-fussiez
-fussent
-
-               | forms of avoir (not including the infinitive):
-ayant
-eu
-eue
-eues
-eus
-ai
-as
-avons
-avez
-ont
-aurai
-auras
-aura
-aurons
-aurez
-auront
-aurais
-aurait
-aurions
-auriez
-auraient
-avais
-avait
-avions
-aviez
-avaient
-eut
-eûmes
-eûtes
-eurent
-aie
-aies
-ait
-ayons
-ayez
-aient
-eusse
-eusses
-eût
-eussions
-eussiez
-eussent
-
-               | Later additions (from Jean-Christophe Deschamps)
-ceci           |  this
-cela           |  that
-celà           |  that
-cet            |  this
-cette          |  this
-ici            |  here
-ils            |  they
-les            |  the (pl)
-leurs          |  their (pl)
-quel           |  which
-quels          |  which
-quelle         |  which
-quelles        |  which
-sans           |  without
-soi            |  oneself
-
diff --git a/solr/example/example-DIH/solr/solr/conf/lang/stopwords_ga.txt b/solr/example/example-DIH/solr/solr/conf/lang/stopwords_ga.txt
deleted file mode 100644
index 9ff88d7..0000000
--- a/solr/example/example-DIH/solr/solr/conf/lang/stopwords_ga.txt
+++ /dev/null
@@ -1,110 +0,0 @@
-
-a
-ach
-ag
-agus
-an
-aon
-ar
-arna
-as
-b'
-ba
-beirt
-bhúr
-caoga
-ceathair
-ceathrar
-chomh
-chtó
-chuig
-chun
-cois
-céad
-cúig
-cúigear
-d'
-daichead
-dar
-de
-deich
-deichniúr
-den
-dhá
-do
-don
-dtí
-dá
-dár
-dó
-faoi
-faoin
-faoina
-faoinár
-fara
-fiche
-gach
-gan
-go
-gur
-haon
-hocht
-i
-iad
-idir
-in
-ina
-ins
-inár
-is
-le
-leis
-lena
-lenár
-m'
-mar
-mo
-mé
-na
-nach
-naoi
-naonúr
-ná
-ní
-níor
-nó
-nócha
-ocht
-ochtar
-os
-roimh
-sa
-seacht
-seachtar
-seachtó
-seasca
-seisear
-siad
-sibh
-sinn
-sna
-sé
-sí
-tar
-thar
-thú
-triúr
-trí
-trína
-trínár
-tríocha
-tú
-um
-ár
-é
-éis
-í
-ó
-ón
-óna
-ónár
diff --git a/solr/example/example-DIH/solr/solr/conf/lang/stopwords_gl.txt b/solr/example/example-DIH/solr/solr/conf/lang/stopwords_gl.txt
deleted file mode 100644
index d8760b1..0000000
--- a/solr/example/example-DIH/solr/solr/conf/lang/stopwords_gl.txt
+++ /dev/null
@@ -1,161 +0,0 @@
-# Galician stopwords
-a
-aínda
-alí
-aquel
-aquela
-aquelas
-aqueles
-aquilo
-aquí
-ao
-aos
-as
-así
-á
-ben
-cando
-che
-co
-coa
-comigo
-con
-connosco
-contigo
-convosco
-coas
-cos
-cun
-cuns
-cunha
-cunhas
-da
-dalgunha
-dalgunhas
-dalgún
-dalgúns
-das
-de
-del
-dela
-delas
-deles
-desde
-deste
-do
-dos
-dun
-duns
-dunha
-dunhas
-e
-el
-ela
-elas
-eles
-en
-era
-eran
-esa
-esas
-ese
-eses
-esta
-estar
-estaba
-está
-están
-este
-estes
-estiven
-estou
-eu
-é
-facer
-foi
-foron
-fun
-había
-hai
-iso
-isto
-la
-las
-lle
-lles
-lo
-los
-mais
-me
-meu
-meus
-min
-miña
-miñas
-moi
-na
-nas
-neste
-nin
-no
-non
-nos
-nosa
-nosas
-noso
-nosos
-nós
-nun
-nunha
-nuns
-nunhas
-o
-os
-ou
-ó
-ós
-para
-pero
-pode
-pois
-pola
-polas
-polo
-polos
-por
-que
-se
-senón
-ser
-seu
-seus
-sexa
-sido
-sobre
-súa
-súas
-tamén
-tan
-te
-ten
-teñen
-teño
-ter
-teu
-teus
-ti
-tido
-tiña
-tiven
-túa
-túas
-un
-unha
-unhas
-uns
-vos
-vosa
-vosas
-voso
-vosos
-vós
diff --git a/solr/example/example-DIH/solr/solr/conf/lang/stopwords_hi.txt b/solr/example/example-DIH/solr/solr/conf/lang/stopwords_hi.txt
deleted file mode 100644
index 86286bb..0000000
--- a/solr/example/example-DIH/solr/solr/conf/lang/stopwords_hi.txt
+++ /dev/null
@@ -1,235 +0,0 @@
-# Also see http://www.opensource.org/licenses/bsd-license.html
-# See http://members.unine.ch/jacques.savoy/clef/index.html.
-# This file was created by Jacques Savoy and is distributed under the BSD license.
-# Note: by default this file also contains forms normalized by HindiNormalizer 
-# for spelling variation (see section below), such that it can be used whether or 
-# not you enable that feature. When adding additional entries to this list,
-# please add the normalized form as well. 
-अंदर
-अत
-अपना
-अपनी
-अपने
-अभी
-आदि
-आप
-इत्यादि
-इन 
-इनका
-इन्हीं
-इन्हें
-इन्हों
-इस
-इसका
-इसकी
-इसके
-इसमें
-इसी
-इसे
-उन
-उनका
-उनकी
-उनके
-उनको
-उन्हीं
-उन्हें
-उन्हों
-उस
-उसके
-उसी
-उसे
-एक
-एवं
-एस
-ऐसे
-और
-कई
-कर
-करता
-करते
-करना
-करने
-करें
-कहते
-कहा
-का
-काफ़ी
-कि
-कितना
-किन्हें
-किन्हों
-किया
-किर
-किस
-किसी
-किसे
-की
-कुछ
-कुल
-के
-को
-कोई
-कौन
-कौनसा
-गया
-घर
-जब
-जहाँ
-जा
-जितना
-जिन
-जिन्हें
-जिन्हों
-जिस
-जिसे
-जीधर
-जैसा
-जैसे
-जो
-तक
-तब
-तरह
-तिन
-तिन्हें
-तिन्हों
-तिस
-तिसे
-तो
-था
-थी
-थे
-दबारा
-दिया
-दुसरा
-दूसरे
-दो
-द्वारा
-न
-नहीं
-ना
-निहायत
-नीचे
-ने
-पर
-पर  
-पहले
-पूरा
-पे
-फिर
-बनी
-बही
-बहुत
-बाद
-बाला
-बिलकुल
-भी
-भीतर
-मगर
-मानो
-मे
-में
-यदि
-यह
-यहाँ
-यही
-या
-यिह 
-ये
-रखें
-रहा
-रहे
-ऱ्वासा
-लिए
-लिये
-लेकिन
-व
-वर्ग
-वह
-वह 
-वहाँ
-वहीं
-वाले
-वुह 
-वे
-वग़ैरह
-संग
-सकता
-सकते
-सबसे
-सभी
-साथ
-साबुत
-साभ
-सारा
-से
-सो
-ही
-हुआ
-हुई
-हुए
-है
-हैं
-हो
-होता
-होती
-होते
-होना
-होने
-# additional normalized forms of the above
-अपनि
-जेसे
-होति
-सभि
-तिंहों
-इंहों
-दवारा
-इसि
-किंहें
-थि
-उंहों
-ओर
-जिंहें
-वहिं
-अभि
-बनि
-हि
-उंहिं
-उंहें
-हें
-वगेरह
-एसे
-रवासा
-कोन
-निचे
-काफि
-उसि
-पुरा
-भितर
-हे
-बहि
-वहां
-कोइ
-यहां
-जिंहों
-तिंहें
-किसि
-कइ
-यहि
-इंहिं
-जिधर
-इंहें
-अदि
-इतयादि
-हुइ
-कोनसा
-इसकि
-दुसरे
-जहां
-अप
-किंहों
-उनकि
-भि
-वरग
-हुअ
-जेसा
-नहिं
diff --git a/solr/example/example-DIH/solr/solr/conf/lang/stopwords_hu.txt b/solr/example/example-DIH/solr/solr/conf/lang/stopwords_hu.txt
deleted file mode 100644
index 37526da..0000000
--- a/solr/example/example-DIH/solr/solr/conf/lang/stopwords_hu.txt
+++ /dev/null
@@ -1,211 +0,0 @@
- | From svn.tartarus.org/snowball/trunk/website/algorithms/hungarian/stop.txt
- | This file is distributed under the BSD License.
- | See http://snowball.tartarus.org/license.php
- | Also see http://www.opensource.org/licenses/bsd-license.html
- |  - Encoding was converted to UTF-8.
- |  - This notice was added.
- |
- | NOTE: To use this file with StopFilterFactory, you must specify format="snowball"
- 
-| Hungarian stop word list
-| prepared by Anna Tordai
-
-a
-ahogy
-ahol
-aki
-akik
-akkor
-alatt
-által
-általában
-amely
-amelyek
-amelyekben
-amelyeket
-amelyet
-amelynek
-ami
-amit
-amolyan
-amíg
-amikor
-át
-abban
-ahhoz
-annak
-arra
-arról
-az
-azok
-azon
-azt
-azzal
-azért
-aztán
-azután
-azonban
-bár
-be
-belül
-benne
-cikk
-cikkek
-cikkeket
-csak
-de
-e
-eddig
-egész
-egy
-egyes
-egyetlen
-egyéb
-egyik
-egyre
-ekkor
-el
-elég
-ellen
-elő
-először
-előtt
-első
-én
-éppen
-ebben
-ehhez
-emilyen
-ennek
-erre
-ez
-ezt
-ezek
-ezen
-ezzel
-ezért
-és
-fel
-felé
-hanem
-hiszen
-hogy
-hogyan
-igen
-így
-illetve
-ill.
-ill
-ilyen
-ilyenkor
-ison
-ismét
-itt
-jó
-jól
-jobban
-kell
-kellett
-keresztül
-keressünk
-ki
-kívül
-között
-közül
-legalább
-lehet
-lehetett
-legyen
-lenne
-lenni
-lesz
-lett
-maga
-magát
-majd
-majd
-már
-más
-másik
-meg
-még
-mellett
-mert
-mely
-melyek
-mi
-mit
-míg
-miért
-milyen
-mikor
-minden
-mindent
-mindenki
-mindig
-mint
-mintha
-mivel
-most
-nagy
-nagyobb
-nagyon
-ne
-néha
-nekem
-neki
-nem
-néhány
-nélkül
-nincs
-olyan
-ott
-össze
-ő
-ők
-őket
-pedig
-persze
-rá
-s
-saját
-sem
-semmi
-sok
-sokat
-sokkal
-számára
-szemben
-szerint
-szinte
-talán
-tehát
-teljes
-tovább
-továbbá
-több
-úgy
-ugyanis
-új
-újabb
-újra
-után
-utána
-utolsó
-vagy
-vagyis
-valaki
-valami
-valamint
-való
-vagyok
-van
-vannak
-volt
-voltam
-voltak
-voltunk
-vissza
-vele
-viszont
-volna
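Note: several of the Snowball-format lists deleted above carry the header warning that StopFilterFactory must be told format="snowball" to parse the "word | comment" layout. As a minimal sketch of what that looks like in a schema analyzer chain (the field-type name here is hypothetical, not taken from the deleted configs):

    <fieldType name="text_hu_example" class="solr.TextField" positionIncrementGap="100">
      <analyzer>
        <tokenizer class="solr.StandardTokenizerFactory"/>
        <!-- format="snowball" lets StopFilterFactory read the vertical-bar comment syntax -->
        <filter class="solr.StopFilterFactory" ignoreCase="true"
                words="lang/stopwords_hu.txt" format="snowball"/>
        <filter class="solr.LowerCaseFilterFactory"/>
      </analyzer>
    </fieldType>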
diff --git a/solr/example/example-DIH/solr/solr/conf/lang/stopwords_hy.txt b/solr/example/example-DIH/solr/solr/conf/lang/stopwords_hy.txt
deleted file mode 100644
index 60c1c50..0000000
--- a/solr/example/example-DIH/solr/solr/conf/lang/stopwords_hy.txt
+++ /dev/null
@@ -1,46 +0,0 @@
-# example set of Armenian stopwords.
-այդ
-այլ
-այն
-այս
-դու
-դուք
-եմ
-են
-ենք
-ես
-եք
-է
-էի
-էին
-էինք
-էիր
-էիք
-էր
-ըստ
-թ
-ի
-ին
-իսկ
-իր
-կամ
-համար
-հետ
-հետո
-մենք
-մեջ
-մի
-ն
-նա
-նաև
-նրա
-նրանք
-որ
-որը
-որոնք
-որպես
-ու
-ում
-պիտի
-վրա
-և
diff --git a/solr/example/example-DIH/solr/solr/conf/lang/stopwords_id.txt b/solr/example/example-DIH/solr/solr/conf/lang/stopwords_id.txt
deleted file mode 100644
index 4617f83..0000000
--- a/solr/example/example-DIH/solr/solr/conf/lang/stopwords_id.txt
+++ /dev/null
@@ -1,359 +0,0 @@
-# from appendix D of: A Study of Stemming Effects on Information
-# Retrieval in Bahasa Indonesia
-ada
-adanya
-adalah
-adapun
-agak
-agaknya
-agar
-akan
-akankah
-akhirnya
-aku
-akulah
-amat
-amatlah
-anda
-andalah
-antar
-diantaranya
-antara
-antaranya
-diantara
-apa
-apaan
-mengapa
-apabila
-apakah
-apalagi
-apatah
-atau
-ataukah
-ataupun
-bagai
-bagaikan
-sebagai
-sebagainya
-bagaimana
-bagaimanapun
-sebagaimana
-bagaimanakah
-bagi
-bahkan
-bahwa
-bahwasanya
-sebaliknya
-banyak
-sebanyak
-beberapa
-seberapa
-begini
-beginian
-beginikah
-beginilah
-sebegini
-begitu
-begitukah
-begitulah
-begitupun
-sebegitu
-belum
-belumlah
-sebelum
-sebelumnya
-sebenarnya
-berapa
-berapakah
-berapalah
-berapapun
-betulkah
-sebetulnya
-biasa
-biasanya
-bila
-bilakah
-bisa
-bisakah
-sebisanya
-boleh
-bolehkah
-bolehlah
-buat
-bukan
-bukankah
-bukanlah
-bukannya
-cuma
-percuma
-dahulu
-dalam
-dan
-dapat
-dari
-daripada
-dekat
-demi
-demikian
-demikianlah
-sedemikian
-dengan
-depan
-di
-dia
-dialah
-dini
-diri
-dirinya
-terdiri
-dong
-dulu
-enggak
-enggaknya
-entah
-entahlah
-terhadap
-terhadapnya
-hal
-hampir
-hanya
-hanyalah
-harus
-haruslah
-harusnya
-seharusnya
-hendak
-hendaklah
-hendaknya
-hingga
-sehingga
-ia
-ialah
-ibarat
-ingin
-inginkah
-inginkan
-ini
-inikah
-inilah
-itu
-itukah
-itulah
-jangan
-jangankan
-janganlah
-jika
-jikalau
-juga
-justru
-kala
-kalau
-kalaulah
-kalaupun
-kalian
-kami
-kamilah
-kamu
-kamulah
-kan
-kapan
-kapankah
-kapanpun
-dikarenakan
-karena
-karenanya
-ke
-kecil
-kemudian
-kenapa
-kepada
-kepadanya
-ketika
-seketika
-khususnya
-kini
-kinilah
-kiranya
-sekiranya
-kita
-kitalah
-kok
-lagi
-lagian
-selagi
-lah
-lain
-lainnya
-melainkan
-selaku
-lalu
-melalui
-terlalu
-lama
-lamanya
-selama
-selama
-selamanya
-lebih
-terlebih
-bermacam
-macam
-semacam
-maka
-makanya
-makin
-malah
-malahan
-mampu
-mampukah
-mana
-manakala
-manalagi
-masih
-masihkah
-semasih
-masing
-mau
-maupun
-semaunya
-memang
-mereka
-merekalah
-meski
-meskipun
-semula
-mungkin
-mungkinkah
-nah
-namun
-nanti
-nantinya
-nyaris
-oleh
-olehnya
-seorang
-seseorang
-pada
-padanya
-padahal
-paling
-sepanjang
-pantas
-sepantasnya
-sepantasnyalah
-para
-pasti
-pastilah
-per
-pernah
-pula
-pun
-merupakan
-rupanya
-serupa
-saat
-saatnya
-sesaat
-saja
-sajalah
-saling
-bersama
-sama
-sesama
-sambil
-sampai
-sana
-sangat
-sangatlah
-saya
-sayalah
-se
-sebab
-sebabnya
-sebuah
-tersebut
-tersebutlah
-sedang
-sedangkan
-sedikit
-sedikitnya
-segala
-segalanya
-segera
-sesegera
-sejak
-sejenak
-sekali
-sekalian
-sekalipun
-sesekali
-sekaligus
-sekarang
-sekarang
-sekitar
-sekitarnya
-sela
-selain
-selalu
-seluruh
-seluruhnya
-semakin
-sementara
-sempat
-semua
-semuanya
-sendiri
-sendirinya
-seolah
-seperti
-sepertinya
-sering
-seringnya
-serta
-siapa
-siapakah
-siapapun
-disini
-disinilah
-sini
-sinilah
-sesuatu
-sesuatunya
-suatu
-sesudah
-sesudahnya
-sudah
-sudahkah
-sudahlah
-supaya
-tadi
-tadinya
-tak
-tanpa
-setelah
-telah
-tentang
-tentu
-tentulah
-tentunya
-tertentu
-seterusnya
-tapi
-tetapi
-setiap
-tiap
-setidaknya
-tidak
-tidakkah
-tidaklah
-toh
-waduh
-wah
-wahai
-sewaktu
-walau
-walaupun
-wong
-yaitu
-yakni
-yang
diff --git a/solr/example/example-DIH/solr/solr/conf/lang/stopwords_it.txt b/solr/example/example-DIH/solr/solr/conf/lang/stopwords_it.txt
deleted file mode 100644
index 1219cc7..0000000
--- a/solr/example/example-DIH/solr/solr/conf/lang/stopwords_it.txt
+++ /dev/null
@@ -1,303 +0,0 @@
- | From svn.tartarus.org/snowball/trunk/website/algorithms/italian/stop.txt
- | This file is distributed under the BSD License.
- | See http://snowball.tartarus.org/license.php
- | Also see http://www.opensource.org/licenses/bsd-license.html
- |  - Encoding was converted to UTF-8.
- |  - This notice was added.
- |
- | NOTE: To use this file with StopFilterFactory, you must specify format="snowball"
-
- | An Italian stop word list. Comments begin with vertical bar. Each stop
- | word is at the start of a line.
-
-ad             |  a (to) before vowel
-al             |  a + il
-allo           |  a + lo
-ai             |  a + i
-agli           |  a + gli
-all            |  a + l'
-agl            |  a + gl'
-alla           |  a + la
-alle           |  a + le
-con            |  with
-col            |  con + il
-coi            |  con + i (forms collo, cogli etc are now very rare)
-da             |  from
-dal            |  da + il
-dallo          |  da + lo
-dai            |  da + i
-dagli          |  da + gli
-dall           |  da + l'
-dagl           |  da + gl'
-dalla          |  da + la
-dalle          |  da + le
-di             |  of
-del            |  di + il
-dello          |  di + lo
-dei            |  di + i
-degli          |  di + gli
-dell           |  di + l'
-degl           |  di + gl'
-della          |  di + la
-delle          |  di + le
-in             |  in
-nel            |  in + el
-nello          |  in + lo
-nei            |  in + i
-negli          |  in + gli
-nell           |  in + l'
-negl           |  in + gl'
-nella          |  in + la
-nelle          |  in + le
-su             |  on
-sul            |  su + il
-sullo          |  su + lo
-sui            |  su + i
-sugli          |  su + gli
-sull           |  su + l'
-sugl           |  su + gl'
-sulla          |  su + la
-sulle          |  su + le
-per            |  through, by
-tra            |  among
-contro         |  against
-io             |  I
-tu             |  thou
-lui            |  he
-lei            |  she
-noi            |  we
-voi            |  you
-loro           |  they
-mio            |  my
-mia            |
-miei           |
-mie            |
-tuo            |
-tua            |
-tuoi           |  thy
-tue            |
-suo            |
-sua            |
-suoi           |  his, her
-sue            |
-nostro         |  our
-nostra         |
-nostri         |
-nostre         |
-vostro         |  your
-vostra         |
-vostri         |
-vostre         |
-mi             |  me
-ti             |  thee
-ci             |  us, there
-vi             |  you, there
-lo             |  him, the
-la             |  her, the
-li             |  them
-le             |  them, the
-gli            |  to him, the
-ne             |  from there etc
-il             |  the
-un             |  a
-uno            |  a
-una            |  a
-ma             |  but
-ed             |  and
-se             |  if
-perché         |  why, because
-anche          |  also
-come           |  how
-dov            |  where (as dov')
-dove           |  where
-che            |  who, that
-chi            |  who
-cui            |  whom
-non            |  not
-più            |  more
-quale          |  who, that
-quanto         |  how much
-quanti         |
-quanta         |
-quante         |
-quello         |  that
-quelli         |
-quella         |
-quelle         |
-questo         |  this
-questi         |
-questa         |
-queste         |
-si             |  yes
-tutto          |  all
-tutti          |  all
-
-               |  single letter forms:
-
-a              |  at
-c              |  as c' for ce or ci
-e              |  and
-i              |  the
-l              |  as l'
-o              |  or
-
-               | forms of avere, to have (not including the infinitive):
-
-ho
-hai
-ha
-abbiamo
-avete
-hanno
-abbia
-abbiate
-abbiano
-avrò
-avrai
-avrà
-avremo
-avrete
-avranno
-avrei
-avresti
-avrebbe
-avremmo
-avreste
-avrebbero
-avevo
-avevi
-aveva
-avevamo
-avevate
-avevano
-ebbi
-avesti
-ebbe
-avemmo
-aveste
-ebbero
-avessi
-avesse
-avessimo
-avessero
-avendo
-avuto
-avuta
-avuti
-avute
-
-               | forms of essere, to be (not including the infinitive):
-sono
-sei
-è
-siamo
-siete
-sia
-siate
-siano
-sarò
-sarai
-sarà
-saremo
-sarete
-saranno
-sarei
-saresti
-sarebbe
-saremmo
-sareste
-sarebbero
-ero
-eri
-era
-eravamo
-eravate
-erano
-fui
-fosti
-fu
-fummo
-foste
-furono
-fossi
-fosse
-fossimo
-fossero
-essendo
-
-               | forms of fare, to do (not including the infinitive, fa, fat-):
-faccio
-fai
-facciamo
-fanno
-faccia
-facciate
-facciano
-farò
-farai
-farà
-faremo
-farete
-faranno
-farei
-faresti
-farebbe
-faremmo
-fareste
-farebbero
-facevo
-facevi
-faceva
-facevamo
-facevate
-facevano
-feci
-facesti
-fece
-facemmo
-faceste
-fecero
-facessi
-facesse
-facessimo
-facessero
-facendo
-
-               | forms of stare, to be (not including the infinitive):
-sto
-stai
-sta
-stiamo
-stanno
-stia
-stiate
-stiano
-starò
-starai
-starà
-staremo
-starete
-staranno
-starei
-staresti
-starebbe
-staremmo
-stareste
-starebbero
-stavo
-stavi
-stava
-stavamo
-stavate
-stavano
-stetti
-stesti
-stette
-stemmo
-steste
-stettero
-stessi
-stesse
-stessimo
-stessero
-stando
diff --git a/solr/example/example-DIH/solr/solr/conf/lang/stopwords_ja.txt b/solr/example/example-DIH/solr/solr/conf/lang/stopwords_ja.txt
deleted file mode 100644
index d4321be..0000000
--- a/solr/example/example-DIH/solr/solr/conf/lang/stopwords_ja.txt
+++ /dev/null
@@ -1,127 +0,0 @@
-#
-# This file defines a stopword set for Japanese.
-#
-# This set is made up of hand-picked frequent terms from segmented Japanese Wikipedia.
-# Punctuation characters and frequent kanji have mostly been left out.  See LUCENE-3745
-# for frequency lists, etc. that can be useful for making your own set (if desired)
-#
-# Note that there is an overlap between these stopwords and the terms stopped when used
-# in combination with the JapanesePartOfSpeechStopFilter.  When editing this file, note
-# that comments are not allowed on the same line as stopwords.
-#
-# Also note that stopping is done in a case-insensitive manner.  Change your StopFilter
-# configuration if you need case-sensitive stopping.  Lastly, note that stopping is done
-# using the same character width as the entries in this file.  Since this StopFilter is
-# normally done after a CJKWidthFilter in your chain, you would usually want your romaji
-# entries to be in half-width and your kana entries to be in full-width.
-#
-の
-に
-は
-を
-た
-が
-で
-て
-と
-し
-れ
-さ
-ある
-いる
-も
-する
-から
-な
-こと
-として
-い
-や
-れる
-など
-なっ
-ない
-この
-ため
-その
-あっ
-よう
-また
-もの
-という
-あり
-まで
-られ
-なる
-へ
-か
-だ
-これ
-によって
-により
-おり
-より
-による
-ず
-なり
-られる
-において
-ば
-なかっ
-なく
-しかし
-について
-せ
-だっ
-その後
-できる
-それ
-う
-ので
-なお
-のみ
-でき
-き
-つ
-における
-および
-いう
-さらに
-でも
-ら
-たり
-その他
-に関する
-たち
-ます
-ん
-なら
-に対して
-特に
-せる
-及び
-これら
-とき
-では
-にて
-ほか
-ながら
-うち
-そして
-とともに
-ただし
-かつて
-それぞれ
-または
-お
-ほど
-ものの
-に対する
-ほとんど
-と共に
-といった
-です
-とも
-ところ
-ここ
-##### End of file
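Note: the header of the Japanese list above says stopping uses the same character width as the file's entries and should normally run after CJKWidthFilter. A minimal sketch of that ordering, with a hypothetical field-type name:

    <fieldType name="text_ja_example" class="solr.TextField" positionIncrementGap="100">
      <analyzer>
        <tokenizer class="solr.JapaneseTokenizerFactory" mode="search"/>
        <!-- width normalization first, so half-width romaji and full-width kana match the file's entries -->
        <filter class="solr.CJKWidthFilterFactory"/>
        <filter class="solr.StopFilterFactory" ignoreCase="true" words="lang/stopwords_ja.txt"/>
      </analyzer>
    </fieldType>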
diff --git a/solr/example/example-DIH/solr/solr/conf/lang/stopwords_lv.txt b/solr/example/example-DIH/solr/solr/conf/lang/stopwords_lv.txt
deleted file mode 100644
index e21a23c..0000000
--- a/solr/example/example-DIH/solr/solr/conf/lang/stopwords_lv.txt
+++ /dev/null
@@ -1,172 +0,0 @@
-# Set of Latvian stopwords from A Stemming Algorithm for Latvian, Karlis Kreslins
-# the original list of over 800 forms was refined: 
-#   pronouns, adverbs, interjections were removed
-# 
-# prepositions
-aiz
-ap
-ar
-apakš
-ārpus
-augšpus
-bez
-caur
-dēļ
-gar
-iekš
-iz
-kopš
-labad
-lejpus
-līdz
-no
-otrpus
-pa
-par
-pār
-pēc
-pie
-pirms
-pret
-priekš
-starp
-šaipus
-uz
-viņpus
-virs
-virspus
-zem
-apakšpus
-# Conjunctions
-un
-bet
-jo
-ja
-ka
-lai
-tomēr
-tikko
-turpretī
-arī
-kaut
-gan
-tādēļ
-tā
-ne
-tikvien
-vien
-kā
-ir
-te
-vai
-kamēr
-# Particles
-ar
-diezin
-droši
-diemžēl
-nebūt
-ik
-it
-taču
-nu
-pat
-tiklab
-iekšpus
-nedz
-tik
-nevis
-turpretim
-jeb
-iekam
-iekām
-iekāms
-kolīdz
-līdzko
-tiklīdz
-jebšu
-tālab
-tāpēc
-nekā
-itin
-jā
-jau
-jel
-nē
-nezin
-tad
-tikai
-vis
-tak
-iekams
-vien
-# modal verbs
-būt  
-biju 
-biji
-bija
-bijām
-bijāt
-esmu
-esi
-esam
-esat 
-būšu     
-būsi
-būs
-būsim
-būsiet
-tikt
-tiku
-tiki
-tika
-tikām
-tikāt
-tieku
-tiec
-tiek
-tiekam
-tiekat
-tikšu
-tiks
-tiksim
-tiksiet
-tapt
-tapi
-tapāt
-topat
-tapšu
-tapsi
-taps
-tapsim
-tapsiet
-kļūt
-kļuvu
-kļuvi
-kļuva
-kļuvām
-kļuvāt
-kļūstu
-kļūsti
-kļūst
-kļūstam
-kļūstat
-kļūšu
-kļūsi
-kļūs
-kļūsim
-kļūsiet
-# verbs
-varēt
-varēju
-varējām
-varēšu
-varēsim
-var
-varēji
-varējāt
-varēsi
-varēsiet
-varat
-varēja
-varēs
diff --git a/solr/example/example-DIH/solr/solr/conf/lang/stopwords_nl.txt b/solr/example/example-DIH/solr/solr/conf/lang/stopwords_nl.txt
deleted file mode 100644
index 47a2aea..0000000
--- a/solr/example/example-DIH/solr/solr/conf/lang/stopwords_nl.txt
+++ /dev/null
@@ -1,119 +0,0 @@
- | From svn.tartarus.org/snowball/trunk/website/algorithms/dutch/stop.txt
- | This file is distributed under the BSD License.
- | See http://snowball.tartarus.org/license.php
- | Also see http://www.opensource.org/licenses/bsd-license.html
- |  - Encoding was converted to UTF-8.
- |  - This notice was added.
- |
- | NOTE: To use this file with StopFilterFactory, you must specify format="snowball"
-
- | A Dutch stop word list. Comments begin with vertical bar. Each stop
- | word is at the start of a line.
-
- | This is a ranked list (commonest to rarest) of stopwords derived from
- | a large sample of Dutch text.
-
- | Dutch stop words frequently exhibit homonym clashes. These are indicated
- | clearly below.
-
-de             |  the
-en             |  and
-van            |  of, from
-ik             |  I, the ego
-te             |  (1) chez, at etc, (2) to, (3) too
-dat            |  that, which
-die            |  that, those, who, which
-in             |  in, inside
-een            |  a, an, one
-hij            |  he
-het            |  the, it
-niet           |  not, nothing, naught
-zijn           |  (1) to be, being, (2) his, one's, its
-is             |  is
-was            |  (1) was, past tense of all persons sing. of 'zijn' (to be) (2) wax, (3) the washing, (4) rise of river
-op             |  on, upon, at, in, up, used up
-aan            |  on, upon, to (as dative)
-met            |  with, by
-als            |  like, such as, when
-voor           |  (1) before, in front of, (2) furrow
-had            |  had, past tense all persons sing. of 'hebben' (have)
-er             |  there
-maar           |  but, only
-om             |  round, about, for etc
-hem            |  him
-dan            |  then
-zou            |  should/would, past tense all persons sing. of 'zullen'
-of             |  or, whether, if
-wat            |  what, something, anything
-mijn           |  possessive and noun 'mine'
-men            |  people, 'one'
-dit            |  this
-zo             |  so, thus, in this way
-door           |  through by
-over           |  over, across
-ze             |  she, her, they, them
-zich           |  oneself
-bij            |  (1) a bee, (2) by, near, at
-ook            |  also, too
-tot            |  till, until
-je             |  you
-mij            |  me
-uit            |  out of, from
-der            |  Old Dutch form of 'van der' still found in surnames
-daar           |  (1) there, (2) because
-haar           |  (1) her, their, them, (2) hair
-naar           |  (1) unpleasant, unwell etc, (2) towards, (3) as
-heb            |  present first person sing. of 'to have'
-hoe            |  how, why
-heeft          |  present third person sing. of 'to have'
-hebben         |  'to have' and various parts thereof
-deze           |  this
-u              |  you
-want           |  (1) for, (2) mitten, (3) rigging
-nog            |  yet, still
-zal            |  'shall', first and third person sing. of verb 'zullen' (will)
-me             |  me
-zij            |  she, they
-nu             |  now
-ge             |  'thou', still used in Belgium and south Netherlands
-geen           |  none
-omdat          |  because
-iets           |  something, somewhat
-worden         |  to become, grow, get
-toch           |  yet, still
-al             |  all, every, each
-waren          |  (1) 'were' (2) to wander, (3) wares, (3)
-veel           |  much, many
-meer           |  (1) more, (2) lake
-doen           |  to do, to make
-toen           |  then, when
-moet           |  noun 'spot/mote' and present form of 'to must'
-ben            |  (1) am, (2) 'are' in interrogative second person singular of 'to be'
-zonder         |  without
-kan            |  noun 'can' and present form of 'to be able'
-hun            |  their, them
-dus            |  so, consequently
-alles          |  all, everything, anything
-onder          |  under, beneath
-ja             |  yes, of course
-eens           |  once, one day
-hier           |  here
-wie            |  who
-werd           |  imperfect third person sing. of 'become'
-altijd         |  always
-doch           |  yet, but etc
-wordt          |  present third person sing. of 'become'
-wezen          |  (1) to be, (2) 'been' as in 'been fishing', (3) orphans
-kunnen         |  to be able
-ons            |  us/our
-zelf           |  self
-tegen          |  against, towards, at
-na             |  after, near
-reeds          |  already
-wil            |  (1) present tense of 'want', (2) 'will', noun, (3) fender
-kon            |  could; past tense of 'to be able'
-niets          |  nothing
-uw             |  your
-iemand         |  somebody
-geweest        |  been; past participle of 'be'
-andere         |  other
diff --git a/solr/example/example-DIH/solr/solr/conf/lang/stopwords_no.txt b/solr/example/example-DIH/solr/solr/conf/lang/stopwords_no.txt
deleted file mode 100644
index a7a2c28..0000000
--- a/solr/example/example-DIH/solr/solr/conf/lang/stopwords_no.txt
+++ /dev/null
@@ -1,194 +0,0 @@
- | From svn.tartarus.org/snowball/trunk/website/algorithms/norwegian/stop.txt
- | This file is distributed under the BSD License.
- | See http://snowball.tartarus.org/license.php
- | Also see http://www.opensource.org/licenses/bsd-license.html
- |  - Encoding was converted to UTF-8.
- |  - This notice was added.
- |
- | NOTE: To use this file with StopFilterFactory, you must specify format="snowball"
-
- | A Norwegian stop word list. Comments begin with vertical bar. Each stop
- | word is at the start of a line.
-
- | This stop word list is for the dominant bokmål dialect. Words unique
- | to nynorsk are marked *.
-
- | Revised by Jan Bruusgaard <Jan.Bruusgaard@ssb.no>, Jan 2005
-
-og             | and
-i              | in
-jeg            | I
-det            | it/this/that
-at             | to (w. inf.)
-en             | a/an
-et             | a/an
-den            | it/this/that
-til            | to
-er             | is/am/are
-som            | who/that
-på             | on
-de             | they / you(formal)
-med            | with
-han            | he
-av             | of
-ikke           | not
-ikkje          | not *
-der            | there
-så             | so
-var            | was/were
-meg            | me
-seg            | you
-men            | but
-ett            | one
-har            | have
-om             | about
-vi             | we
-min            | my
-mitt           | my
-ha             | have
-hadde          | had
-hun            | she
-nå             | now
-over           | over
-da             | when/as
-ved            | by/know
-fra            | from
-du             | you
-ut             | out
-sin            | your
-dem            | them
-oss            | us
-opp            | up
-man            | you/one
-kan            | can
-hans           | his
-hvor           | where
-eller          | or
-hva            | what
-skal           | shall/must
-selv           | self (reflective)
-sjøl           | self (reflective)
-her            | here
-alle           | all
-vil            | will
-bli            | become
-ble            | became
-blei           | became *
-blitt          | have become
-kunne          | could
-inn            | in
-når            | when
-være           | be
-kom            | come
-noen           | some
-noe            | some
-ville          | would
-dere           | you
-som            | who/which/that
-deres          | their/theirs
-kun            | only/just
-ja             | yes
-etter          | after
-ned            | down
-skulle         | should
-denne          | this
-for            | for/because
-deg            | you
-si             | hers/his
-sine           | hers/his
-sitt           | hers/his
-mot            | against
-å              | to
-meget          | much
-hvorfor        | why
-dette          | this
-disse          | these/those
-uten           | without
-hvordan        | how
-ingen          | none
-din            | your
-ditt           | your
-blir           | become
-samme          | same
-hvilken        | which
-hvilke         | which (plural)
-sånn           | such a
-inni           | inside/within
-mellom         | between
-vår            | our
-hver           | each
-hvem           | who
-vors           | us/ours
-hvis           | whose
-både           | both
-bare           | only/just
-enn            | than
-fordi          | as/because
-før            | before
-mange          | many
-også           | also
-slik           | just
-vært           | been
-være           | to be
-båe            | both *
-begge          | both
-siden          | since
-dykk           | your *
-dykkar         | yours *
-dei            | they *
-deira          | them *
-deires         | theirs *
-deim           | them *
-di             | your (fem.) *
-då             | as/when *
-eg             | I *
-ein            | a/an *
-eit            | a/an *
-eitt           | a/an *
-elles          | or *
-honom          | he *
-hjå            | at *
-ho             | she *
-hoe            | she *
-henne          | her
-hennar         | her/hers
-hennes         | hers
-hoss           | how *
-hossen         | how *
-ikkje          | not *
-ingi           | noone *
-inkje          | noone *
-korleis        | how *
-korso          | how *
-kva            | what/which *
-kvar           | where *
-kvarhelst      | where *
-kven           | who/whom *
-kvi            | why *
-kvifor         | why *
-me             | we *
-medan          | while *
-mi             | my *
-mine           | my *
-mykje          | much *
-no             | now *
-nokon          | some (masc./neut.) *
-noka           | some (fem.) *
-nokor          | some *
-noko           | some *
-nokre          | some *
-si             | his/hers *
-sia            | since *
-sidan          | since *
-so             | so *
-somt           | some *
-somme          | some *
-um             | about*
-upp            | up *
-vere           | be *
-vore           | was *
-verte          | become *
-vort           | become *
-varte          | became *
-vart           | became *
-
diff --git a/solr/example/example-DIH/solr/solr/conf/lang/stopwords_pt.txt b/solr/example/example-DIH/solr/solr/conf/lang/stopwords_pt.txt
deleted file mode 100644
index acfeb01..0000000
--- a/solr/example/example-DIH/solr/solr/conf/lang/stopwords_pt.txt
+++ /dev/null
@@ -1,253 +0,0 @@
- | From svn.tartarus.org/snowball/trunk/website/algorithms/portuguese/stop.txt
- | This file is distributed under the BSD License.
- | See http://snowball.tartarus.org/license.php
- | Also see http://www.opensource.org/licenses/bsd-license.html
- |  - Encoding was converted to UTF-8.
- |  - This notice was added.
- |
- | NOTE: To use this file with StopFilterFactory, you must specify format="snowball"
-
- | A Portuguese stop word list. Comments begin with vertical bar. Each stop
- | word is at the start of a line.
-
-
- | The following is a ranked list (commonest to rarest) of stopwords
- | deriving from a large sample of text.
-
- | Extra words have been added at the end.
-
-de             |  of, from
-a              |  the; to, at; her
-o              |  the; him
-que            |  who, that
-e              |  and
-do             |  de + o
-da             |  de + a
-em             |  in
-um             |  a
-para           |  for
-  | é          from SER
-com            |  with
-não            |  not, no
-uma            |  a
-os             |  the; them
-no             |  em + o
-se             |  himself etc
-na             |  em + a
-por            |  for
-mais           |  more
-as             |  the; them
-dos            |  de + os
-como           |  as, like
-mas            |  but
-  | foi        from SER
-ao             |  a + o
-ele            |  he
-das            |  de + as
-  | tem        from TER
-à              |  a + a
-seu            |  his
-sua            |  her
-ou             |  or
-  | ser        from SER
-quando         |  when
-muito          |  much
-  | há         from HAV
-nos            |  em + os; us
-já             |  already, now
-  | está       from EST
-eu             |  I
-também         |  also
-só             |  only, just
-pelo           |  per + o
-pela           |  per + a
-até            |  up to
-isso           |  that
-ela            |  she
-entre          |  between
-  | era        from SER
-depois         |  after
-sem            |  without
-mesmo          |  same
-aos            |  a + os
-  | ter        from TER
-seus           |  his
-quem           |  whom
-nas            |  em + as
-me             |  me
-esse           |  that
-eles           |  they
-  | estão      from EST
-você           |  you
-  | tinha      from TER
-  | foram      from SER
-essa           |  that
-num            |  em + um
-nem            |  nor
-suas           |  her
-meu            |  my
-às             |  a + as
-minha          |  my
-  | têm        from TER
-numa           |  em + uma
-pelos          |  per + os
-elas           |  they
-  | havia      from HAV
-  | seja       from SER
-qual           |  which
-  | será       from SER
-nós            |  we
-  | tenho      from TER
-lhe            |  to him, her
-deles          |  of them
-essas          |  those
-esses          |  those
-pelas          |  per + as
-este           |  this
-  | fosse      from SER
-dele           |  of him
-
- | other words. There are many contractions such as naquele = em+aquele,
- | mo = me+o, but they are rare.
- | Indefinite article plural forms are also rare.
-
-tu             |  thou
-te             |  thee
-vocês          |  you (plural)
-vos            |  you
-lhes           |  to them
-meus           |  my
-minhas
-teu            |  thy
-tua
-teus
-tuas
-nosso          | our
-nossa
-nossos
-nossas
-
-dela           |  of her
-delas          |  of them
-
-esta           |  this
-estes          |  these
-estas          |  these
-aquele         |  that
-aquela         |  that
-aqueles        |  those
-aquelas        |  those
-isto           |  this
-aquilo         |  that
-
-               | forms of estar, to be (not including the infinitive):
-estou
-está
-estamos
-estão
-estive
-esteve
-estivemos
-estiveram
-estava
-estávamos
-estavam
-estivera
-estivéramos
-esteja
-estejamos
-estejam
-estivesse
-estivéssemos
-estivessem
-estiver
-estivermos
-estiverem
-
-               | forms of haver, to have (not including the infinitive):
-hei
-há
-havemos
-hão
-houve
-houvemos
-houveram
-houvera
-houvéramos
-haja
-hajamos
-hajam
-houvesse
-houvéssemos
-houvessem
-houver
-houvermos
-houverem
-houverei
-houverá
-houveremos
-houverão
-houveria
-houveríamos
-houveriam
-
-               | forms of ser, to be (not including the infinitive):
-sou
-somos
-são
-era
-éramos
-eram
-fui
-foi
-fomos
-foram
-fora
-fôramos
-seja
-sejamos
-sejam
-fosse
-fôssemos
-fossem
-for
-formos
-forem
-serei
-será
-seremos
-serão
-seria
-seríamos
-seriam
-
-               | forms of ter, to have (not including the infinitive):
-tenho
-tem
-temos
-tém
-tinha
-tínhamos
-tinham
-tive
-teve
-tivemos
-tiveram
-tivera
-tivéramos
-tenha
-tenhamos
-tenham
-tivesse
-tivéssemos
-tivessem
-tiver
-tivermos
-tiverem
-terei
-terá
-teremos
-terão
-teria
-teríamos
-teriam
diff --git a/solr/example/example-DIH/solr/solr/conf/lang/stopwords_ro.txt b/solr/example/example-DIH/solr/solr/conf/lang/stopwords_ro.txt
deleted file mode 100644
index 4fdee90..0000000
--- a/solr/example/example-DIH/solr/solr/conf/lang/stopwords_ro.txt
+++ /dev/null
@@ -1,233 +0,0 @@
-# This file was created by Jacques Savoy and is distributed under the BSD license.
-# See http://members.unine.ch/jacques.savoy/clef/index.html.
-# Also see http://www.opensource.org/licenses/bsd-license.html
-acea
-aceasta
-această
-aceea
-acei
-aceia
-acel
-acela
-acele
-acelea
-acest
-acesta
-aceste
-acestea
-aceşti
-aceştia
-acolo
-acum
-ai
-aia
-aibă
-aici
-al
-ăla
-ale
-alea
-ălea
-altceva
-altcineva
-am
-ar
-are
-aş
-aşadar
-asemenea
-asta
-ăsta
-astăzi
-astea
-ăstea
-ăştia
-asupra
-aţi
-au
-avea
-avem
-aveţi
-azi
-bine
-bucur
-bună
-ca
-că
-căci
-când
-care
-cărei
-căror
-cărui
-cât
-câte
-câţi
-către
-câtva
-ce
-cel
-ceva
-chiar
-cînd
-cine
-cineva
-cît
-cîte
-cîţi
-cîtva
-contra
-cu
-cum
-cumva
-curând
-curînd
-da
-dă
-dacă
-dar
-datorită
-de
-deci
-deja
-deoarece
-departe
-deşi
-din
-dinaintea
-dintr
-dintre
-drept
-după
-ea
-ei
-el
-ele
-eram
-este
-eşti
-eu
-face
-fără
-fi
-fie
-fiecare
-fii
-fim
-fiţi
-iar
-ieri
-îi
-îl
-îmi
-împotriva
-în 
-înainte
-înaintea
-încât
-încît
-încotro
-între
-întrucât
-întrucît
-îţi
-la
-lângă
-le
-li
-lîngă
-lor
-lui
-mă
-mâine
-mea
-mei
-mele
-mereu
-meu
-mi
-mine
-mult
-multă
-mulţi
-ne
-nicăieri
-nici
-nimeni
-nişte
-noastră
-noastre
-noi
-noştri
-nostru
-nu
-ori
-oricând
-oricare
-oricât
-orice
-oricînd
-oricine
-oricît
-oricum
-oriunde
-până
-pe
-pentru
-peste
-pînă
-poate
-pot
-prea
-prima
-primul
-prin
-printr
-sa
-să
-săi
-sale
-sau
-său
-se
-şi
-sînt
-sîntem
-sînteţi
-spre
-sub
-sunt
-suntem
-sunteţi
-ta
-tăi
-tale
-tău
-te
-ţi
-ţie
-tine
-toată
-toate
-tot
-toţi
-totuşi
-tu
-un
-una
-unde
-undeva
-unei
-unele
-uneori
-unor
-vă
-vi
-voastră
-voastre
-voi
-voştri
-vostru
-vouă
-vreo
-vreun
diff --git a/solr/example/example-DIH/solr/solr/conf/lang/stopwords_ru.txt b/solr/example/example-DIH/solr/solr/conf/lang/stopwords_ru.txt
deleted file mode 100644
index 5527140..0000000
--- a/solr/example/example-DIH/solr/solr/conf/lang/stopwords_ru.txt
+++ /dev/null
@@ -1,243 +0,0 @@
- | From svn.tartarus.org/snowball/trunk/website/algorithms/russian/stop.txt
- | This file is distributed under the BSD License.
- | See http://snowball.tartarus.org/license.php
- | Also see http://www.opensource.org/licenses/bsd-license.html
- |  - Encoding was converted to UTF-8.
- |  - This notice was added.
- |
- | NOTE: To use this file with StopFilterFactory, you must specify format="snowball"
-
- | a russian stop word list. comments begin with vertical bar. each stop
- | word is at the start of a line.
-
- | this is a ranked list (commonest to rarest) of stopwords derived from
- | a large text sample.
-
- | letter `ё' is translated to `е'.
-
-и              | and
-в              | in/into
-во             | alternative form
-не             | not
-что            | what/that
-он             | he
-на             | on/onto
-я              | i
-с              | from
-со             | alternative form
-как            | how
-а              | milder form of `no' (but)
-то             | conjunction and form of `that'
-все            | all
-она            | she
-так            | so, thus
-его            | him
-но             | but
-да             | yes/and
-ты             | thou
-к              | towards, by
-у              | around, chez
-же             | intensifier particle
-вы             | you
-за             | beyond, behind
-бы             | conditional/subj. particle
-по             | up to, along
-только         | only
-ее             | her
-мне            | to me
-было           | it was
-вот            | here is/are, particle
-от             | away from
-меня           | me
-еще            | still, yet, more
-нет            | no, there isnt/arent
-о              | about
-из             | out of
-ему            | to him
-теперь         | now
-когда          | when
-даже           | even
-ну             | so, well
-вдруг          | suddenly
-ли             | interrogative particle
-если           | if
-уже            | already, but homonym of `narrower'
-или            | or
-ни             | neither
-быть           | to be
-был            | he was
-него           | prepositional form of его
-до             | up to
-вас            | you accusative
-нибудь         | indef. suffix preceded by hyphen
-опять          | again
-уж             | already, but homonym of `adder'
-вам            | to you
-сказал         | he said
-ведь           | particle `after all'
-там            | there
-потом          | then
-себя           | oneself
-ничего         | nothing
-ей             | to her
-может          | usually with `быть' as `maybe'
-они            | they
-тут            | here
-где            | where
-есть           | there is/are
-надо           | got to, must
-ней            | prepositional form of  ей
-для            | for
-мы             | we
-тебя           | thee
-их             | them, their
-чем            | than
-была           | she was
-сам            | self
-чтоб           | in order to
-без            | without
-будто          | as if
-человек        | man, person, one
-чего           | genitive form of `what'
-раз            | once
-тоже           | also
-себе           | to oneself
-под            | beneath
-жизнь          | life
-будет          | will be
-ж              | short form of intensifer particle `же'
-тогда          | then
-кто            | who
-этот           | this
-говорил        | was saying
-того           | genitive form of `that'
-потому         | for that reason
-этого          | genitive form of `this'
-какой          | which
-совсем         | altogether
-ним            | prepositional form of `его', `они'
-здесь          | here
-этом           | prepositional form of `этот'
-один           | one
-почти          | almost
-мой            | my
-тем            | instrumental/dative plural of `тот', `то'
-чтобы          | full form of `in order that'
-нее            | her (acc.)
-кажется        | it seems
-сейчас         | now
-были           | they were
-куда           | where to
-зачем          | why
-сказать        | to say
-всех           | all (acc., gen. preposn. plural)
-никогда        | never
-сегодня        | today
-можно          | possible, one can
-при            | by
-наконец        | finally
-два            | two
-об             | alternative form of `о', about
-другой         | another
-хоть           | even
-после          | after
-над            | above
-больше         | more
-тот            | that one (masc.)
-через          | across, in
-эти            | these
-нас            | us
-про            | about
-всего          | in all, only, of all
-них            | prepositional form of `они' (they)
-какая          | which, feminine
-много          | lots
-разве          | interrogative particle
-сказала        | she said
-три            | three
-эту            | this, acc. fem. sing.
-моя            | my, feminine
-впрочем        | moreover, besides
-хорошо         | good
-свою           | ones own, acc. fem. sing.
-этой           | oblique form of `эта', fem. `this'
-перед          | in front of
-иногда         | sometimes
-лучше          | better
-чуть           | a little
-том            | preposn. form of `that one'
-нельзя         | one must not
-такой          | such a one
-им             | to them
-более          | more
-всегда         | always
-конечно        | of course
-всю            | acc. fem. sing of `all'
-между          | between
-
-
-  | b: some paradigms
-  |
-  | personal pronouns
-  |
-  | я  меня  мне  мной  [мною]
-  | ты  тебя  тебе  тобой  [тобою]
-  | он  его  ему  им  [него, нему, ним]
-  | она  ее  эи  ею  [нее, нэи, нею]
-  | оно  его  ему  им  [него, нему, ним]
-  |
-  | мы  нас  нам  нами
-  | вы  вас  вам  вами
-  | они  их  им  ими  [них, ним, ними]
-  |
-  |   себя  себе  собой   [собою]
-  |
-  | demonstrative pronouns: этот (this), тот (that)
-  |
-  | этот  эта  это  эти
-  | этого  эты  это  эти
-  | этого  этой  этого  этих
-  | этому  этой  этому  этим
-  | этим  этой  этим  [этою]  этими
-  | этом  этой  этом  этих
-  |
-  | тот  та  то  те
-  | того  ту  то  те
-  | того  той  того  тех
-  | тому  той  тому  тем
-  | тем  той  тем  [тою]  теми
-  | том  той  том  тех
-  |
-  | determinative pronouns
-  |
-  | (a) весь (all)
-  |
-  | весь  вся  все  все
-  | всего  всю  все  все
-  | всего  всей  всего  всех
-  | всему  всей  всему  всем
-  | всем  всей  всем  [всею]  всеми
-  | всем  всей  всем  всех
-  |
-  | (b) сам (himself etc)
-  |
-  | сам  сама  само  сами
-  | самого саму  само  самих
-  | самого самой самого  самих
-  | самому самой самому  самим
-  | самим  самой  самим  [самою]  самими
-  | самом самой самом  самих
-  |
-  | stems of verbs `to be', `to have', `to do' and modal
-  |
-  | быть  бы  буд  быв  есть  суть
-  | име
-  | дел
-  | мог   мож  мочь
-  | уме
-  | хоч  хот
-  | долж
-  | можн
-  | нужн
-  | нельзя
-
diff --git a/solr/example/example-DIH/solr/solr/conf/lang/stopwords_sv.txt b/solr/example/example-DIH/solr/solr/conf/lang/stopwords_sv.txt
deleted file mode 100644
index 096f87f..0000000
--- a/solr/example/example-DIH/solr/solr/conf/lang/stopwords_sv.txt
+++ /dev/null
@@ -1,133 +0,0 @@
- | From svn.tartarus.org/snowball/trunk/website/algorithms/swedish/stop.txt
- | This file is distributed under the BSD License.
- | See http://snowball.tartarus.org/license.php
- | Also see http://www.opensource.org/licenses/bsd-license.html
- |  - Encoding was converted to UTF-8.
- |  - This notice was added.
- |
- | NOTE: To use this file with StopFilterFactory, you must specify format="snowball"
-
- | A Swedish stop word list. Comments begin with vertical bar. Each stop
- | word is at the start of a line.
-
- | This is a ranked list (commonest to rarest) of stopwords derived from
- | a large text sample.
-
- | Swedish stop words occasionally exhibit homonym clashes. For example
- |  så = so, but also seed. These are indicated clearly below.
-
-och            | and
-det            | it, this/that
-att            | to (with infinitive)
-i              | in, at
-en             | a
-jag            | I
-hon            | she
-som            | who, that
-han            | he
-på             | on
-den            | it, this/that
-med            | with
-var            | where, each
-sig            | him(self) etc
-för            | for
-så             | so (also: seed)
-till           | to
-är             | is
-men            | but
-ett            | a
-om             | if; around, about
-hade           | had
-de             | they, these/those
-av             | of
-icke           | not, no
-mig            | me
-du             | you
-henne          | her
-då             | then, when
-sin            | his
-nu             | now
-har            | have
-inte           | inte någon = no one
-hans           | his
-honom          | him
-skulle         | 'sake'
-hennes         | her
-där            | there
-min            | my
-man            | one (pronoun)
-ej             | nor
-vid            | at, by, on (also: vast)
-kunde          | could
-något          | some etc
-från           | from, off
-ut             | out
-när            | when
-efter          | after, behind
-upp            | up
-vi             | we
-dem            | them
-vara           | be
-vad            | what
-över           | over
-än             | than
-dig            | you
-kan            | can
-sina           | his
-här            | here
-ha             | have
-mot            | towards
-alla           | all
-under          | under (also: wonder)
-någon          | some etc
-eller          | or (else)
-allt           | all
-mycket         | much
-sedan          | since
-ju             | why
-denna          | this/that
-själv          | myself, yourself etc
-detta          | this/that
-åt             | to
-utan           | without
-varit          | was
-hur            | how
-ingen          | no
-mitt           | my
-ni             | you
-bli            | to be, become
-blev           | from bli
-oss            | us
-din            | thy
-dessa          | these/those
-några          | some etc
-deras          | their
-blir           | from bli
-mina           | my
-samma          | (the) same
-vilken         | who, that
-er             | you, your
-sådan          | such a
-vår            | our
-blivit         | from bli
-dess           | its
-inom           | within
-mellan         | between
-sådant         | such a
-varför         | why
-varje          | each
-vilka          | who, that
-ditt           | thy
-vem            | who
-vilket         | who, that
-sitta          | his
-sådana         | such a
-vart           | each
-dina           | thy
-vars           | whose
-vårt           | our
-våra           | our
-ert            | your
-era            | your
-vilkas         | whose
-
diff --git a/solr/example/example-DIH/solr/solr/conf/lang/stopwords_th.txt b/solr/example/example-DIH/solr/solr/conf/lang/stopwords_th.txt
deleted file mode 100644
index 07f0fab..0000000
--- a/solr/example/example-DIH/solr/solr/conf/lang/stopwords_th.txt
+++ /dev/null
@@ -1,119 +0,0 @@
-# Thai stopwords from:
-# "Opinion Detection in Thai Political News Columns
-# Based on Subjectivity Analysis"
-# Khampol Sukhum, Supot Nitsuwat, and Choochart Haruechaiyasak
-ไว้
-ไม่
-ไป
-ได้
-ให้
-ใน
-โดย
-แห่ง
-แล้ว
-และ
-แรก
-แบบ
-แต่
-เอง
-เห็น
-เลย
-เริ่ม
-เรา
-เมื่อ
-เพื่อ
-เพราะ
-เป็นการ
-เป็น
-เปิดเผย
-เปิด
-เนื่องจาก
-เดียวกัน
-เดียว
-เช่น
-เฉพาะ
-เคย
-เข้า
-เขา
-อีก
-อาจ
-อะไร
-ออก
-อย่าง
-อยู่
-อยาก
-หาก
-หลาย
-หลังจาก
-หลัง
-หรือ
-หนึ่ง
-ส่วน
-ส่ง
-สุด
-สําหรับ
-ว่า
-วัน
-ลง
-ร่วม
-ราย
-รับ
-ระหว่าง
-รวม
-ยัง
-มี
-มาก
-มา
-พร้อม
-พบ
-ผ่าน
-ผล
-บาง
-น่า
-นี้
-นํา
-นั้น
-นัก
-นอกจาก
-ทุก
-ที่สุด
-ที่
-ทําให้
-ทํา
-ทาง
-ทั้งนี้
-ทั้ง
-ถ้า
-ถูก
-ถึง
-ต้อง
-ต่างๆ
-ต่าง
-ต่อ
-ตาม
-ตั้งแต่
-ตั้ง
-ด้าน
-ด้วย
-ดัง
-ซึ่ง
-ช่วง
-จึง
-จาก
-จัด
-จะ
-คือ
-ความ
-ครั้ง
-คง
-ขึ้น
-ของ
-ขอ
-ขณะ
-ก่อน
-ก็
-การ
-กับ
-กัน
-กว่า
-กล่าว
diff --git a/solr/example/example-DIH/solr/solr/conf/lang/stopwords_tr.txt b/solr/example/example-DIH/solr/solr/conf/lang/stopwords_tr.txt
deleted file mode 100644
index 84d9408..0000000
--- a/solr/example/example-DIH/solr/solr/conf/lang/stopwords_tr.txt
+++ /dev/null
@@ -1,212 +0,0 @@
-# Turkish stopwords from LUCENE-559
-# merged with the list from "Information Retrieval on Turkish Texts"
-#   (http://www.users.muohio.edu/canf/papers/JASIST2008offPrint.pdf)
-acaba
-altmış
-altı
-ama
-ancak
-arada
-aslında
-ayrıca
-bana
-bazı
-belki
-ben
-benden
-beni
-benim
-beri
-beş
-bile
-bin
-bir
-birçok
-biri
-birkaç
-birkez
-birşey
-birşeyi
-biz
-bize
-bizden
-bizi
-bizim
-böyle
-böylece
-bu
-buna
-bunda
-bundan
-bunlar
-bunları
-bunların
-bunu
-bunun
-burada
-çok
-çünkü
-da
-daha
-dahi
-de
-defa
-değil
-diğer
-diye
-doksan
-dokuz
-dolayı
-dolayısıyla
-dört
-edecek
-eden
-ederek
-edilecek
-ediliyor
-edilmesi
-ediyor
-eğer
-elli
-en
-etmesi
-etti
-ettiği
-ettiğini
-gibi
-göre
-halen
-hangi
-hatta
-hem
-henüz
-hep
-hepsi
-her
-herhangi
-herkesin
-hiç
-hiçbir
-için
-iki
-ile
-ilgili
-ise
-işte
-itibaren
-itibariyle
-kadar
-karşın
-katrilyon
-kendi
-kendilerine
-kendini
-kendisi
-kendisine
-kendisini
-kez
-ki
-kim
-kimden
-kime
-kimi
-kimse
-kırk
-milyar
-milyon
-mu
-mü
-mı
-nasıl
-ne
-neden
-nedenle
-nerde
-nerede
-nereye
-niye
-niçin
-o
-olan
-olarak
-oldu
-olduğu
-olduğunu
-olduklarını
-olmadı
-olmadığı
-olmak
-olması
-olmayan
-olmaz
-olsa
-olsun
-olup
-olur
-olursa
-oluyor
-on
-ona
-ondan
-onlar
-onlardan
-onları
-onların
-onu
-onun
-otuz
-oysa
-öyle
-pek
-rağmen
-sadece
-sanki
-sekiz
-seksen
-sen
-senden
-seni
-senin
-siz
-sizden
-sizi
-sizin
-şey
-şeyden
-şeyi
-şeyler
-şöyle
-şu
-şuna
-şunda
-şundan
-şunları
-şunu
-tarafından
-trilyon
-tüm
-üç
-üzere
-var
-vardı
-ve
-veya
-ya
-yani
-yapacak
-yapılan
-yapılması
-yapıyor
-yapmak
-yaptı
-yaptığı
-yaptığını
-yaptıkları
-yedi
-yerine
-yetmiş
-yine
-yirmi
-yoksa
-yüz
-zaten
diff --git a/solr/example/example-DIH/solr/solr/conf/lang/userdict_ja.txt b/solr/example/example-DIH/solr/solr/conf/lang/userdict_ja.txt
deleted file mode 100644
index 6f0368e..0000000
--- a/solr/example/example-DIH/solr/solr/conf/lang/userdict_ja.txt
+++ /dev/null
@@ -1,29 +0,0 @@
-#
-# This is a sample user dictionary for Kuromoji (JapaneseTokenizer)
-#
-# Add entries to this file in order to override the statistical model in terms
-# of segmentation, readings and part-of-speech tags.  Notice that entries do
-# not have weights since they are always used when found.  This is by-design
-# in order to maximize ease-of-use.
-#
-# Entries are defined using the following CSV format:
-#  <text>,<token 1> ... <token n>,<reading 1> ... <reading n>,<part-of-speech tag>
-#
-# Notice that a single half-width space separates tokens and readings, and
-# that the number of tokens and readings must match exactly.
-#
-# Also notice that having multiple entries with the same <text> is undefined.
-#
-# Whitespace only lines are ignored.  Comments are not allowed on entry lines.
-#
-
-# Custom segmentation for kanji compounds
-日本経済新聞,日本 経済 新聞,ニホン ケイザイ シンブン,カスタム名詞
-関西国際空港,関西 国際 空港,カンサイ コクサイ クウコウ,カスタム名詞
-
-# Custom segmentation for compound katakana
-トートバッグ,トート バッグ,トート バッグ,かずカナ名詞
-ショルダーバッグ,ショルダー バッグ,ショルダー バッグ,かずカナ名詞
-
-# Custom reading for former sumo wrestler
-朝青龍,朝青龍,アサショウリュウ,カスタム人名
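Note: unlike the stopword lists, the user dictionary deleted above is consumed by the tokenizer itself rather than by a token filter. A minimal sketch of how such a file is typically wired in (field-type name hypothetical):

    <fieldType name="text_ja_userdict" class="solr.TextField" positionIncrementGap="100">
      <analyzer>
        <!-- userDictionary entries override Kuromoji's statistical segmentation -->
        <tokenizer class="solr.JapaneseTokenizerFactory" mode="search"
                   userDictionary="lang/userdict_ja.txt"
                   userDictionaryEncoding="UTF-8"/>
      </analyzer>
    </fieldType>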
diff --git a/solr/example/example-DIH/solr/solr/conf/managed-schema b/solr/example/example-DIH/solr/solr/conf/managed-schema
deleted file mode 100644
index d337bda..0000000
--- a/solr/example/example-DIH/solr/solr/conf/managed-schema
+++ /dev/null
@@ -1,1143 +0,0 @@
-<?xml version="1.0" encoding="UTF-8" ?>
-<!--
- Licensed to the Apache Software Foundation (ASF) under one or more
- contributor license agreements.  See the NOTICE file distributed with
- this work for additional information regarding copyright ownership.
- The ASF licenses this file to You under the Apache License, Version 2.0
- (the "License"); you may not use this file except in compliance with
- the License.  You may obtain a copy of the License at
-
-     http://www.apache.org/licenses/LICENSE-2.0
-
- Unless required by applicable law or agreed to in writing, software
- distributed under the License is distributed on an "AS IS" BASIS,
- WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- See the License for the specific language governing permissions and
- limitations under the License.
--->
-
-<!--  
- This is the Solr schema file. This file should be named "schema.xml" and
- should be in the conf directory under the solr home
- (i.e. ./solr/conf/schema.xml by default) 
- or located where the classloader for the Solr webapp can find it.
-
- This example schema is the recommended starting point for users.
- It should be kept correct and concise, usable out-of-the-box.
-
- For more information on how to customize this file, please see
- http://wiki.apache.org/solr/SchemaXml
-
- PERFORMANCE NOTE: this schema includes many optional features and should not
- be used for benchmarking.  To improve performance one could
-  - set stored="false" for all fields possible (esp large fields) when you
-    only need to search on the field but don't need to return the original
-    value.
-  - set indexed="false" if you don't need to search on the field, but only
-    return the field as a result of searching on other indexed fields.
-  - remove all unneeded copyField statements
-  - for best index size and searching performance, set "index" to false
-    for all general text fields, use copyField to copy them to the
-    catchall "text" field, and use that for searching.
-  - For maximum indexing performance, use the ConcurrentUpdateSolrServer
-    java client.
-  - Remember to run the JVM in server mode, and use a higher logging level
-    that avoids logging every request
--->
-
-<schema name="example-DIH-solr" version="1.6">
-  <!-- attribute "name" is the name of this schema and is only used for display purposes.
-       version="x.y" is Solr's version number for the schema syntax and 
-       semantics.  It should not normally be changed by applications.
-
-       1.0: multiValued attribute did not exist, all fields are multiValued 
-            by nature
-       1.1: multiValued attribute introduced, false by default 
-       1.2: omitTermFreqAndPositions attribute introduced, true by default 
-            except for text fields.
-       1.3: removed optional field compress feature
-       1.4: autoGeneratePhraseQueries attribute introduced to drive QueryParser
-            behavior when a single string produces multiple tokens.  Defaults 
-            to off for version >= 1.4
-       1.5: omitNorms defaults to true for primitive field types 
-            (int, float, boolean, string...)
-       1.6: useDocValuesAsStored defaults to true.            
-     -->
-
-
-    <!-- Valid attributes for fields:
-     name: mandatory - the name for the field
-     type: mandatory - the name of a field type from the 
-       fieldTypes section
-     indexed: true if this field should be indexed (searchable or sortable)
-     stored: true if this field should be retrievable
-     docValues: true if this field should have doc values. Doc values are
-       useful (required, if you are using *Point fields) for faceting, 
-       grouping, sorting and function queries. Doc values will make the index 
-       faster to load, more NRT-friendly and more memory-efficient. 
-       They however come with some limitations: they are currently only 
-       supported by StrField, UUIDField, all *PointFields, and depending
-       on the field type, they might require the field to be single-valued,
-       be required or have a default value (check the documentation
-       of the field type you're interested in for more information)
-     multiValued: true if this field may contain multiple values per document
-     omitNorms: (expert) set to true to omit the norms associated with
-       this field (this disables length normalization and index-time
-       boosting for the field, and saves some memory).  Only full-text
-       fields or fields that need an index-time boost need norms.
-       Norms are omitted for primitive (non-analyzed) types by default.
-     termVectors: [false] set to true to store the term vector for a
-       given field.
-       When using MoreLikeThis, fields used for similarity should be
-       stored for best performance.
-     termPositions: Store position information with the term vector.  
-       This will increase storage costs.
-     termOffsets: Store offset information with the term vector. This 
-       will increase storage costs.
-     required: The field is required.  It will throw an error if the
-       value does not exist
-     default: a value that should be used if no value is specified
-       when adding a document.
-    -->
-
-   <!-- field names should consist of alphanumeric or underscore characters only and
-      not start with a digit.  This is not currently strictly enforced,
-      but other field names will not have first class support from all components
-      and back compatibility is not guaranteed.  Names with both leading and
-      trailing underscores (e.g. _version_) are reserved.
-   -->
-
-   <!-- If you remove this field, you must _also_ disable the update log in solrconfig.xml
-      or Solr won't start. _version_ and update log are required for SolrCloud
-   --> 
-   <field name="_version_" type="plong" indexed="true" stored="true"/>
-   
-   <!-- points to the root document of a block of nested documents. Required for nested
-      document support, may be removed otherwise
-   -->
-   <field name="_root_" type="string" indexed="true" stored="false"/>
-
-   <!-- Only remove the "id" field if you have a very good reason to. While not strictly
-     required, it is highly recommended. A <uniqueKey> is present in almost all Solr 
-     installations. See the <uniqueKey> declaration below, which is set to "id".
-   -->   
-   <field name="id" type="string" indexed="true" stored="true" required="true" multiValued="false" /> 
-        
-   <field name="sku" type="text_en_splitting_tight" indexed="true" stored="true" omitNorms="true"/>
-   <field name="name" type="text_general" indexed="true" stored="true"/>
-   <field name="manu" type="text_general" indexed="true" stored="true" omitNorms="true"/>
-   <field name="cat" type="string" indexed="true" stored="true" multiValued="true"/>
-   <field name="features" type="text_general" indexed="true" stored="true" multiValued="true"/>
-   <field name="includes" type="text_general" indexed="true" stored="true" termVectors="true" termPositions="true" termOffsets="true" />
-
-   <field name="weight" type="pfloat" indexed="true" stored="true"/>
-   <field name="price"  type="pfloat" indexed="true" stored="true"/>
-   <field name="popularity" type="pint" indexed="true" stored="true" />
-   <field name="inStock" type="boolean" indexed="true" stored="true" />
-
-   <field name="store" type="location" indexed="true" stored="true"/>
-
-   <!-- Common metadata fields, named specifically to match up with
-     SolrCell metadata when parsing rich documents such as Word, PDF.
-     Some fields are multiValued only because Tika currently may return
-     multiple values for them. Some metadata is parsed from the documents,
-     but some of it comes from the client context:
-       "content_type": From the HTTP headers of incoming stream
-       "resourcename": From SolrCell request param resource.name
-   -->
-   <field name="title" type="text_general" indexed="true" stored="true" multiValued="true"/>
-   <field name="subject" type="text_general" indexed="true" stored="true"/>
-   <field name="description" type="text_general" indexed="true" stored="true"/>
-   <field name="comments" type="text_general" indexed="true" stored="true"/>
-   <field name="author" type="text_general" indexed="true" stored="true"/>
-   <field name="keywords" type="text_general" indexed="true" stored="true"/>
-   <field name="category" type="text_general" indexed="true" stored="true"/>
-   <field name="resourcename" type="text_general" indexed="true" stored="true"/>
-   <field name="url" type="text_general" indexed="true" stored="true"/>
-   <field name="content_type" type="string" indexed="true" stored="true" multiValued="true"/>
-   <field name="last_modified" type="pdate" indexed="true" stored="true"/>
-   <field name="links" type="string" indexed="true" stored="true" multiValued="true"/>
-
-   <!-- Main body of document extracted by SolrCell.
-        NOTE: This field is not indexed by default, since it is also copied to "text"
-        using copyField below. This is to save space. Use this field for returning and
-        highlighting document content. Use the "text" field to search the content. -->
-   <field name="content" type="text_general" indexed="false" stored="true" multiValued="true"/>
-   
-
-   <!-- catchall field, containing all other searchable text fields (implemented
-        via copyField further on in this schema)  -->
-   <field name="text" type="text_general" indexed="true" stored="false" multiValued="true"/>
-
-   <!-- catchall text field that indexes tokens both normally and in reverse for efficient
-        leading wildcard queries. -->
-   <field name="text_rev" type="text_general_rev" indexed="true" stored="false" multiValued="true"/>
-
-   <!-- non-tokenized version of manufacturer to make it easier to sort or group
-        results by manufacturer.  copied from "manu" via copyField -->
-   <field name="manu_exact" type="string" indexed="true" stored="false"/>
-
-   <field name="payloads" type="payloads" indexed="true" stored="true"/>
-
-
-   <!--
-     Some fields such as popularity and manu_exact could be modified to
-     leverage doc values:
-     <field name="popularity" type="pint" indexed="true" stored="true" docValues="true" />
-     <field name="manu_exact" type="string" indexed="false" stored="false" docValues="true" />
-     <field name="cat" type="string" indexed="true" stored="true" docValues="true" multiValued="true"/>
-
-
-     Although it would make indexing slightly slower and the index bigger, it
-     would also make the index faster to load, more memory-efficient and more
-     NRT-friendly.
-     -->
-
-   <!-- Dynamic field definitions allow using convention over configuration
-       for fields via the specification of patterns to match field names.
-       EXAMPLE:  name="*_i" will match any field ending in _i (like myid_i, z_i)
-       RESTRICTION: the glob-like pattern in the name attribute must have
-       a "*" only at the start or the end.  -->
-   
-   <dynamicField name="*_i"  type="pint"    indexed="true"  stored="true"/>
-   <dynamicField name="*_is" type="pint"    indexed="true"  stored="true"  multiValued="true"/>
-   <dynamicField name="*_s"  type="string"  indexed="true"  stored="true" />
-   <dynamicField name="*_s_ns"  type="string"  indexed="true"  stored="false" />
-   <dynamicField name="*_ss" type="string"  indexed="true"  stored="true" multiValued="true"/>
-   <dynamicField name="*_l"  type="plong"   indexed="true"  stored="true"/>
-   <dynamicField name="*_l_ns"  type="plong"   indexed="true"  stored="false"/>
-   <dynamicField name="*_ls" type="plong"   indexed="true"  stored="true"  multiValued="true"/>
-   <dynamicField name="*_t"  type="text_general"    indexed="true"  stored="true"/>
-   <dynamicField name="*_txt" type="text_general"   indexed="true"  stored="true" multiValued="true"/>
-   <dynamicField name="*_en"  type="text_en"    indexed="true"  stored="true" multiValued="true"/>
-   <dynamicField name="*_b"  type="boolean" indexed="true" stored="true"/>
-   <dynamicField name="*_bs" type="boolean" indexed="true" stored="true"  multiValued="true"/>
-   <dynamicField name="*_f"  type="pfloat"  indexed="true"  stored="true"/>
-   <dynamicField name="*_fs" type="pfloat"  indexed="true"  stored="true"  multiValued="true"/>
-   <dynamicField name="*_d"  type="pdouble" indexed="true"  stored="true"/>
-   <dynamicField name="*_ds" type="pdouble" indexed="true"  stored="true"  multiValued="true"/>
-
-   <!-- Type used to index the lat and lon components for the "location" FieldType -->
-   <dynamicField name="*_coordinate"  type="pdouble" indexed="true"  stored="false" />
-
-   <dynamicField name="*_dt"  type="pdate"    indexed="true"  stored="true"/>
-   <dynamicField name="*_dts" type="pdate"    indexed="true"  stored="true" multiValued="true"/>
-   <dynamicField name="*_p"  type="location" indexed="true" stored="true"/>
-
-   <dynamicField name="*_c"   type="currency" indexed="true"  stored="true"/>
-
-   <dynamicField name="ignored_*" type="ignored" multiValued="true"/>
-   <dynamicField name="attr_*" type="text_general" indexed="true" stored="true" multiValued="true"/>
-
-   <dynamicField name="random_*" type="random" />
-
-   <!-- uncomment the following to ignore any fields that don't already match an existing 
-        field name or dynamic field, rather than reporting them as an error. 
-        alternately, change the type="ignored" to some other type e.g. "text" if you want 
-        unknown fields indexed and/or stored by default --> 
-   <!--dynamicField name="*" type="ignored" multiValued="true" /-->
-   
-
-
-
- <!-- Field to use to determine and enforce document uniqueness. 
-      Unless this field is marked with required="false", it will be a required field
-   -->
- <uniqueKey>id</uniqueKey>
-
-  <!-- copyField commands copy one field to another at the time a document
-        is added to the index.  It's used either to index the same field differently,
-        or to add multiple fields to the same field for easier/faster searching.  -->
-
-   <copyField source="cat" dest="text"/>
-   <copyField source="name" dest="text"/>
-   <copyField source="manu" dest="text"/>
-   <copyField source="features" dest="text"/>
-   <copyField source="includes" dest="text"/>
-   <copyField source="manu" dest="manu_exact"/>
-
-   <!-- Copy the price into a currency enabled field (default USD) -->
-   <copyField source="price" dest="price_c"/>
-
-   <!-- Text fields from SolrCell to search by default in our catch-all field -->
-   <copyField source="title" dest="text"/>
-   <copyField source="author" dest="text"/>
-   <copyField source="description" dest="text"/>
-   <copyField source="keywords" dest="text"/>
-   <copyField source="content" dest="text"/>
-   <copyField source="content_type" dest="text"/>
-   <copyField source="resourcename" dest="text"/>
-   <copyField source="url" dest="text"/>
-
-   <!-- Create a string version of author for faceting -->
-   <copyField source="author" dest="author_s"/>
-
-   <!-- Above, multiple source fields are copied to the [text] field.
-    Another way to map multiple source fields to the same
-    destination field is to use the dynamic field syntax.
-    copyField also supports a maxChars to copy setting.  -->
-
-   <!-- <copyField source="*_t" dest="text" maxChars="3000"/> -->
-
-   <!-- copy name to alphaNameSort, a field designed for sorting by name -->
-   <!-- <copyField source="name" dest="alphaNameSort"/> -->
-
-  
-    <!-- field type definitions. The "name" attribute is
-       just a label to be used by field definitions.  The "class"
-       attribute and any other attributes determine the real
-       behavior of the fieldType.
-         Class names starting with "solr" refer to java classes in a
-       standard package such as org.apache.solr.analysis
-    -->
-
-    <!-- The StrField type is not analyzed, but indexed/stored verbatim. -->
-    <fieldType name="string" class="solr.StrField" sortMissingLast="true" />
-
-    <!-- boolean type: "true" or "false" -->
-    <fieldType name="boolean" class="solr.BoolField" sortMissingLast="true"/>
-
-    <!-- sortMissingLast and sortMissingFirst are optional attributes currently
-         supported on types that are sorted internally as strings
-         and on numeric types.
-         This includes "string", "boolean", "pint", "pfloat", "plong", "pdate", "pdouble".
-       - If sortMissingLast="true", then a sort on this field will cause documents
-         without the field to come after documents with the field,
-         regardless of the requested sort order (asc or desc).
-       - If sortMissingFirst="true", then a sort on this field will cause documents
-         without the field to come before documents with the field,
-         regardless of the requested sort order.
-       - If sortMissingLast="false" and sortMissingFirst="false" (the default),
-         then default lucene sorting will be used which places docs without the
-         field first in an ascending sort and last in a descending sort.
-    -->
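-
-    <!-- An illustrative (commented-out) type showing the effect; the name
-         "string_missing_first" is hypothetical.  Sorting on a field of this
-         type places documents without a value first, whether asc or desc:
-    <fieldType name="string_missing_first" class="solr.StrField" sortMissingFirst="true"/>
-    -->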
-
-    <!--
-      Numeric field types that index values using KD-trees.
-      Point fields don't support FieldCache, so they must have docValues="true" if needed for sorting, faceting, functions, etc.
-    -->
-    <fieldType name="pint" class="solr.IntPointField" docValues="true"/>
-    <fieldType name="pfloat" class="solr.FloatPointField" docValues="true"/>
-    <fieldType name="plong" class="solr.LongPointField" docValues="true"/>
-    <fieldType name="pdouble" class="solr.DoublePointField" docValues="true"/>
-    
-    <fieldType name="pints" class="solr.IntPointField" docValues="true" multiValued="true"/>
-    <fieldType name="pfloats" class="solr.FloatPointField" docValues="true" multiValued="true"/>
-    <fieldType name="plongs" class="solr.LongPointField" docValues="true" multiValued="true"/>
-    <fieldType name="pdoubles" class="solr.DoublePointField" docValues="true" multiValued="true"/>
-
-    <!-- The format for this date field is of the form 1995-12-31T23:59:59Z, and
-         is a more restricted form of the canonical representation of dateTime
-         http://www.w3.org/TR/xmlschema-2/#dateTime    
-         The trailing "Z" designates UTC time and is mandatory.
-         Optional fractional seconds are allowed: 1995-12-31T23:59:59.999Z
-         All other components are mandatory.
-
-         Expressions can also be used to denote calculations that should be
-         performed relative to "NOW" to determine the value, e.g.:
-
-               NOW/HOUR
-                  ... Round to the start of the current hour
-               NOW-1DAY
-                  ... Exactly 1 day prior to now
-               NOW/DAY+6MONTHS+3DAYS
-                  ... 6 months and 3 days in the future from the start of
-                      the current day
-                      
-         Consult the DatePointField javadocs for more information.
-      -->
-    <!-- KD-tree versions of date fields -->
-    <fieldType name="pdate" class="solr.DatePointField" docValues="true"/>
-    <fieldType name="pdates" class="solr.DatePointField" docValues="true" multiValued="true"/>
-    
-    <!-- Binary data type. The data should be sent/retrieved as Base64 encoded Strings -->
-    <fieldType name="binary" class="solr.BinaryField"/>
-
-    <!-- The "RandomSortField" is not used to store or search any
-         data.  You can declare fields of this type in your schema
-         to generate pseudo-random orderings of your docs for sorting 
-         or function purposes.  The ordering is generated based on the field
-         name and the version of the index. As long as the index version
-         remains unchanged, and the same field name is reused,
-         the ordering of the docs will be consistent.  
-         If you want different pseudo-random orderings of documents,
-         for the same version of the index, use a dynamicField and
-         change the field name in the request.
-     -->
-    <fieldType name="random" class="solr.RandomSortField" indexed="true" />
-
-    <!-- solr.TextField allows the specification of custom text analyzers
-         specified as a tokenizer and a list of token filters. Different
-         analyzers may be specified for indexing and querying.
-
-         The optional positionIncrementGap puts space between multiple fields of
-         this type on the same document, with the purpose of preventing false phrase
-         matching across fields.
-
-         For more info on customizing your analyzer chain, please see
-         http://wiki.apache.org/solr/AnalyzersTokenizersTokenFilters
-     -->
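-
-    <!-- For instance (illustrative): with positionIncrementGap="100", a
-         multiValued field holding the values "red" and "sofa" will not match
-         the phrase query "red sofa", because 100 phantom positions separate
-         the two values. -->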
-
-    <!-- One can also specify an existing Analyzer class that has a
-         default constructor via the class attribute on the analyzer element.
-         Example:
-    <fieldType name="text_greek" class="solr.TextField">
-      <analyzer class="org.apache.lucene.analysis.el.GreekAnalyzer"/>
-    </fieldType>
-    -->
-
-    <!-- A text field that only splits on whitespace for exact matching of words -->
-    <fieldType name="text_ws" class="solr.TextField" positionIncrementGap="100">
-      <analyzer>
-        <tokenizer name="whitespace"/>
-      </analyzer>
-    </fieldType>
-
-    <!-- A general text field that has reasonable, generic
-         cross-language defaults: it tokenizes with StandardTokenizer,
-         removes stop words from case-insensitive "stopwords.txt"
-         (empty by default), and down cases.  At query time only, it
-         also applies synonyms. -->
-    <fieldType name="text_general" class="solr.TextField" positionIncrementGap="100">
-      <analyzer type="index">
-        <tokenizer name="standard"/>
-        <filter name="stop" ignoreCase="true" words="stopwords.txt" />
-        <!-- in this example, we will only use synonyms at query time
-        <filter name="synonymGraph" synonyms="index_synonyms.txt" ignoreCase="true" expand="false"/>
-        <filter name="flattenGraph"/>
-        -->
-        <filter name="lowercase"/>
-      </analyzer>
-      <analyzer type="query">
-        <tokenizer name="standard"/>
-        <filter name="stop" ignoreCase="true" words="stopwords.txt" />
-        <filter name="synonymGraph" synonyms="synonyms.txt" ignoreCase="true" expand="true"/>
-        <filter name="lowercase"/>
-      </analyzer>
-    </fieldType>
-
-    <!-- A text field with defaults appropriate for English: it
-         tokenizes with StandardTokenizer, removes English stop words
-         (lang/stopwords_en.txt), down cases, protects words from protwords.txt, and
-         finally applies Porter's stemming.  The query time analyzer
-         also applies synonyms from synonyms.txt. -->
-    <fieldType name="text_en" class="solr.TextField" positionIncrementGap="100">
-      <analyzer type="index">
-        <tokenizer name="standard"/>
-        <!-- in this example, we will only use synonyms at query time
-        <filter name="synonymGraph" synonyms="index_synonyms.txt" ignoreCase="true" expand="false"/>
-        <filter name="flattenGraph"/>
-        -->
-        <!-- Case insensitive stop word removal.
-        -->
-        <filter name="stop"
-                ignoreCase="true"
-                words="lang/stopwords_en.txt"
-                />
-        <filter name="lowercase"/>
-  <filter name="englishPossessive"/>
-        <filter name="keywordMarker" protected="protwords.txt"/>
-  <!-- Optionally you may want to use this less aggressive stemmer instead of PorterStemFilterFactory:
-        <filter name="englishMinimalStem"/>
-  -->
-        <filter name="porterStem"/>
-      </analyzer>
-      <analyzer type="query">
-        <tokenizer name="standard"/>
-        <filter name="synonymGraph" synonyms="synonyms.txt" ignoreCase="true" expand="true"/>
-        <filter name="stop"
-                ignoreCase="true"
-                words="lang/stopwords_en.txt"
-                />
-        <filter name="lowercase"/>
-  <filter name="englishPossessive"/>
-        <filter name="keywordMarker" protected="protwords.txt"/>
-  <!-- Optionally you may want to use this less aggressive stemmer instead of PorterStemFilterFactory:
-        <filter name="englishMinimalStem"/>
-  -->
-        <filter name="porterStem"/>
-      </analyzer>
-    </fieldType>
-
-    <!-- A text field with defaults appropriate for English, plus
-         aggressive word-splitting and autophrase features enabled.
-         This field is just like text_en, except it adds
-         WordDelimiterGraphFilter to enable splitting and matching of
-         words on case-change, alpha-numeric boundaries, and
-         non-alphanumeric chars.  This means certain compound word
-         cases will work; for example, the query "wi fi" will match
-         a document containing "WiFi" or "wi-fi".
-    -->
-    <fieldType name="text_en_splitting" class="solr.TextField" positionIncrementGap="100" autoGeneratePhraseQueries="true">
-      <analyzer type="index">
-        <tokenizer name="whitespace"/>
-        <!-- in this example, we will only use synonyms at query time
-        <filter name="synonymGraph" synonyms="index_synonyms.txt" ignoreCase="true" expand="false"/>
-        -->
-        <!-- Case insensitive stop word removal.
-        -->
-        <filter name="stop"
-                ignoreCase="true"
-                words="lang/stopwords_en.txt"
-                />
-        <filter name="wordDelimiterGraph" generateWordParts="1" generateNumberParts="1" catenateWords="1" catenateNumbers="1" catenateAll="0" splitOnCaseChange="1"/>
-        <filter name="lowercase"/>
-        <filter name="keywordMarker" protected="protwords.txt"/>
-        <filter name="porterStem"/>
-        <filter name="flattenGraph" />
-      </analyzer>
-      <analyzer type="query">
-        <tokenizer name="whitespace"/>
-        <filter name="synonymGraph" synonyms="synonyms.txt" ignoreCase="true" expand="true"/>
-        <filter name="stop"
-                ignoreCase="true"
-                words="lang/stopwords_en.txt"
-                />
-        <filter name="wordDelimiterGraph" generateWordParts="1" generateNumberParts="1" catenateWords="0" catenateNumbers="0" catenateAll="0" splitOnCaseChange="1"/>
-        <filter name="lowercase"/>
-        <filter name="keywordMarker" protected="protwords.txt"/>
-        <filter name="porterStem"/>
-      </analyzer>
-    </fieldType>
-
-    <!-- Less flexible matching, but fewer false matches.  Probably not ideal for product names,
-         but may be good for SKUs.  A query can insert dashes in the wrong place and still match. -->
-    <fieldType name="text_en_splitting_tight" class="solr.TextField" positionIncrementGap="100" autoGeneratePhraseQueries="true">
-      <analyzer type="index">
-        <tokenizer name="whitespace"/>
-        <filter name="synonymGraph" synonyms="synonyms.txt" ignoreCase="true" expand="false"/>
-        <filter name="stop" ignoreCase="true" words="lang/stopwords_en.txt"/>
-        <filter name="wordDelimiterGraph" generateWordParts="0" generateNumberParts="0" catenateWords="1" catenateNumbers="1" catenateAll="0"/>
-        <filter name="lowercase"/>
-        <filter name="keywordMarker" protected="protwords.txt"/>
-        <filter name="englishMinimalStem"/>
-        <!-- this filter can remove any duplicate tokens that appear at the same position - sometimes
-             possible with WordDelimiterGraphFilter in conjunction with stemming. -->
-        <filter name="removeDuplicates"/>
-        <filter name="flattenGraph" />
-      </analyzer>
-      <analyzer type="query">
-        <tokenizer name="whitespace"/>
-        <filter name="synonymGraph" synonyms="synonyms.txt" ignoreCase="true" expand="false"/>
-        <filter name="stop" ignoreCase="true" words="lang/stopwords_en.txt"/>
-        <filter name="wordDelimiterGraph" generateWordParts="0" generateNumberParts="0" catenateWords="1" catenateNumbers="1" catenateAll="0"/>
-        <filter name="lowercase"/>
-        <filter name="keywordMarker" protected="protwords.txt"/>
-        <filter name="englishMinimalStem"/>
-        <!-- this filter can remove any duplicate tokens that appear at the same position - sometimes
-             possible with WordDelimiterGraphFilter in conjunction with stemming. -->
-        <filter name="removeDuplicates"/>
-      </analyzer>
-    </fieldType>
-
-    <!-- Just like text_general except it reverses the characters of
-         each token, to enable more efficient leading wildcard queries. -->
-    <fieldType name="text_general_rev" class="solr.TextField" positionIncrementGap="100">
-      <analyzer type="index">
-        <tokenizer name="standard"/>
-        <filter name="stop" ignoreCase="true" words="stopwords.txt" />
-        <filter name="lowercase"/>
-        <filter name="reversedWildcard" withOriginal="true"
-           maxPosAsterisk="3" maxPosQuestion="2" maxFractionAsterisk="0.33"/>
-      </analyzer>
-      <analyzer type="query">
-        <tokenizer name="standard"/>
-        <filter name="synonymGraph" synonyms="synonyms.txt" ignoreCase="true" expand="true"/>
-        <filter name="stop" ignoreCase="true" words="stopwords.txt" />
-        <filter name="lowercase"/>
-      </analyzer>
-    </fieldType>
-
-    <!-- charFilter + WhitespaceTokenizer  -->
-    <!--
-    <fieldType name="text_char_norm" class="solr.TextField" positionIncrementGap="100" >
-      <analyzer>
-        <charFilter name="mapping" mapping="mapping-ISOLatin1Accent.txt"/>
-        <tokenizer name="whitespace"/>
-      </analyzer>
-    </fieldType>
-    -->
-
-    <!-- This is an example of using the KeywordTokenizer along
-         with various TokenFilterFactories to produce a sortable field
-         that does not include some properties of the source text
-      -->
-    <fieldType name="alphaOnlySort" class="solr.TextField" sortMissingLast="true" omitNorms="true">
-      <analyzer>
-        <!-- KeywordTokenizer does no actual tokenizing, so the entire
-             input string is preserved as a single token
-          -->
-        <tokenizer name="keyword"/>
-        <!-- The LowerCase TokenFilter does what you expect, which can be
-             useful when you want your sorting to be case insensitive
-          -->
-        <filter name="lowercase" />
-        <!-- The TrimFilter removes any leading or trailing whitespace -->
-        <filter name="trim" />
-        <!-- The PatternReplaceFilter gives you the flexibility to use
-             Java Regular expression to replace any sequence of characters
-             matching a pattern with an arbitrary replacement string, 
-             which may include back references to portions of the original
-             string matched by the pattern.
-             
-             See the Java Regular Expression documentation for more
-             information on pattern and replacement string syntax.
-             
-             http://docs.oracle.com/javase/8/docs/api/java/util/regex/package-summary.html
-          -->
-        <filter name="patternReplace"
-                pattern="([^a-z])" replacement="" replace="all"
-        />
-      </analyzer>
-    </fieldType>
-    
-    <fieldType name="phonetic" stored="false" indexed="true" class="solr.TextField" >
-      <analyzer>
-        <tokenizer name="standard"/>
-        <filter name="doubleMetaphone" inject="false"/>
-      </analyzer>
-    </fieldType>
-
-    <fieldType name="payloads" stored="false" indexed="true" class="solr.TextField" >
-      <analyzer>
-        <tokenizer name="whitespace"/>
-        <!--
-        The DelimitedPayloadTokenFilter can put payloads on tokens... for example,
-        a token of "foo|1.4" would be indexed as "foo" with a payload of 1.4f.
-        Attributes of the DelimitedPayloadTokenFilterFactory:
-          "delimiter" - a one character delimiter. Default is | (pipe)
-          "encoder" - how to encode the following value into a payload:
-            float -> org.apache.lucene.analysis.payloads.FloatEncoder,
-            integer -> o.a.l.a.p.IntegerEncoder
-            identity -> o.a.l.a.p.IdentityEncoder
-            or a fully qualified class name implementing PayloadEncoder; the encoder must have a no-arg constructor.
-         -->
-        <filter name="delimitedPayload" encoder="float"/>
-      </analyzer>
-    </fieldType>
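-
-    <!-- Illustrative input for the "payloads" field above (values made up):
-         the raw value "important|2.0 routine|0.5" indexes the tokens
-         "important" and "routine" with float payloads 2.0 and 0.5, which can
-         then be read at query time, e.g. via the payload() function query. -->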
-
-    <!-- lowercases the entire field value, keeping it as a single token.  -->
-    <fieldType name="lowercase" class="solr.TextField" positionIncrementGap="100">
-      <analyzer>
-        <tokenizer name="keyword"/>
-        <filter name="lowercase" />
-      </analyzer>
-    </fieldType>
-
-    <!-- 
-      Example of using PathHierarchyTokenizerFactory at index time, so
-      queries for paths match documents at that path, or in descendent paths
-    -->
-    <fieldType name="descendent_path" class="solr.TextField">
-      <analyzer type="index">
-  <tokenizer name="pathHierarchy" delimiter="/" />
-      </analyzer>
-      <analyzer type="query">
-  <tokenizer name="keyword" />
-      </analyzer>
-    </fieldType>
-    <!-- 
-      Example of using PathHierarchyTokenizerFactory at query time, so
-      queries for paths match documents at that path, or in ancestor paths
-    -->
-    <fieldType name="ancestor_path" class="solr.TextField">
-      <analyzer type="index">
-  <tokenizer name="keyword" />
-      </analyzer>
-      <analyzer type="query">
-  <tokenizer name="pathHierarchy" delimiter="/" />
-      </analyzer>
-    </fieldType>
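-
-    <!-- Illustrative behavior of the two path types (values made up): with
-         descendent_path, a document indexed with "/a/b/c" matches a query
-         for "/a"; with ancestor_path, a query for "/a/b/c" matches documents
-         indexed with "/a" or "/a/b". -->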
-
-    <!-- since fields of this type are by default not stored or indexed,
-         any data added to them will be ignored outright.  --> 
-    <fieldType name="ignored" stored="false" indexed="false" multiValued="true" class="solr.StrField" />
-
-    <!-- This point type indexes the coordinates as separate fields (subFields)
-      If subFieldType is defined, it references a type, and a dynamic field
-      definition is created matching *___<typename>.  Alternately, if 
-      subFieldSuffix is defined, that is used to create the subFields.
-      Example: if subFieldType="double", then the coordinates would be
-        indexed in fields myloc_0___double,myloc_1___double.
-      Example: if subFieldSuffix="_d" then the coordinates would be indexed
-        in fields myloc_0_d,myloc_1_d
-      The subFields are an implementation detail of the fieldType, and end
-      users normally should not need to know about them.
-     -->
-    <fieldType name="point" class="solr.PointType" dimension="2" subFieldSuffix="_d"/>
-
-    <!-- A specialized field for geospatial search. If indexed, this fieldType must not be multivalued. -->
-    <fieldType name="location" class="solr.LatLonType" subFieldSuffix="_coordinate"/>
-
-    <!-- An alternative geospatial field type new to Solr 4.  It supports multiValued and polygon shapes.
-      For more information about this and other Spatial fields new to Solr 4, see:
-      http://wiki.apache.org/solr/SolrAdaptersForLuceneSpatial4
-    -->
-    <fieldType name="location_rpt" class="solr.SpatialRecursivePrefixTreeFieldType"
-        geo="true" distErrPct="0.025" maxDistErr="0.001" distanceUnits="kilometers" />
-
-   <!-- Money/currency field type. See http://wiki.apache.org/solr/MoneyFieldType
-        Parameters:
-          amountLongSuffix: Required. Refers to a dynamic field for the raw amount sub-field. 
-                              The dynamic field must have a field type that extends LongValueFieldType.
-                              Note: If you expect to use Atomic Updates, this dynamic field may not be stored.
-          codeStrSuffix:    Required. Refers to a dynamic field for the currency code sub-field.
-                              The dynamic field must have a field type that extends StrField.
-                              Note: If you expect to use Atomic Updates, this dynamic field may not be stored.
-          defaultCurrency:  Specifies the default currency if none specified. Defaults to "USD"
-          providerClass:    Lets you plug in other exchange provider backend:
-                            solr.FileExchangeRateProvider is the default and takes one parameter:
-                              currencyConfig: name of an xml file holding exchange rates
-                            solr.OpenExchangeRatesOrgProvider uses rates from openexchangerates.org:
-                              ratesFileLocation: URL or path to rates JSON file (default latest.json on the web)
-                              refreshInterval: Number of minutes between each rates fetch (default: 1440, min: 60)
-   -->
-    <fieldType name="currency" class="solr.CurrencyFieldType" amountLongSuffix="_l_ns" codeStrSuffix="_s_ns"
-               defaultCurrency="USD" currencyConfig="currency.xml" />
-
-
-   <!-- some examples for different languages (generally ordered by ISO code) -->
-
-    <!-- Arabic -->
-    <fieldType name="text_ar" class="solr.TextField" positionIncrementGap="100">
-      <analyzer> 
-        <tokenizer name="standard"/>
-        <!-- for any non-arabic -->
-        <filter name="lowercase"/>
-        <filter name="stop" ignoreCase="true" words="lang/stopwords_ar.txt" />
-        <!-- normalizes ﻯ to ﻱ, etc -->
-        <filter name="arabicNormalization"/>
-        <filter name="arabicStem"/>
-      </analyzer>
-    </fieldType>
-
-    <!-- Bulgarian -->
-    <fieldType name="text_bg" class="solr.TextField" positionIncrementGap="100">
-      <analyzer> 
-        <tokenizer name="standard"/> 
-        <filter name="lowercase"/>
-        <filter name="stop" ignoreCase="true" words="lang/stopwords_bg.txt" /> 
-        <filter name="bulgarianStem"/>
-      </analyzer>
-    </fieldType>
-    
-    <!-- Catalan -->
-    <fieldType name="text_ca" class="solr.TextField" positionIncrementGap="100">
-      <analyzer> 
-        <tokenizer name="standard"/>
-        <!-- removes l', etc -->
-        <filter name="elision" ignoreCase="true" articles="lang/contractions_ca.txt"/>
-        <filter name="lowercase"/>
-        <filter name="stop" ignoreCase="true" words="lang/stopwords_ca.txt" />
-        <filter name="snowballPorter" language="Catalan"/>       
-      </analyzer>
-    </fieldType>
-    
-    <!-- CJK bigram (see text_ja for a Japanese configuration using morphological analysis) -->
-    <fieldType name="text_cjk" class="solr.TextField" positionIncrementGap="100">
-      <analyzer>
-        <tokenizer name="standard"/>
-        <!-- normalize width before bigramming, so that e.g. half-width dakuten combine  -->
-        <filter name="cjkWidth"/>
-        <!-- for any non-CJK -->
-        <filter name="lowercase"/>
-        <filter name="cjkBigram"/>
-      </analyzer>
-    </fieldType>
-
-    <!-- Kurdish -->
-    <fieldType name="text_ckb" class="solr.TextField" positionIncrementGap="100">
-      <analyzer>
-        <tokenizer name="standard"/>
-        <filter name="soraniNormalization"/>
-        <!-- for any latin text -->
-        <filter name="lowercase"/>
-        <filter name="stop" ignoreCase="true" words="lang/stopwords_ckb.txt"/>
-        <filter name="soraniStem"/>
-      </analyzer>
-    </fieldType>
-
-    <!-- Czech -->
-    <fieldType name="text_cz" class="solr.TextField" positionIncrementGap="100">
-      <analyzer> 
-        <tokenizer name="standard"/>
-        <filter name="lowercase"/>
-        <filter name="stop" ignoreCase="true" words="lang/stopwords_cz.txt" />
-        <filter name="czechStem"/>
-      </analyzer>
-    </fieldType>
-    
-    <!-- Danish -->
-    <fieldType name="text_da" class="solr.TextField" positionIncrementGap="100">
-      <analyzer> 
-        <tokenizer name="standard"/>
-        <filter name="lowercase"/>
-        <filter name="stop" ignoreCase="true" words="lang/stopwords_da.txt" format="snowball" />
-        <filter name="snowballPorter" language="Danish"/>       
-      </analyzer>
-    </fieldType>
-    
-    <!-- German -->
-    <fieldType name="text_de" class="solr.TextField" positionIncrementGap="100">
-      <analyzer> 
-        <tokenizer name="standard"/>
-        <filter name="lowercase"/>
-        <filter name="stop" ignoreCase="true" words="lang/stopwords_de.txt" format="snowball" />
-        <filter name="germanNormalization"/>
-        <filter name="germanLightStem"/>
-        <!-- less aggressive: <filter name="germanMinimalStem"/> -->
-        <!-- more aggressive: <filter name="snowballPorter" language="German2"/> -->
-      </analyzer>
-    </fieldType>
-    
-    <!-- Greek -->
-    <fieldType name="text_el" class="solr.TextField" positionIncrementGap="100">
-      <analyzer> 
-        <tokenizer name="standard"/>
-        <!-- greek specific lowercase for sigma -->
-        <filter name="greekLowercase"/>
-        <filter name="stop" ignoreCase="false" words="lang/stopwords_el.txt" />
-        <filter name="greekStem"/>
-      </analyzer>
-    </fieldType>
-    
-    <!-- Spanish -->
-    <fieldType name="text_es" class="solr.TextField" positionIncrementGap="100">
-      <analyzer> 
-        <tokenizer name="standard"/>
-        <filter name="lowercase"/>
-        <filter name="stop" ignoreCase="true" words="lang/stopwords_es.txt" format="snowball" />
-        <filter name="spanishLightStem"/>
-        <!-- more aggressive: <filter name="snowballPorter" language="Spanish"/> -->
-      </analyzer>
-    </fieldType>
-    
-    <!-- Basque -->
-    <fieldType name="text_eu" class="solr.TextField" positionIncrementGap="100">
-      <analyzer> 
-        <tokenizer name="standard"/>
-        <filter name="lowercase"/>
-        <filter name="stop" ignoreCase="true" words="lang/stopwords_eu.txt" />
-        <filter name="snowballPorter" language="Basque"/>
-      </analyzer>
-    </fieldType>
-    
-    <!-- Persian -->
-    <fieldType name="text_fa" class="solr.TextField" positionIncrementGap="100">
-      <analyzer>
-        <!-- for ZWNJ -->
-        <charFilter name="persian"/>
-        <tokenizer name="standard"/>
-        <filter name="lowercase"/>
-        <filter name="arabicNormalization"/>
-        <filter name="persianNormalization"/>
-        <filter name="stop" ignoreCase="true" words="lang/stopwords_fa.txt" />
-      </analyzer>
-    </fieldType>
-    
-    <!-- Finnish -->
-    <fieldType name="text_fi" class="solr.TextField" positionIncrementGap="100">
-      <analyzer> 
-        <tokenizer name="standard"/>
-        <filter name="lowercase"/>
-        <filter name="stop" ignoreCase="true" words="lang/stopwords_fi.txt" format="snowball" />
-        <filter name="snowballPorter" language="Finnish"/>
-        <!-- less aggressive: <filter name="finnishLightStem"/> -->
-      </analyzer>
-    </fieldType>
-    
-    <!-- French -->
-    <fieldType name="text_fr" class="solr.TextField" positionIncrementGap="100">
-      <analyzer> 
-        <tokenizer name="standard"/>
-        <!-- removes l', etc -->
-        <filter name="elision" ignoreCase="true" articles="lang/contractions_fr.txt"/>
-        <filter name="lowercase"/>
-        <filter name="stop" ignoreCase="true" words="lang/stopwords_fr.txt" format="snowball" />
-        <filter name="frenchLightStem"/>
-        <!-- less aggressive: <filter name="frenchMinimalStem"/> -->
-        <!-- more aggressive: <filter name="snowballPorter" language="French"/> -->
-      </analyzer>
-    </fieldType>
-    
-    <!-- Irish -->
-    <fieldType name="text_ga" class="solr.TextField" positionIncrementGap="100">
-      <analyzer> 
-        <tokenizer name="standard"/>
-        <!-- removes d', etc -->
-        <filter name="elision" ignoreCase="true" articles="lang/contractions_ga.txt"/>
-        <!-- removes n-, etc. -->
-        <filter name="stop" ignoreCase="true" words="lang/hyphenations_ga.txt"/>
-        <filter name="irishLowercase"/>
-        <filter name="stop" ignoreCase="true" words="lang/stopwords_ga.txt"/>
-        <filter name="snowballPorter" language="Irish"/>
-      </analyzer>
-    </fieldType>
-    
-    <!-- Galician -->
-    <fieldType name="text_gl" class="solr.TextField" positionIncrementGap="100">
-      <analyzer> 
-        <tokenizer name="standard"/>
-        <filter name="lowercase"/>
-        <filter name="stop" ignoreCase="true" words="lang/stopwords_gl.txt" />
-        <filter name="galicianStem"/>
-        <!-- less aggressive: <filter name="galicianMinimalStem"/> -->
-      </analyzer>
-    </fieldType>
-    
-    <!-- Hindi -->
-    <fieldType name="text_hi" class="solr.TextField" positionIncrementGap="100">
-      <analyzer> 
-        <tokenizer name="standard"/>
-        <filter name="lowercase"/>
-        <!-- normalizes unicode representation -->
-        <filter name="indicNormalization"/>
-        <!-- normalizes variation in spelling -->
-        <filter name="hindiNormalization"/>
-        <filter name="stop" ignoreCase="true" words="lang/stopwords_hi.txt" />
-        <filter name="hindiStem"/>
-      </analyzer>
-    </fieldType>
-    
-    <!-- Hungarian -->
-    <fieldType name="text_hu" class="solr.TextField" positionIncrementGap="100">
-      <analyzer> 
-        <tokenizer name="standard"/>
-        <filter name="lowercase"/>
-        <filter name="stop" ignoreCase="true" words="lang/stopwords_hu.txt" format="snowball" />
-        <filter name="snowballPorter" language="Hungarian"/>
-        <!-- less aggressive: <filter name="hungarianLightStem"/> -->
-      </analyzer>
-    </fieldType>
-    
-    <!-- Armenian -->
-    <fieldType name="text_hy" class="solr.TextField" positionIncrementGap="100">
-      <analyzer> 
-        <tokenizer name="standard"/>
-        <filter name="lowercase"/>
-        <filter name="stop" ignoreCase="true" words="lang/stopwords_hy.txt" />
-        <filter name="snowballPorter" language="Armenian"/>
-      </analyzer>
-    </fieldType>
-    
-    <!-- Indonesian -->
-    <fieldType name="text_id" class="solr.TextField" positionIncrementGap="100">
-      <analyzer> 
-        <tokenizer name="standard"/>
-        <filter name="lowercase"/>
-        <filter name="stop" ignoreCase="true" words="lang/stopwords_id.txt" />
-        <!-- for a less aggressive approach (only inflectional suffixes), set stemDerivational to false -->
-        <filter name="indonesianStem" stemDerivational="true"/>
-      </analyzer>
-    </fieldType>
-    
-    <!-- Italian -->
-    <fieldType name="text_it" class="solr.TextField" positionIncrementGap="100">
-      <analyzer> 
-        <tokenizer name="standard"/>
-        <!-- removes l', etc -->
-        <filter name="elision" ignoreCase="true" articles="lang/contractions_it.txt"/>
-        <filter name="lowercase"/>
-        <filter name="stop" ignoreCase="true" words="lang/stopwords_it.txt" format="snowball" />
-        <filter name="italianLightStem"/>
-        <!-- more aggressive: <filter name="snowballPorter" language="Italian"/> -->
-      </analyzer>
-    </fieldType>
-    
-    <!-- Japanese using morphological analysis (see text_cjk for a configuration using bigramming)
-
-         NOTE: If you want to optimize search for precision, use default operator AND in your request
-         handler config (q.op).  Use OR if you would like to optimize for recall (default).
-    -->
-    <fieldType name="text_ja" class="solr.TextField" positionIncrementGap="100" autoGeneratePhraseQueries="false">
-      <analyzer>
-      <!-- Kuromoji Japanese morphological analyzer/tokenizer (JapaneseTokenizer)
-
-           Kuromoji has a search mode (default) that does segmentation useful for search.  A heuristic
-           is used to segment compounds into their parts, and the compound itself is kept as a synonym.
-
-           Valid values for attribute mode are:
-              normal: regular segmentation
-              search: segmentation useful for search, with compounds kept as synonyms (default)
-            extended: same as search mode, but also unigrams unknown words (experimental)
-
-           For some applications it might be good to use search mode for indexing and normal mode for
-           queries to reduce recall and prevent parts of compounds from being matched and highlighted.
-           Use <analyzer type="index"> and <analyzer type="query"> for this and mode normal in query.
-
-           Kuromoji also has a convenient user dictionary feature that allows overriding the statistical
-           model with your own entries for segmentation, part-of-speech tags and readings without a need
-           to specify weights.  Notice that user dictionaries have not been subject to extensive testing.
-
-           User dictionary attributes are:
-                     userDictionary: user dictionary filename
-             userDictionaryEncoding: user dictionary encoding (default is UTF-8)
-
-           See lang/userdict_ja.txt for a sample user dictionary file.
-
-           Punctuation characters are discarded by default.  Use discardPunctuation="false" to keep them.
-
-           See http://wiki.apache.org/solr/JapaneseLanguageSupport for more on Japanese language support.
-        -->
-        <tokenizer name="japanese" mode="search"/>
-        <!--<tokenizer name="japanese" mode="search" userDictionary="lang/userdict_ja.txt"/>-->
-        <!-- Reduces inflected verbs and adjectives to their base/dictionary forms (辞書形) -->
-        <filter name="japaneseBaseForm"/>
-        <!-- Removes tokens with certain part-of-speech tags -->
-        <filter name="japanesePartOfSpeechStop" tags="lang/stoptags_ja.txt" />
-        <!-- Normalizes full-width romaji to half-width and half-width kana to full-width (Unicode NFKC subset) -->
-        <filter name="cjkWidth"/>
-        <!-- Removes common tokens typically not useful for search, but have a negative effect on ranking -->
-        <filter name="stop" ignoreCase="true" words="lang/stopwords_ja.txt" />
-        <!-- Normalizes common katakana spelling variations by removing any last long sound character (U+30FC) -->
-        <filter name="japaneseKatakanaStem" minimumLength="4"/>
-        <!-- Lower-cases romaji characters -->
-        <filter name="lowercase"/>
-      </analyzer>
-    </fieldType>
-    
-    <!-- Korean morphological analysis -->
-    <dynamicField name="*_txt_ko" type="text_ko"  indexed="true"  stored="true"/>
-    <fieldType name="text_ko" class="solr.TextField" positionIncrementGap="100">
-      <analyzer>
-        <!-- Nori Korean morphological analyzer/tokenizer (KoreanTokenizer)
-          The Korean (nori) analyzer integrates Lucene nori analysis module into Solr.
-          It uses the mecab-ko-dic dictionary to perform morphological analysis of Korean texts.
-
-          This dictionary was built with MeCab; it defines a feature format adapted
-          to the Korean language.
-          
-          Nori also has a convenient user dictionary feature that allows overriding the statistical
-          model with your own entries for segmentation, part-of-speech tags and readings without a need
-          to specify weights. Notice that user dictionaries have not been subject to extensive testing.
-
-          The tokenizer supports multiple schema attributes:
-            * userDictionary: User dictionary path.
-            * userDictionaryEncoding: User dictionary encoding.
-            * decompoundMode: Decompound mode. Either 'none', 'discard', 'mixed'. Default is 'discard'.
-            * outputUnknownUnigrams: If true outputs unigrams for unknown words.
-        -->
-        <tokenizer name="korean" decompoundMode="discard" outputUnknownUnigrams="false"/>
-        <!-- Removes tokens with certain part-of-speech tags, such as EOMI (Pos.E). You can add a
-          parameter 'tags' listing the tags to remove. By default it removes:
-          E, IC, J, MAG, MAJ, MM, SP, SSC, SSO, SC, SE, XPN, XSA, XSN, XSV, UNA, NA, VSV
-          This is roughly the equivalent of stemming.
-        -->
-        <filter name="koreanPartOfSpeechStop" />
-        <!-- Replaces term text with the Hangul transcription of Hanja characters, if applicable: -->
-        <filter name="koreanReadingForm" />
-        <filter name="lowercase" />
-      </analyzer>
-    </fieldType>
-
-    <!-- Latvian -->
-    <fieldType name="text_lv" class="solr.TextField" positionIncrementGap="100">
-      <analyzer> 
-        <tokenizer name="standard"/>
-        <filter name="lowercase"/>
-        <filter name="stop" ignoreCase="true" words="lang/stopwords_lv.txt" />
-        <filter name="latvianStem"/>
-      </analyzer>
-    </fieldType>
-    
-    <!-- Dutch -->
-    <fieldType name="text_nl" class="solr.TextField" positionIncrementGap="100">
-      <analyzer> 
-        <tokenizer name="standard"/>
-        <filter name="lowercase"/>
-        <filter name="stop" ignoreCase="true" words="lang/stopwords_nl.txt" format="snowball" />
-        <filter name="stemmerOverride" dictionary="lang/stemdict_nl.txt" ignoreCase="false"/>
-        <filter name="snowballPorter" language="Dutch"/>
-      </analyzer>
-    </fieldType>
-    
-    <!-- Norwegian -->
-    <fieldType name="text_no" class="solr.TextField" positionIncrementGap="100">
-      <analyzer> 
-        <tokenizer name="standard"/>
-        <filter name="lowercase"/>
-        <filter name="stop" ignoreCase="true" words="lang/stopwords_no.txt" format="snowball" />
-        <filter name="snowballPorter" language="Norwegian"/>
-        <!-- less aggressive: <filter name="norwegianLightStem" variant="nb"/> -->
-        <!-- singular/plural: <filter name="norwegianMinimalStem" variant="nb"/> -->
-        <!-- The "light" and "minimal" stemmers support variants: nb=Bokmål, nn=Nynorsk, no=Both -->
-      </analyzer>
-    </fieldType>
-    
-    <!-- Portuguese -->
-    <fieldType name="text_pt" class="solr.TextField" positionIncrementGap="100">
-      <analyzer> 
-        <tokenizer name="standard"/>
-        <filter name="lowercase"/>
-        <filter name="stop" ignoreCase="true" words="lang/stopwords_pt.txt" format="snowball" />
-        <filter name="portugueseLightStem"/>
-        <!-- less aggressive: <filter name="portugueseMinimalStem"/> -->
-        <!-- more aggressive: <filter name="snowballPorter" language="Portuguese"/> -->
-        <!-- most aggressive: <filter name="portugueseStem"/> -->
-      </analyzer>
-    </fieldType>
-    
-    <!-- Romanian -->
-    <fieldType name="text_ro" class="solr.TextField" positionIncrementGap="100">
-      <analyzer> 
-        <tokenizer name="standard"/>
-        <filter name="lowercase"/>
-        <filter name="stop" ignoreCase="true" words="lang/stopwords_ro.txt" />
-        <filter name="snowballPorter" language="Romanian"/>
-      </analyzer>
-    </fieldType>
-    
-    <!-- Russian -->
-    <fieldType name="text_ru" class="solr.TextField" positionIncrementGap="100">
-      <analyzer> 
-        <tokenizer name="standard"/>
-        <filter name="lowercase"/>
-        <filter name="stop" ignoreCase="true" words="lang/stopwords_ru.txt" format="snowball" />
-        <filter name="snowballPorter" language="Russian"/>
-        <!-- less aggressive: <filter name="russianLightStem"/> -->
-      </analyzer>
-    </fieldType>
-    
-    <!-- Swedish -->
-    <fieldType name="text_sv" class="solr.TextField" positionIncrementGap="100">
-      <analyzer> 
-        <tokenizer name="standard"/>
-        <filter name="lowercase"/>
-        <filter name="stop" ignoreCase="true" words="lang/stopwords_sv.txt" format="snowball" />
-        <filter name="snowballPorter" language="Swedish"/>
-        <!-- less aggressive: <filter name="swedishLightStem"/> -->
-      </analyzer>
-    </fieldType>
-    
-    <!-- Thai -->
-    <fieldType name="text_th" class="solr.TextField" positionIncrementGap="100">
-      <analyzer> 
-        <tokenizer name="thai"/>
-        <filter name="lowercase"/>
-        <filter name="stop" ignoreCase="true" words="lang/stopwords_th.txt" />
-      </analyzer>
-    </fieldType>
-    
-    <!-- Turkish -->
-    <fieldType name="text_tr" class="solr.TextField" positionIncrementGap="100">
-      <analyzer> 
-        <tokenizer name="standard"/>
-        <filter name="apostrophe"/>
-        <filter name="turkishLowercase"/>
-        <filter name="stop" ignoreCase="false" words="lang/stopwords_tr.txt" />
-        <filter name="snowballPorter" language="Turkish"/>
-      </analyzer>
-    </fieldType>
-  
-  <!-- Similarity is the scoring routine for each document vs. a query.
-       A custom Similarity or SimilarityFactory may be specified here, but 
-       the default is fine for most applications.  
-       For more info: http://wiki.apache.org/solr/SchemaXml#Similarity
-    -->
-  <!--
-     <similarity class="com.example.solr.CustomSimilarityFactory">
-       <str name="paramkey">param value</str>
-     </similarity>
-    -->
-
-</schema>
diff --git a/solr/example/example-DIH/solr/solr/conf/mapping-FoldToASCII.txt b/solr/example/example-DIH/solr/solr/conf/mapping-FoldToASCII.txt
deleted file mode 100644
index 9a84b6e..0000000
--- a/solr/example/example-DIH/solr/solr/conf/mapping-FoldToASCII.txt
+++ /dev/null
@@ -1,3813 +0,0 @@
-# The ASF licenses this file to You under the Apache License, Version 2.0
-# (the "License"); you may not use this file except in compliance with
-# the License.  You may obtain a copy of the License at
-#
-#     http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-
-# This map converts alphabetic, numeric, and symbolic Unicode characters
-# which are not in the first 127 ASCII characters (the "Basic Latin" Unicode
-# block) into their ASCII equivalents, if one exists.
-#
-# Characters from the following Unicode blocks are converted; however, only
-# those characters with reasonable ASCII alternatives are converted:
-#
-# - C1 Controls and Latin-1 Supplement: http://www.unicode.org/charts/PDF/U0080.pdf
-# - Latin Extended-A: http://www.unicode.org/charts/PDF/U0100.pdf
-# - Latin Extended-B: http://www.unicode.org/charts/PDF/U0180.pdf
-# - Latin Extended Additional: http://www.unicode.org/charts/PDF/U1E00.pdf
-# - Latin Extended-C: http://www.unicode.org/charts/PDF/U2C60.pdf
-# - Latin Extended-D: http://www.unicode.org/charts/PDF/UA720.pdf
-# - IPA Extensions: http://www.unicode.org/charts/PDF/U0250.pdf
-# - Phonetic Extensions: http://www.unicode.org/charts/PDF/U1D00.pdf
-# - Phonetic Extensions Supplement: http://www.unicode.org/charts/PDF/U1D80.pdf
-# - General Punctuation: http://www.unicode.org/charts/PDF/U2000.pdf
-# - Superscripts and Subscripts: http://www.unicode.org/charts/PDF/U2070.pdf
-# - Enclosed Alphanumerics: http://www.unicode.org/charts/PDF/U2460.pdf
-# - Dingbats: http://www.unicode.org/charts/PDF/U2700.pdf
-# - Supplemental Punctuation: http://www.unicode.org/charts/PDF/U2E00.pdf
-# - Alphabetic Presentation Forms: http://www.unicode.org/charts/PDF/UFB00.pdf
-# - Halfwidth and Fullwidth Forms: http://www.unicode.org/charts/PDF/UFF00.pdf
-#  
-# See: http://en.wikipedia.org/wiki/Latin_characters_in_Unicode
-#
-# The set of character conversions supported by this map is a superset of
-# those supported by the map represented by mapping-ISOLatin1Accent.txt.
-#
-# See the bottom of this file for the Perl script used to generate the contents
-# of this file (without this header) from ASCIIFoldingFilter.java.
-
-
-# Syntax:
-#   "source" => "target"
-#     "source".length() > 0 (source cannot be empty.)
-#     "target".length() >= 0 (target can be empty.)
-
-
-# À  [LATIN CAPITAL LETTER A WITH GRAVE]
-"\u00C0" => "A"
-
-# Á  [LATIN CAPITAL LETTER A WITH ACUTE]
-"\u00C1" => "A"
-
-# Â  [LATIN CAPITAL LETTER A WITH CIRCUMFLEX]
-"\u00C2" => "A"
-
-# Ã  [LATIN CAPITAL LETTER A WITH TILDE]
-"\u00C3" => "A"
-
-# Ä  [LATIN CAPITAL LETTER A WITH DIAERESIS]
-"\u00C4" => "A"
-
-# Å  [LATIN CAPITAL LETTER A WITH RING ABOVE]
-"\u00C5" => "A"
-
-# Ā  [LATIN CAPITAL LETTER A WITH MACRON]
-"\u0100" => "A"
-
-# Ă  [LATIN CAPITAL LETTER A WITH BREVE]
-"\u0102" => "A"
-
-# Ą  [LATIN CAPITAL LETTER A WITH OGONEK]
-"\u0104" => "A"
-
-# Ə  http://en.wikipedia.org/wiki/Schwa  [LATIN CAPITAL LETTER SCHWA]
-"\u018F" => "A"
-
-# Ǎ  [LATIN CAPITAL LETTER A WITH CARON]
-"\u01CD" => "A"
-
-# Ǟ  [LATIN CAPITAL LETTER A WITH DIAERESIS AND MACRON]
-"\u01DE" => "A"
-
-# Ǡ  [LATIN CAPITAL LETTER A WITH DOT ABOVE AND MACRON]
-"\u01E0" => "A"
-
-# Ǻ  [LATIN CAPITAL LETTER A WITH RING ABOVE AND ACUTE]
-"\u01FA" => "A"
-
-# Ȁ  [LATIN CAPITAL LETTER A WITH DOUBLE GRAVE]
-"\u0200" => "A"
-
-# Ȃ  [LATIN CAPITAL LETTER A WITH INVERTED BREVE]
-"\u0202" => "A"
-
-# Ȧ  [LATIN CAPITAL LETTER A WITH DOT ABOVE]
-"\u0226" => "A"
-
-# Ⱥ  [LATIN CAPITAL LETTER A WITH STROKE]
-"\u023A" => "A"
-
-# ᴀ  [LATIN LETTER SMALL CAPITAL A]
-"\u1D00" => "A"
-
-# Ḁ  [LATIN CAPITAL LETTER A WITH RING BELOW]
-"\u1E00" => "A"
-
-# Ạ  [LATIN CAPITAL LETTER A WITH DOT BELOW]
-"\u1EA0" => "A"
-
-# Ả  [LATIN CAPITAL LETTER A WITH HOOK ABOVE]
-"\u1EA2" => "A"
-
-# Ấ  [LATIN CAPITAL LETTER A WITH CIRCUMFLEX AND ACUTE]
-"\u1EA4" => "A"
-
-# Ầ  [LATIN CAPITAL LETTER A WITH CIRCUMFLEX AND GRAVE]
-"\u1EA6" => "A"
-
-# Ẩ  [LATIN CAPITAL LETTER A WITH CIRCUMFLEX AND HOOK ABOVE]
-"\u1EA8" => "A"
-
-# Ẫ  [LATIN CAPITAL LETTER A WITH CIRCUMFLEX AND TILDE]
-"\u1EAA" => "A"
-
-# Ậ  [LATIN CAPITAL LETTER A WITH CIRCUMFLEX AND DOT BELOW]
-"\u1EAC" => "A"
-
-# Ắ  [LATIN CAPITAL LETTER A WITH BREVE AND ACUTE]
-"\u1EAE" => "A"
-
-# Ằ  [LATIN CAPITAL LETTER A WITH BREVE AND GRAVE]
-"\u1EB0" => "A"
-
-# Ẳ  [LATIN CAPITAL LETTER A WITH BREVE AND HOOK ABOVE]
-"\u1EB2" => "A"
-
-# Ẵ  [LATIN CAPITAL LETTER A WITH BREVE AND TILDE]
-"\u1EB4" => "A"
-
-# Ặ  [LATIN CAPITAL LETTER A WITH BREVE AND DOT BELOW]
-"\u1EB6" => "A"
-
-# Ⓐ  [CIRCLED LATIN CAPITAL LETTER A]
-"\u24B6" => "A"
-
-# A  [FULLWIDTH LATIN CAPITAL LETTER A]
-"\uFF21" => "A"
-
-# à  [LATIN SMALL LETTER A WITH GRAVE]
-"\u00E0" => "a"
-
-# á  [LATIN SMALL LETTER A WITH ACUTE]
-"\u00E1" => "a"
-
-# â  [LATIN SMALL LETTER A WITH CIRCUMFLEX]
-"\u00E2" => "a"
-
-# ã  [LATIN SMALL LETTER A WITH TILDE]
-"\u00E3" => "a"
-
-# ä  [LATIN SMALL LETTER A WITH DIAERESIS]
-"\u00E4" => "a"
-
-# å  [LATIN SMALL LETTER A WITH RING ABOVE]
-"\u00E5" => "a"
-
-# ā  [LATIN SMALL LETTER A WITH MACRON]
-"\u0101" => "a"
-
-# ă  [LATIN SMALL LETTER A WITH BREVE]
-"\u0103" => "a"
-
-# ą  [LATIN SMALL LETTER A WITH OGONEK]
-"\u0105" => "a"
-
-# ǎ  [LATIN SMALL LETTER A WITH CARON]
-"\u01CE" => "a"
-
-# ǟ  [LATIN SMALL LETTER A WITH DIAERESIS AND MACRON]
-"\u01DF" => "a"
-
-# ǡ  [LATIN SMALL LETTER A WITH DOT ABOVE AND MACRON]
-"\u01E1" => "a"
-
-# ǻ  [LATIN SMALL LETTER A WITH RING ABOVE AND ACUTE]
-"\u01FB" => "a"
-
-# ȁ  [LATIN SMALL LETTER A WITH DOUBLE GRAVE]
-"\u0201" => "a"
-
-# ȃ  [LATIN SMALL LETTER A WITH INVERTED BREVE]
-"\u0203" => "a"
-
-# ȧ  [LATIN SMALL LETTER A WITH DOT ABOVE]
-"\u0227" => "a"
-
-# ɐ  [LATIN SMALL LETTER TURNED A]
-"\u0250" => "a"
-
-# ə  [LATIN SMALL LETTER SCHWA]
-"\u0259" => "a"
-
-# ɚ  [LATIN SMALL LETTER SCHWA WITH HOOK]
-"\u025A" => "a"
-
-# ᶏ  [LATIN SMALL LETTER A WITH RETROFLEX HOOK]
-"\u1D8F" => "a"
-
-# ᶕ  [LATIN SMALL LETTER SCHWA WITH RETROFLEX HOOK]
-"\u1D95" => "a"
-
-# ḁ  [LATIN SMALL LETTER A WITH RING BELOW]
-"\u1E01" => "a"
-
-# ẚ  [LATIN SMALL LETTER A WITH RIGHT HALF RING]
-"\u1E9A" => "a"
-
-# ạ  [LATIN SMALL LETTER A WITH DOT BELOW]
-"\u1EA1" => "a"
-
-# ả  [LATIN SMALL LETTER A WITH HOOK ABOVE]
-"\u1EA3" => "a"
-
-# ấ  [LATIN SMALL LETTER A WITH CIRCUMFLEX AND ACUTE]
-"\u1EA5" => "a"
-
-# ầ  [LATIN SMALL LETTER A WITH CIRCUMFLEX AND GRAVE]
-"\u1EA7" => "a"
-
-# ẩ  [LATIN SMALL LETTER A WITH CIRCUMFLEX AND HOOK ABOVE]
-"\u1EA9" => "a"
-
-# ẫ  [LATIN SMALL LETTER A WITH CIRCUMFLEX AND TILDE]
-"\u1EAB" => "a"
-
-# ậ  [LATIN SMALL LETTER A WITH CIRCUMFLEX AND DOT BELOW]
-"\u1EAD" => "a"
-
-# ắ  [LATIN SMALL LETTER A WITH BREVE AND ACUTE]
-"\u1EAF" => "a"
-
-# ằ  [LATIN SMALL LETTER A WITH BREVE AND GRAVE]
-"\u1EB1" => "a"
-
-# ẳ  [LATIN SMALL LETTER A WITH BREVE AND HOOK ABOVE]
-"\u1EB3" => "a"
-
-# ẵ  [LATIN SMALL LETTER A WITH BREVE AND TILDE]
-"\u1EB5" => "a"
-
-# ặ  [LATIN SMALL LETTER A WITH BREVE AND DOT BELOW]
-"\u1EB7" => "a"
-
-# ₐ  [LATIN SUBSCRIPT SMALL LETTER A]
-"\u2090" => "a"
-
-# ₔ  [LATIN SUBSCRIPT SMALL LETTER SCHWA]
-"\u2094" => "a"
-
-# ⓐ  [CIRCLED LATIN SMALL LETTER A]
-"\u24D0" => "a"
-
-# ⱥ  [LATIN SMALL LETTER A WITH STROKE]
-"\u2C65" => "a"
-
-# Ɐ  [LATIN CAPITAL LETTER TURNED A]
-"\u2C6F" => "a"
-
-# a  [FULLWIDTH LATIN SMALL LETTER A]
-"\uFF41" => "a"
-
-# Ꜳ  [LATIN CAPITAL LETTER AA]
-"\uA732" => "AA"
-
-# Æ  [LATIN CAPITAL LETTER AE]
-"\u00C6" => "AE"
-
-# Ǣ  [LATIN CAPITAL LETTER AE WITH MACRON]
-"\u01E2" => "AE"
-
-# Ǽ  [LATIN CAPITAL LETTER AE WITH ACUTE]
-"\u01FC" => "AE"
-
-# ᴁ  [LATIN LETTER SMALL CAPITAL AE]
-"\u1D01" => "AE"
-
-# Ꜵ  [LATIN CAPITAL LETTER AO]
-"\uA734" => "AO"
-
-# Ꜷ  [LATIN CAPITAL LETTER AU]
-"\uA736" => "AU"
-
-# Ꜹ  [LATIN CAPITAL LETTER AV]
-"\uA738" => "AV"
-
-# Ꜻ  [LATIN CAPITAL LETTER AV WITH HORIZONTAL BAR]
-"\uA73A" => "AV"
-
-# Ꜽ  [LATIN CAPITAL LETTER AY]
-"\uA73C" => "AY"
-
-# ⒜  [PARENTHESIZED LATIN SMALL LETTER A]
-"\u249C" => "(a)"
-
-# ꜳ  [LATIN SMALL LETTER AA]
-"\uA733" => "aa"
-
-# æ  [LATIN SMALL LETTER AE]
-"\u00E6" => "ae"
-
-# ǣ  [LATIN SMALL LETTER AE WITH MACRON]
-"\u01E3" => "ae"
-
-# ǽ  [LATIN SMALL LETTER AE WITH ACUTE]
-"\u01FD" => "ae"
-
-# ᴂ  [LATIN SMALL LETTER TURNED AE]
-"\u1D02" => "ae"
-
-# ꜵ  [LATIN SMALL LETTER AO]
-"\uA735" => "ao"
-
-# ꜷ  [LATIN SMALL LETTER AU]
-"\uA737" => "au"
-
-# ꜹ  [LATIN SMALL LETTER AV]
-"\uA739" => "av"
-
-# ꜻ  [LATIN SMALL LETTER AV WITH HORIZONTAL BAR]
-"\uA73B" => "av"
-
-# ꜽ  [LATIN SMALL LETTER AY]
-"\uA73D" => "ay"
-
-# Ɓ  [LATIN CAPITAL LETTER B WITH HOOK]
-"\u0181" => "B"
-
-# Ƃ  [LATIN CAPITAL LETTER B WITH TOPBAR]
-"\u0182" => "B"
-
-# Ƀ  [LATIN CAPITAL LETTER B WITH STROKE]
-"\u0243" => "B"
-
-# ʙ  [LATIN LETTER SMALL CAPITAL B]
-"\u0299" => "B"
-
-# ᴃ  [LATIN LETTER SMALL CAPITAL BARRED B]
-"\u1D03" => "B"
-
-# Ḃ  [LATIN CAPITAL LETTER B WITH DOT ABOVE]
-"\u1E02" => "B"
-
-# Ḅ  [LATIN CAPITAL LETTER B WITH DOT BELOW]
-"\u1E04" => "B"
-
-# Ḇ  [LATIN CAPITAL LETTER B WITH LINE BELOW]
-"\u1E06" => "B"
-
-# Ⓑ  [CIRCLED LATIN CAPITAL LETTER B]
-"\u24B7" => "B"
-
-# B  [FULLWIDTH LATIN CAPITAL LETTER B]
-"\uFF22" => "B"
-
-# ƀ  [LATIN SMALL LETTER B WITH STROKE]
-"\u0180" => "b"
-
-# ƃ  [LATIN SMALL LETTER B WITH TOPBAR]
-"\u0183" => "b"
-
-# ɓ  [LATIN SMALL LETTER B WITH HOOK]
-"\u0253" => "b"
-
-# ᵬ  [LATIN SMALL LETTER B WITH MIDDLE TILDE]
-"\u1D6C" => "b"
-
-# ᶀ  [LATIN SMALL LETTER B WITH PALATAL HOOK]
-"\u1D80" => "b"
-
-# ḃ  [LATIN SMALL LETTER B WITH DOT ABOVE]
-"\u1E03" => "b"
-
-# ḅ  [LATIN SMALL LETTER B WITH DOT BELOW]
-"\u1E05" => "b"
-
-# ḇ  [LATIN SMALL LETTER B WITH LINE BELOW]
-"\u1E07" => "b"
-
-# ⓑ  [CIRCLED LATIN SMALL LETTER B]
-"\u24D1" => "b"
-
-# b  [FULLWIDTH LATIN SMALL LETTER B]
-"\uFF42" => "b"
-
-# ⒝  [PARENTHESIZED LATIN SMALL LETTER B]
-"\u249D" => "(b)"
-
-# Ç  [LATIN CAPITAL LETTER C WITH CEDILLA]
-"\u00C7" => "C"
-
-# Ć  [LATIN CAPITAL LETTER C WITH ACUTE]
-"\u0106" => "C"
-
-# Ĉ  [LATIN CAPITAL LETTER C WITH CIRCUMFLEX]
-"\u0108" => "C"
-
-# Ċ  [LATIN CAPITAL LETTER C WITH DOT ABOVE]
-"\u010A" => "C"
-
-# Č  [LATIN CAPITAL LETTER C WITH CARON]
-"\u010C" => "C"
-
-# Ƈ  [LATIN CAPITAL LETTER C WITH HOOK]
-"\u0187" => "C"
-
-# Ȼ  [LATIN CAPITAL LETTER C WITH STROKE]
-"\u023B" => "C"
-
-# ʗ  [LATIN LETTER STRETCHED C]
-"\u0297" => "C"
-
-# ᴄ  [LATIN LETTER SMALL CAPITAL C]
-"\u1D04" => "C"
-
-# Ḉ  [LATIN CAPITAL LETTER C WITH CEDILLA AND ACUTE]
-"\u1E08" => "C"
-
-# Ⓒ  [CIRCLED LATIN CAPITAL LETTER C]
-"\u24B8" => "C"
-
-# C  [FULLWIDTH LATIN CAPITAL LETTER C]
-"\uFF23" => "C"
-
-# ç  [LATIN SMALL LETTER C WITH CEDILLA]
-"\u00E7" => "c"
-
-# ć  [LATIN SMALL LETTER C WITH ACUTE]
-"\u0107" => "c"
-
-# ĉ  [LATIN SMALL LETTER C WITH CIRCUMFLEX]
-"\u0109" => "c"
-
-# ċ  [LATIN SMALL LETTER C WITH DOT ABOVE]
-"\u010B" => "c"
-
-# č  [LATIN SMALL LETTER C WITH CARON]
-"\u010D" => "c"
-
-# ƈ  [LATIN SMALL LETTER C WITH HOOK]
-"\u0188" => "c"
-
-# ȼ  [LATIN SMALL LETTER C WITH STROKE]
-"\u023C" => "c"
-
-# ɕ  [LATIN SMALL LETTER C WITH CURL]
-"\u0255" => "c"
-
-# ḉ  [LATIN SMALL LETTER C WITH CEDILLA AND ACUTE]
-"\u1E09" => "c"
-
-# ↄ  [LATIN SMALL LETTER REVERSED C]
-"\u2184" => "c"
-
-# ⓒ  [CIRCLED LATIN SMALL LETTER C]
-"\u24D2" => "c"
-
-# Ꜿ  [LATIN CAPITAL LETTER REVERSED C WITH DOT]
-"\uA73E" => "c"
-
-# ꜿ  [LATIN SMALL LETTER REVERSED C WITH DOT]
-"\uA73F" => "c"
-
-# c  [FULLWIDTH LATIN SMALL LETTER C]
-"\uFF43" => "c"
-
-# ⒞  [PARENTHESIZED LATIN SMALL LETTER C]
-"\u249E" => "(c)"
-
-# Ð  [LATIN CAPITAL LETTER ETH]
-"\u00D0" => "D"
-
-# Ď  [LATIN CAPITAL LETTER D WITH CARON]
-"\u010E" => "D"
-
-# Đ  [LATIN CAPITAL LETTER D WITH STROKE]
-"\u0110" => "D"
-
-# Ɖ  [LATIN CAPITAL LETTER AFRICAN D]
-"\u0189" => "D"
-
-# Ɗ  [LATIN CAPITAL LETTER D WITH HOOK]
-"\u018A" => "D"
-
-# Ƌ  [LATIN CAPITAL LETTER D WITH TOPBAR]
-"\u018B" => "D"
-
-# ᴅ  [LATIN LETTER SMALL CAPITAL D]
-"\u1D05" => "D"
-
-# ᴆ  [LATIN LETTER SMALL CAPITAL ETH]
-"\u1D06" => "D"
-
-# Ḋ  [LATIN CAPITAL LETTER D WITH DOT ABOVE]
-"\u1E0A" => "D"
-
-# Ḍ  [LATIN CAPITAL LETTER D WITH DOT BELOW]
-"\u1E0C" => "D"
-
-# Ḏ  [LATIN CAPITAL LETTER D WITH LINE BELOW]
-"\u1E0E" => "D"
-
-# Ḑ  [LATIN CAPITAL LETTER D WITH CEDILLA]
-"\u1E10" => "D"
-
-# Ḓ  [LATIN CAPITAL LETTER D WITH CIRCUMFLEX BELOW]
-"\u1E12" => "D"
-
-# Ⓓ  [CIRCLED LATIN CAPITAL LETTER D]
-"\u24B9" => "D"
-
-# Ꝺ  [LATIN CAPITAL LETTER INSULAR D]
-"\uA779" => "D"
-
-# D  [FULLWIDTH LATIN CAPITAL LETTER D]
-"\uFF24" => "D"
-
-# ð  [LATIN SMALL LETTER ETH]
-"\u00F0" => "d"
-
-# ď  [LATIN SMALL LETTER D WITH CARON]
-"\u010F" => "d"
-
-# đ  [LATIN SMALL LETTER D WITH STROKE]
-"\u0111" => "d"
-
-# ƌ  [LATIN SMALL LETTER D WITH TOPBAR]
-"\u018C" => "d"
-
-# ȡ  [LATIN SMALL LETTER D WITH CURL]
-"\u0221" => "d"
-
-# ɖ  [LATIN SMALL LETTER D WITH TAIL]
-"\u0256" => "d"
-
-# ɗ  [LATIN SMALL LETTER D WITH HOOK]
-"\u0257" => "d"
-
-# ᵭ  [LATIN SMALL LETTER D WITH MIDDLE TILDE]
-"\u1D6D" => "d"
-
-# ᶁ  [LATIN SMALL LETTER D WITH PALATAL HOOK]
-"\u1D81" => "d"
-
-# ᶑ  [LATIN SMALL LETTER D WITH HOOK AND TAIL]
-"\u1D91" => "d"
-
-# ḋ  [LATIN SMALL LETTER D WITH DOT ABOVE]
-"\u1E0B" => "d"
-
-# ḍ  [LATIN SMALL LETTER D WITH DOT BELOW]
-"\u1E0D" => "d"
-
-# ḏ  [LATIN SMALL LETTER D WITH LINE BELOW]
-"\u1E0F" => "d"
-
-# ḑ  [LATIN SMALL LETTER D WITH CEDILLA]
-"\u1E11" => "d"
-
-# ḓ  [LATIN SMALL LETTER D WITH CIRCUMFLEX BELOW]
-"\u1E13" => "d"
-
-# ⓓ  [CIRCLED LATIN SMALL LETTER D]
-"\u24D3" => "d"
-
-# ꝺ  [LATIN SMALL LETTER INSULAR D]
-"\uA77A" => "d"
-
-# d  [FULLWIDTH LATIN SMALL LETTER D]
-"\uFF44" => "d"
-
-# DŽ  [LATIN CAPITAL LETTER DZ WITH CARON]
-"\u01C4" => "DZ"
-
-# DZ  [LATIN CAPITAL LETTER DZ]
-"\u01F1" => "DZ"
-
-# Dž  [LATIN CAPITAL LETTER D WITH SMALL LETTER Z WITH CARON]
-"\u01C5" => "Dz"
-
-# Dz  [LATIN CAPITAL LETTER D WITH SMALL LETTER Z]
-"\u01F2" => "Dz"
-
-# ⒟  [PARENTHESIZED LATIN SMALL LETTER D]
-"\u249F" => "(d)"
-
-# ȸ  [LATIN SMALL LETTER DB DIGRAPH]
-"\u0238" => "db"
-
-# dž  [LATIN SMALL LETTER DZ WITH CARON]
-"\u01C6" => "dz"
-
-# dz  [LATIN SMALL LETTER DZ]
-"\u01F3" => "dz"
-
-# ʣ  [LATIN SMALL LETTER DZ DIGRAPH]
-"\u02A3" => "dz"
-
-# ʥ  [LATIN SMALL LETTER DZ DIGRAPH WITH CURL]
-"\u02A5" => "dz"
-
-# È  [LATIN CAPITAL LETTER E WITH GRAVE]
-"\u00C8" => "E"
-
-# É  [LATIN CAPITAL LETTER E WITH ACUTE]
-"\u00C9" => "E"
-
-# Ê  [LATIN CAPITAL LETTER E WITH CIRCUMFLEX]
-"\u00CA" => "E"
-
-# Ë  [LATIN CAPITAL LETTER E WITH DIAERESIS]
-"\u00CB" => "E"
-
-# Ē  [LATIN CAPITAL LETTER E WITH MACRON]
-"\u0112" => "E"
-
-# Ĕ  [LATIN CAPITAL LETTER E WITH BREVE]
-"\u0114" => "E"
-
-# Ė  [LATIN CAPITAL LETTER E WITH DOT ABOVE]
-"\u0116" => "E"
-
-# Ę  [LATIN CAPITAL LETTER E WITH OGONEK]
-"\u0118" => "E"
-
-# Ě  [LATIN CAPITAL LETTER E WITH CARON]
-"\u011A" => "E"
-
-# Ǝ  [LATIN CAPITAL LETTER REVERSED E]
-"\u018E" => "E"
-
-# Ɛ  [LATIN CAPITAL LETTER OPEN E]
-"\u0190" => "E"
-
-# Ȅ  [LATIN CAPITAL LETTER E WITH DOUBLE GRAVE]
-"\u0204" => "E"
-
-# Ȇ  [LATIN CAPITAL LETTER E WITH INVERTED BREVE]
-"\u0206" => "E"
-
-# Ȩ  [LATIN CAPITAL LETTER E WITH CEDILLA]
-"\u0228" => "E"
-
-# Ɇ  [LATIN CAPITAL LETTER E WITH STROKE]
-"\u0246" => "E"
-
-# ᴇ  [LATIN LETTER SMALL CAPITAL E]
-"\u1D07" => "E"
-
-# Ḕ  [LATIN CAPITAL LETTER E WITH MACRON AND GRAVE]
-"\u1E14" => "E"
-
-# Ḗ  [LATIN CAPITAL LETTER E WITH MACRON AND ACUTE]
-"\u1E16" => "E"
-
-# Ḙ  [LATIN CAPITAL LETTER E WITH CIRCUMFLEX BELOW]
-"\u1E18" => "E"
-
-# Ḛ  [LATIN CAPITAL LETTER E WITH TILDE BELOW]
-"\u1E1A" => "E"
-
-# Ḝ  [LATIN CAPITAL LETTER E WITH CEDILLA AND BREVE]
-"\u1E1C" => "E"
-
-# Ẹ  [LATIN CAPITAL LETTER E WITH DOT BELOW]
-"\u1EB8" => "E"
-
-# Ẻ  [LATIN CAPITAL LETTER E WITH HOOK ABOVE]
-"\u1EBA" => "E"
-
-# Ẽ  [LATIN CAPITAL LETTER E WITH TILDE]
-"\u1EBC" => "E"
-
-# Ế  [LATIN CAPITAL LETTER E WITH CIRCUMFLEX AND ACUTE]
-"\u1EBE" => "E"
-
-# Ề  [LATIN CAPITAL LETTER E WITH CIRCUMFLEX AND GRAVE]
-"\u1EC0" => "E"
-
-# Ể  [LATIN CAPITAL LETTER E WITH CIRCUMFLEX AND HOOK ABOVE]
-"\u1EC2" => "E"
-
-# Ễ  [LATIN CAPITAL LETTER E WITH CIRCUMFLEX AND TILDE]
-"\u1EC4" => "E"
-
-# Ệ  [LATIN CAPITAL LETTER E WITH CIRCUMFLEX AND DOT BELOW]
-"\u1EC6" => "E"
-
-# Ⓔ  [CIRCLED LATIN CAPITAL LETTER E]
-"\u24BA" => "E"
-
-# ⱻ  [LATIN LETTER SMALL CAPITAL TURNED E]
-"\u2C7B" => "E"
-
-# E  [FULLWIDTH LATIN CAPITAL LETTER E]
-"\uFF25" => "E"
-
-# è  [LATIN SMALL LETTER E WITH GRAVE]
-"\u00E8" => "e"
-
-# é  [LATIN SMALL LETTER E WITH ACUTE]
-"\u00E9" => "e"
-
-# ê  [LATIN SMALL LETTER E WITH CIRCUMFLEX]
-"\u00EA" => "e"
-
-# ë  [LATIN SMALL LETTER E WITH DIAERESIS]
-"\u00EB" => "e"
-
-# ē  [LATIN SMALL LETTER E WITH MACRON]
-"\u0113" => "e"
-
-# ĕ  [LATIN SMALL LETTER E WITH BREVE]
-"\u0115" => "e"
-
-# ė  [LATIN SMALL LETTER E WITH DOT ABOVE]
-"\u0117" => "e"
-
-# ę  [LATIN SMALL LETTER E WITH OGONEK]
-"\u0119" => "e"
-
-# ě  [LATIN SMALL LETTER E WITH CARON]
-"\u011B" => "e"
-
-# ǝ  [LATIN SMALL LETTER TURNED E]
-"\u01DD" => "e"
-
-# ȅ  [LATIN SMALL LETTER E WITH DOUBLE GRAVE]
-"\u0205" => "e"
-
-# ȇ  [LATIN SMALL LETTER E WITH INVERTED BREVE]
-"\u0207" => "e"
-
-# ȩ  [LATIN SMALL LETTER E WITH CEDILLA]
-"\u0229" => "e"
-
-# ɇ  [LATIN SMALL LETTER E WITH STROKE]
-"\u0247" => "e"
-
-# ɘ  [LATIN SMALL LETTER REVERSED E]
-"\u0258" => "e"
-
-# ɛ  [LATIN SMALL LETTER OPEN E]
-"\u025B" => "e"
-
-# ɜ  [LATIN SMALL LETTER REVERSED OPEN E]
-"\u025C" => "e"
-
-# ɝ  [LATIN SMALL LETTER REVERSED OPEN E WITH HOOK]
-"\u025D" => "e"
-
-# ɞ  [LATIN SMALL LETTER CLOSED REVERSED OPEN E]
-"\u025E" => "e"
-
-# ʚ  [LATIN SMALL LETTER CLOSED OPEN E]
-"\u029A" => "e"
-
-# ᴈ  [LATIN SMALL LETTER TURNED OPEN E]
-"\u1D08" => "e"
-
-# ᶒ  [LATIN SMALL LETTER E WITH RETROFLEX HOOK]
-"\u1D92" => "e"
-
-# ᶓ  [LATIN SMALL LETTER OPEN E WITH RETROFLEX HOOK]
-"\u1D93" => "e"
-
-# ᶔ  [LATIN SMALL LETTER REVERSED OPEN E WITH RETROFLEX HOOK]
-"\u1D94" => "e"
-
-# ḕ  [LATIN SMALL LETTER E WITH MACRON AND GRAVE]
-"\u1E15" => "e"
-
-# ḗ  [LATIN SMALL LETTER E WITH MACRON AND ACUTE]
-"\u1E17" => "e"
-
-# ḙ  [LATIN SMALL LETTER E WITH CIRCUMFLEX BELOW]
-"\u1E19" => "e"
-
-# ḛ  [LATIN SMALL LETTER E WITH TILDE BELOW]
-"\u1E1B" => "e"
-
-# ḝ  [LATIN SMALL LETTER E WITH CEDILLA AND BREVE]
-"\u1E1D" => "e"
-
-# ẹ  [LATIN SMALL LETTER E WITH DOT BELOW]
-"\u1EB9" => "e"
-
-# ẻ  [LATIN SMALL LETTER E WITH HOOK ABOVE]
-"\u1EBB" => "e"
-
-# ẽ  [LATIN SMALL LETTER E WITH TILDE]
-"\u1EBD" => "e"
-
-# ế  [LATIN SMALL LETTER E WITH CIRCUMFLEX AND ACUTE]
-"\u1EBF" => "e"
-
-# ề  [LATIN SMALL LETTER E WITH CIRCUMFLEX AND GRAVE]
-"\u1EC1" => "e"
-
-# ể  [LATIN SMALL LETTER E WITH CIRCUMFLEX AND HOOK ABOVE]
-"\u1EC3" => "e"
-
-# ễ  [LATIN SMALL LETTER E WITH CIRCUMFLEX AND TILDE]
-"\u1EC5" => "e"
-
-# ệ  [LATIN SMALL LETTER E WITH CIRCUMFLEX AND DOT BELOW]
-"\u1EC7" => "e"
-
-# ₑ  [LATIN SUBSCRIPT SMALL LETTER E]
-"\u2091" => "e"
-
-# ⓔ  [CIRCLED LATIN SMALL LETTER E]
-"\u24D4" => "e"
-
-# ⱸ  [LATIN SMALL LETTER E WITH NOTCH]
-"\u2C78" => "e"
-
-# e  [FULLWIDTH LATIN SMALL LETTER E]
-"\uFF45" => "e"
-
-# ⒠  [PARENTHESIZED LATIN SMALL LETTER E]
-"\u24A0" => "(e)"
-
-# Ƒ  [LATIN CAPITAL LETTER F WITH HOOK]
-"\u0191" => "F"
-
-# Ḟ  [LATIN CAPITAL LETTER F WITH DOT ABOVE]
-"\u1E1E" => "F"
-
-# Ⓕ  [CIRCLED LATIN CAPITAL LETTER F]
-"\u24BB" => "F"
-
-# ꜰ  [LATIN LETTER SMALL CAPITAL F]
-"\uA730" => "F"
-
-# Ꝼ  [LATIN CAPITAL LETTER INSULAR F]
-"\uA77B" => "F"
-
-# ꟻ  [LATIN EPIGRAPHIC LETTER REVERSED F]
-"\uA7FB" => "F"
-
-# F  [FULLWIDTH LATIN CAPITAL LETTER F]
-"\uFF26" => "F"
-
-# ƒ  [LATIN SMALL LETTER F WITH HOOK]
-"\u0192" => "f"
-
-# ᵮ  [LATIN SMALL LETTER F WITH MIDDLE TILDE]
-"\u1D6E" => "f"
-
-# ᶂ  [LATIN SMALL LETTER F WITH PALATAL HOOK]
-"\u1D82" => "f"
-
-# ḟ  [LATIN SMALL LETTER F WITH DOT ABOVE]
-"\u1E1F" => "f"
-
-# ẛ  [LATIN SMALL LETTER LONG S WITH DOT ABOVE]
-"\u1E9B" => "f"
-
-# ⓕ  [CIRCLED LATIN SMALL LETTER F]
-"\u24D5" => "f"
-
-# ꝼ  [LATIN SMALL LETTER INSULAR F]
-"\uA77C" => "f"
-
-# f  [FULLWIDTH LATIN SMALL LETTER F]
-"\uFF46" => "f"
-
-# ⒡  [PARENTHESIZED LATIN SMALL LETTER F]
-"\u24A1" => "(f)"
-
-# ff  [LATIN SMALL LIGATURE FF]
-"\uFB00" => "ff"
-
-# ffi  [LATIN SMALL LIGATURE FFI]
-"\uFB03" => "ffi"
-
-# ffl  [LATIN SMALL LIGATURE FFL]
-"\uFB04" => "ffl"
-
-# fi  [LATIN SMALL LIGATURE FI]
-"\uFB01" => "fi"
-
-# fl  [LATIN SMALL LIGATURE FL]
-"\uFB02" => "fl"
-
-# Ĝ  [LATIN CAPITAL LETTER G WITH CIRCUMFLEX]
-"\u011C" => "G"
-
-# Ğ  [LATIN CAPITAL LETTER G WITH BREVE]
-"\u011E" => "G"
-
-# Ġ  [LATIN CAPITAL LETTER G WITH DOT ABOVE]
-"\u0120" => "G"
-
-# Ģ  [LATIN CAPITAL LETTER G WITH CEDILLA]
-"\u0122" => "G"
-
-# Ɠ  [LATIN CAPITAL LETTER G WITH HOOK]
-"\u0193" => "G"
-
-# Ǥ  [LATIN CAPITAL LETTER G WITH STROKE]
-"\u01E4" => "G"
-
-# ǥ  [LATIN SMALL LETTER G WITH STROKE]
-"\u01E5" => "G"
-
-# Ǧ  [LATIN CAPITAL LETTER G WITH CARON]
-"\u01E6" => "G"
-
-# ǧ  [LATIN SMALL LETTER G WITH CARON]
-"\u01E7" => "G"
-
-# Ǵ  [LATIN CAPITAL LETTER G WITH ACUTE]
-"\u01F4" => "G"
-
-# ɢ  [LATIN LETTER SMALL CAPITAL G]
-"\u0262" => "G"
-
-# ʛ  [LATIN LETTER SMALL CAPITAL G WITH HOOK]
-"\u029B" => "G"
-
-# Ḡ  [LATIN CAPITAL LETTER G WITH MACRON]
-"\u1E20" => "G"
-
-# Ⓖ  [CIRCLED LATIN CAPITAL LETTER G]
-"\u24BC" => "G"
-
-# Ᵹ  [LATIN CAPITAL LETTER INSULAR G]
-"\uA77D" => "G"
-
-# Ꝿ  [LATIN CAPITAL LETTER TURNED INSULAR G]
-"\uA77E" => "G"
-
-# G  [FULLWIDTH LATIN CAPITAL LETTER G]
-"\uFF27" => "G"
-
-# ĝ  [LATIN SMALL LETTER G WITH CIRCUMFLEX]
-"\u011D" => "g"
-
-# ğ  [LATIN SMALL LETTER G WITH BREVE]
-"\u011F" => "g"
-
-# ġ  [LATIN SMALL LETTER G WITH DOT ABOVE]
-"\u0121" => "g"
-
-# ģ  [LATIN SMALL LETTER G WITH CEDILLA]
-"\u0123" => "g"
-
-# ǵ  [LATIN SMALL LETTER G WITH ACUTE]
-"\u01F5" => "g"
-
-# ɠ  [LATIN SMALL LETTER G WITH HOOK]
-"\u0260" => "g"
-
-# ɡ  [LATIN SMALL LETTER SCRIPT G]
-"\u0261" => "g"
-
-# ᵷ  [LATIN SMALL LETTER TURNED G]
-"\u1D77" => "g"
-
-# ᵹ  [LATIN SMALL LETTER INSULAR G]
-"\u1D79" => "g"
-
-# ᶃ  [LATIN SMALL LETTER G WITH PALATAL HOOK]
-"\u1D83" => "g"
-
-# ḡ  [LATIN SMALL LETTER G WITH MACRON]
-"\u1E21" => "g"
-
-# ⓖ  [CIRCLED LATIN SMALL LETTER G]
-"\u24D6" => "g"
-
-# ꝿ  [LATIN SMALL LETTER TURNED INSULAR G]
-"\uA77F" => "g"
-
-# g  [FULLWIDTH LATIN SMALL LETTER G]
-"\uFF47" => "g"
-
-# ⒢  [PARENTHESIZED LATIN SMALL LETTER G]
-"\u24A2" => "(g)"
-
-# Ĥ  [LATIN CAPITAL LETTER H WITH CIRCUMFLEX]
-"\u0124" => "H"
-
-# Ħ  [LATIN CAPITAL LETTER H WITH STROKE]
-"\u0126" => "H"
-
-# Ȟ  [LATIN CAPITAL LETTER H WITH CARON]
-"\u021E" => "H"
-
-# ʜ  [LATIN LETTER SMALL CAPITAL H]
-"\u029C" => "H"
-
-# Ḣ  [LATIN CAPITAL LETTER H WITH DOT ABOVE]
-"\u1E22" => "H"
-
-# Ḥ  [LATIN CAPITAL LETTER H WITH DOT BELOW]
-"\u1E24" => "H"
-
-# Ḧ  [LATIN CAPITAL LETTER H WITH DIAERESIS]
-"\u1E26" => "H"
-
-# Ḩ  [LATIN CAPITAL LETTER H WITH CEDILLA]
-"\u1E28" => "H"
-
-# Ḫ  [LATIN CAPITAL LETTER H WITH BREVE BELOW]
-"\u1E2A" => "H"
-
-# Ⓗ  [CIRCLED LATIN CAPITAL LETTER H]
-"\u24BD" => "H"
-
-# Ⱨ  [LATIN CAPITAL LETTER H WITH DESCENDER]
-"\u2C67" => "H"
-
-# Ⱶ  [LATIN CAPITAL LETTER HALF H]
-"\u2C75" => "H"
-
-# H  [FULLWIDTH LATIN CAPITAL LETTER H]
-"\uFF28" => "H"
-
-# ĥ  [LATIN SMALL LETTER H WITH CIRCUMFLEX]
-"\u0125" => "h"
-
-# ħ  [LATIN SMALL LETTER H WITH STROKE]
-"\u0127" => "h"
-
-# ȟ  [LATIN SMALL LETTER H WITH CARON]
-"\u021F" => "h"
-
-# ɥ  [LATIN SMALL LETTER TURNED H]
-"\u0265" => "h"
-
-# ɦ  [LATIN SMALL LETTER H WITH HOOK]
-"\u0266" => "h"
-
-# ʮ  [LATIN SMALL LETTER TURNED H WITH FISHHOOK]
-"\u02AE" => "h"
-
-# ʯ  [LATIN SMALL LETTER TURNED H WITH FISHHOOK AND TAIL]
-"\u02AF" => "h"
-
-# ḣ  [LATIN SMALL LETTER H WITH DOT ABOVE]
-"\u1E23" => "h"
-
-# ḥ  [LATIN SMALL LETTER H WITH DOT BELOW]
-"\u1E25" => "h"
-
-# ḧ  [LATIN SMALL LETTER H WITH DIAERESIS]
-"\u1E27" => "h"
-
-# ḩ  [LATIN SMALL LETTER H WITH CEDILLA]
-"\u1E29" => "h"
-
-# ḫ  [LATIN SMALL LETTER H WITH BREVE BELOW]
-"\u1E2B" => "h"
-
-# ẖ  [LATIN SMALL LETTER H WITH LINE BELOW]
-"\u1E96" => "h"
-
-# ⓗ  [CIRCLED LATIN SMALL LETTER H]
-"\u24D7" => "h"
-
-# ⱨ  [LATIN SMALL LETTER H WITH DESCENDER]
-"\u2C68" => "h"
-
-# ⱶ  [LATIN SMALL LETTER HALF H]
-"\u2C76" => "h"
-
-# h  [FULLWIDTH LATIN SMALL LETTER H]
-"\uFF48" => "h"
-
-# Ƕ  http://en.wikipedia.org/wiki/Hwair  [LATIN CAPITAL LETTER HWAIR]
-"\u01F6" => "HV"
-
-# ⒣  [PARENTHESIZED LATIN SMALL LETTER H]
-"\u24A3" => "(h)"
-
-# ƕ  [LATIN SMALL LETTER HV]
-"\u0195" => "hv"
-
-# Ì  [LATIN CAPITAL LETTER I WITH GRAVE]
-"\u00CC" => "I"
-
-# Í  [LATIN CAPITAL LETTER I WITH ACUTE]
-"\u00CD" => "I"
-
-# Î  [LATIN CAPITAL LETTER I WITH CIRCUMFLEX]
-"\u00CE" => "I"
-
-# Ï  [LATIN CAPITAL LETTER I WITH DIAERESIS]
-"\u00CF" => "I"
-
-# Ĩ  [LATIN CAPITAL LETTER I WITH TILDE]
-"\u0128" => "I"
-
-# Ī  [LATIN CAPITAL LETTER I WITH MACRON]
-"\u012A" => "I"
-
-# Ĭ  [LATIN CAPITAL LETTER I WITH BREVE]
-"\u012C" => "I"
-
-# Į  [LATIN CAPITAL LETTER I WITH OGONEK]
-"\u012E" => "I"
-
-# İ  [LATIN CAPITAL LETTER I WITH DOT ABOVE]
-"\u0130" => "I"
-
-# Ɩ  [LATIN CAPITAL LETTER IOTA]
-"\u0196" => "I"
-
-# Ɨ  [LATIN CAPITAL LETTER I WITH STROKE]
-"\u0197" => "I"
-
-# Ǐ  [LATIN CAPITAL LETTER I WITH CARON]
-"\u01CF" => "I"
-
-# Ȉ  [LATIN CAPITAL LETTER I WITH DOUBLE GRAVE]
-"\u0208" => "I"
-
-# Ȋ  [LATIN CAPITAL LETTER I WITH INVERTED BREVE]
-"\u020A" => "I"
-
-# ɪ  [LATIN LETTER SMALL CAPITAL I]
-"\u026A" => "I"
-
-# ᵻ  [LATIN SMALL CAPITAL LETTER I WITH STROKE]
-"\u1D7B" => "I"
-
-# Ḭ  [LATIN CAPITAL LETTER I WITH TILDE BELOW]
-"\u1E2C" => "I"
-
-# Ḯ  [LATIN CAPITAL LETTER I WITH DIAERESIS AND ACUTE]
-"\u1E2E" => "I"
-
-# Ỉ  [LATIN CAPITAL LETTER I WITH HOOK ABOVE]
-"\u1EC8" => "I"
-
-# Ị  [LATIN CAPITAL LETTER I WITH DOT BELOW]
-"\u1ECA" => "I"
-
-# Ⓘ  [CIRCLED LATIN CAPITAL LETTER I]
-"\u24BE" => "I"
-
-# ꟾ  [LATIN EPIGRAPHIC LETTER I LONGA]
-"\uA7FE" => "I"
-
-# I  [FULLWIDTH LATIN CAPITAL LETTER I]
-"\uFF29" => "I"
-
-# ì  [LATIN SMALL LETTER I WITH GRAVE]
-"\u00EC" => "i"
-
-# í  [LATIN SMALL LETTER I WITH ACUTE]
-"\u00ED" => "i"
-
-# î  [LATIN SMALL LETTER I WITH CIRCUMFLEX]
-"\u00EE" => "i"
-
-# ï  [LATIN SMALL LETTER I WITH DIAERESIS]
-"\u00EF" => "i"
-
-# ĩ  [LATIN SMALL LETTER I WITH TILDE]
-"\u0129" => "i"
-
-# ī  [LATIN SMALL LETTER I WITH MACRON]
-"\u012B" => "i"
-
-# ĭ  [LATIN SMALL LETTER I WITH BREVE]
-"\u012D" => "i"
-
-# į  [LATIN SMALL LETTER I WITH OGONEK]
-"\u012F" => "i"
-
-# ı  [LATIN SMALL LETTER DOTLESS I]
-"\u0131" => "i"
-
-# ǐ  [LATIN SMALL LETTER I WITH CARON]
-"\u01D0" => "i"
-
-# ȉ  [LATIN SMALL LETTER I WITH DOUBLE GRAVE]
-"\u0209" => "i"
-
-# ȋ  [LATIN SMALL LETTER I WITH INVERTED BREVE]
-"\u020B" => "i"
-
-# ɨ  [LATIN SMALL LETTER I WITH STROKE]
-"\u0268" => "i"
-
-# ᴉ  [LATIN SMALL LETTER TURNED I]
-"\u1D09" => "i"
-
-# ᵢ  [LATIN SUBSCRIPT SMALL LETTER I]
-"\u1D62" => "i"
-
-# ᵼ  [LATIN SMALL LETTER IOTA WITH STROKE]
-"\u1D7C" => "i"
-
-# ᶖ  [LATIN SMALL LETTER I WITH RETROFLEX HOOK]
-"\u1D96" => "i"
-
-# ḭ  [LATIN SMALL LETTER I WITH TILDE BELOW]
-"\u1E2D" => "i"
-
-# ḯ  [LATIN SMALL LETTER I WITH DIAERESIS AND ACUTE]
-"\u1E2F" => "i"
-
-# ỉ  [LATIN SMALL LETTER I WITH HOOK ABOVE]
-"\u1EC9" => "i"
-
-# ị  [LATIN SMALL LETTER I WITH DOT BELOW]
-"\u1ECB" => "i"
-
-# ⁱ  [SUPERSCRIPT LATIN SMALL LETTER I]
-"\u2071" => "i"
-
-# ⓘ  [CIRCLED LATIN SMALL LETTER I]
-"\u24D8" => "i"
-
-# i  [FULLWIDTH LATIN SMALL LETTER I]
-"\uFF49" => "i"
-
-# IJ  [LATIN CAPITAL LIGATURE IJ]
-"\u0132" => "IJ"
-
-# ⒤  [PARENTHESIZED LATIN SMALL LETTER I]
-"\u24A4" => "(i)"
-
-# ij  [LATIN SMALL LIGATURE IJ]
-"\u0133" => "ij"
-
-# Ĵ  [LATIN CAPITAL LETTER J WITH CIRCUMFLEX]
-"\u0134" => "J"
-
-# Ɉ  [LATIN CAPITAL LETTER J WITH STROKE]
-"\u0248" => "J"
-
-# ᴊ  [LATIN LETTER SMALL CAPITAL J]
-"\u1D0A" => "J"
-
-# Ⓙ  [CIRCLED LATIN CAPITAL LETTER J]
-"\u24BF" => "J"
-
-# J  [FULLWIDTH LATIN CAPITAL LETTER J]
-"\uFF2A" => "J"
-
-# ĵ  [LATIN SMALL LETTER J WITH CIRCUMFLEX]
-"\u0135" => "j"
-
-# ǰ  [LATIN SMALL LETTER J WITH CARON]
-"\u01F0" => "j"
-
-# ȷ  [LATIN SMALL LETTER DOTLESS J]
-"\u0237" => "j"
-
-# ɉ  [LATIN SMALL LETTER J WITH STROKE]
-"\u0249" => "j"
-
-# ɟ  [LATIN SMALL LETTER DOTLESS J WITH STROKE]
-"\u025F" => "j"
-
-# ʄ  [LATIN SMALL LETTER DOTLESS J WITH STROKE AND HOOK]
-"\u0284" => "j"
-
-# ʝ  [LATIN SMALL LETTER J WITH CROSSED-TAIL]
-"\u029D" => "j"
-
-# ⓙ  [CIRCLED LATIN SMALL LETTER J]
-"\u24D9" => "j"
-
-# ⱼ  [LATIN SUBSCRIPT SMALL LETTER J]
-"\u2C7C" => "j"
-
-# j  [FULLWIDTH LATIN SMALL LETTER J]
-"\uFF4A" => "j"
-
-# ⒥  [PARENTHESIZED LATIN SMALL LETTER J]
-"\u24A5" => "(j)"
-
-# Ķ  [LATIN CAPITAL LETTER K WITH CEDILLA]
-"\u0136" => "K"
-
-# Ƙ  [LATIN CAPITAL LETTER K WITH HOOK]
-"\u0198" => "K"
-
-# Ǩ  [LATIN CAPITAL LETTER K WITH CARON]
-"\u01E8" => "K"
-
-# ᴋ  [LATIN LETTER SMALL CAPITAL K]
-"\u1D0B" => "K"
-
-# Ḱ  [LATIN CAPITAL LETTER K WITH ACUTE]
-"\u1E30" => "K"
-
-# Ḳ  [LATIN CAPITAL LETTER K WITH DOT BELOW]
-"\u1E32" => "K"
-
-# Ḵ  [LATIN CAPITAL LETTER K WITH LINE BELOW]
-"\u1E34" => "K"
-
-# Ⓚ  [CIRCLED LATIN CAPITAL LETTER K]
-"\u24C0" => "K"
-
-# Ⱪ  [LATIN CAPITAL LETTER K WITH DESCENDER]
-"\u2C69" => "K"
-
-# Ꝁ  [LATIN CAPITAL LETTER K WITH STROKE]
-"\uA740" => "K"
-
-# Ꝃ  [LATIN CAPITAL LETTER K WITH DIAGONAL STROKE]
-"\uA742" => "K"
-
-# Ꝅ  [LATIN CAPITAL LETTER K WITH STROKE AND DIAGONAL STROKE]
-"\uA744" => "K"
-
-# K  [FULLWIDTH LATIN CAPITAL LETTER K]
-"\uFF2B" => "K"
-
-# ķ  [LATIN SMALL LETTER K WITH CEDILLA]
-"\u0137" => "k"
-
-# ƙ  [LATIN SMALL LETTER K WITH HOOK]
-"\u0199" => "k"
-
-# ǩ  [LATIN SMALL LETTER K WITH CARON]
-"\u01E9" => "k"
-
-# ʞ  [LATIN SMALL LETTER TURNED K]
-"\u029E" => "k"
-
-# ᶄ  [LATIN SMALL LETTER K WITH PALATAL HOOK]
-"\u1D84" => "k"
-
-# ḱ  [LATIN SMALL LETTER K WITH ACUTE]
-"\u1E31" => "k"
-
-# ḳ  [LATIN SMALL LETTER K WITH DOT BELOW]
-"\u1E33" => "k"
-
-# ḵ  [LATIN SMALL LETTER K WITH LINE BELOW]
-"\u1E35" => "k"
-
-# ⓚ  [CIRCLED LATIN SMALL LETTER K]
-"\u24DA" => "k"
-
-# ⱪ  [LATIN SMALL LETTER K WITH DESCENDER]
-"\u2C6A" => "k"
-
-# ꝁ  [LATIN SMALL LETTER K WITH STROKE]
-"\uA741" => "k"
-
-# ꝃ  [LATIN SMALL LETTER K WITH DIAGONAL STROKE]
-"\uA743" => "k"
-
-# ꝅ  [LATIN SMALL LETTER K WITH STROKE AND DIAGONAL STROKE]
-"\uA745" => "k"
-
-# k  [FULLWIDTH LATIN SMALL LETTER K]
-"\uFF4B" => "k"
-
-# ⒦  [PARENTHESIZED LATIN SMALL LETTER K]
-"\u24A6" => "(k)"
-
-# Ĺ  [LATIN CAPITAL LETTER L WITH ACUTE]
-"\u0139" => "L"
-
-# Ļ  [LATIN CAPITAL LETTER L WITH CEDILLA]
-"\u013B" => "L"
-
-# Ľ  [LATIN CAPITAL LETTER L WITH CARON]
-"\u013D" => "L"
-
-# Ŀ  [LATIN CAPITAL LETTER L WITH MIDDLE DOT]
-"\u013F" => "L"
-
-# Ł  [LATIN CAPITAL LETTER L WITH STROKE]
-"\u0141" => "L"
-
-# Ƚ  [LATIN CAPITAL LETTER L WITH BAR]
-"\u023D" => "L"
-
-# ʟ  [LATIN LETTER SMALL CAPITAL L]
-"\u029F" => "L"
-
-# ᴌ  [LATIN LETTER SMALL CAPITAL L WITH STROKE]
-"\u1D0C" => "L"
-
-# Ḷ  [LATIN CAPITAL LETTER L WITH DOT BELOW]
-"\u1E36" => "L"
-
-# Ḹ  [LATIN CAPITAL LETTER L WITH DOT BELOW AND MACRON]
-"\u1E38" => "L"
-
-# Ḻ  [LATIN CAPITAL LETTER L WITH LINE BELOW]
-"\u1E3A" => "L"
-
-# Ḽ  [LATIN CAPITAL LETTER L WITH CIRCUMFLEX BELOW]
-"\u1E3C" => "L"
-
-# Ⓛ  [CIRCLED LATIN CAPITAL LETTER L]
-"\u24C1" => "L"
-
-# Ⱡ  [LATIN CAPITAL LETTER L WITH DOUBLE BAR]
-"\u2C60" => "L"
-
-# Ɫ  [LATIN CAPITAL LETTER L WITH MIDDLE TILDE]
-"\u2C62" => "L"
-
-# Ꝇ  [LATIN CAPITAL LETTER BROKEN L]
-"\uA746" => "L"
-
-# Ꝉ  [LATIN CAPITAL LETTER L WITH HIGH STROKE]
-"\uA748" => "L"
-
-# Ꞁ  [LATIN CAPITAL LETTER TURNED L]
-"\uA780" => "L"
-
-# L  [FULLWIDTH LATIN CAPITAL LETTER L]
-"\uFF2C" => "L"
-
-# ĺ  [LATIN SMALL LETTER L WITH ACUTE]
-"\u013A" => "l"
-
-# ļ  [LATIN SMALL LETTER L WITH CEDILLA]
-"\u013C" => "l"
-
-# ľ  [LATIN SMALL LETTER L WITH CARON]
-"\u013E" => "l"
-
-# ŀ  [LATIN SMALL LETTER L WITH MIDDLE DOT]
-"\u0140" => "l"
-
-# ł  [LATIN SMALL LETTER L WITH STROKE]
-"\u0142" => "l"
-
-# ƚ  [LATIN SMALL LETTER L WITH BAR]
-"\u019A" => "l"
-
-# ȴ  [LATIN SMALL LETTER L WITH CURL]
-"\u0234" => "l"
-
-# ɫ  [LATIN SMALL LETTER L WITH MIDDLE TILDE]
-"\u026B" => "l"
-
-# ɬ  [LATIN SMALL LETTER L WITH BELT]
-"\u026C" => "l"
-
-# ɭ  [LATIN SMALL LETTER L WITH RETROFLEX HOOK]
-"\u026D" => "l"
-
-# ᶅ  [LATIN SMALL LETTER L WITH PALATAL HOOK]
-"\u1D85" => "l"
-
-# ḷ  [LATIN SMALL LETTER L WITH DOT BELOW]
-"\u1E37" => "l"
-
-# ḹ  [LATIN SMALL LETTER L WITH DOT BELOW AND MACRON]
-"\u1E39" => "l"
-
-# ḻ  [LATIN SMALL LETTER L WITH LINE BELOW]
-"\u1E3B" => "l"
-
-# ḽ  [LATIN SMALL LETTER L WITH CIRCUMFLEX BELOW]
-"\u1E3D" => "l"
-
-# ⓛ  [CIRCLED LATIN SMALL LETTER L]
-"\u24DB" => "l"
-
-# ⱡ  [LATIN SMALL LETTER L WITH DOUBLE BAR]
-"\u2C61" => "l"
-
-# ꝇ  [LATIN SMALL LETTER BROKEN L]
-"\uA747" => "l"
-
-# ꝉ  [LATIN SMALL LETTER L WITH HIGH STROKE]
-"\uA749" => "l"
-
-# ꞁ  [LATIN SMALL LETTER TURNED L]
-"\uA781" => "l"
-
-# l  [FULLWIDTH LATIN SMALL LETTER L]
-"\uFF4C" => "l"
-
-# LJ  [LATIN CAPITAL LETTER LJ]
-"\u01C7" => "LJ"
-
-# Ỻ  [LATIN CAPITAL LETTER MIDDLE-WELSH LL]
-"\u1EFA" => "LL"
-
-# Lj  [LATIN CAPITAL LETTER L WITH SMALL LETTER J]
-"\u01C8" => "Lj"
-
-# ⒧  [PARENTHESIZED LATIN SMALL LETTER L]
-"\u24A7" => "(l)"
-
-# lj  [LATIN SMALL LETTER LJ]
-"\u01C9" => "lj"
-
-# ỻ  [LATIN SMALL LETTER MIDDLE-WELSH LL]
-"\u1EFB" => "ll"
-
-# ʪ  [LATIN SMALL LETTER LS DIGRAPH]
-"\u02AA" => "ls"
-
-# ʫ  [LATIN SMALL LETTER LZ DIGRAPH]
-"\u02AB" => "lz"
-
-# Ɯ  [LATIN CAPITAL LETTER TURNED M]
-"\u019C" => "M"
-
-# ᴍ  [LATIN LETTER SMALL CAPITAL M]
-"\u1D0D" => "M"
-
-# Ḿ  [LATIN CAPITAL LETTER M WITH ACUTE]
-"\u1E3E" => "M"
-
-# Ṁ  [LATIN CAPITAL LETTER M WITH DOT ABOVE]
-"\u1E40" => "M"
-
-# Ṃ  [LATIN CAPITAL LETTER M WITH DOT BELOW]
-"\u1E42" => "M"
-
-# Ⓜ  [CIRCLED LATIN CAPITAL LETTER M]
-"\u24C2" => "M"
-
-# Ɱ  [LATIN CAPITAL LETTER M WITH HOOK]
-"\u2C6E" => "M"
-
-# ꟽ  [LATIN EPIGRAPHIC LETTER INVERTED M]
-"\uA7FD" => "M"
-
-# ꟿ  [LATIN EPIGRAPHIC LETTER ARCHAIC M]
-"\uA7FF" => "M"
-
-# M  [FULLWIDTH LATIN CAPITAL LETTER M]
-"\uFF2D" => "M"
-
-# ɯ  [LATIN SMALL LETTER TURNED M]
-"\u026F" => "m"
-
-# ɰ  [LATIN SMALL LETTER TURNED M WITH LONG LEG]
-"\u0270" => "m"
-
-# ɱ  [LATIN SMALL LETTER M WITH HOOK]
-"\u0271" => "m"
-
-# ᵯ  [LATIN SMALL LETTER M WITH MIDDLE TILDE]
-"\u1D6F" => "m"
-
-# ᶆ  [LATIN SMALL LETTER M WITH PALATAL HOOK]
-"\u1D86" => "m"
-
-# ḿ  [LATIN SMALL LETTER M WITH ACUTE]
-"\u1E3F" => "m"
-
-# ṁ  [LATIN SMALL LETTER M WITH DOT ABOVE]
-"\u1E41" => "m"
-
-# ṃ  [LATIN SMALL LETTER M WITH DOT BELOW]
-"\u1E43" => "m"
-
-# ⓜ  [CIRCLED LATIN SMALL LETTER M]
-"\u24DC" => "m"
-
-# m  [FULLWIDTH LATIN SMALL LETTER M]
-"\uFF4D" => "m"
-
-# ⒨  [PARENTHESIZED LATIN SMALL LETTER M]
-"\u24A8" => "(m)"
-
-# Ñ  [LATIN CAPITAL LETTER N WITH TILDE]
-"\u00D1" => "N"
-
-# Ń  [LATIN CAPITAL LETTER N WITH ACUTE]
-"\u0143" => "N"
-
-# Ņ  [LATIN CAPITAL LETTER N WITH CEDILLA]
-"\u0145" => "N"
-
-# Ň  [LATIN CAPITAL LETTER N WITH CARON]
-"\u0147" => "N"
-
-# Ŋ  http://en.wikipedia.org/wiki/Eng_(letter)  [LATIN CAPITAL LETTER ENG]
-"\u014A" => "N"
-
-# Ɲ  [LATIN CAPITAL LETTER N WITH LEFT HOOK]
-"\u019D" => "N"
-
-# Ǹ  [LATIN CAPITAL LETTER N WITH GRAVE]
-"\u01F8" => "N"
-
-# Ƞ  [LATIN CAPITAL LETTER N WITH LONG RIGHT LEG]
-"\u0220" => "N"
-
-# ɴ  [LATIN LETTER SMALL CAPITAL N]
-"\u0274" => "N"
-
-# ᴎ  [LATIN LETTER SMALL CAPITAL REVERSED N]
-"\u1D0E" => "N"
-
-# Ṅ  [LATIN CAPITAL LETTER N WITH DOT ABOVE]
-"\u1E44" => "N"
-
-# Ṇ  [LATIN CAPITAL LETTER N WITH DOT BELOW]
-"\u1E46" => "N"
-
-# Ṉ  [LATIN CAPITAL LETTER N WITH LINE BELOW]
-"\u1E48" => "N"
-
-# Ṋ  [LATIN CAPITAL LETTER N WITH CIRCUMFLEX BELOW]
-"\u1E4A" => "N"
-
-# Ⓝ  [CIRCLED LATIN CAPITAL LETTER N]
-"\u24C3" => "N"
-
-# N  [FULLWIDTH LATIN CAPITAL LETTER N]
-"\uFF2E" => "N"
-
-# ñ  [LATIN SMALL LETTER N WITH TILDE]
-"\u00F1" => "n"
-
-# ń  [LATIN SMALL LETTER N WITH ACUTE]
-"\u0144" => "n"
-
-# ņ  [LATIN SMALL LETTER N WITH CEDILLA]
-"\u0146" => "n"
-
-# ň  [LATIN SMALL LETTER N WITH CARON]
-"\u0148" => "n"
-
-# ʼn  [LATIN SMALL LETTER N PRECEDED BY APOSTROPHE]
-"\u0149" => "n"
-
-# ŋ  http://en.wikipedia.org/wiki/Eng_(letter)  [LATIN SMALL LETTER ENG]
-"\u014B" => "n"
-
-# ƞ  [LATIN SMALL LETTER N WITH LONG RIGHT LEG]
-"\u019E" => "n"
-
-# ǹ  [LATIN SMALL LETTER N WITH GRAVE]
-"\u01F9" => "n"
-
-# ȵ  [LATIN SMALL LETTER N WITH CURL]
-"\u0235" => "n"
-
-# ɲ  [LATIN SMALL LETTER N WITH LEFT HOOK]
-"\u0272" => "n"
-
-# ɳ  [LATIN SMALL LETTER N WITH RETROFLEX HOOK]
-"\u0273" => "n"
-
-# ᵰ  [LATIN SMALL LETTER N WITH MIDDLE TILDE]
-"\u1D70" => "n"
-
-# ᶇ  [LATIN SMALL LETTER N WITH PALATAL HOOK]
-"\u1D87" => "n"
-
-# ṅ  [LATIN SMALL LETTER N WITH DOT ABOVE]
-"\u1E45" => "n"
-
-# ṇ  [LATIN SMALL LETTER N WITH DOT BELOW]
-"\u1E47" => "n"
-
-# ṉ  [LATIN SMALL LETTER N WITH LINE BELOW]
-"\u1E49" => "n"
-
-# ṋ  [LATIN SMALL LETTER N WITH CIRCUMFLEX BELOW]
-"\u1E4B" => "n"
-
-# ⁿ  [SUPERSCRIPT LATIN SMALL LETTER N]
-"\u207F" => "n"
-
-# ⓝ  [CIRCLED LATIN SMALL LETTER N]
-"\u24DD" => "n"
-
-# n  [FULLWIDTH LATIN SMALL LETTER N]
-"\uFF4E" => "n"
-
-# NJ  [LATIN CAPITAL LETTER NJ]
-"\u01CA" => "NJ"
-
-# Nj  [LATIN CAPITAL LETTER N WITH SMALL LETTER J]
-"\u01CB" => "Nj"
-
-# ⒩  [PARENTHESIZED LATIN SMALL LETTER N]
-"\u24A9" => "(n)"
-
-# nj  [LATIN SMALL LETTER NJ]
-"\u01CC" => "nj"
-
-# Ò  [LATIN CAPITAL LETTER O WITH GRAVE]
-"\u00D2" => "O"
-
-# Ó  [LATIN CAPITAL LETTER O WITH ACUTE]
-"\u00D3" => "O"
-
-# Ô  [LATIN CAPITAL LETTER O WITH CIRCUMFLEX]
-"\u00D4" => "O"
-
-# Õ  [LATIN CAPITAL LETTER O WITH TILDE]
-"\u00D5" => "O"
-
-# Ö  [LATIN CAPITAL LETTER O WITH DIAERESIS]
-"\u00D6" => "O"
-
-# Ø  [LATIN CAPITAL LETTER O WITH STROKE]
-"\u00D8" => "O"
-
-# Ō  [LATIN CAPITAL LETTER O WITH MACRON]
-"\u014C" => "O"
-
-# Ŏ  [LATIN CAPITAL LETTER O WITH BREVE]
-"\u014E" => "O"
-
-# Ő  [LATIN CAPITAL LETTER O WITH DOUBLE ACUTE]
-"\u0150" => "O"
-
-# Ɔ  [LATIN CAPITAL LETTER OPEN O]
-"\u0186" => "O"
-
-# Ɵ  [LATIN CAPITAL LETTER O WITH MIDDLE TILDE]
-"\u019F" => "O"
-
-# Ơ  [LATIN CAPITAL LETTER O WITH HORN]
-"\u01A0" => "O"
-
-# Ǒ  [LATIN CAPITAL LETTER O WITH CARON]
-"\u01D1" => "O"
-
-# Ǫ  [LATIN CAPITAL LETTER O WITH OGONEK]
-"\u01EA" => "O"
-
-# Ǭ  [LATIN CAPITAL LETTER O WITH OGONEK AND MACRON]
-"\u01EC" => "O"
-
-# Ǿ  [LATIN CAPITAL LETTER O WITH STROKE AND ACUTE]
-"\u01FE" => "O"
-
-# Ȍ  [LATIN CAPITAL LETTER O WITH DOUBLE GRAVE]
-"\u020C" => "O"
-
-# Ȏ  [LATIN CAPITAL LETTER O WITH INVERTED BREVE]
-"\u020E" => "O"
-
-# Ȫ  [LATIN CAPITAL LETTER O WITH DIAERESIS AND MACRON]
-"\u022A" => "O"
-
-# Ȭ  [LATIN CAPITAL LETTER O WITH TILDE AND MACRON]
-"\u022C" => "O"
-
-# Ȯ  [LATIN CAPITAL LETTER O WITH DOT ABOVE]
-"\u022E" => "O"
-
-# Ȱ  [LATIN CAPITAL LETTER O WITH DOT ABOVE AND MACRON]
-"\u0230" => "O"
-
-# ᴏ  [LATIN LETTER SMALL CAPITAL O]
-"\u1D0F" => "O"
-
-# ᴐ  [LATIN LETTER SMALL CAPITAL OPEN O]
-"\u1D10" => "O"
-
-# Ṍ  [LATIN CAPITAL LETTER O WITH TILDE AND ACUTE]
-"\u1E4C" => "O"
-
-# Ṏ  [LATIN CAPITAL LETTER O WITH TILDE AND DIAERESIS]
-"\u1E4E" => "O"
-
-# Ṑ  [LATIN CAPITAL LETTER O WITH MACRON AND GRAVE]
-"\u1E50" => "O"
-
-# Ṓ  [LATIN CAPITAL LETTER O WITH MACRON AND ACUTE]
-"\u1E52" => "O"
-
-# Ọ  [LATIN CAPITAL LETTER O WITH DOT BELOW]
-"\u1ECC" => "O"
-
-# Ỏ  [LATIN CAPITAL LETTER O WITH HOOK ABOVE]
-"\u1ECE" => "O"
-
-# Ố  [LATIN CAPITAL LETTER O WITH CIRCUMFLEX AND ACUTE]
-"\u1ED0" => "O"
-
-# Ồ  [LATIN CAPITAL LETTER O WITH CIRCUMFLEX AND GRAVE]
-"\u1ED2" => "O"
-
-# Ổ  [LATIN CAPITAL LETTER O WITH CIRCUMFLEX AND HOOK ABOVE]
-"\u1ED4" => "O"
-
-# Ỗ  [LATIN CAPITAL LETTER O WITH CIRCUMFLEX AND TILDE]
-"\u1ED6" => "O"
-
-# Ộ  [LATIN CAPITAL LETTER O WITH CIRCUMFLEX AND DOT BELOW]
-"\u1ED8" => "O"
-
-# Ớ  [LATIN CAPITAL LETTER O WITH HORN AND ACUTE]
-"\u1EDA" => "O"
-
-# Ờ  [LATIN CAPITAL LETTER O WITH HORN AND GRAVE]
-"\u1EDC" => "O"
-
-# Ở  [LATIN CAPITAL LETTER O WITH HORN AND HOOK ABOVE]
-"\u1EDE" => "O"
-
-# Ỡ  [LATIN CAPITAL LETTER O WITH HORN AND TILDE]
-"\u1EE0" => "O"
-
-# Ợ  [LATIN CAPITAL LETTER O WITH HORN AND DOT BELOW]
-"\u1EE2" => "O"
-
-# Ⓞ  [CIRCLED LATIN CAPITAL LETTER O]
-"\u24C4" => "O"
-
-# Ꝋ  [LATIN CAPITAL LETTER O WITH LONG STROKE OVERLAY]
-"\uA74A" => "O"
-
-# Ꝍ  [LATIN CAPITAL LETTER O WITH LOOP]
-"\uA74C" => "O"
-
-# O  [FULLWIDTH LATIN CAPITAL LETTER O]
-"\uFF2F" => "O"
-
-# ò  [LATIN SMALL LETTER O WITH GRAVE]
-"\u00F2" => "o"
-
-# ó  [LATIN SMALL LETTER O WITH ACUTE]
-"\u00F3" => "o"
-
-# ô  [LATIN SMALL LETTER O WITH CIRCUMFLEX]
-"\u00F4" => "o"
-
-# õ  [LATIN SMALL LETTER O WITH TILDE]
-"\u00F5" => "o"
-
-# ö  [LATIN SMALL LETTER O WITH DIAERESIS]
-"\u00F6" => "o"
-
-# ø  [LATIN SMALL LETTER O WITH STROKE]
-"\u00F8" => "o"
-
-# ō  [LATIN SMALL LETTER O WITH MACRON]
-"\u014D" => "o"
-
-# ŏ  [LATIN SMALL LETTER O WITH BREVE]
-"\u014F" => "o"
-
-# ő  [LATIN SMALL LETTER O WITH DOUBLE ACUTE]
-"\u0151" => "o"
-
-# ơ  [LATIN SMALL LETTER O WITH HORN]
-"\u01A1" => "o"
-
-# ǒ  [LATIN SMALL LETTER O WITH CARON]
-"\u01D2" => "o"
-
-# ǫ  [LATIN SMALL LETTER O WITH OGONEK]
-"\u01EB" => "o"
-
-# ǭ  [LATIN SMALL LETTER O WITH OGONEK AND MACRON]
-"\u01ED" => "o"
-
-# ǿ  [LATIN SMALL LETTER O WITH STROKE AND ACUTE]
-"\u01FF" => "o"
-
-# ȍ  [LATIN SMALL LETTER O WITH DOUBLE GRAVE]
-"\u020D" => "o"
-
-# ȏ  [LATIN SMALL LETTER O WITH INVERTED BREVE]
-"\u020F" => "o"
-
-# ȫ  [LATIN SMALL LETTER O WITH DIAERESIS AND MACRON]
-"\u022B" => "o"
-
-# ȭ  [LATIN SMALL LETTER O WITH TILDE AND MACRON]
-"\u022D" => "o"
-
-# ȯ  [LATIN SMALL LETTER O WITH DOT ABOVE]
-"\u022F" => "o"
-
-# ȱ  [LATIN SMALL LETTER O WITH DOT ABOVE AND MACRON]
-"\u0231" => "o"
-
-# ɔ  [LATIN SMALL LETTER OPEN O]
-"\u0254" => "o"
-
-# ɵ  [LATIN SMALL LETTER BARRED O]
-"\u0275" => "o"
-
-# ᴖ  [LATIN SMALL LETTER TOP HALF O]
-"\u1D16" => "o"
-
-# ᴗ  [LATIN SMALL LETTER BOTTOM HALF O]
-"\u1D17" => "o"
-
-# ᶗ  [LATIN SMALL LETTER OPEN O WITH RETROFLEX HOOK]
-"\u1D97" => "o"
-
-# ṍ  [LATIN SMALL LETTER O WITH TILDE AND ACUTE]
-"\u1E4D" => "o"
-
-# ṏ  [LATIN SMALL LETTER O WITH TILDE AND DIAERESIS]
-"\u1E4F" => "o"
-
-# ṑ  [LATIN SMALL LETTER O WITH MACRON AND GRAVE]
-"\u1E51" => "o"
-
-# ṓ  [LATIN SMALL LETTER O WITH MACRON AND ACUTE]
-"\u1E53" => "o"
-
-# ọ  [LATIN SMALL LETTER O WITH DOT BELOW]
-"\u1ECD" => "o"
-
-# ỏ  [LATIN SMALL LETTER O WITH HOOK ABOVE]
-"\u1ECF" => "o"
-
-# ố  [LATIN SMALL LETTER O WITH CIRCUMFLEX AND ACUTE]
-"\u1ED1" => "o"
-
-# ồ  [LATIN SMALL LETTER O WITH CIRCUMFLEX AND GRAVE]
-"\u1ED3" => "o"
-
-# ổ  [LATIN SMALL LETTER O WITH CIRCUMFLEX AND HOOK ABOVE]
-"\u1ED5" => "o"
-
-# ỗ  [LATIN SMALL LETTER O WITH CIRCUMFLEX AND TILDE]
-"\u1ED7" => "o"
-
-# ộ  [LATIN SMALL LETTER O WITH CIRCUMFLEX AND DOT BELOW]
-"\u1ED9" => "o"
-
-# ớ  [LATIN SMALL LETTER O WITH HORN AND ACUTE]
-"\u1EDB" => "o"
-
-# ờ  [LATIN SMALL LETTER O WITH HORN AND GRAVE]
-"\u1EDD" => "o"
-
-# ở  [LATIN SMALL LETTER O WITH HORN AND HOOK ABOVE]
-"\u1EDF" => "o"
-
-# ỡ  [LATIN SMALL LETTER O WITH HORN AND TILDE]
-"\u1EE1" => "o"
-
-# ợ  [LATIN SMALL LETTER O WITH HORN AND DOT BELOW]
-"\u1EE3" => "o"
-
-# ₒ  [LATIN SUBSCRIPT SMALL LETTER O]
-"\u2092" => "o"
-
-# ⓞ  [CIRCLED LATIN SMALL LETTER O]
-"\u24DE" => "o"
-
-# ⱺ  [LATIN SMALL LETTER O WITH LOW RING INSIDE]
-"\u2C7A" => "o"
-
-# ꝋ  [LATIN SMALL LETTER O WITH LONG STROKE OVERLAY]
-"\uA74B" => "o"
-
-# ꝍ  [LATIN SMALL LETTER O WITH LOOP]
-"\uA74D" => "o"
-
-# o  [FULLWIDTH LATIN SMALL LETTER O]
-"\uFF4F" => "o"
-
-# Œ  [LATIN CAPITAL LIGATURE OE]
-"\u0152" => "OE"
-
-# ɶ  [LATIN LETTER SMALL CAPITAL OE]
-"\u0276" => "OE"
-
-# Ꝏ  [LATIN CAPITAL LETTER OO]
-"\uA74E" => "OO"
-
-# Ȣ  http://en.wikipedia.org/wiki/OU  [LATIN CAPITAL LETTER OU]
-"\u0222" => "OU"
-
-# ᴕ  [LATIN LETTER SMALL CAPITAL OU]
-"\u1D15" => "OU"
-
-# ⒪  [PARENTHESIZED LATIN SMALL LETTER O]
-"\u24AA" => "(o)"
-
-# œ  [LATIN SMALL LIGATURE OE]
-"\u0153" => "oe"
-
-# ᴔ  [LATIN SMALL LETTER TURNED OE]
-"\u1D14" => "oe"
-
-# ꝏ  [LATIN SMALL LETTER OO]
-"\uA74F" => "oo"
-
-# ȣ  http://en.wikipedia.org/wiki/OU  [LATIN SMALL LETTER OU]
-"\u0223" => "ou"
-
-# Ƥ  [LATIN CAPITAL LETTER P WITH HOOK]
-"\u01A4" => "P"
-
-# ᴘ  [LATIN LETTER SMALL CAPITAL P]
-"\u1D18" => "P"
-
-# Ṕ  [LATIN CAPITAL LETTER P WITH ACUTE]
-"\u1E54" => "P"
-
-# Ṗ  [LATIN CAPITAL LETTER P WITH DOT ABOVE]
-"\u1E56" => "P"
-
-# Ⓟ  [CIRCLED LATIN CAPITAL LETTER P]
-"\u24C5" => "P"
-
-# Ᵽ  [LATIN CAPITAL LETTER P WITH STROKE]
-"\u2C63" => "P"
-
-# Ꝑ  [LATIN CAPITAL LETTER P WITH STROKE THROUGH DESCENDER]
-"\uA750" => "P"
-
-# Ꝓ  [LATIN CAPITAL LETTER P WITH FLOURISH]
-"\uA752" => "P"
-
-# Ꝕ  [LATIN CAPITAL LETTER P WITH SQUIRREL TAIL]
-"\uA754" => "P"
-
-# P  [FULLWIDTH LATIN CAPITAL LETTER P]
-"\uFF30" => "P"
-
-# ƥ  [LATIN SMALL LETTER P WITH HOOK]
-"\u01A5" => "p"
-
-# ᵱ  [LATIN SMALL LETTER P WITH MIDDLE TILDE]
-"\u1D71" => "p"
-
-# ᵽ  [LATIN SMALL LETTER P WITH STROKE]
-"\u1D7D" => "p"
-
-# ᶈ  [LATIN SMALL LETTER P WITH PALATAL HOOK]
-"\u1D88" => "p"
-
-# ṕ  [LATIN SMALL LETTER P WITH ACUTE]
-"\u1E55" => "p"
-
-# ṗ  [LATIN SMALL LETTER P WITH DOT ABOVE]
-"\u1E57" => "p"
-
-# ⓟ  [CIRCLED LATIN SMALL LETTER P]
-"\u24DF" => "p"
-
-# ꝑ  [LATIN SMALL LETTER P WITH STROKE THROUGH DESCENDER]
-"\uA751" => "p"
-
-# ꝓ  [LATIN SMALL LETTER P WITH FLOURISH]
-"\uA753" => "p"
-
-# ꝕ  [LATIN SMALL LETTER P WITH SQUIRREL TAIL]
-"\uA755" => "p"
-
-# ꟼ  [LATIN EPIGRAPHIC LETTER REVERSED P]
-"\uA7FC" => "p"
-
-# p  [FULLWIDTH LATIN SMALL LETTER P]
-"\uFF50" => "p"
-
-# ⒫  [PARENTHESIZED LATIN SMALL LETTER P]
-"\u24AB" => "(p)"
-
-# Ɋ  [LATIN CAPITAL LETTER SMALL Q WITH HOOK TAIL]
-"\u024A" => "Q"
-
-# Ⓠ  [CIRCLED LATIN CAPITAL LETTER Q]
-"\u24C6" => "Q"
-
-# Ꝗ  [LATIN CAPITAL LETTER Q WITH STROKE THROUGH DESCENDER]
-"\uA756" => "Q"
-
-# Ꝙ  [LATIN CAPITAL LETTER Q WITH DIAGONAL STROKE]
-"\uA758" => "Q"
-
-# Q  [FULLWIDTH LATIN CAPITAL LETTER Q]
-"\uFF31" => "Q"
-
-# ĸ  http://en.wikipedia.org/wiki/Kra_(letter)  [LATIN SMALL LETTER KRA]
-"\u0138" => "q"
-
-# ɋ  [LATIN SMALL LETTER Q WITH HOOK TAIL]
-"\u024B" => "q"
-
-# ʠ  [LATIN SMALL LETTER Q WITH HOOK]
-"\u02A0" => "q"
-
-# ⓠ  [CIRCLED LATIN SMALL LETTER Q]
-"\u24E0" => "q"
-
-# ꝗ  [LATIN SMALL LETTER Q WITH STROKE THROUGH DESCENDER]
-"\uA757" => "q"
-
-# ꝙ  [LATIN SMALL LETTER Q WITH DIAGONAL STROKE]
-"\uA759" => "q"
-
-# q  [FULLWIDTH LATIN SMALL LETTER Q]
-"\uFF51" => "q"
-
-# ⒬  [PARENTHESIZED LATIN SMALL LETTER Q]
-"\u24AC" => "(q)"
-
-# ȹ  [LATIN SMALL LETTER QP DIGRAPH]
-"\u0239" => "qp"
-
-# Ŕ  [LATIN CAPITAL LETTER R WITH ACUTE]
-"\u0154" => "R"
-
-# Ŗ  [LATIN CAPITAL LETTER R WITH CEDILLA]
-"\u0156" => "R"
-
-# Ř  [LATIN CAPITAL LETTER R WITH CARON]
-"\u0158" => "R"
-
-# Ȓ  [LATIN CAPITAL LETTER R WITH DOUBLE GRAVE]
-"\u0210" => "R"
-
-# Ȓ  [LATIN CAPITAL LETTER R WITH INVERTED BREVE]
-"\u0212" => "R"
-
-# Ɍ  [LATIN CAPITAL LETTER R WITH STROKE]
-"\u024C" => "R"
-
-# ʀ  [LATIN LETTER SMALL CAPITAL R]
-"\u0280" => "R"
-
-# ʁ  [LATIN LETTER SMALL CAPITAL INVERTED R]
-"\u0281" => "R"
-
-# ᴙ  [LATIN LETTER SMALL CAPITAL REVERSED R]
-"\u1D19" => "R"
-
-# ᴚ  [LATIN LETTER SMALL CAPITAL TURNED R]
-"\u1D1A" => "R"
-
-# Ṙ  [LATIN CAPITAL LETTER R WITH DOT ABOVE]
-"\u1E58" => "R"
-
-# Ṛ  [LATIN CAPITAL LETTER R WITH DOT BELOW]
-"\u1E5A" => "R"
-
-# Ṝ  [LATIN CAPITAL LETTER R WITH DOT BELOW AND MACRON]
-"\u1E5C" => "R"
-
-# Ṟ  [LATIN CAPITAL LETTER R WITH LINE BELOW]
-"\u1E5E" => "R"
-
-# Ⓡ  [CIRCLED LATIN CAPITAL LETTER R]
-"\u24C7" => "R"
-
-# Ɽ  [LATIN CAPITAL LETTER R WITH TAIL]
-"\u2C64" => "R"
-
-# Ꝛ  [LATIN CAPITAL LETTER R ROTUNDA]
-"\uA75A" => "R"
-
-# Ꞃ  [LATIN CAPITAL LETTER INSULAR R]
-"\uA782" => "R"
-
-# R  [FULLWIDTH LATIN CAPITAL LETTER R]
-"\uFF32" => "R"
-
-# ŕ  [LATIN SMALL LETTER R WITH ACUTE]
-"\u0155" => "r"
-
-# ŗ  [LATIN SMALL LETTER R WITH CEDILLA]
-"\u0157" => "r"
-
-# ř  [LATIN SMALL LETTER R WITH CARON]
-"\u0159" => "r"
-
-# ȑ  [LATIN SMALL LETTER R WITH DOUBLE GRAVE]
-"\u0211" => "r"
-
-# ȓ  [LATIN SMALL LETTER R WITH INVERTED BREVE]
-"\u0213" => "r"
-
-# ɍ  [LATIN SMALL LETTER R WITH STROKE]
-"\u024D" => "r"
-
-# ɼ  [LATIN SMALL LETTER R WITH LONG LEG]
-"\u027C" => "r"
-
-# ɽ  [LATIN SMALL LETTER R WITH TAIL]
-"\u027D" => "r"
-
-# ɾ  [LATIN SMALL LETTER R WITH FISHHOOK]
-"\u027E" => "r"
-
-# ɿ  [LATIN SMALL LETTER REVERSED R WITH FISHHOOK]
-"\u027F" => "r"
-
-# ᵣ  [LATIN SUBSCRIPT SMALL LETTER R]
-"\u1D63" => "r"
-
-# ᵲ  [LATIN SMALL LETTER R WITH MIDDLE TILDE]
-"\u1D72" => "r"
-
-# ᵳ  [LATIN SMALL LETTER R WITH FISHHOOK AND MIDDLE TILDE]
-"\u1D73" => "r"
-
-# ᶉ  [LATIN SMALL LETTER R WITH PALATAL HOOK]
-"\u1D89" => "r"
-
-# ṙ  [LATIN SMALL LETTER R WITH DOT ABOVE]
-"\u1E59" => "r"
-
-# ṛ  [LATIN SMALL LETTER R WITH DOT BELOW]
-"\u1E5B" => "r"
-
-# ṝ  [LATIN SMALL LETTER R WITH DOT BELOW AND MACRON]
-"\u1E5D" => "r"
-
-# ṟ  [LATIN SMALL LETTER R WITH LINE BELOW]
-"\u1E5F" => "r"
-
-# ⓡ  [CIRCLED LATIN SMALL LETTER R]
-"\u24E1" => "r"
-
-# ꝛ  [LATIN SMALL LETTER R ROTUNDA]
-"\uA75B" => "r"
-
-# ꞃ  [LATIN SMALL LETTER INSULAR R]
-"\uA783" => "r"
-
-# r  [FULLWIDTH LATIN SMALL LETTER R]
-"\uFF52" => "r"
-
-# ⒭  [PARENTHESIZED LATIN SMALL LETTER R]
-"\u24AD" => "(r)"
-
-# Ś  [LATIN CAPITAL LETTER S WITH ACUTE]
-"\u015A" => "S"
-
-# Ŝ  [LATIN CAPITAL LETTER S WITH CIRCUMFLEX]
-"\u015C" => "S"
-
-# Ş  [LATIN CAPITAL LETTER S WITH CEDILLA]
-"\u015E" => "S"
-
-# Š  [LATIN CAPITAL LETTER S WITH CARON]
-"\u0160" => "S"
-
-# Ș  [LATIN CAPITAL LETTER S WITH COMMA BELOW]
-"\u0218" => "S"
-
-# Ṡ  [LATIN CAPITAL LETTER S WITH DOT ABOVE]
-"\u1E60" => "S"
-
-# Ṣ  [LATIN CAPITAL LETTER S WITH DOT BELOW]
-"\u1E62" => "S"
-
-# Ṥ  [LATIN CAPITAL LETTER S WITH ACUTE AND DOT ABOVE]
-"\u1E64" => "S"
-
-# Ṧ  [LATIN CAPITAL LETTER S WITH CARON AND DOT ABOVE]
-"\u1E66" => "S"
-
-# Ṩ  [LATIN CAPITAL LETTER S WITH DOT BELOW AND DOT ABOVE]
-"\u1E68" => "S"
-
-# Ⓢ  [CIRCLED LATIN CAPITAL LETTER S]
-"\u24C8" => "S"
-
-# ꜱ  [LATIN LETTER SMALL CAPITAL S]
-"\uA731" => "S"
-
-# ꞅ  [LATIN SMALL LETTER INSULAR S]
-"\uA785" => "S"
-
-# S  [FULLWIDTH LATIN CAPITAL LETTER S]
-"\uFF33" => "S"
-
-# ś  [LATIN SMALL LETTER S WITH ACUTE]
-"\u015B" => "s"
-
-# ŝ  [LATIN SMALL LETTER S WITH CIRCUMFLEX]
-"\u015D" => "s"
-
-# ş  [LATIN SMALL LETTER S WITH CEDILLA]
-"\u015F" => "s"
-
-# š  [LATIN SMALL LETTER S WITH CARON]
-"\u0161" => "s"
-
-# ſ  http://en.wikipedia.org/wiki/Long_S  [LATIN SMALL LETTER LONG S]
-"\u017F" => "s"
-
-# ș  [LATIN SMALL LETTER S WITH COMMA BELOW]
-"\u0219" => "s"
-
-# ȿ  [LATIN SMALL LETTER S WITH SWASH TAIL]
-"\u023F" => "s"
-
-# ʂ  [LATIN SMALL LETTER S WITH HOOK]
-"\u0282" => "s"
-
-# ᵴ  [LATIN SMALL LETTER S WITH MIDDLE TILDE]
-"\u1D74" => "s"
-
-# ᶊ  [LATIN SMALL LETTER S WITH PALATAL HOOK]
-"\u1D8A" => "s"
-
-# ṡ  [LATIN SMALL LETTER S WITH DOT ABOVE]
-"\u1E61" => "s"
-
-# ṣ  [LATIN SMALL LETTER S WITH DOT BELOW]
-"\u1E63" => "s"
-
-# ṥ  [LATIN SMALL LETTER S WITH ACUTE AND DOT ABOVE]
-"\u1E65" => "s"
-
-# ṧ  [LATIN SMALL LETTER S WITH CARON AND DOT ABOVE]
-"\u1E67" => "s"
-
-# ṩ  [LATIN SMALL LETTER S WITH DOT BELOW AND DOT ABOVE]
-"\u1E69" => "s"
-
-# ẜ  [LATIN SMALL LETTER LONG S WITH DIAGONAL STROKE]
-"\u1E9C" => "s"
-
-# ẝ  [LATIN SMALL LETTER LONG S WITH HIGH STROKE]
-"\u1E9D" => "s"
-
-# ⓢ  [CIRCLED LATIN SMALL LETTER S]
-"\u24E2" => "s"
-
-# Ꞅ  [LATIN CAPITAL LETTER INSULAR S]
-"\uA784" => "s"
-
-# s  [FULLWIDTH LATIN SMALL LETTER S]
-"\uFF53" => "s"
-
-# ẞ  [LATIN CAPITAL LETTER SHARP S]
-"\u1E9E" => "SS"
-
-# ⒮  [PARENTHESIZED LATIN SMALL LETTER S]
-"\u24AE" => "(s)"
-
-# ß  [LATIN SMALL LETTER SHARP S]
-"\u00DF" => "ss"
-
-# st  [LATIN SMALL LIGATURE ST]
-"\uFB06" => "st"
-
-# Ţ  [LATIN CAPITAL LETTER T WITH CEDILLA]
-"\u0162" => "T"
-
-# Ť  [LATIN CAPITAL LETTER T WITH CARON]
-"\u0164" => "T"
-
-# Ŧ  [LATIN CAPITAL LETTER T WITH STROKE]
-"\u0166" => "T"
-
-# Ƭ  [LATIN CAPITAL LETTER T WITH HOOK]
-"\u01AC" => "T"
-
-# Ʈ  [LATIN CAPITAL LETTER T WITH RETROFLEX HOOK]
-"\u01AE" => "T"
-
-# Ț  [LATIN CAPITAL LETTER T WITH COMMA BELOW]
-"\u021A" => "T"
-
-# Ⱦ  [LATIN CAPITAL LETTER T WITH DIAGONAL STROKE]
-"\u023E" => "T"
-
-# ᴛ  [LATIN LETTER SMALL CAPITAL T]
-"\u1D1B" => "T"
-
-# Ṫ  [LATIN CAPITAL LETTER T WITH DOT ABOVE]
-"\u1E6A" => "T"
-
-# Ṭ  [LATIN CAPITAL LETTER T WITH DOT BELOW]
-"\u1E6C" => "T"
-
-# Ṯ  [LATIN CAPITAL LETTER T WITH LINE BELOW]
-"\u1E6E" => "T"
-
-# Ṱ  [LATIN CAPITAL LETTER T WITH CIRCUMFLEX BELOW]
-"\u1E70" => "T"
-
-# Ⓣ  [CIRCLED LATIN CAPITAL LETTER T]
-"\u24C9" => "T"
-
-# Ꞇ  [LATIN CAPITAL LETTER INSULAR T]
-"\uA786" => "T"
-
-# T  [FULLWIDTH LATIN CAPITAL LETTER T]
-"\uFF34" => "T"
-
-# ţ  [LATIN SMALL LETTER T WITH CEDILLA]
-"\u0163" => "t"
-
-# ť  [LATIN SMALL LETTER T WITH CARON]
-"\u0165" => "t"
-
-# ŧ  [LATIN SMALL LETTER T WITH STROKE]
-"\u0167" => "t"
-
-# ƫ  [LATIN SMALL LETTER T WITH PALATAL HOOK]
-"\u01AB" => "t"
-
-# ƭ  [LATIN SMALL LETTER T WITH HOOK]
-"\u01AD" => "t"
-
-# ț  [LATIN SMALL LETTER T WITH COMMA BELOW]
-"\u021B" => "t"
-
-# ȶ  [LATIN SMALL LETTER T WITH CURL]
-"\u0236" => "t"
-
-# ʇ  [LATIN SMALL LETTER TURNED T]
-"\u0287" => "t"
-
-# ʈ  [LATIN SMALL LETTER T WITH RETROFLEX HOOK]
-"\u0288" => "t"
-
-# ᵵ  [LATIN SMALL LETTER T WITH MIDDLE TILDE]
-"\u1D75" => "t"
-
-# ṫ  [LATIN SMALL LETTER T WITH DOT ABOVE]
-"\u1E6B" => "t"
-
-# ṭ  [LATIN SMALL LETTER T WITH DOT BELOW]
-"\u1E6D" => "t"
-
-# ṯ  [LATIN SMALL LETTER T WITH LINE BELOW]
-"\u1E6F" => "t"
-
-# ṱ  [LATIN SMALL LETTER T WITH CIRCUMFLEX BELOW]
-"\u1E71" => "t"
-
-# ẗ  [LATIN SMALL LETTER T WITH DIAERESIS]
-"\u1E97" => "t"
-
-# ⓣ  [CIRCLED LATIN SMALL LETTER T]
-"\u24E3" => "t"
-
-# ⱦ  [LATIN SMALL LETTER T WITH DIAGONAL STROKE]
-"\u2C66" => "t"
-
-# t  [FULLWIDTH LATIN SMALL LETTER T]
-"\uFF54" => "t"
-
-# Þ  [LATIN CAPITAL LETTER THORN]
-"\u00DE" => "TH"
-
-# Ꝧ  [LATIN CAPITAL LETTER THORN WITH STROKE THROUGH DESCENDER]
-"\uA766" => "TH"
-
-# Ꜩ  [LATIN CAPITAL LETTER TZ]
-"\uA728" => "TZ"
-
-# ⒯  [PARENTHESIZED LATIN SMALL LETTER T]
-"\u24AF" => "(t)"
-
-# ʨ  [LATIN SMALL LETTER TC DIGRAPH WITH CURL]
-"\u02A8" => "tc"
-
-# þ  [LATIN SMALL LETTER THORN]
-"\u00FE" => "th"
-
-# ᵺ  [LATIN SMALL LETTER TH WITH STRIKETHROUGH]
-"\u1D7A" => "th"
-
-# ꝧ  [LATIN SMALL LETTER THORN WITH STROKE THROUGH DESCENDER]
-"\uA767" => "th"
-
-# ʦ  [LATIN SMALL LETTER TS DIGRAPH]
-"\u02A6" => "ts"
-
-# ꜩ  [LATIN SMALL LETTER TZ]
-"\uA729" => "tz"
-
-# Ù  [LATIN CAPITAL LETTER U WITH GRAVE]
-"\u00D9" => "U"
-
-# Ú  [LATIN CAPITAL LETTER U WITH ACUTE]
-"\u00DA" => "U"
-
-# Û  [LATIN CAPITAL LETTER U WITH CIRCUMFLEX]
-"\u00DB" => "U"
-
-# Ü  [LATIN CAPITAL LETTER U WITH DIAERESIS]
-"\u00DC" => "U"
-
-# Ũ  [LATIN CAPITAL LETTER U WITH TILDE]
-"\u0168" => "U"
-
-# Ū  [LATIN CAPITAL LETTER U WITH MACRON]
-"\u016A" => "U"
-
-# Ŭ  [LATIN CAPITAL LETTER U WITH BREVE]
-"\u016C" => "U"
-
-# Ů  [LATIN CAPITAL LETTER U WITH RING ABOVE]
-"\u016E" => "U"
-
-# Ű  [LATIN CAPITAL LETTER U WITH DOUBLE ACUTE]
-"\u0170" => "U"
-
-# Ų  [LATIN CAPITAL LETTER U WITH OGONEK]
-"\u0172" => "U"
-
-# Ư  [LATIN CAPITAL LETTER U WITH HORN]
-"\u01AF" => "U"
-
-# Ǔ  [LATIN CAPITAL LETTER U WITH CARON]
-"\u01D3" => "U"
-
-# Ǖ  [LATIN CAPITAL LETTER U WITH DIAERESIS AND MACRON]
-"\u01D5" => "U"
-
-# Ǘ  [LATIN CAPITAL LETTER U WITH DIAERESIS AND ACUTE]
-"\u01D7" => "U"
-
-# Ǚ  [LATIN CAPITAL LETTER U WITH DIAERESIS AND CARON]
-"\u01D9" => "U"
-
-# Ǜ  [LATIN CAPITAL LETTER U WITH DIAERESIS AND GRAVE]
-"\u01DB" => "U"
-
-# Ȕ  [LATIN CAPITAL LETTER U WITH DOUBLE GRAVE]
-"\u0214" => "U"
-
-# Ȗ  [LATIN CAPITAL LETTER U WITH INVERTED BREVE]
-"\u0216" => "U"
-
-# Ʉ  [LATIN CAPITAL LETTER U BAR]
-"\u0244" => "U"
-
-# ᴜ  [LATIN LETTER SMALL CAPITAL U]
-"\u1D1C" => "U"
-
-# ᵾ  [LATIN SMALL CAPITAL LETTER U WITH STROKE]
-"\u1D7E" => "U"
-
-# Ṳ  [LATIN CAPITAL LETTER U WITH DIAERESIS BELOW]
-"\u1E72" => "U"
-
-# Ṵ  [LATIN CAPITAL LETTER U WITH TILDE BELOW]
-"\u1E74" => "U"
-
-# Ṷ  [LATIN CAPITAL LETTER U WITH CIRCUMFLEX BELOW]
-"\u1E76" => "U"
-
-# Ṹ  [LATIN CAPITAL LETTER U WITH TILDE AND ACUTE]
-"\u1E78" => "U"
-
-# Ṻ  [LATIN CAPITAL LETTER U WITH MACRON AND DIAERESIS]
-"\u1E7A" => "U"
-
-# Ụ  [LATIN CAPITAL LETTER U WITH DOT BELOW]
-"\u1EE4" => "U"
-
-# Ủ  [LATIN CAPITAL LETTER U WITH HOOK ABOVE]
-"\u1EE6" => "U"
-
-# Ứ  [LATIN CAPITAL LETTER U WITH HORN AND ACUTE]
-"\u1EE8" => "U"
-
-# Ừ  [LATIN CAPITAL LETTER U WITH HORN AND GRAVE]
-"\u1EEA" => "U"
-
-# Ử  [LATIN CAPITAL LETTER U WITH HORN AND HOOK ABOVE]
-"\u1EEC" => "U"
-
-# Ữ  [LATIN CAPITAL LETTER U WITH HORN AND TILDE]
-"\u1EEE" => "U"
-
-# Ự  [LATIN CAPITAL LETTER U WITH HORN AND DOT BELOW]
-"\u1EF0" => "U"
-
-# Ⓤ  [CIRCLED LATIN CAPITAL LETTER U]
-"\u24CA" => "U"
-
-# U  [FULLWIDTH LATIN CAPITAL LETTER U]
-"\uFF35" => "U"
-
-# ù  [LATIN SMALL LETTER U WITH GRAVE]
-"\u00F9" => "u"
-
-# ú  [LATIN SMALL LETTER U WITH ACUTE]
-"\u00FA" => "u"
-
-# û  [LATIN SMALL LETTER U WITH CIRCUMFLEX]
-"\u00FB" => "u"
-
-# ü  [LATIN SMALL LETTER U WITH DIAERESIS]
-"\u00FC" => "u"
-
-# ũ  [LATIN SMALL LETTER U WITH TILDE]
-"\u0169" => "u"
-
-# ū  [LATIN SMALL LETTER U WITH MACRON]
-"\u016B" => "u"
-
-# ŭ  [LATIN SMALL LETTER U WITH BREVE]
-"\u016D" => "u"
-
-# ů  [LATIN SMALL LETTER U WITH RING ABOVE]
-"\u016F" => "u"
-
-# ű  [LATIN SMALL LETTER U WITH DOUBLE ACUTE]
-"\u0171" => "u"
-
-# ų  [LATIN SMALL LETTER U WITH OGONEK]
-"\u0173" => "u"
-
-# ư  [LATIN SMALL LETTER U WITH HORN]
-"\u01B0" => "u"
-
-# ǔ  [LATIN SMALL LETTER U WITH CARON]
-"\u01D4" => "u"
-
-# ǖ  [LATIN SMALL LETTER U WITH DIAERESIS AND MACRON]
-"\u01D6" => "u"
-
-# ǘ  [LATIN SMALL LETTER U WITH DIAERESIS AND ACUTE]
-"\u01D8" => "u"
-
-# ǚ  [LATIN SMALL LETTER U WITH DIAERESIS AND CARON]
-"\u01DA" => "u"
-
-# ǜ  [LATIN SMALL LETTER U WITH DIAERESIS AND GRAVE]
-"\u01DC" => "u"
-
-# ȕ  [LATIN SMALL LETTER U WITH DOUBLE GRAVE]
-"\u0215" => "u"
-
-# ȗ  [LATIN SMALL LETTER U WITH INVERTED BREVE]
-"\u0217" => "u"
-
-# ʉ  [LATIN SMALL LETTER U BAR]
-"\u0289" => "u"
-
-# ᵤ  [LATIN SUBSCRIPT SMALL LETTER U]
-"\u1D64" => "u"
-
-# ᶙ  [LATIN SMALL LETTER U WITH RETROFLEX HOOK]
-"\u1D99" => "u"
-
-# ṳ  [LATIN SMALL LETTER U WITH DIAERESIS BELOW]
-"\u1E73" => "u"
-
-# ṵ  [LATIN SMALL LETTER U WITH TILDE BELOW]
-"\u1E75" => "u"
-
-# ṷ  [LATIN SMALL LETTER U WITH CIRCUMFLEX BELOW]
-"\u1E77" => "u"
-
-# ṹ  [LATIN SMALL LETTER U WITH TILDE AND ACUTE]
-"\u1E79" => "u"
-
-# ṻ  [LATIN SMALL LETTER U WITH MACRON AND DIAERESIS]
-"\u1E7B" => "u"
-
-# ụ  [LATIN SMALL LETTER U WITH DOT BELOW]
-"\u1EE5" => "u"
-
-# ủ  [LATIN SMALL LETTER U WITH HOOK ABOVE]
-"\u1EE7" => "u"
-
-# ứ  [LATIN SMALL LETTER U WITH HORN AND ACUTE]
-"\u1EE9" => "u"
-
-# ừ  [LATIN SMALL LETTER U WITH HORN AND GRAVE]
-"\u1EEB" => "u"
-
-# ử  [LATIN SMALL LETTER U WITH HORN AND HOOK ABOVE]
-"\u1EED" => "u"
-
-# ữ  [LATIN SMALL LETTER U WITH HORN AND TILDE]
-"\u1EEF" => "u"
-
-# ự  [LATIN SMALL LETTER U WITH HORN AND DOT BELOW]
-"\u1EF1" => "u"
-
-# ⓤ  [CIRCLED LATIN SMALL LETTER U]
-"\u24E4" => "u"
-
-# u  [FULLWIDTH LATIN SMALL LETTER U]
-"\uFF55" => "u"
-
-# ⒰  [PARENTHESIZED LATIN SMALL LETTER U]
-"\u24B0" => "(u)"
-
-# ᵫ  [LATIN SMALL LETTER UE]
-"\u1D6B" => "ue"
-
-# Ʋ  [LATIN CAPITAL LETTER V WITH HOOK]
-"\u01B2" => "V"
-
-# Ʌ  [LATIN CAPITAL LETTER TURNED V]
-"\u0245" => "V"
-
-# ᴠ  [LATIN LETTER SMALL CAPITAL V]
-"\u1D20" => "V"
-
-# Ṽ  [LATIN CAPITAL LETTER V WITH TILDE]
-"\u1E7C" => "V"
-
-# Ṿ  [LATIN CAPITAL LETTER V WITH DOT BELOW]
-"\u1E7E" => "V"
-
-# Ỽ  [LATIN CAPITAL LETTER MIDDLE-WELSH V]
-"\u1EFC" => "V"
-
-# Ⓥ  [CIRCLED LATIN CAPITAL LETTER V]
-"\u24CB" => "V"
-
-# Ꝟ  [LATIN CAPITAL LETTER V WITH DIAGONAL STROKE]
-"\uA75E" => "V"
-
-# Ꝩ  [LATIN CAPITAL LETTER VEND]
-"\uA768" => "V"
-
-# V  [FULLWIDTH LATIN CAPITAL LETTER V]
-"\uFF36" => "V"
-
-# ʋ  [LATIN SMALL LETTER V WITH HOOK]
-"\u028B" => "v"
-
-# ʌ  [LATIN SMALL LETTER TURNED V]
-"\u028C" => "v"
-
-# ᵥ  [LATIN SUBSCRIPT SMALL LETTER V]
-"\u1D65" => "v"
-
-# ᶌ  [LATIN SMALL LETTER V WITH PALATAL HOOK]
-"\u1D8C" => "v"
-
-# ṽ  [LATIN SMALL LETTER V WITH TILDE]
-"\u1E7D" => "v"
-
-# ṿ  [LATIN SMALL LETTER V WITH DOT BELOW]
-"\u1E7F" => "v"
-
-# ⓥ  [CIRCLED LATIN SMALL LETTER V]
-"\u24E5" => "v"
-
-# ⱱ  [LATIN SMALL LETTER V WITH RIGHT HOOK]
-"\u2C71" => "v"
-
-# ⱴ  [LATIN SMALL LETTER V WITH CURL]
-"\u2C74" => "v"
-
-# ꝟ  [LATIN SMALL LETTER V WITH DIAGONAL STROKE]
-"\uA75F" => "v"
-
-# v  [FULLWIDTH LATIN SMALL LETTER V]
-"\uFF56" => "v"
-
-# Ꝡ  [LATIN CAPITAL LETTER VY]
-"\uA760" => "VY"
-
-# ⒱  [PARENTHESIZED LATIN SMALL LETTER V]
-"\u24B1" => "(v)"
-
-# ꝡ  [LATIN SMALL LETTER VY]
-"\uA761" => "vy"
-
-# Ŵ  [LATIN CAPITAL LETTER W WITH CIRCUMFLEX]
-"\u0174" => "W"
-
-# Ƿ  http://en.wikipedia.org/wiki/Wynn  [LATIN CAPITAL LETTER WYNN]
-"\u01F7" => "W"
-
-# ᴡ  [LATIN LETTER SMALL CAPITAL W]
-"\u1D21" => "W"
-
-# Ẁ  [LATIN CAPITAL LETTER W WITH GRAVE]
-"\u1E80" => "W"
-
-# Ẃ  [LATIN CAPITAL LETTER W WITH ACUTE]
-"\u1E82" => "W"
-
-# Ẅ  [LATIN CAPITAL LETTER W WITH DIAERESIS]
-"\u1E84" => "W"
-
-# Ẇ  [LATIN CAPITAL LETTER W WITH DOT ABOVE]
-"\u1E86" => "W"
-
-# Ẉ  [LATIN CAPITAL LETTER W WITH DOT BELOW]
-"\u1E88" => "W"
-
-# Ⓦ  [CIRCLED LATIN CAPITAL LETTER W]
-"\u24CC" => "W"
-
-# Ⱳ  [LATIN CAPITAL LETTER W WITH HOOK]
-"\u2C72" => "W"
-
-# W  [FULLWIDTH LATIN CAPITAL LETTER W]
-"\uFF37" => "W"
-
-# ŵ  [LATIN SMALL LETTER W WITH CIRCUMFLEX]
-"\u0175" => "w"
-
-# ƿ  http://en.wikipedia.org/wiki/Wynn  [LATIN LETTER WYNN]
-"\u01BF" => "w"
-
-# ʍ  [LATIN SMALL LETTER TURNED W]
-"\u028D" => "w"
-
-# ẁ  [LATIN SMALL LETTER W WITH GRAVE]
-"\u1E81" => "w"
-
-# ẃ  [LATIN SMALL LETTER W WITH ACUTE]
-"\u1E83" => "w"
-
-# ẅ  [LATIN SMALL LETTER W WITH DIAERESIS]
-"\u1E85" => "w"
-
-# ẇ  [LATIN SMALL LETTER W WITH DOT ABOVE]
-"\u1E87" => "w"
-
-# ẉ  [LATIN SMALL LETTER W WITH DOT BELOW]
-"\u1E89" => "w"
-
-# ẘ  [LATIN SMALL LETTER W WITH RING ABOVE]
-"\u1E98" => "w"
-
-# ⓦ  [CIRCLED LATIN SMALL LETTER W]
-"\u24E6" => "w"
-
-# ⱳ  [LATIN SMALL LETTER W WITH HOOK]
-"\u2C73" => "w"
-
-# w  [FULLWIDTH LATIN SMALL LETTER W]
-"\uFF57" => "w"
-
-# ⒲  [PARENTHESIZED LATIN SMALL LETTER W]
-"\u24B2" => "(w)"
-
-# Ẋ  [LATIN CAPITAL LETTER X WITH DOT ABOVE]
-"\u1E8A" => "X"
-
-# Ẍ  [LATIN CAPITAL LETTER X WITH DIAERESIS]
-"\u1E8C" => "X"
-
-# Ⓧ  [CIRCLED LATIN CAPITAL LETTER X]
-"\u24CD" => "X"
-
-# X  [FULLWIDTH LATIN CAPITAL LETTER X]
-"\uFF38" => "X"
-
-# ᶍ  [LATIN SMALL LETTER X WITH PALATAL HOOK]
-"\u1D8D" => "x"
-
-# ẋ  [LATIN SMALL LETTER X WITH DOT ABOVE]
-"\u1E8B" => "x"
-
-# ẍ  [LATIN SMALL LETTER X WITH DIAERESIS]
-"\u1E8D" => "x"
-
-# ₓ  [LATIN SUBSCRIPT SMALL LETTER X]
-"\u2093" => "x"
-
-# ⓧ  [CIRCLED LATIN SMALL LETTER X]
-"\u24E7" => "x"
-
-# x  [FULLWIDTH LATIN SMALL LETTER X]
-"\uFF58" => "x"
-
-# ⒳  [PARENTHESIZED LATIN SMALL LETTER X]
-"\u24B3" => "(x)"
-
-# Ý  [LATIN CAPITAL LETTER Y WITH ACUTE]
-"\u00DD" => "Y"
-
-# Ŷ  [LATIN CAPITAL LETTER Y WITH CIRCUMFLEX]
-"\u0176" => "Y"
-
-# Ÿ  [LATIN CAPITAL LETTER Y WITH DIAERESIS]
-"\u0178" => "Y"
-
-# Ƴ  [LATIN CAPITAL LETTER Y WITH HOOK]
-"\u01B3" => "Y"
-
-# Ȳ  [LATIN CAPITAL LETTER Y WITH MACRON]
-"\u0232" => "Y"
-
-# Ɏ  [LATIN CAPITAL LETTER Y WITH STROKE]
-"\u024E" => "Y"
-
-# ʏ  [LATIN LETTER SMALL CAPITAL Y]
-"\u028F" => "Y"
-
-# Ẏ  [LATIN CAPITAL LETTER Y WITH DOT ABOVE]
-"\u1E8E" => "Y"
-
-# Ỳ  [LATIN CAPITAL LETTER Y WITH GRAVE]
-"\u1EF2" => "Y"
-
-# Ỵ  [LATIN CAPITAL LETTER Y WITH DOT BELOW]
-"\u1EF4" => "Y"
-
-# Ỷ  [LATIN CAPITAL LETTER Y WITH HOOK ABOVE]
-"\u1EF6" => "Y"
-
-# Ỹ  [LATIN CAPITAL LETTER Y WITH TILDE]
-"\u1EF8" => "Y"
-
-# Ỿ  [LATIN CAPITAL LETTER Y WITH LOOP]
-"\u1EFE" => "Y"
-
-# Ⓨ  [CIRCLED LATIN CAPITAL LETTER Y]
-"\u24CE" => "Y"
-
-# Y  [FULLWIDTH LATIN CAPITAL LETTER Y]
-"\uFF39" => "Y"
-
-# ý  [LATIN SMALL LETTER Y WITH ACUTE]
-"\u00FD" => "y"
-
-# ÿ  [LATIN SMALL LETTER Y WITH DIAERESIS]
-"\u00FF" => "y"
-
-# ŷ  [LATIN SMALL LETTER Y WITH CIRCUMFLEX]
-"\u0177" => "y"
-
-# ƴ  [LATIN SMALL LETTER Y WITH HOOK]
-"\u01B4" => "y"
-
-# ȳ  [LATIN SMALL LETTER Y WITH MACRON]
-"\u0233" => "y"
-
-# ɏ  [LATIN SMALL LETTER Y WITH STROKE]
-"\u024F" => "y"
-
-# ʎ  [LATIN SMALL LETTER TURNED Y]
-"\u028E" => "y"
-
-# ẏ  [LATIN SMALL LETTER Y WITH DOT ABOVE]
-"\u1E8F" => "y"
-
-# ẙ  [LATIN SMALL LETTER Y WITH RING ABOVE]
-"\u1E99" => "y"
-
-# ỳ  [LATIN SMALL LETTER Y WITH GRAVE]
-"\u1EF3" => "y"
-
-# ỵ  [LATIN SMALL LETTER Y WITH DOT BELOW]
-"\u1EF5" => "y"
-
-# ỷ  [LATIN SMALL LETTER Y WITH HOOK ABOVE]
-"\u1EF7" => "y"
-
-# ỹ  [LATIN SMALL LETTER Y WITH TILDE]
-"\u1EF9" => "y"
-
-# ỿ  [LATIN SMALL LETTER Y WITH LOOP]
-"\u1EFF" => "y"
-
-# ⓨ  [CIRCLED LATIN SMALL LETTER Y]
-"\u24E8" => "y"
-
-# y  [FULLWIDTH LATIN SMALL LETTER Y]
-"\uFF59" => "y"
-
-# ⒴  [PARENTHESIZED LATIN SMALL LETTER Y]
-"\u24B4" => "(y)"
-
-# Ź  [LATIN CAPITAL LETTER Z WITH ACUTE]
-"\u0179" => "Z"
-
-# Ż  [LATIN CAPITAL LETTER Z WITH DOT ABOVE]
-"\u017B" => "Z"
-
-# Ž  [LATIN CAPITAL LETTER Z WITH CARON]
-"\u017D" => "Z"
-
-# Ƶ  [LATIN CAPITAL LETTER Z WITH STROKE]
-"\u01B5" => "Z"
-
-# Ȝ  http://en.wikipedia.org/wiki/Yogh  [LATIN CAPITAL LETTER YOGH]
-"\u021C" => "Z"
-
-# Ȥ  [LATIN CAPITAL LETTER Z WITH HOOK]
-"\u0224" => "Z"
-
-# ᴢ  [LATIN LETTER SMALL CAPITAL Z]
-"\u1D22" => "Z"
-
-# Ẑ  [LATIN CAPITAL LETTER Z WITH CIRCUMFLEX]
-"\u1E90" => "Z"
-
-# Ẓ  [LATIN CAPITAL LETTER Z WITH DOT BELOW]
-"\u1E92" => "Z"
-
-# Ẕ  [LATIN CAPITAL LETTER Z WITH LINE BELOW]
-"\u1E94" => "Z"
-
-# Ⓩ  [CIRCLED LATIN CAPITAL LETTER Z]
-"\u24CF" => "Z"
-
-# Ⱬ  [LATIN CAPITAL LETTER Z WITH DESCENDER]
-"\u2C6B" => "Z"
-
-# Ꝣ  [LATIN CAPITAL LETTER VISIGOTHIC Z]
-"\uA762" => "Z"
-
-# Z  [FULLWIDTH LATIN CAPITAL LETTER Z]
-"\uFF3A" => "Z"
-
-# ź  [LATIN SMALL LETTER Z WITH ACUTE]
-"\u017A" => "z"
-
-# ż  [LATIN SMALL LETTER Z WITH DOT ABOVE]
-"\u017C" => "z"
-
-# ž  [LATIN SMALL LETTER Z WITH CARON]
-"\u017E" => "z"
-
-# ƶ  [LATIN SMALL LETTER Z WITH STROKE]
-"\u01B6" => "z"
-
-# ȝ  http://en.wikipedia.org/wiki/Yogh  [LATIN SMALL LETTER YOGH]
-"\u021D" => "z"
-
-# ȥ  [LATIN SMALL LETTER Z WITH HOOK]
-"\u0225" => "z"
-
-# ɀ  [LATIN SMALL LETTER Z WITH SWASH TAIL]
-"\u0240" => "z"
-
-# ʐ  [LATIN SMALL LETTER Z WITH RETROFLEX HOOK]
-"\u0290" => "z"
-
-# ʑ  [LATIN SMALL LETTER Z WITH CURL]
-"\u0291" => "z"
-
-# ᵶ  [LATIN SMALL LETTER Z WITH MIDDLE TILDE]
-"\u1D76" => "z"
-
-# ᶎ  [LATIN SMALL LETTER Z WITH PALATAL HOOK]
-"\u1D8E" => "z"
-
-# ẑ  [LATIN SMALL LETTER Z WITH CIRCUMFLEX]
-"\u1E91" => "z"
-
-# ẓ  [LATIN SMALL LETTER Z WITH DOT BELOW]
-"\u1E93" => "z"
-
-# ẕ  [LATIN SMALL LETTER Z WITH LINE BELOW]
-"\u1E95" => "z"
-
-# ⓩ  [CIRCLED LATIN SMALL LETTER Z]
-"\u24E9" => "z"
-
-# ⱬ  [LATIN SMALL LETTER Z WITH DESCENDER]
-"\u2C6C" => "z"
-
-# ꝣ  [LATIN SMALL LETTER VISIGOTHIC Z]
-"\uA763" => "z"
-
-# z  [FULLWIDTH LATIN SMALL LETTER Z]
-"\uFF5A" => "z"
-
-# ⒵  [PARENTHESIZED LATIN SMALL LETTER Z]
-"\u24B5" => "(z)"
-
-# ⁰  [SUPERSCRIPT ZERO]
-"\u2070" => "0"
-
-# ₀  [SUBSCRIPT ZERO]
-"\u2080" => "0"
-
-# ⓪  [CIRCLED DIGIT ZERO]
-"\u24EA" => "0"
-
-# ⓿  [NEGATIVE CIRCLED DIGIT ZERO]
-"\u24FF" => "0"
-
-# 0  [FULLWIDTH DIGIT ZERO]
-"\uFF10" => "0"
-
-# ¹  [SUPERSCRIPT ONE]
-"\u00B9" => "1"
-
-# ₁  [SUBSCRIPT ONE]
-"\u2081" => "1"
-
-# ①  [CIRCLED DIGIT ONE]
-"\u2460" => "1"
-
-# ⓵  [DOUBLE CIRCLED DIGIT ONE]
-"\u24F5" => "1"
-
-# ❶  [DINGBAT NEGATIVE CIRCLED DIGIT ONE]
-"\u2776" => "1"
-
-# ➀  [DINGBAT CIRCLED SANS-SERIF DIGIT ONE]
-"\u2780" => "1"
-
-# ➊  [DINGBAT NEGATIVE CIRCLED SANS-SERIF DIGIT ONE]
-"\u278A" => "1"
-
-# 1  [FULLWIDTH DIGIT ONE]
-"\uFF11" => "1"
-
-# ⒈  [DIGIT ONE FULL STOP]
-"\u2488" => "1."
-
-# ⑴  [PARENTHESIZED DIGIT ONE]
-"\u2474" => "(1)"
-
-# ²  [SUPERSCRIPT TWO]
-"\u00B2" => "2"
-
-# ₂  [SUBSCRIPT TWO]
-"\u2082" => "2"
-
-# ②  [CIRCLED DIGIT TWO]
-"\u2461" => "2"
-
-# ⓶  [DOUBLE CIRCLED DIGIT TWO]
-"\u24F6" => "2"
-
-# ❷  [DINGBAT NEGATIVE CIRCLED DIGIT TWO]
-"\u2777" => "2"
-
-# ➁  [DINGBAT CIRCLED SANS-SERIF DIGIT TWO]
-"\u2781" => "2"
-
-# ➋  [DINGBAT NEGATIVE CIRCLED SANS-SERIF DIGIT TWO]
-"\u278B" => "2"
-
-# 2  [FULLWIDTH DIGIT TWO]
-"\uFF12" => "2"
-
-# ⒉  [DIGIT TWO FULL STOP]
-"\u2489" => "2."
-
-# ⑵  [PARENTHESIZED DIGIT TWO]
-"\u2475" => "(2)"
-
-# ³  [SUPERSCRIPT THREE]
-"\u00B3" => "3"
-
-# ₃  [SUBSCRIPT THREE]
-"\u2083" => "3"
-
-# ③  [CIRCLED DIGIT THREE]
-"\u2462" => "3"
-
-# ⓷  [DOUBLE CIRCLED DIGIT THREE]
-"\u24F7" => "3"
-
-# ❸  [DINGBAT NEGATIVE CIRCLED DIGIT THREE]
-"\u2778" => "3"
-
-# ➂  [DINGBAT CIRCLED SANS-SERIF DIGIT THREE]
-"\u2782" => "3"
-
-# ➌  [DINGBAT NEGATIVE CIRCLED SANS-SERIF DIGIT THREE]
-"\u278C" => "3"
-
-# 3  [FULLWIDTH DIGIT THREE]
-"\uFF13" => "3"
-
-# ⒊  [DIGIT THREE FULL STOP]
-"\u248A" => "3."
-
-# ⑶  [PARENTHESIZED DIGIT THREE]
-"\u2476" => "(3)"
-
-# ⁴  [SUPERSCRIPT FOUR]
-"\u2074" => "4"
-
-# ₄  [SUBSCRIPT FOUR]
-"\u2084" => "4"
-
-# ④  [CIRCLED DIGIT FOUR]
-"\u2463" => "4"
-
-# ⓸  [DOUBLE CIRCLED DIGIT FOUR]
-"\u24F8" => "4"
-
-# ❹  [DINGBAT NEGATIVE CIRCLED DIGIT FOUR]
-"\u2779" => "4"
-
-# ➃  [DINGBAT CIRCLED SANS-SERIF DIGIT FOUR]
-"\u2783" => "4"
-
-# ➍  [DINGBAT NEGATIVE CIRCLED SANS-SERIF DIGIT FOUR]
-"\u278D" => "4"
-
-# 4  [FULLWIDTH DIGIT FOUR]
-"\uFF14" => "4"
-
-# ⒋  [DIGIT FOUR FULL STOP]
-"\u248B" => "4."
-
-# ⑷  [PARENTHESIZED DIGIT FOUR]
-"\u2477" => "(4)"
-
-# ⁵  [SUPERSCRIPT FIVE]
-"\u2075" => "5"
-
-# ₅  [SUBSCRIPT FIVE]
-"\u2085" => "5"
-
-# ⑤  [CIRCLED DIGIT FIVE]
-"\u2464" => "5"
-
-# ⓹  [DOUBLE CIRCLED DIGIT FIVE]
-"\u24F9" => "5"
-
-# ❺  [DINGBAT NEGATIVE CIRCLED DIGIT FIVE]
-"\u277A" => "5"
-
-# ➄  [DINGBAT CIRCLED SANS-SERIF DIGIT FIVE]
-"\u2784" => "5"
-
-# ➎  [DINGBAT NEGATIVE CIRCLED SANS-SERIF DIGIT FIVE]
-"\u278E" => "5"
-
-# 5  [FULLWIDTH DIGIT FIVE]
-"\uFF15" => "5"
-
-# ⒌  [DIGIT FIVE FULL STOP]
-"\u248C" => "5."
-
-# ⑸  [PARENTHESIZED DIGIT FIVE]
-"\u2478" => "(5)"
-
-# ⁶  [SUPERSCRIPT SIX]
-"\u2076" => "6"
-
-# ₆  [SUBSCRIPT SIX]
-"\u2086" => "6"
-
-# ⑥  [CIRCLED DIGIT SIX]
-"\u2465" => "6"
-
-# ⓺  [DOUBLE CIRCLED DIGIT SIX]
-"\u24FA" => "6"
-
-# ❻  [DINGBAT NEGATIVE CIRCLED DIGIT SIX]
-"\u277B" => "6"
-
-# ➅  [DINGBAT CIRCLED SANS-SERIF DIGIT SIX]
-"\u2785" => "6"
-
-# ➏  [DINGBAT NEGATIVE CIRCLED SANS-SERIF DIGIT SIX]
-"\u278F" => "6"
-
-# 6  [FULLWIDTH DIGIT SIX]
-"\uFF16" => "6"
-
-# ⒍  [DIGIT SIX FULL STOP]
-"\u248D" => "6."
-
-# ⑹  [PARENTHESIZED DIGIT SIX]
-"\u2479" => "(6)"
-
-# ⁷  [SUPERSCRIPT SEVEN]
-"\u2077" => "7"
-
-# ₇  [SUBSCRIPT SEVEN]
-"\u2087" => "7"
-
-# ⑦  [CIRCLED DIGIT SEVEN]
-"\u2466" => "7"
-
-# ⓻  [DOUBLE CIRCLED DIGIT SEVEN]
-"\u24FB" => "7"
-
-# ❼  [DINGBAT NEGATIVE CIRCLED DIGIT SEVEN]
-"\u277C" => "7"
-
-# ➆  [DINGBAT CIRCLED SANS-SERIF DIGIT SEVEN]
-"\u2786" => "7"
-
-# ➐  [DINGBAT NEGATIVE CIRCLED SANS-SERIF DIGIT SEVEN]
-"\u2790" => "7"
-
-# 7  [FULLWIDTH DIGIT SEVEN]
-"\uFF17" => "7"
-
-# ⒎  [DIGIT SEVEN FULL STOP]
-"\u248E" => "7."
-
-# ⑺  [PARENTHESIZED DIGIT SEVEN]
-"\u247A" => "(7)"
-
-# ⁸  [SUPERSCRIPT EIGHT]
-"\u2078" => "8"
-
-# ₈  [SUBSCRIPT EIGHT]
-"\u2088" => "8"
-
-# ⑧  [CIRCLED DIGIT EIGHT]
-"\u2467" => "8"
-
-# ⓼  [DOUBLE CIRCLED DIGIT EIGHT]
-"\u24FC" => "8"
-
-# ❽  [DINGBAT NEGATIVE CIRCLED DIGIT EIGHT]
-"\u277D" => "8"
-
-# ➇  [DINGBAT CIRCLED SANS-SERIF DIGIT EIGHT]
-"\u2787" => "8"
-
-# ➑  [DINGBAT NEGATIVE CIRCLED SANS-SERIF DIGIT EIGHT]
-"\u2791" => "8"
-
-# 8  [FULLWIDTH DIGIT EIGHT]
-"\uFF18" => "8"
-
-# ⒏  [DIGIT EIGHT FULL STOP]
-"\u248F" => "8."
-
-# ⑻  [PARENTHESIZED DIGIT EIGHT]
-"\u247B" => "(8)"
-
-# ⁹  [SUPERSCRIPT NINE]
-"\u2079" => "9"
-
-# ₉  [SUBSCRIPT NINE]
-"\u2089" => "9"
-
-# ⑨  [CIRCLED DIGIT NINE]
-"\u2468" => "9"
-
-# ⓽  [DOUBLE CIRCLED DIGIT NINE]
-"\u24FD" => "9"
-
-# ❾  [DINGBAT NEGATIVE CIRCLED DIGIT NINE]
-"\u277E" => "9"
-
-# ➈  [DINGBAT CIRCLED SANS-SERIF DIGIT NINE]
-"\u2788" => "9"
-
-# ➒  [DINGBAT NEGATIVE CIRCLED SANS-SERIF DIGIT NINE]
-"\u2792" => "9"
-
-# 9  [FULLWIDTH DIGIT NINE]
-"\uFF19" => "9"
-
-# ⒐  [DIGIT NINE FULL STOP]
-"\u2490" => "9."
-
-# ⑼  [PARENTHESIZED DIGIT NINE]
-"\u247C" => "(9)"
-
-# ⑩  [CIRCLED NUMBER TEN]
-"\u2469" => "10"
-
-# ⓾  [DOUBLE CIRCLED NUMBER TEN]
-"\u24FE" => "10"
-
-# ❿  [DINGBAT NEGATIVE CIRCLED NUMBER TEN]
-"\u277F" => "10"
-
-# ➉  [DINGBAT CIRCLED SANS-SERIF NUMBER TEN]
-"\u2789" => "10"
-
-# ➓  [DINGBAT NEGATIVE CIRCLED SANS-SERIF NUMBER TEN]
-"\u2793" => "10"
-
-# ⒑  [NUMBER TEN FULL STOP]
-"\u2491" => "10."
-
-# ⑽  [PARENTHESIZED NUMBER TEN]
-"\u247D" => "(10)"
-
-# ⑪  [CIRCLED NUMBER ELEVEN]
-"\u246A" => "11"
-
-# ⓫  [NEGATIVE CIRCLED NUMBER ELEVEN]
-"\u24EB" => "11"
-
-# ⒒  [NUMBER ELEVEN FULL STOP]
-"\u2492" => "11."
-
-# ⑾  [PARENTHESIZED NUMBER ELEVEN]
-"\u247E" => "(11)"
-
-# ⑫  [CIRCLED NUMBER TWELVE]
-"\u246B" => "12"
-
-# ⓬  [NEGATIVE CIRCLED NUMBER TWELVE]
-"\u24EC" => "12"
-
-# ⒓  [NUMBER TWELVE FULL STOP]
-"\u2493" => "12."
-
-# ⑿  [PARENTHESIZED NUMBER TWELVE]
-"\u247F" => "(12)"
-
-# ⑬  [CIRCLED NUMBER THIRTEEN]
-"\u246C" => "13"
-
-# ⓭  [NEGATIVE CIRCLED NUMBER THIRTEEN]
-"\u24ED" => "13"
-
-# ⒔  [NUMBER THIRTEEN FULL STOP]
-"\u2494" => "13."
-
-# ⒀  [PARENTHESIZED NUMBER THIRTEEN]
-"\u2480" => "(13)"
-
-# ⑭  [CIRCLED NUMBER FOURTEEN]
-"\u246D" => "14"
-
-# ⓮  [NEGATIVE CIRCLED NUMBER FOURTEEN]
-"\u24EE" => "14"
-
-# ⒕  [NUMBER FOURTEEN FULL STOP]
-"\u2495" => "14."
-
-# ⒁  [PARENTHESIZED NUMBER FOURTEEN]
-"\u2481" => "(14)"
-
-# ⑮  [CIRCLED NUMBER FIFTEEN]
-"\u246E" => "15"
-
-# ⓯  [NEGATIVE CIRCLED NUMBER FIFTEEN]
-"\u24EF" => "15"
-
-# ⒖  [NUMBER FIFTEEN FULL STOP]
-"\u2496" => "15."
-
-# ⒂  [PARENTHESIZED NUMBER FIFTEEN]
-"\u2482" => "(15)"
-
-# ⑯  [CIRCLED NUMBER SIXTEEN]
-"\u246F" => "16"
-
-# ⓰  [NEGATIVE CIRCLED NUMBER SIXTEEN]
-"\u24F0" => "16"
-
-# ⒗  [NUMBER SIXTEEN FULL STOP]
-"\u2497" => "16."
-
-# ⒃  [PARENTHESIZED NUMBER SIXTEEN]
-"\u2483" => "(16)"
-
-# ⑰  [CIRCLED NUMBER SEVENTEEN]
-"\u2470" => "17"
-
-# ⓱  [NEGATIVE CIRCLED NUMBER SEVENTEEN]
-"\u24F1" => "17"
-
-# ⒘  [NUMBER SEVENTEEN FULL STOP]
-"\u2498" => "17."
-
-# ⒄  [PARENTHESIZED NUMBER SEVENTEEN]
-"\u2484" => "(17)"
-
-# ⑱  [CIRCLED NUMBER EIGHTEEN]
-"\u2471" => "18"
-
-# ⓲  [NEGATIVE CIRCLED NUMBER EIGHTEEN]
-"\u24F2" => "18"
-
-# ⒙  [NUMBER EIGHTEEN FULL STOP]
-"\u2499" => "18."
-
-# ⒅  [PARENTHESIZED NUMBER EIGHTEEN]
-"\u2485" => "(18)"
-
-# ⑲  [CIRCLED NUMBER NINETEEN]
-"\u2472" => "19"
-
-# ⓳  [NEGATIVE CIRCLED NUMBER NINETEEN]
-"\u24F3" => "19"
-
-# ⒚  [NUMBER NINETEEN FULL STOP]
-"\u249A" => "19."
-
-# ⒆  [PARENTHESIZED NUMBER NINETEEN]
-"\u2486" => "(19)"
-
-# ⑳  [CIRCLED NUMBER TWENTY]
-"\u2473" => "20"
-
-# ⓴  [NEGATIVE CIRCLED NUMBER TWENTY]
-"\u24F4" => "20"
-
-# ⒛  [NUMBER TWENTY FULL STOP]
-"\u249B" => "20."
-
-# ⒇  [PARENTHESIZED NUMBER TWENTY]
-"\u2487" => "(20)"
-
-# «  [LEFT-POINTING DOUBLE ANGLE QUOTATION MARK]
-"\u00AB" => "\""
-
-# »  [RIGHT-POINTING DOUBLE ANGLE QUOTATION MARK]
-"\u00BB" => "\""
-
-# “  [LEFT DOUBLE QUOTATION MARK]
-"\u201C" => "\""
-
-# ”  [RIGHT DOUBLE QUOTATION MARK]
-"\u201D" => "\""
-
-# „  [DOUBLE LOW-9 QUOTATION MARK]
-"\u201E" => "\""
-
-# ″  [DOUBLE PRIME]
-"\u2033" => "\""
-
-# ‶  [REVERSED DOUBLE PRIME]
-"\u2036" => "\""
-
-# ❝  [HEAVY DOUBLE TURNED COMMA QUOTATION MARK ORNAMENT]
-"\u275D" => "\""
-
-# ❞  [HEAVY DOUBLE COMMA QUOTATION MARK ORNAMENT]
-"\u275E" => "\""
-
-# ❮  [HEAVY LEFT-POINTING ANGLE QUOTATION MARK ORNAMENT]
-"\u276E" => "\""
-
-# ❯  [HEAVY RIGHT-POINTING ANGLE QUOTATION MARK ORNAMENT]
-"\u276F" => "\""
-
-# "  [FULLWIDTH QUOTATION MARK]
-"\uFF02" => "\""
-
-# ‘  [LEFT SINGLE QUOTATION MARK]
-"\u2018" => "\'"
-
-# ’  [RIGHT SINGLE QUOTATION MARK]
-"\u2019" => "\'"
-
-# ‚  [SINGLE LOW-9 QUOTATION MARK]
-"\u201A" => "\'"
-
-# ‛  [SINGLE HIGH-REVERSED-9 QUOTATION MARK]
-"\u201B" => "\'"
-
-# ′  [PRIME]
-"\u2032" => "\'"
-
-# ‵  [REVERSED PRIME]
-"\u2035" => "\'"
-
-# ‹  [SINGLE LEFT-POINTING ANGLE QUOTATION MARK]
-"\u2039" => "\'"
-
-# ›  [SINGLE RIGHT-POINTING ANGLE QUOTATION MARK]
-"\u203A" => "\'"
-
-# ❛  [HEAVY SINGLE TURNED COMMA QUOTATION MARK ORNAMENT]
-"\u275B" => "\'"
-
-# ❜  [HEAVY SINGLE COMMA QUOTATION MARK ORNAMENT]
-"\u275C" => "\'"
-
-# '  [FULLWIDTH APOSTROPHE]
-"\uFF07" => "\'"
-
-# ‐  [HYPHEN]
-"\u2010" => "-"
-
-# ‑  [NON-BREAKING HYPHEN]
-"\u2011" => "-"
-
-# ‒  [FIGURE DASH]
-"\u2012" => "-"
-
-# –  [EN DASH]
-"\u2013" => "-"
-
-# —  [EM DASH]
-"\u2014" => "-"
-
-# ⁻  [SUPERSCRIPT MINUS]
-"\u207B" => "-"
-
-# ₋  [SUBSCRIPT MINUS]
-"\u208B" => "-"
-
-# -  [FULLWIDTH HYPHEN-MINUS]
-"\uFF0D" => "-"
-
-# ⁅  [LEFT SQUARE BRACKET WITH QUILL]
-"\u2045" => "["
-
-# ❲  [LIGHT LEFT TORTOISE SHELL BRACKET ORNAMENT]
-"\u2772" => "["
-
-# [  [FULLWIDTH LEFT SQUARE BRACKET]
-"\uFF3B" => "["
-
-# ⁆  [RIGHT SQUARE BRACKET WITH QUILL]
-"\u2046" => "]"
-
-# ❳  [LIGHT RIGHT TORTOISE SHELL BRACKET ORNAMENT]
-"\u2773" => "]"
-
-# ]  [FULLWIDTH RIGHT SQUARE BRACKET]
-"\uFF3D" => "]"
-
-# ⁽  [SUPERSCRIPT LEFT PARENTHESIS]
-"\u207D" => "("
-
-# ₍  [SUBSCRIPT LEFT PARENTHESIS]
-"\u208D" => "("
-
-# ❨  [MEDIUM LEFT PARENTHESIS ORNAMENT]
-"\u2768" => "("
-
-# ❪  [MEDIUM FLATTENED LEFT PARENTHESIS ORNAMENT]
-"\u276A" => "("
-
-# (  [FULLWIDTH LEFT PARENTHESIS]
-"\uFF08" => "("
-
-# ⸨  [LEFT DOUBLE PARENTHESIS]
-"\u2E28" => "(("
-
-# ⁾  [SUPERSCRIPT RIGHT PARENTHESIS]
-"\u207E" => ")"
-
-# ₎  [SUBSCRIPT RIGHT PARENTHESIS]
-"\u208E" => ")"
-
-# ❩  [MEDIUM RIGHT PARENTHESIS ORNAMENT]
-"\u2769" => ")"
-
-# ❫  [MEDIUM FLATTENED RIGHT PARENTHESIS ORNAMENT]
-"\u276B" => ")"
-
-# )  [FULLWIDTH RIGHT PARENTHESIS]
-"\uFF09" => ")"
-
-# ⸩  [RIGHT DOUBLE PARENTHESIS]
-"\u2E29" => "))"
-
-# ❬  [MEDIUM LEFT-POINTING ANGLE BRACKET ORNAMENT]
-"\u276C" => "<"
-
-# ❰  [HEAVY LEFT-POINTING ANGLE BRACKET ORNAMENT]
-"\u2770" => "<"
-
-# <  [FULLWIDTH LESS-THAN SIGN]
-"\uFF1C" => "<"
-
-# ❭  [MEDIUM RIGHT-POINTING ANGLE BRACKET ORNAMENT]
-"\u276D" => ">"
-
-# ❱  [HEAVY RIGHT-POINTING ANGLE BRACKET ORNAMENT]
-"\u2771" => ">"
-
-# >  [FULLWIDTH GREATER-THAN SIGN]
-"\uFF1E" => ">"
-
-# ❴  [MEDIUM LEFT CURLY BRACKET ORNAMENT]
-"\u2774" => "{"
-
-# {  [FULLWIDTH LEFT CURLY BRACKET]
-"\uFF5B" => "{"
-
-# ❵  [MEDIUM RIGHT CURLY BRACKET ORNAMENT]
-"\u2775" => "}"
-
-# }  [FULLWIDTH RIGHT CURLY BRACKET]
-"\uFF5D" => "}"
-
-# ⁺  [SUPERSCRIPT PLUS SIGN]
-"\u207A" => "+"
-
-# ₊  [SUBSCRIPT PLUS SIGN]
-"\u208A" => "+"
-
-# +  [FULLWIDTH PLUS SIGN]
-"\uFF0B" => "+"
-
-# ⁼  [SUPERSCRIPT EQUALS SIGN]
-"\u207C" => "="
-
-# ₌  [SUBSCRIPT EQUALS SIGN]
-"\u208C" => "="
-
-# =  [FULLWIDTH EQUALS SIGN]
-"\uFF1D" => "="
-
-# !  [FULLWIDTH EXCLAMATION MARK]
-"\uFF01" => "!"
-
-# ‼  [DOUBLE EXCLAMATION MARK]
-"\u203C" => "!!"
-
-# ⁉  [EXCLAMATION QUESTION MARK]
-"\u2049" => "!?"
-
-# #  [FULLWIDTH NUMBER SIGN]
-"\uFF03" => "#"
-
-# $  [FULLWIDTH DOLLAR SIGN]
-"\uFF04" => "$"
-
-# ⁒  [COMMERCIAL MINUS SIGN]
-"\u2052" => "%"
-
-# %  [FULLWIDTH PERCENT SIGN]
-"\uFF05" => "%"
-
-# &  [FULLWIDTH AMPERSAND]
-"\uFF06" => "&"
-
-# ⁎  [LOW ASTERISK]
-"\u204E" => "*"
-
-# *  [FULLWIDTH ASTERISK]
-"\uFF0A" => "*"
-
-# ,  [FULLWIDTH COMMA]
-"\uFF0C" => ","
-
-# .  [FULLWIDTH FULL STOP]
-"\uFF0E" => "."
-
-# ⁄  [FRACTION SLASH]
-"\u2044" => "/"
-
-# /  [FULLWIDTH SOLIDUS]
-"\uFF0F" => "/"
-
-# :  [FULLWIDTH COLON]
-"\uFF1A" => ":"
-
-# ⁏  [REVERSED SEMICOLON]
-"\u204F" => ";"
-
-# ;  [FULLWIDTH SEMICOLON]
-"\uFF1B" => ";"
-
-# ?  [FULLWIDTH QUESTION MARK]
-"\uFF1F" => "?"
-
-# ⁇  [DOUBLE QUESTION MARK]
-"\u2047" => "??"
-
-# ⁈  [QUESTION EXCLAMATION MARK]
-"\u2048" => "?!"
-
-# @  [FULLWIDTH COMMERCIAL AT]
-"\uFF20" => "@"
-
-# \  [FULLWIDTH REVERSE SOLIDUS]
-"\uFF3C" => "\\"
-
-# ‸  [CARET]
-"\u2038" => "^"
-
-# ^  [FULLWIDTH CIRCUMFLEX ACCENT]
-"\uFF3E" => "^"
-
-# _  [FULLWIDTH LOW LINE]
-"\uFF3F" => "_"
-
-# ⁓  [SWUNG DASH]
-"\u2053" => "~"
-
-# ~  [FULLWIDTH TILDE]
-"\uFF5E" => "~"
-
-################################################################
-# Below is the Perl script used to generate the above mappings #
-# from ASCIIFoldingFilter.java:                                #
-################################################################
-#
-# #!/usr/bin/perl
-#
-# use warnings;
-# use strict;
-# 
-# my @source_chars = ();
-# my @source_char_descriptions = ();
-# my $target = '';
-# 
-# while (<>) {
-#   if (/case\s+'(\\u[A-F0-9]+)':\s*\/\/\s*(.*)/i) {
-#     push @source_chars, $1;
-#     push @source_char_descriptions, $2;
-#     next;
-#   }
-#   if (/output\[[^\]]+\]\s*=\s*'(\\'|\\\\|.)'/) {
-#     $target .= $1;
-#     next;
-#   }
-#   if (/break;/) {
-#     $target = "\\\"" if ($target eq '"');
-#     for my $source_char_num (0..$#source_chars) {
-#       print "# $source_char_descriptions[$source_char_num]\n";
-#       print "\"$source_chars[$source_char_num]\" => \"$target\"\n\n";
-#     }
-#     @source_chars = ();
-#     @source_char_descriptions = ();
-#     $target = '';
-#   }
-# }
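-#
-# A hypothetical invocation, assuming the script above is saved as
-# generate-mappings.pl and a local copy of ASCIIFoldingFilter.java is at
-# hand (both file names are illustrative); the script reads the Java
-# source on stdin/ARGV and prints the mapping entries:
-#
-#   perl generate-mappings.pl ASCIIFoldingFilter.java > mapping-FoldToASCII.txt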
diff --git a/solr/example/example-DIH/solr/solr/conf/mapping-ISOLatin1Accent.txt b/solr/example/example-DIH/solr/solr/conf/mapping-ISOLatin1Accent.txt
deleted file mode 100644
index ede7742..0000000
--- a/solr/example/example-DIH/solr/solr/conf/mapping-ISOLatin1Accent.txt
+++ /dev/null
@@ -1,246 +0,0 @@
-# The ASF licenses this file to You under the Apache License, Version 2.0
-# (the "License"); you may not use this file except in compliance with
-# the License.  You may obtain a copy of the License at
-#
-#     http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-# Syntax:
-#   "source" => "target"
-#     "source".length() > 0 (source cannot be empty.)
-#     "target".length() >= 0 (target can be empty.)
-
-# example:
-#   "À" => "A"
-#   "\u00C0" => "A"
-#   "\u00C0" => "\u0041"
-#   "ß" => "ss"
-#   "\t" => " "
-#   "\n" => ""
-
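-# A mapping file like this is typically wired into an analyzer as a
-# character filter; a minimal sketch of such a reference (the surrounding
-# analyzer definition is assumed):
-#
-#   <charFilter class="solr.MappingCharFilterFactory"
-#               mapping="mapping-ISOLatin1Accent.txt"/>
-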
-# À => A
-"\u00C0" => "A"
-
-# Á => A
-"\u00C1" => "A"
-
-# Â => A
-"\u00C2" => "A"
-
-# Ã => A
-"\u00C3" => "A"
-
-# Ä => A
-"\u00C4" => "A"
-
-# Å => A
-"\u00C5" => "A"
-
-# Æ => AE
-"\u00C6" => "AE"
-
-# Ç => C
-"\u00C7" => "C"
-
-# È => E
-"\u00C8" => "E"
-
-# É => E
-"\u00C9" => "E"
-
-# Ê => E
-"\u00CA" => "E"
-
-# Ë => E
-"\u00CB" => "E"
-
-# Ì => I
-"\u00CC" => "I"
-
-# Í => I
-"\u00CD" => "I"
-
-# Î => I
-"\u00CE" => "I"
-
-# Ï => I
-"\u00CF" => "I"
-
-# IJ => IJ
-"\u0132" => "IJ"
-
-# Ð => D
-"\u00D0" => "D"
-
-# Ñ => N
-"\u00D1" => "N"
-
-# Ò => O
-"\u00D2" => "O"
-
-# Ó => O
-"\u00D3" => "O"
-
-# Ô => O
-"\u00D4" => "O"
-
-# Õ => O
-"\u00D5" => "O"
-
-# Ö => O
-"\u00D6" => "O"
-
-# Ø => O
-"\u00D8" => "O"
-
-# Œ => OE
-"\u0152" => "OE"
-
-# Þ => TH
-"\u00DE" => "TH"
-
-# Ù => U
-"\u00D9" => "U"
-
-# Ú => U
-"\u00DA" => "U"
-
-# Û => U
-"\u00DB" => "U"
-
-# Ü => U
-"\u00DC" => "U"
-
-# Ý => Y
-"\u00DD" => "Y"
-
-# Ÿ => Y
-"\u0178" => "Y"
-
-# à => a
-"\u00E0" => "a"
-
-# á => a
-"\u00E1" => "a"
-
-# â => a
-"\u00E2" => "a"
-
-# ã => a
-"\u00E3" => "a"
-
-# ä => a
-"\u00E4" => "a"
-
-# å => a
-"\u00E5" => "a"
-
-# æ => ae
-"\u00E6" => "ae"
-
-# ç => c
-"\u00E7" => "c"
-
-# è => e
-"\u00E8" => "e"
-
-# é => e
-"\u00E9" => "e"
-
-# ê => e
-"\u00EA" => "e"
-
-# ë => e
-"\u00EB" => "e"
-
-# ì => i
-"\u00EC" => "i"
-
-# í => i
-"\u00ED" => "i"
-
-# î => i
-"\u00EE" => "i"
-
-# ï => i
-"\u00EF" => "i"
-
-# ij => ij
-"\u0133" => "ij"
-
-# ð => d
-"\u00F0" => "d"
-
-# ñ => n
-"\u00F1" => "n"
-
-# ò => o
-"\u00F2" => "o"
-
-# ó => o
-"\u00F3" => "o"
-
-# ô => o
-"\u00F4" => "o"
-
-# õ => o
-"\u00F5" => "o"
-
-# ö => o
-"\u00F6" => "o"
-
-# ø => o
-"\u00F8" => "o"
-
-# œ => oe
-"\u0153" => "oe"
-
-# ß => ss
-"\u00DF" => "ss"
-
-# þ => th
-"\u00FE" => "th"
-
-# ù => u
-"\u00F9" => "u"
-
-# ú => u
-"\u00FA" => "u"
-
-# û => u
-"\u00FB" => "u"
-
-# ü => u
-"\u00FC" => "u"
-
-# ý => y
-"\u00FD" => "y"
-
-# ÿ => y
-"\u00FF" => "y"
-
-# ff => ff
-"\uFB00" => "ff"
-
-# fi => fi
-"\uFB01" => "fi"
-
-# fl => fl
-"\uFB02" => "fl"
-
-# ffi => ffi
-"\uFB03" => "ffi"
-
-# ffl => ffl
-"\uFB04" => "ffl"
-
-# ſt => ft
-"\uFB05" => "ft"
-
-# st => st
-"\uFB06" => "st"
diff --git a/solr/example/example-DIH/solr/solr/conf/protwords.txt b/solr/example/example-DIH/solr/solr/conf/protwords.txt
deleted file mode 100644
index 1dfc0ab..0000000
--- a/solr/example/example-DIH/solr/solr/conf/protwords.txt
+++ /dev/null
@@ -1,21 +0,0 @@
-# The ASF licenses this file to You under the Apache License, Version 2.0
-# (the "License"); you may not use this file except in compliance with
-# the License.  You may obtain a copy of the License at
-#
-#     http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-#-----------------------------------------------------------------------
-# Use a protected word file to protect against the stemmer reducing two
-# unrelated words to the same base word.
-
-# Some non-words that normally won't be encountered,
-# just to test that they won't be stemmed.
-dontstems
-zwhacky
-
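-# A file like this is usually referenced from the schema, placed ahead of
-# the stemmer in the analyzer chain; a sketch (the surrounding analyzer
-# definition is assumed):
-#
-#   <filter class="solr.KeywordMarkerFilterFactory" protected="protwords.txt"/>
-#   <filter class="solr.PorterStemFilterFactory"/>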
diff --git a/solr/example/example-DIH/solr/solr/conf/solr-data-config.xml b/solr/example/example-DIH/solr/solr/conf/solr-data-config.xml
deleted file mode 100644
index ee7d6cf..0000000
--- a/solr/example/example-DIH/solr/solr/conf/solr-data-config.xml
+++ /dev/null
@@ -1,25 +0,0 @@
-<!--
-  Licensed to the Apache Software Foundation (ASF) under one or more
-  contributor license agreements.  See the NOTICE file distributed with
-  this work for additional information regarding copyright ownership.
-  The ASF licenses this file to You under the Apache License, Version 2.0
-  (the "License"); you may not use this file except in compliance with
-  the License.  You may obtain a copy of the License at
-
-      http://www.apache.org/licenses/LICENSE-2.0
-
-  Unless required by applicable law or agreed to in writing, software
-  distributed under the License is distributed on an "AS IS" BASIS,
-  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-  See the License for the specific language governing permissions and
-  limitations under the License.
-  -->
-
-<dataConfig>
-  <document>
-    <entity name="sep" processor="SolrEntityProcessor"
-            url="http://127.0.0.1:8983/solr/db"
-            query="*:*"
-            fl="*,orig_version_l:_version_,ignored_price_c:price_c"/>
-  </document>
-</dataConfig>
diff --git a/solr/example/example-DIH/solr/solr/conf/solrconfig.xml b/solr/example/example-DIH/solr/solr/conf/solrconfig.xml
deleted file mode 100644
index 56e7ed6..0000000
--- a/solr/example/example-DIH/solr/solr/conf/solrconfig.xml
+++ /dev/null
@@ -1,1340 +0,0 @@
-<?xml version="1.0" encoding="UTF-8" ?>
-<!--
- Licensed to the Apache Software Foundation (ASF) under one or more
- contributor license agreements.  See the NOTICE file distributed with
- this work for additional information regarding copyright ownership.
- The ASF licenses this file to You under the Apache License, Version 2.0
- (the "License"); you may not use this file except in compliance with
- the License.  You may obtain a copy of the License at
-
-     http://www.apache.org/licenses/LICENSE-2.0
-
- Unless required by applicable law or agreed to in writing, software
- distributed under the License is distributed on an "AS IS" BASIS,
- WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- See the License for the specific language governing permissions and
- limitations under the License.
--->
-
-<!--
-     For more details about configurations options that may appear in
-     this file, see http://wiki.apache.org/solr/SolrConfigXml.
--->
-<config>
-  <!-- In all configuration below, a prefix of "solr." for class names
-       is an alias that causes solr to search appropriate packages,
-       including org.apache.solr.(search|update|request|core|analysis)
-
-       You may also specify a fully qualified Java classname if you
-       have your own custom plugins.
-    -->
-
-  <!-- Controls what version of Lucene various components of Solr
-       adhere to.  Generally, you want to use the latest version to
-       get all bug fixes and improvements. It is highly recommended
-       that you fully re-index after changing this setting as it can
-       affect both how text is indexed and queried.
-  -->
-  <luceneMatchVersion>9.0.0</luceneMatchVersion>
-
-  <!-- <lib/> directives can be used to instruct Solr to load any Jars
-       identified and use them to resolve any "plugins" specified in
-       your solrconfig.xml or schema.xml (ie: Analyzers, Request
-       Handlers, etc...).
-
-       All directories and paths are resolved relative to the
-       instanceDir.
-
-       Please note that <lib/> directives are processed in the order
-       that they appear in your solrconfig.xml file, and are "stacked"
-       on top of each other when building a ClassLoader - so if you have
-       plugin jars with dependencies on other jars, the "lower level"
-       dependency jars should be loaded first.
-
-       If a "./lib" directory exists in your instanceDir, all files
-       found in it are included as if you had used the following
-       syntax...
-
-              <lib dir="./lib" />
-    -->
-
-  <!-- A 'dir' option by itself adds any files found in the directory
-       to the classpath; this is useful for including all jars in a
-       directory.
-
-       When a 'regex' is specified in addition to a 'dir', only the
-       files in that directory which completely match the regex
-       (anchored on both ends) will be included.
-
-       If a 'dir' option (with or without a regex) is used and nothing
-       is found that matches, a warning will be logged.
-
-       The examples below can be used to load some solr-contribs along
-       with their external dependencies.
-    -->
-  <lib dir="${solr.install.dir:../../../..}/dist/" regex="solr-dataimporthandler-.*\.jar" />
-
-  <lib dir="${solr.install.dir:../../../..}/contrib/extraction/lib" regex=".*\.jar" />
-  <lib dir="${solr.install.dir:../../../..}/dist/" regex="solr-cell-\d.*\.jar" />
-
-  <lib dir="${solr.install.dir:../../../..}/contrib/langid/lib/" regex=".*\.jar" />
-  <lib dir="${solr.install.dir:../../../..}/dist/" regex="solr-langid-\d.*\.jar" />
-
-  <lib dir="${solr.install.dir:../../../..}/contrib/velocity/lib" regex=".*\.jar" />
-  <lib dir="${solr.install.dir:../../../..}/dist/" regex="solr-velocity-\d.*\.jar" />
-
-  <!-- an exact 'path' can be used instead of a 'dir' to specify a
-       specific jar file.  This will cause a serious error to be logged
-       if it can't be loaded.
-    -->
-  <!--
-     <lib path="../a-jar-that-does-not-exist.jar" />
-  -->
-
-  <!-- Data Directory
-
-       Used to specify an alternate directory to hold all index data
-       other than the default ./data under the Solr home.  If
-       replication is in use, this should match the replication
-       configuration.
-    -->
-  <dataDir>${solr.data.dir:}</dataDir>
-
-
-  <!-- The DirectoryFactory to use for indexes.
-
-       solr.StandardDirectoryFactory is filesystem
-       based and tries to pick the best implementation for the current
-       JVM and platform.  solr.NRTCachingDirectoryFactory, the default,
-       wraps solr.StandardDirectoryFactory and caches small files in memory
-       for better NRT performance.
-
-       One can force a particular implementation via solr.MMapDirectoryFactory
-       or solr.NIOFSDirectoryFactory.
-
-       solr.RAMDirectoryFactory is memory based and not persistent.
-    -->
-  <directoryFactory name="DirectoryFactory"
-                    class="${solr.directoryFactory:solr.NRTCachingDirectoryFactory}"/>
-
-  <!-- The CodecFactory for defining the format of the inverted index.
-       The default implementation is SchemaCodecFactory, which is the official Lucene
-       index format, but hooks into the schema to provide per-field customization of
-       the postings lists and per-document values in the fieldType element
-       (postingsFormat/docValuesFormat). Note that most of the alternative implementations
-       are experimental, so if you choose to customize the index format, it's a good
-       idea to convert back to the official format e.g. via IndexWriter.addIndexes(IndexReader)
-       before upgrading to a newer version to avoid unnecessary reindexing.
-  -->
-  <codecFactory class="solr.SchemaCodecFactory"/>
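-
-  <!-- For illustration only: with SchemaCodecFactory in place, a fieldType
-       in the schema can opt into a non-default postings format, e.g.
-       (the type name here is hypothetical):
-
-         <fieldType name="string_direct" class="solr.StrField"
-                    postingsFormat="Direct"/>
-    -->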
-
-  <!-- ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-       Index Config - These settings control low-level behavior of indexing
-       Most example settings here show the default value, but are commented
-       out, making it easier to see where customizations have been made.
-
-       Note: This replaces <indexDefaults> and <mainIndex> from older versions
-       ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ -->
-  <indexConfig>
-    <!-- maxFieldLength was removed in 4.0. To get similar behavior, include a
-         LimitTokenCountFilterFactory in your fieldType definition. E.g.
-     <filter class="solr.LimitTokenCountFilterFactory" maxTokenCount="10000"/>
-    -->
-    <!-- Maximum time to wait for a write lock (ms) for an IndexWriter. Default: 1000 -->
-    <!-- <writeLockTimeout>1000</writeLockTimeout>  -->
-
-    <!-- Expert: Enabling the compound file format will use fewer files for
-         the index, consuming fewer file descriptors at the expense of some
-         indexing performance. Default in Lucene is "true". Default in Solr
-         is "false" (since 3.6) -->
-    <!-- <useCompoundFile>false</useCompoundFile> -->
-
-    <!-- ramBufferSizeMB sets the amount of RAM that may be used by Lucene
-         indexing for buffering added documents and deletions before they are
-         flushed to the Directory.
-         maxBufferedDocs sets a limit on the number of documents buffered
-         before flushing.
-         If both ramBufferSizeMB and maxBufferedDocs are set, then
-         Lucene will flush based on whichever limit is hit first.
-         The default is 100 MB.  -->
-    <!-- <ramBufferSizeMB>100</ramBufferSizeMB> -->
-    <!-- <maxBufferedDocs>1000</maxBufferedDocs> -->
-
-    <!-- Expert: Merge Policy
-         The Merge Policy in Lucene controls how merging of segments is done.
-         The default since Solr/Lucene 3.3 is TieredMergePolicy.
-         The default since Lucene 2.3 was the LogByteSizeMergePolicy,
-         Even older versions of Lucene used LogDocMergePolicy.
-      -->
-    <!--
-        <mergePolicyFactory class="solr.TieredMergePolicyFactory">
-          <int name="maxMergeAtOnce">10</int>
-          <int name="segmentsPerTier">10</int>
-        </mergePolicyFactory>
-     -->
-
-    <!-- Expert: Merge Scheduler
-         The Merge Scheduler in Lucene controls how merges are
-         performed.  The ConcurrentMergeScheduler (Lucene 2.3 default)
-         can perform merges in the background using separate threads.
-         The SerialMergeScheduler (Lucene 2.2 default) does not.
-     -->
-    <!--
-       <mergeScheduler class="org.apache.lucene.index.ConcurrentMergeScheduler"/>
-       -->
-
-    <!-- LockFactory
-
-         This option specifies which Lucene LockFactory implementation
-         to use.
-
-         single = SingleInstanceLockFactory - suggested for a
-                  read-only index or when there is no possibility of
-                  another process trying to modify the index.
-         native = NativeFSLockFactory - uses OS native file locking.
-                  Do not use when multiple solr webapps in the same
-                  JVM are attempting to share a single index.
-         simple = SimpleFSLockFactory  - uses a plain file for locking
-
-         Defaults: 'native' is default for Solr3.6 and later, otherwise
-                   'simple' is the default
-
-         More details on the nuances of each LockFactory...
-         http://wiki.apache.org/lucene-java/AvailableLockFactories
-    -->
-    <lockType>${solr.lock.type:native}</lockType>
-
-    <!-- Commit Deletion Policy
-         Custom deletion policies can be specified here. The class must
-         implement org.apache.lucene.index.IndexDeletionPolicy.
-
-         The default Solr IndexDeletionPolicy implementation supports
-         deleting index commit points on number of commits, age of
-         commit point and optimized status.
-
-         The latest commit point should always be preserved regardless
-         of the criteria.
-    -->
-    <!--
-    <deletionPolicy class="solr.SolrDeletionPolicy">
-    -->
-      <!-- The number of commit points to be kept -->
-      <!-- <str name="maxCommitsToKeep">1</str> -->
-      <!-- The number of optimized commit points to be kept -->
-      <!-- <str name="maxOptimizedCommitsToKeep">0</str> -->
-      <!--
-          Delete all commit points once they have reached the given age.
-          Supports DateMathParser syntax e.g.
-        -->
-      <!--
-         <str name="maxCommitAge">30MINUTES</str>
-         <str name="maxCommitAge">1DAY</str>
-      -->
-    <!--
-    </deletionPolicy>
-    -->
-
-    <!-- Lucene Infostream
-
-         To aid in advanced debugging, Lucene provides an "InfoStream"
-         of detailed information when indexing.
-
-         Setting the value to true will instruct the underlying Lucene
-         IndexWriter to write its info stream to solr's log. By default,
-         this is enabled here, and controlled through log4j2.xml
-      -->
-     <infoStream>true</infoStream>
-  </indexConfig>
-
-
-  <!-- JMX
-
-       This example enables JMX if and only if an existing MBeanServer
-       is found, use this if you want to configure JMX through JVM
-       parameters. Remove this to disable exposing Solr configuration
-       and statistics to JMX.
-
-       For more details see http://wiki.apache.org/solr/SolrJmx
-    -->
-  <jmx />
-  <!-- If you want to connect to a particular server, specify the
-       agentId
-    -->
-  <!-- <jmx agentId="myAgent" /> -->
-  <!-- If you want to start a new MBeanServer, specify the serviceUrl -->
-  <!-- <jmx serviceUrl="service:jmx:rmi:///jndi/rmi://localhost:9999/solr"/>
-    -->
-
-  <!-- The default high-performance update handler -->
-  <updateHandler class="solr.DirectUpdateHandler2">
-
-    <!-- Enables a transaction log, used for real-time get, durability, and
-         SolrCloud replica recovery.  The log can grow as big as
-         uncommitted changes to the index, so use of a hard autoCommit
-         is recommended (see below).
-         "dir" - the target directory for transaction logs, defaults to the
-                solr data directory.  -->
-    <updateLog>
-      <str name="dir">${solr.ulog.dir:}</str>
-    </updateLog>
-
-    <!-- AutoCommit
-
-         Perform a hard commit automatically under certain conditions.
-         Instead of enabling autoCommit, consider using "commitWithin"
-         when adding documents.
-
-         http://wiki.apache.org/solr/UpdateXmlMessages
-
-         maxDocs - Maximum number of documents to add since the last
-                   commit before automatically triggering a new commit.
-
-         maxTime - Maximum amount of time in ms that is allowed to pass
-                   since a document was added before automatically
-                   triggering a new commit.
-         openSearcher - if false, the commit causes recent index changes
-           to be flushed to stable storage, but does not cause a new
-           searcher to be opened to make those changes visible.
-
-         If the updateLog is enabled, then it's highly recommended to
-         have some sort of hard autoCommit to limit the log size.
-      -->
-     <autoCommit>
-       <maxTime>${solr.autoCommit.maxTime:15000}</maxTime>
-       <openSearcher>false</openSearcher>
-     </autoCommit>
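-
-     <!-- Instead of (or alongside) autoCommit, individual update requests
-          may carry a commitWithin deadline.  A sketch of an XML update
-          message (the document contents are illustrative):
-
-            <add commitWithin="5000">
-              <doc><field name="id">1</field></doc>
-            </add>
-       -->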
-
-    <!-- softAutoCommit is like autoCommit except it causes a
-         'soft' commit which only ensures that changes are visible
-         but does not ensure that data is synced to disk.  This is
-         faster and more near-realtime friendly than a hard commit.
-      -->
-
-     <autoSoftCommit>
-       <maxTime>${solr.autoSoftCommit.maxTime:-1}</maxTime>
-     </autoSoftCommit>
-
-    <!-- Update Related Event Listeners
-
-         Various IndexWriter related events can trigger Listeners to
-         take actions.
-
-         postCommit - fired after every commit or optimize command
-         postOptimize - fired after every optimize command
-      -->
-
-  </updateHandler>
-
-  <!-- IndexReaderFactory
-
-       Use the following format to specify a custom IndexReaderFactory,
-       which allows for alternate IndexReader implementations.
-
-       ** Experimental Feature **
-
-       Please note - Using a custom IndexReaderFactory may prevent
-       certain other features from working. The API to
-       IndexReaderFactory may change without warning or may even be
-       removed from future releases if the problems cannot be
-       resolved.
-
-
-       ** Features that may not work with custom IndexReaderFactory **
-
-       The ReplicationHandler assumes a disk-resident index. Using a
-       custom IndexReader implementation may cause incompatibility
-       with ReplicationHandler and may cause replication to not work
-       correctly. See SOLR-1366 for details.
-
-    -->
-  <!--
-  <indexReaderFactory name="IndexReaderFactory" class="package.class">
-    <str name="someArg">Some Value</str>
-  </indexReaderFactory >
-  -->
-
-  <!-- ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-       Query section - these settings control query time things like caches
-       ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ -->
-  <query>
-    <!-- Max Boolean Clauses
-
-         Maximum number of clauses in each BooleanQuery; an exception
-         is thrown if this limit is exceeded.
-
-         ** WARNING **
-
-         This option actually modifies a global Lucene property that
-         will affect all SolrCores.  If multiple solrconfig.xml files
-         disagree on this property, the value at any given moment will
-         be based on the last SolrCore to be initialized.
-
-      -->
-    <maxBooleanClauses>${solr.max.booleanClauses:1024}</maxBooleanClauses>
-
-
-    <!-- Solr Internal Query Caches
-         Starting with Solr 9.0 the default cache implementation used is CaffeineCache.
-    -->
-
-    <!-- Filter Cache
-
-         Cache used by SolrIndexSearcher for filters (DocSets),
-         unordered sets of *all* documents that match a query.  When a
-         new searcher is opened, its caches may be prepopulated or
-         "autowarmed" using data from caches in the old searcher.
-         autowarmCount is the number of items to prepopulate.
-
-         Parameters:
-           class - the SolrCache implementation
-           size - the maximum number of entries in the cache
-           initialSize - the initial capacity (number of entries) of
-               the cache.  (see java.util.HashMap)
-           autowarmCount - the number of entries to prepopulate from
-               an old cache.
-      -->
-    <filterCache class="solr.CaffeineCache"
-                 size="512"
-                 initialSize="512"
-                 autowarmCount="0"/>
-
-    <!-- Query Result Cache
-
-         Caches results of searches - ordered lists of document ids
-         (DocList) based on a query, a sort, and the range of documents requested.
-      -->
-    <queryResultCache class="solr.CaffeineCache"
-                     size="512"
-                     initialSize="512"
-                     autowarmCount="0"/>
-
-    <!-- Document Cache
-
-         Caches Lucene Document objects (the stored fields for each
-         document).  Since Lucene internal document ids are transient,
-         this cache will not be autowarmed.
-      -->
-    <documentCache class="solr.CaffeineCache"
-                   size="512"
-                   initialSize="512"
-                   autowarmCount="0"/>
-
-    <!-- custom cache currently used by block join -->
-    <cache name="perSegFilter"
-      class="solr.search.CaffeineCache"
-      size="10"
-      initialSize="0"
-      autowarmCount="10"
-      regenerator="solr.NoOpRegenerator" />
-
-    <!-- Field Value Cache
-
-         Cache used to hold field values that are quickly accessible
-         by document id.  The fieldValueCache is created by default
-         even if not configured here.
-      -->
-    <!--
-       <fieldValueCache class="solr.CaffeineCache"
-                        size="512"
-                        autowarmCount="128"
-                        showItems="32" />
-      -->
-
-    <!-- Custom Cache
-
-         Example of a generic cache.  These caches may be accessed by
-         name through SolrIndexSearcher.getCache(),cacheLookup(), and
-         cacheInsert().  The purpose is to enable easy caching of
-         user/application level data.  The regenerator argument should
-         be specified as an implementation of solr.CacheRegenerator
-         if autowarming is desired.
-      -->
-    <!--
-       <cache name="myUserCache"
-              class="solr.CaffeineCache"
-              size="4096"
-              initialSize="1024"
-              autowarmCount="1024"
-              regenerator="com.mycompany.MyRegenerator"
-              />
-      -->
-
-
-    <!-- Lazy Field Loading
-
-         If true, stored fields that are not requested will be loaded
-         lazily.  This can result in a significant speed improvement
-         if the usual case is to not load all stored fields,
-         especially if the skipped fields are large compressed text
-         fields.
-    -->
-    <enableLazyFieldLoading>true</enableLazyFieldLoading>
-
-   <!-- Use Filter For Sorted Query
-
-        A possible optimization that attempts to use a filter to
-        satisfy a search.  If the requested sort does not include
-        score, then the filterCache will be checked for a filter
-        matching the query. If found, the filter will be used as the
-        source of document ids, and then the sort will be applied to
-        that.
-
-        For most situations, this will not be useful unless you
-        frequently get the same search repeatedly with different sort
-        options, and none of them ever use "score"
-     -->
-   <!--
-      <useFilterForSortedQuery>true</useFilterForSortedQuery>
-     -->
-
-   <!-- Result Window Size
-
-        An optimization for use with the queryResultCache.  When a search
-        is requested, a superset of the requested number of document ids
-        are collected.  For example, if a search for a particular query
-        requests matching documents 10 through 19, and queryResultWindowSize is 50,
-        then documents 0 through 49 will be collected and cached.  Any further
-        requests in that range can be satisfied via the cache.
-     -->
-   <queryResultWindowSize>20</queryResultWindowSize>
-
-   <!-- Maximum number of documents to cache for any entry in the
-        queryResultCache.
-     -->
-   <queryResultMaxDocsCached>200</queryResultMaxDocsCached>
-
-   <!-- Query Related Event Listeners
-
-        Various IndexSearcher related events can trigger Listeners to
-        take actions.
-
-        newSearcher - fired whenever a new searcher is being prepared
-        and there is a current searcher handling requests (aka
-        registered).  It can be used to prime certain caches to
-        prevent long request times for certain requests.
-
-        firstSearcher - fired whenever a new searcher is being
-        prepared but there is no current registered searcher to handle
-        requests or to gain autowarming data from.
-
-
-     -->
-    <!-- QuerySenderListener takes an array of NamedList and executes a
-         local query request for each NamedList in sequence.
-      -->
-    <listener event="newSearcher" class="solr.QuerySenderListener">
-      <arr name="queries">
-        <!--
-           <lst><str name="q">solr</str><str name="sort">price asc</str></lst>
-           <lst><str name="q">rocks</str><str name="sort">weight asc</str></lst>
-          -->
-      </arr>
-    </listener>
-    <listener event="firstSearcher" class="solr.QuerySenderListener">
-      <arr name="queries">
-        <lst>
-          <str name="q">static firstSearcher warming in solrconfig.xml</str>
-        </lst>
-      </arr>
-    </listener>
-
-    <!-- Use Cold Searcher
-
-         If a search request comes in and there is no current
-         registered searcher, then immediately register the still
-         warming searcher and use it.  If "false" then all requests
-         will block until the first searcher is done warming.
-      -->
-    <useColdSearcher>false</useColdSearcher>
-
-  </query>
-
-
-  <!-- Request Dispatcher
-
-       This section contains instructions for how the SolrDispatchFilter
-       should behave when processing requests for this SolrCore.
-    -->
-  <requestDispatcher>
-    <!-- Request Parsing
-
-         These settings indicate how Solr Requests may be parsed, and
-         what restrictions may be placed on the ContentStreams from
-         those requests
-
-         enableRemoteStreaming - enables use of the stream.file
-         and stream.url parameters for specifying remote streams.
-
-         multipartUploadLimitInKB - specifies the max size (in KiB) of
-         Multipart File Uploads that Solr will allow in a Request.
-
-         formdataUploadLimitInKB - specifies the max size (in KiB) of
-         form data (application/x-www-form-urlencoded) sent via
-         POST. You can use POST to pass request parameters not
-         fitting into the URL.
-
-         addHttpRequestToContext - if set to true, it will instruct
-         the requestParsers to include the original HttpServletRequest
-         object in the context map of the SolrQueryRequest under the
-         key "httpRequest". It will not be used by any of the existing
-         Solr components, but may be useful when developing custom
-         plugins.
-
-         *** WARNING ***
-         Before enabling remote streaming, you should make sure your
-         system has authentication enabled.
-
-    <requestParsers enableRemoteStreaming="false"
-                    multipartUploadLimitInKB="-1"
-                    formdataUploadLimitInKB="-1"
-                    addHttpRequestToContext="false"/>
-      -->
-
-    <!-- HTTP Caching
-
-         Set HTTP caching related parameters (for proxy caches and clients).
-
-         The options below instruct Solr not to output any HTTP Caching
-         related headers
-      -->
-    <httpCaching never304="true" />
-    <!-- If you include a <cacheControl> directive, it will be used to
-         generate a Cache-Control header (as well as an Expires header
-         if the value contains "max-age=")
-
-         By default, no Cache-Control header is generated.
-
-         You can use the <cacheControl> option even if you have set
-         never304="true"
-      -->
-    <!--
-       <httpCaching never304="true" >
-         <cacheControl>max-age=30, public</cacheControl>
-       </httpCaching>
-      -->
-    <!-- To enable Solr to respond with automatically generated HTTP
-         Caching headers, and to respond to Cache Validation requests
-         correctly, set the value of never304="false"
-
-         This will cause Solr to generate Last-Modified and ETag
-         headers based on the properties of the Index.
-
-         The following options can also be specified to affect the
-         values of these headers...
-
-         lastModFrom - the default value is "openTime" which means the
-         Last-Modified value (and validation against If-Modified-Since
-         requests) will all be relative to when the current Searcher
-         was opened.  You can change it to lastModFrom="dirLastMod" if
-         you want the value to exactly correspond to when the physical
-         index was last modified.
-
-         etagSeed="..." is an option you can change to force the ETag
-         header (and validation against If-None-Match requests) to be
-         different even if the index has not changed (ie: when making
-         significant changes to your config file)
-
-         (lastModifiedFrom and etagSeed are both ignored if you use
-         the never304="true" option)
-      -->
-    <!--
-       <httpCaching lastModifiedFrom="openTime"
-                    etagSeed="Solr">
-         <cacheControl>max-age=30, public</cacheControl>
-       </httpCaching>
-      -->
-  </requestDispatcher>
-
-  <!-- Request Handlers
-
-       http://wiki.apache.org/solr/SolrRequestHandler
-
-       Incoming queries will be dispatched to a specific handler by name
-       based on the path specified in the request.
-
-       If a Request Handler is declared with startup="lazy", then it will
-       not be initialized until the first request that uses it.
-
-    -->
-
-  <requestHandler name="/dataimport" class="solr.DataImportHandler">
-    <lst name="defaults">
-      <str name="config">solr-data-config.xml</str>
-    </lst>
-  </requestHandler>
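-
-  <!-- With the handler above registered, an import is typically triggered
-       over HTTP, e.g. (host, port and core name are assumptions):
-
-         http://localhost:8983/solr/db/dataimport?command=full-import
-    -->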
-
-  <!-- SearchHandler
-
-       http://wiki.apache.org/solr/SearchHandler
-
-       For processing Search Queries, the primary Request Handler
-       provided with Solr is "SearchHandler". It delegates to a sequence
-       of SearchComponents (see below) and supports distributed
-       queries across multiple shards
-    -->
-  <requestHandler name="/select" class="solr.SearchHandler">
-    <!-- default values for query parameters can be specified, these
-         will be overridden by parameters in the request
-      -->
-     <lst name="defaults">
-       <str name="echoParams">explicit</str>
-       <int name="rows">10</int>
-       <str name="df">text</str>
-       <!-- Change from JSON to XML format (the default prior to Solr 7.0)
-          <str name="wt">xml</str> 
-         -->
-     </lst>
-    <!-- In addition to defaults, "appends" params can be specified
-         to identify values which should be appended to the list of
-         multi-val params from the query (or the existing "defaults").
-      -->
-    <!-- In this example, the param "fq=instock:true" would be appended to
-         any query time fq params the user may specify, as a mechanism for
-         partitioning the index, independent of any user selected filtering
-         that may also be desired (perhaps as a result of faceted searching).
-
-         NOTE: there is *absolutely* nothing a client can do to prevent these
-         "appends" values from being used, so don't use this mechanism
-         unless you are sure you always want it.
-      -->
-    <!--
-       <lst name="appends">
-         <str name="fq">inStock:true</str>
-       </lst>
-      -->
-    <!-- "invariants" are a way of letting the Solr maintainer lock down
-         the options available to Solr clients.  Any params values
-         specified here are used regardless of what values may be specified
-         in either the query, the "defaults", or the "appends" params.
-
-         In this example, the facet.field and facet.query params would
-         be fixed, limiting the facets clients can use.  Faceting is
-         not turned on by default - but if the client does specify
-         facet=true in the request, these are the only facets they
-         will be able to see counts for; regardless of what other
-         facet.field or facet.query params they may specify.
-
-         NOTE: there is *absolutely* nothing a client can do to prevent these
-         "invariants" values from being used, so don't use this mechanism
-         unless you are sure you always want it.
-      -->
-    <!--
-       <lst name="invariants">
-         <str name="facet.field">cat</str>
-         <str name="facet.field">manu_exact</str>
-         <str name="facet.query">price:[* TO 500]</str>
-         <str name="facet.query">price:[500 TO *]</str>
-       </lst>
-      -->
-    <!-- If the default list of SearchComponents is not desired, that
-         list can either be overridden completely, or components can be
-         prepended or appended to the default list.  (see below)
-      -->
-    <!--
-       <arr name="components">
-         <str>nameOfCustomComponent1</str>
-         <str>nameOfCustomComponent2</str>
-       </arr>
-      -->
-    </requestHandler>
-
-  <!-- A request handler that returns indented JSON by default -->
-  <requestHandler name="/query" class="solr.SearchHandler">
-     <lst name="defaults">
-       <str name="echoParams">explicit</str>
-       <str name="wt">json</str>
-       <str name="indent">true</str>
-       <str name="df">text</str>
-     </lst>
-  </requestHandler>
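-
-  <!-- An example request against the handler above (host, port and core
-       name are assumptions):
-
-         curl "http://localhost:8983/solr/db/query?q=*:*&rows=5"
-    -->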
-
-  <!-- A Robust Example
-
-       This example SearchHandler declaration shows off usage of the
-       SearchHandler with many defaults declared
-
-       Note that multiple instances of the same Request Handler
-       (SearchHandler) can be registered multiple times with different
-       names (and different init parameters)
-    -->
-  <requestHandler name="/browse" class="solr.SearchHandler">
-    <lst name="defaults">
-      <str name="echoParams">explicit</str>
-
-      <!-- VelocityResponseWriter settings -->
-      <str name="wt">velocity</str>
-      <str name="v.template">browse</str>
-      <str name="v.layout">layout</str>
-
-      <!-- Query settings -->
-      <str name="defType">edismax</str>
-      <str name="q.alt">*:*</str>
-      <str name="rows">10</str>
-      <str name="fl">*,score</str>
-
-      <!-- Faceting defaults -->
-      <str name="facet">on</str>
-      <str name="facet.mincount">1</str>
-    </lst>
-  </requestHandler>
-
-  <initParams path="/update/**,/query,/select,/tvrh,/elevate,/spell,/browse">
-    <lst name="defaults">
-      <str name="df">text</str>
-    </lst>
-  </initParams>
-
-  <!-- Solr Cell Update Request Handler
-
-       http://wiki.apache.org/solr/ExtractingRequestHandler
-
-    -->
-  <requestHandler name="/update/extract"
-                  startup="lazy"
-                  class="solr.extraction.ExtractingRequestHandler" >
-    <lst name="defaults">
-      <str name="lowernames">true</str>
-      <str name="uprefix">ignored_</str>
-
-      <!-- capture link hrefs but ignore div attributes -->
-      <str name="captureAttr">true</str>
-      <str name="fmap.a">links</str>
-      <str name="fmap.div">ignored_</str>
-    </lst>
-  </requestHandler>
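-
-  <!-- An example of posting a local file through the handler above for
-       extraction (host, port, core name and file are assumptions):
-
-         curl "http://localhost:8983/solr/db/update/extract?literal.id=doc1&commit=true" \
-              -F "myfile=@example.pdf"
-    -->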
-  <!-- Search Components
-
-       Search components are registered to SolrCore and used by
-       instances of SearchHandler (which can access them by name)
-
-       By default, the following components are available:
-
-       <searchComponent name="query"     class="solr.QueryComponent" />
-       <searchComponent name="facet"     class="solr.FacetComponent" />
-       <searchComponent name="mlt"       class="solr.MoreLikeThisComponent" />
-       <searchComponent name="highlight" class="solr.HighlightComponent" />
-       <searchComponent name="stats"     class="solr.StatsComponent" />
-       <searchComponent name="debug"     class="solr.DebugComponent" />
-
-       Default configuration in a requestHandler would look like:
-
-       <arr name="components">
-         <str>query</str>
-         <str>facet</str>
-         <str>mlt</str>
-         <str>highlight</str>
-         <str>stats</str>
-         <str>debug</str>
-       </arr>
-
-       If you register a searchComponent to one of the standard names,
-       that will be used instead of the default.
-
-       To insert components before or after the 'standard' components, use:
-
-       <arr name="first-components">
-         <str>myFirstComponentName</str>
-       </arr>
-
-       <arr name="last-components">
-         <str>myLastComponentName</str>
-       </arr>
-
-       NOTE: The component registered with the name "debug" will
-       always be executed after the "last-components"
-
-     -->
-
-   <!-- Spell Check
-
-        The spell check component can return a list of alternative spelling
-        suggestions.
-
-        http://wiki.apache.org/solr/SpellCheckComponent
-     -->
-  <searchComponent name="spellcheck" class="solr.SpellCheckComponent">
-
-    <str name="queryAnalyzerFieldType">text_general</str>
-
-    <!-- Multiple "Spell Checkers" can be declared and used by this
-         component
-      -->
-
-    <!-- a spellchecker built from a field of the main index -->
-    <lst name="spellchecker">
-      <str name="name">default</str>
-      <str name="field">text</str>
-      <str name="classname">solr.DirectSolrSpellChecker</str>
-      <!-- the spellcheck distance measure used; the default is the internal Levenshtein -->
-      <str name="distanceMeasure">internal</str>
-      <!-- minimum accuracy needed to be considered a valid spellcheck suggestion -->
-      <float name="accuracy">0.5</float>
-      <!-- the maximum #edits we consider when enumerating terms: can be 1 or 2 -->
-      <int name="maxEdits">2</int>
-      <!-- the minimum shared prefix when enumerating terms -->
-      <int name="minPrefix">1</int>
-      <!-- maximum number of inspections per result. -->
-      <int name="maxInspections">5</int>
-      <!-- minimum length of a query term to be considered for correction -->
-      <int name="minQueryLength">4</int>
-      <!-- maximum fraction of documents a query term may appear in to still be considered for correction -->
-      <float name="maxQueryFrequency">0.01</float>
-      <!-- uncomment this to require suggestions to occur in 1% of the documents
-        <float name="thresholdTokenFrequency">.01</float>
-      -->
-    </lst>
-
-    <!-- a spellchecker that can break or combine words.  See "/spell" handler below for usage -->
-    <lst name="spellchecker">
-      <str name="name">wordbreak</str>
-      <str name="classname">solr.WordBreakSolrSpellChecker</str>
-      <str name="field">name</str>
-      <str name="combineWords">true</str>
-      <str name="breakWords">true</str>
-      <int name="maxChanges">10</int>
-    </lst>
-
-    <!-- a spellchecker that uses a different distance measure -->
-    <!--
-       <lst name="spellchecker">
-         <str name="name">jarowinkler</str>
-         <str name="field">spell</str>
-         <str name="classname">solr.DirectSolrSpellChecker</str>
-         <str name="distanceMeasure">
-           org.apache.lucene.search.spell.JaroWinklerDistance
-         </str>
-       </lst>
-     -->
-
-    <!-- a spellchecker that uses an alternate comparator
-
-         comparatorClass can be one of:
-          1. score (default)
-          2. freq (Frequency first, then score)
-          3. A fully qualified class name
-      -->
-    <!--
-       <lst name="spellchecker">
-         <str name="name">freq</str>
-         <str name="field">lowerfilt</str>
-         <str name="classname">solr.DirectSolrSpellChecker</str>
-         <str name="comparatorClass">freq</str>
-      -->
-
-    <!-- A spellchecker that reads the list of words from a file -->
-    <!--
-       <lst name="spellchecker">
-         <str name="classname">solr.FileBasedSpellChecker</str>
-         <str name="name">file</str>
-         <str name="sourceLocation">spellings.txt</str>
-         <str name="characterEncoding">UTF-8</str>
-         <str name="spellcheckIndexDir">spellcheckerFile</str>
-       </lst>
-      -->
-  </searchComponent>
-
-  <!-- A request handler for demonstrating the spellcheck component.
-
-       NOTE: This is purely an example.  The whole purpose of the
-       SpellCheckComponent is to hook it into the request handler that
-       handles your normal user queries so that a separate request is
-       not needed to get suggestions.
-
-       IN OTHER WORDS, THERE IS A REALLY GOOD CHANCE THE SETUP BELOW IS
-       NOT WHAT YOU WANT FOR YOUR PRODUCTION SYSTEM!
-
-       See http://wiki.apache.org/solr/SpellCheckComponent for details
-       on the request parameters.
-    -->
-  <requestHandler name="/spell" class="solr.SearchHandler" startup="lazy">
-    <lst name="defaults">
-      <str name="df">text</str>
-      <!-- Solr will use suggestions from both the 'default' spellchecker
-           and from the 'wordbreak' spellchecker and combine them.
-           Collations (re-written queries) can include a combination of
-           corrections from both spellcheckers -->
-      <str name="spellcheck.dictionary">default</str>
-      <str name="spellcheck.dictionary">wordbreak</str>
-      <str name="spellcheck">on</str>
-      <str name="spellcheck.extendedResults">true</str>
-      <str name="spellcheck.count">10</str>
-      <str name="spellcheck.alternativeTermCount">5</str>
-      <str name="spellcheck.maxResultsForSuggest">5</str>
-      <str name="spellcheck.collate">true</str>
-      <str name="spellcheck.collateExtendedResults">true</str>
-      <str name="spellcheck.maxCollationTries">10</str>
-      <str name="spellcheck.maxCollations">5</str>
-    </lst>
-    <arr name="last-components">
-      <str>spellcheck</str>
-    </arr>
-  </requestHandler>
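-
-  <!-- A sample request against this handler (a sketch; the misspelled
-       query term is only for illustration):
-
-       /spell?q=delll&spellcheck.collateExtendedResults=true
-    -->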
-
-  <searchComponent name="suggest" class="solr.SuggestComponent">
-    <lst name="suggester">
-      <str name="name">mySuggester</str>
-      <str name="lookupImpl">FuzzyLookupFactory</str>      <!-- org.apache.solr.spelling.suggest.fst -->
-      <str name="dictionaryImpl">DocumentDictionaryFactory</str>     <!-- org.apache.solr.spelling.suggest.HighFrequencyDictionaryFactory -->
-      <str name="field">cat</str>
-      <str name="weightField">price</str>
-      <str name="suggestAnalyzerFieldType">string</str>
-    </lst>
-  </searchComponent>
-
-  <requestHandler name="/suggest" class="solr.SearchHandler" startup="lazy">
-    <lst name="defaults">
-      <str name="suggest">true</str>
-      <str name="suggest.count">10</str>
-    </lst>
-    <arr name="components">
-      <str>suggest</str>
-    </arr>
-  </requestHandler>
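-
-  <!-- The suggester index is not built automatically; it can be built as
-       part of a request (a sketch, with an illustrative prefix):
-
-       /suggest?suggest=true&suggest.dictionary=mySuggester&suggest.build=true&suggest.q=elec
-    -->
-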
-  <!-- Term Vector Component
-
-       http://wiki.apache.org/solr/TermVectorComponent
-    -->
-  <searchComponent name="tvComponent" class="solr.TermVectorComponent"/>
-
-  <!-- A request handler for demonstrating the term vector component
-
-       This is purely an example.
-
-       In reality you will likely want to add the component to your
-       already specified request handlers.
-    -->
-  <requestHandler name="/tvrh" class="solr.SearchHandler" startup="lazy">
-    <lst name="defaults">
-      <str name="df">text</str>
-      <bool name="tv">true</bool>
-    </lst>
-    <arr name="last-components">
-      <str>tvComponent</str>
-    </arr>
-  </requestHandler>
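-
-  <!-- Term vector details are requested per field via parameters such as
-       tv.tf, tv.df and tv.positions, e.g. (a sketch; the query is only
-       for illustration):
-
-       /tvrh?q=text:solr&fl=id&tv.tf=true&tv.df=true
-    -->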
-
-  <!-- Terms Component
-
-       http://wiki.apache.org/solr/TermsComponent
-
-       A component to return terms and document frequency of those
-       terms
-    -->
-  <searchComponent name="terms" class="solr.TermsComponent"/>
-
-  <!-- A request handler for demonstrating the terms component -->
-  <requestHandler name="/terms" class="solr.SearchHandler" startup="lazy">
-     <lst name="defaults">
-      <bool name="terms">true</bool>
-      <bool name="distrib">false</bool>
-    </lst>
-    <arr name="components">
-      <str>terms</str>
-    </arr>
-  </requestHandler>
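-
-  <!-- For example, to list indexed terms from a field (a sketch; the
-       field name is a placeholder):
-
-       /terms?terms.fl=name&terms.prefix=a&terms.limit=10
-    -->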
-
-
-  <!-- Query Elevation Component
-
-       http://wiki.apache.org/solr/QueryElevationComponent
-
-       a search component that enables you to configure the top
-       results for a given query regardless of the normal Lucene
-       scoring.
-    -->
-  <searchComponent name="elevator" class="solr.QueryElevationComponent" >
-    <!-- pick a fieldType to analyze queries -->
-    <str name="queryFieldType">string</str>
-    <str name="config-file">elevate.xml</str>
-  </searchComponent>
-
-  <!-- A request handler for demonstrating the elevator component -->
-  <requestHandler name="/elevate" class="solr.SearchHandler" startup="lazy">
-    <lst name="defaults">
-      <str name="echoParams">explicit</str>
-      <str name="df">text</str>
-    </lst>
-    <arr name="last-components">
-      <str>elevator</str>
-    </arr>
-  </requestHandler>
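-
-  <!-- The referenced elevate.xml maps query texts to pinned documents,
-       for example (a sketch with placeholder document ids):
-
-       <elevate>
-         <query text="ipod">
-           <doc id="MA147LL/A" />
-           <doc id="IW-02" exclude="true" />
-         </query>
-       </elevate>
-    -->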
-
-  <!-- Highlighting Component
-
-       http://wiki.apache.org/solr/HighlightingParameters
-    -->
-  <searchComponent class="solr.HighlightComponent" name="highlight">
-    <highlighting>
-      <!-- Configure the standard fragmenter -->
-      <!-- This could most likely be commented out in the "default" case -->
-      <fragmenter name="gap"
-                  default="true"
-                  class="solr.highlight.GapFragmenter">
-        <lst name="defaults">
-          <int name="hl.fragsize">100</int>
-        </lst>
-      </fragmenter>
-
-      <!-- A regular-expression-based fragmenter
-           (for sentence extraction)
-        -->
-      <fragmenter name="regex"
-                  class="solr.highlight.RegexFragmenter">
-        <lst name="defaults">
-          <!-- slightly smaller fragsizes work better because of slop -->
-          <int name="hl.fragsize">70</int>
-          <!-- allow 50% slop on fragment sizes -->
-          <float name="hl.regex.slop">0.5</float>
-          <!-- a basic sentence pattern -->
-          <str name="hl.regex.pattern">[-\w ,/\n\&quot;&apos;]{20,200}</str>
-        </lst>
-      </fragmenter>
-
-      <!-- Configure the standard formatter -->
-      <formatter name="html"
-                 default="true"
-                 class="solr.highlight.HtmlFormatter">
-        <lst name="defaults">
-          <str name="hl.simple.pre"><![CDATA[<em>]]></str>
-          <str name="hl.simple.post"><![CDATA[</em>]]></str>
-        </lst>
-      </formatter>
-
-      <!-- Configure the standard encoder -->
-      <encoder name="html"
-               class="solr.highlight.HtmlEncoder" />
-
-      <!-- Configure the standard fragListBuilder -->
-      <fragListBuilder name="simple"
-                       class="solr.highlight.SimpleFragListBuilder"/>
-
-      <!-- Configure the single fragListBuilder -->
-      <fragListBuilder name="single"
-                       class="solr.highlight.SingleFragListBuilder"/>
-
-      <!-- Configure the weighted fragListBuilder -->
-      <fragListBuilder name="weighted"
-                       default="true"
-                       class="solr.highlight.WeightedFragListBuilder"/>
-
-      <!-- default tag FragmentsBuilder -->
-      <fragmentsBuilder name="default"
-                        default="true"
-                        class="solr.highlight.ScoreOrderFragmentsBuilder">
-        <!--
-        <lst name="defaults">
-          <str name="hl.multiValuedSeparatorChar">/</str>
-        </lst>
-        -->
-      </fragmentsBuilder>
-
-      <!-- multi-colored tag FragmentsBuilder -->
-      <fragmentsBuilder name="colored"
-                        class="solr.highlight.ScoreOrderFragmentsBuilder">
-        <lst name="defaults">
-          <str name="hl.tag.pre"><![CDATA[
-               <b style="background:yellow">,<b style="background:lawgreen">,
-               <b style="background:aquamarine">,<b style="background:magenta">,
-               <b style="background:palegreen">,<b style="background:coral">,
-               <b style="background:wheat">,<b style="background:khaki">,
-               <b style="background:lime">,<b style="background:deepskyblue">]]></str>
-          <str name="hl.tag.post"><![CDATA[</b>]]></str>
-        </lst>
-      </fragmentsBuilder>
-
-      <boundaryScanner name="default"
-                       default="true"
-                       class="solr.highlight.SimpleBoundaryScanner">
-        <lst name="defaults">
-          <str name="hl.bs.maxScan">10</str>
-          <str name="hl.bs.chars">.,!? &#9;&#10;&#13;</str>
-        </lst>
-      </boundaryScanner>
-
-      <boundaryScanner name="breakIterator"
-                       class="solr.highlight.BreakIteratorBoundaryScanner">
-        <lst name="defaults">
-          <!-- type should be one of CHARACTER, WORD (default), LINE or SENTENCE -->
-          <str name="hl.bs.type">WORD</str>
-          <!-- language and country are used when constructing the Locale object, -->
-          <!-- which in turn is used to get the BreakIterator instance -->
-          <str name="hl.bs.language">en</str>
-          <str name="hl.bs.country">US</str>
-        </lst>
-      </boundaryScanner>
-    </highlighting>
-  </searchComponent>
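-
-  <!-- Highlighting is then enabled per request, e.g. (a sketch; the
-       field name is a placeholder):
-
-       /select?q=features:power&hl=true&hl.fl=features&hl.fragsize=100
-    -->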
-
-  <!-- Update Processors
-
-       Chains of Update Processor Factories for dealing with Update
-       Requests can be declared, and then used by name in Update
-       Request Processors
-
-       http://wiki.apache.org/solr/UpdateRequestProcessor
-
-    -->
-  <!-- Deduplication
-
-       An example dedup update processor that creates the "id" field
-       on the fly based on the hash code of some other fields.  This
-       example has overwriteDupes set to false since we are using the
-       id field as the signatureField and Solr will maintain
-       uniqueness based on that anyway.
-
-    -->
-  <!--
-     <updateRequestProcessorChain name="dedupe">
-       <processor class="solr.processor.SignatureUpdateProcessorFactory">
-         <bool name="enabled">true</bool>
-         <str name="signatureField">id</str>
-         <bool name="overwriteDupes">false</bool>
-         <str name="fields">name,features,cat</str>
-         <str name="signatureClass">solr.processor.Lookup3Signature</str>
-       </processor>
-       <processor class="solr.LogUpdateProcessorFactory" />
-       <processor class="solr.RunUpdateProcessorFactory" />
-     </updateRequestProcessorChain>
-    -->
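-
-  <!-- A chain declared this way is engaged by naming it in the update
-       request, either via the update.chain request parameter or as a
-       default on an update handler (a sketch):
-
-       <requestHandler name="/update" class="solr.UpdateRequestHandler">
-         <lst name="defaults">
-           <str name="update.chain">dedupe</str>
-         </lst>
-       </requestHandler>
-    -->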
-
-  <!-- Language identification
-
-       This example update chain identifies the language of the incoming
-       documents using the langid contrib. The detected language is
-       written to field language_s. No field name mapping is done.
-       The fields used for detection are text, title, subject and description,
-       making this example suitable for detecting languages from full-text
-       rich documents injected via ExtractingRequestHandler.
-       See more about langId at http://wiki.apache.org/solr/LanguageDetection
-    -->
-    <!--
-     <updateRequestProcessorChain name="langid">
-       <processor class="org.apache.solr.update.processor.TikaLanguageIdentifierUpdateProcessorFactory">
-         <str name="langid.fl">text,title,subject,description</str>
-         <str name="langid.langField">language_s</str>
-         <str name="langid.fallback">en</str>
-       </processor>
-       <processor class="solr.LogUpdateProcessorFactory" />
-       <processor class="solr.RunUpdateProcessorFactory" />
-     </updateRequestProcessorChain>
-    -->
-
-  <!-- Script update processor
-
-    This example hooks in an update processor implemented using JavaScript.
-
-    See more about the script update processor at http://wiki.apache.org/solr/ScriptUpdateProcessor
-  -->
-  <!--
-    <updateRequestProcessorChain name="script">
-      <processor class="solr.StatelessScriptUpdateProcessorFactory">
-        <str name="script">update-script.js</str>
-        <lst name="params">
-          <str name="config_param">example config parameter</str>
-        </lst>
-      </processor>
-      <processor class="solr.RunUpdateProcessorFactory" />
-    </updateRequestProcessorChain>
-  -->
-
-  <!-- Response Writers
-
-       http://wiki.apache.org/solr/QueryResponseWriter
-
-       Request responses will be written using the writer specified by
-       the 'wt' request parameter matching the name of a registered
-       writer.
-
-       The "default" writer is the default and will be used if 'wt' is
-       not specified in the request.
-    -->
-  <!-- The following response writers are implicitly configured unless
-       overridden...
-    -->
-  <!--
-     <queryResponseWriter name="xml"
-                          default="true"
-                          class="solr.XMLResponseWriter" />
-     <queryResponseWriter name="json" class="solr.JSONResponseWriter"/>
-     <queryResponseWriter name="python" class="solr.PythonResponseWriter"/>
-     <queryResponseWriter name="ruby" class="solr.RubyResponseWriter"/>
-     <queryResponseWriter name="php" class="solr.PHPResponseWriter"/>
-     <queryResponseWriter name="phps" class="solr.PHPSerializedResponseWriter"/>
-     <queryResponseWriter name="csv" class="solr.CSVResponseWriter"/>
-     <queryResponseWriter name="schema.xml" class="solr.SchemaXmlResponseWriter"/>
-    -->
-
-  <queryResponseWriter name="json" class="solr.JSONResponseWriter">
-     <!-- For the purposes of the tutorial, JSON responses are written as
-      plain text so that they are easy to read in *any* browser.
-      If you want a MIME type of "application/json", just remove this override.
-     -->
-    <str name="content-type">text/plain; charset=UTF-8</str>
-  </queryResponseWriter>
-
-  <!--
-     Custom response writers can be declared as needed...
-    -->
-  <queryResponseWriter name="velocity" class="solr.VelocityResponseWriter" startup="lazy">
-    <str name="template.base.dir">${velocity.template.base.dir:}</str>
-  </queryResponseWriter>
-
-  <!-- The XSLT response writer transforms XML output using any XSLT file found
-       in Solr's conf/xslt directory.  Changes to XSLT files are checked for
-       every xsltCacheLifetimeSeconds seconds.
-    -->
-  <queryResponseWriter name="xslt" class="solr.XSLTResponseWriter">
-    <int name="xsltCacheLifetimeSeconds">5</int>
-  </queryResponseWriter>
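-
-  <!-- For example, to render results through one of the stylesheets in
-       conf/xslt (a sketch):
-
-       /select?q=*:*&wt=xslt&tr=example.xsl
-    -->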
-
-  <!-- Query Parsers
-
-       https://lucene.apache.org/solr/guide/query-syntax-and-parsing.html
-
-       Multiple QParserPlugins can be registered by name, and then
-       used in either the "defType" param for the QueryComponent (used
-       by SearchHandler) or in LocalParams
-    -->
-  <!-- example of registering a query parser -->
-  <!--
-     <queryParser name="myparser" class="com.mycompany.MyQParserPlugin"/>
-    -->
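-
-  <!-- Once registered, a parser can be selected per request, either via
-       defType=myparser or inline with LocalParams (a sketch using the
-       hypothetical parser above):
-
-       q={!myparser}hello world
-    -->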
-
-  <!-- Function Parsers
-
-       http://wiki.apache.org/solr/FunctionQuery
-
-       Multiple ValueSourceParsers can be registered by name, and then
-       used as function names when using the "func" QParser.
-    -->
-  <!-- example of registering a custom function parser  -->
-  <!--
-     <valueSourceParser name="myfunc"
-                        class="com.mycompany.MyValueSourceParser" />
-    -->
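-
-  <!-- A registered function can then be used wherever function queries
-       are accepted (a sketch using the hypothetical function above; the
-       field name is a placeholder):
-
-       q={!func}myfunc(popularity)
-    -->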
-
-
-  <!-- Document Transformers
-       http://wiki.apache.org/solr/DocTransformers
-    -->
-  <!--
-     Could be something like:
-     <transformer name="db" class="com.mycompany.LoadFromDatabaseTransformer" >
-       <int name="connection">jdbc://....</int>
-     </transformer>
-
-     To add a constant value to all docs, use:
-     <transformer name="mytrans2" class="org.apache.solr.response.transform.ValueAugmenterFactory" >
-       <int name="value">5</int>
-     </transformer>
-
-     If you want the user to still be able to change it with _value:something_ use this:
-     <transformer name="mytrans3" class="org.apache.solr.response.transform.ValueAugmenterFactory" >
-       <double name="defaultValue">5</double>
-     </transformer>
-
-      If you are using the QueryElevationComponent, you may wish to mark documents that get boosted.  The
-      EditorialMarkerFactory will do exactly that:
-     <transformer name="qecBooster" class="org.apache.solr.response.transform.EditorialMarkerFactory" />
-    -->
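-
-  <!-- A declared transformer is invoked by name, in square brackets,
-       within the fl parameter (a sketch using the hypothetical
-       transformer above):
-
-       fl=id,name,[mytrans2]
-    -->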
-
-</config>
diff --git a/solr/example/example-DIH/solr/solr/conf/spellings.txt b/solr/example/example-DIH/solr/solr/conf/spellings.txt
deleted file mode 100644
index d7ede6f..0000000
--- a/solr/example/example-DIH/solr/solr/conf/spellings.txt
+++ /dev/null
@@ -1,2 +0,0 @@
-pizza
-history
\ No newline at end of file
diff --git a/solr/example/example-DIH/solr/solr/conf/stopwords.txt b/solr/example/example-DIH/solr/solr/conf/stopwords.txt
deleted file mode 100644
index ae1e83e..0000000
--- a/solr/example/example-DIH/solr/solr/conf/stopwords.txt
+++ /dev/null
@@ -1,14 +0,0 @@
-# Licensed to the Apache Software Foundation (ASF) under one or more
-# contributor license agreements.  See the NOTICE file distributed with
-# this work for additional information regarding copyright ownership.
-# The ASF licenses this file to You under the Apache License, Version 2.0
-# (the "License"); you may not use this file except in compliance with
-# the License.  You may obtain a copy of the License at
-#
-#     http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
diff --git a/solr/example/example-DIH/solr/solr/conf/synonyms.txt b/solr/example/example-DIH/solr/solr/conf/synonyms.txt
deleted file mode 100644
index eab4ee8..0000000
--- a/solr/example/example-DIH/solr/solr/conf/synonyms.txt
+++ /dev/null
@@ -1,29 +0,0 @@
-# The ASF licenses this file to You under the Apache License, Version 2.0
-# (the "License"); you may not use this file except in compliance with
-# the License.  You may obtain a copy of the License at
-#
-#     http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-#-----------------------------------------------------------------------
-#some test synonym mappings unlikely to appear in real input text
-aaafoo => aaabar
-bbbfoo => bbbfoo bbbbar
-cccfoo => cccbar cccbaz
-fooaaa,baraaa,bazaaa
-
-# Some synonym groups specific to this example
-GB,gib,gigabyte,gigabytes
-MB,mib,megabyte,megabytes
-Television, Televisions, TV, TVs
-#notice we use "gib" instead of "GiB" so any WordDelimiterGraphFilter coming
-#after us won't split it into two words.
-
-# Synonym mappings can be used for spelling correction too
-pixima => pixma
-
diff --git a/solr/example/example-DIH/solr/solr/conf/update-script.js b/solr/example/example-DIH/solr/solr/conf/update-script.js
deleted file mode 100644
index 49b07f9..0000000
--- a/solr/example/example-DIH/solr/solr/conf/update-script.js
+++ /dev/null
@@ -1,53 +0,0 @@
-/*
-  This is a basic skeleton JavaScript update processor.
-
-  In order for this to be executed, it must be properly wired into solrconfig.xml; by default it is commented out in
-  the example solrconfig.xml and must be uncommented to be enabled.
-
-  See http://wiki.apache.org/solr/ScriptUpdateProcessor for more details.
-*/
-
-function processAdd(cmd) {
-
-  doc = cmd.solrDoc;  // org.apache.solr.common.SolrInputDocument
-  id = doc.getFieldValue("id");
-  logger.info("update-script#processAdd: id=" + id);
-
-// Set a field value:
-//  doc.setField("foo_s", "whatever");
-
-// Get a configuration parameter:
-//  config_param = params.get('config_param');  // "params" only exists if processor configured with <lst name="params">
-
-// Get a request parameter:
-// some_param = req.getParams().get("some_param")
-
-// Add a field listing the field names that match a pattern:
-//   - Potentially useful to determine the fields/attributes represented in a result set, via faceting on field_name_ss
-//  field_names = doc.getFieldNames().toArray();
-//  for(i=0; i < field_names.length; i++) {
-//    field_name = field_names[i];
-//    if (/attr_.*/.test(field_name)) { doc.addField("attribute_ss", field_names[i]); }
-//  }
-
-}
-
-function processDelete(cmd) {
-  // no-op
-}
-
-function processMergeIndexes(cmd) {
-  // no-op
-}
-
-function processCommit(cmd) {
-  // no-op
-}
-
-function processRollback(cmd) {
-  // no-op
-}
-
-function finish() {
-  // no-op
-}
diff --git a/solr/example/example-DIH/solr/solr/conf/xslt/example.xsl b/solr/example/example-DIH/solr/solr/conf/xslt/example.xsl
deleted file mode 100644
index b899270..0000000
--- a/solr/example/example-DIH/solr/solr/conf/xslt/example.xsl
+++ /dev/null
@@ -1,132 +0,0 @@
-<?xml version='1.0' encoding='UTF-8'?>
-
-<!-- 
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- -->
-
-<!-- 
-  Simple transform of Solr query results to HTML
- -->
-<xsl:stylesheet version='1.0'
-    xmlns:xsl='http://www.w3.org/1999/XSL/Transform'
->
-
-  <xsl:output media-type="text/html" encoding="UTF-8"/> 
-  
-  <xsl:variable name="title" select="concat('Solr search results (',response/result/@numFound,' documents)')"/>
-  
-  <xsl:template match='/'>
-    <html>
-      <head>
-        <title><xsl:value-of select="$title"/></title>
-        <xsl:call-template name="css"/>
-      </head>
-      <body>
-        <h1><xsl:value-of select="$title"/></h1>
-        <div class="note">
-          This has been formatted by the sample "example.xsl" transform -
-          use your own XSLT to get a nicer page
-        </div>
-        <xsl:apply-templates select="response/result/doc"/>
-      </body>
-    </html>
-  </xsl:template>
-  
-  <xsl:template match="doc">
-    <xsl:variable name="pos" select="position()"/>
-    <div class="doc">
-      <table width="100%">
-        <xsl:apply-templates>
-          <xsl:with-param name="pos"><xsl:value-of select="$pos"/></xsl:with-param>
-        </xsl:apply-templates>
-      </table>
-    </div>
-  </xsl:template>
-
-  <xsl:template match="doc/*[@name='score']" priority="100">
-    <xsl:param name="pos"></xsl:param>
-    <tr>
-      <td class="name">
-        <xsl:value-of select="@name"/>
-      </td>
-      <td class="value">
-        <xsl:value-of select="."/>
-
-        <xsl:if test="boolean(//lst[@name='explain'])">
-          <xsl:element name="a">
-            <!-- can't allow whitespace here -->
-            <xsl:attribute name="href">javascript:toggle("<xsl:value-of select="concat('exp-',$pos)" />");</xsl:attribute>?</xsl:element>
-          <br/>
-          <xsl:element name="div">
-            <xsl:attribute name="class">exp</xsl:attribute>
-            <xsl:attribute name="id">
-              <xsl:value-of select="concat('exp-',$pos)" />
-            </xsl:attribute>
-            <xsl:value-of select="//lst[@name='explain']/str[position()=$pos]"/>
-          </xsl:element>
-        </xsl:if>
-      </td>
-    </tr>
-  </xsl:template>
-
-  <xsl:template match="doc/arr" priority="100">
-    <tr>
-      <td class="name">
-        <xsl:value-of select="@name"/>
-      </td>
-      <td class="value">
-        <ul>
-        <xsl:for-each select="*">
-          <li><xsl:value-of select="."/></li>
-        </xsl:for-each>
-        </ul>
-      </td>
-    </tr>
-  </xsl:template>
-
-
-  <xsl:template match="doc/*">
-    <tr>
-      <td class="name">
-        <xsl:value-of select="@name"/>
-      </td>
-      <td class="value">
-        <xsl:value-of select="."/>
-      </td>
-    </tr>
-  </xsl:template>
-
-  <xsl:template match="*"/>
-  
-  <xsl:template name="css">
-    <script>
-      function toggle(id) {
-        var obj = document.getElementById(id);
-        obj.style.display = (obj.style.display != 'block') ? 'block' : 'none';
-      }
-    </script>
-    <style type="text/css">
-      body { font-family: "Lucida Grande", sans-serif }
-      td.name { font-style: italic; font-size:80%; }
-      td { vertical-align: top; }
-      ul { margin: 0px; margin-left: 1em; padding: 0px; }
-      .note { font-size:80%; }
-      .doc { margin-top: 1em; border-top: solid grey 1px; }
-      .exp { display: none; font-family: monospace; white-space: pre; }
-    </style>
-  </xsl:template>
-
-</xsl:stylesheet>
diff --git a/solr/example/example-DIH/solr/solr/conf/xslt/example_atom.xsl b/solr/example/example-DIH/solr/solr/conf/xslt/example_atom.xsl
deleted file mode 100644
index b6c2315..0000000
--- a/solr/example/example-DIH/solr/solr/conf/xslt/example_atom.xsl
+++ /dev/null
@@ -1,67 +0,0 @@
-<?xml version='1.0' encoding='UTF-8'?>
-
-<!-- 
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- -->
-
-<!-- 
-  Simple transform of Solr query results to Atom
- -->
-
-<xsl:stylesheet version='1.0'
-    xmlns:xsl='http://www.w3.org/1999/XSL/Transform'>
-
-  <xsl:output
-       method="xml"
-       encoding="utf-8"
-       media-type="application/xml"
-  />
-
-  <xsl:template match='/'>
-    <xsl:variable name="query" select="response/lst[@name='responseHeader']/lst[@name='params']/str[@name='q']"/>
-    <feed xmlns="http://www.w3.org/2005/Atom">
-      <title>Example Solr Atom 1.0 Feed</title>
-      <subtitle>
-       This has been formatted by the sample "example_atom.xsl" transform -
-       use your own XSLT to get a nicer Atom feed.
-      </subtitle>
-      <author>
-        <name>Apache Solr</name>
-        <email>solr-user@lucene.apache.org</email>
-      </author>
-      <link rel="self" type="application/atom+xml" 
-            href="http://localhost:8983/solr/q={$query}&amp;wt=xslt&amp;tr=atom.xsl"/>
-      <updated>
-        <xsl:value-of select="response/result/doc[position()=1]/date[@name='timestamp']"/>
-      </updated>
-      <id>tag:localhost,2007:example</id>
-      <xsl:apply-templates select="response/result/doc"/>
-    </feed>
-  </xsl:template>
-    
-  <!-- search results xslt -->
-  <xsl:template match="doc">
-    <xsl:variable name="id" select="str[@name='id']"/>
-    <entry>
-      <title><xsl:value-of select="str[@name='name']"/></title>
-      <link href="http://localhost:8983/solr/select?q={$id}"/>
-      <id>tag:localhost,2007:<xsl:value-of select="$id"/></id>
-      <summary><xsl:value-of select="arr[@name='features']"/></summary>
-      <updated><xsl:value-of select="date[@name='timestamp']"/></updated>
-    </entry>
-  </xsl:template>
-
-</xsl:stylesheet>
diff --git a/solr/example/example-DIH/solr/solr/conf/xslt/example_rss.xsl b/solr/example/example-DIH/solr/solr/conf/xslt/example_rss.xsl
deleted file mode 100644
index c8ab5bf..0000000
--- a/solr/example/example-DIH/solr/solr/conf/xslt/example_rss.xsl
+++ /dev/null
@@ -1,66 +0,0 @@
-<?xml version='1.0' encoding='UTF-8'?>
-
-<!-- 
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- -->
-
-<!-- 
-  Simple transform of Solr query results to RSS
- -->
-
-<xsl:stylesheet version='1.0'
-    xmlns:xsl='http://www.w3.org/1999/XSL/Transform'>
-
-  <xsl:output
-       method="xml"
-       encoding="utf-8"
-       media-type="application/xml"
-  />
-  <xsl:template match='/'>
-    <rss version="2.0">
-       <channel>
-         <title>Example Solr RSS 2.0 Feed</title>
-         <link>http://localhost:8983/solr</link>
-         <description>
-          This has been formatted by the sample "example_rss.xsl" transform -
-          use your own XSLT to get a nicer RSS feed.
-         </description>
-         <language>en-us</language>
-         <docs>http://localhost:8983/solr</docs>
-         <xsl:apply-templates select="response/result/doc"/>
-       </channel>
-    </rss>
-  </xsl:template>
-  
-  <!-- search results xslt -->
-  <xsl:template match="doc">
-    <xsl:variable name="id" select="str[@name='id']"/>
-    <xsl:variable name="timestamp" select="date[@name='timestamp']"/>
-    <item>
-      <title><xsl:value-of select="str[@name='name']"/></title>
-      <link>
-        http://localhost:8983/solr/select?q=id:<xsl:value-of select="$id"/>
-      </link>
-      <description>
-        <xsl:value-of select="arr[@name='features']"/>
-      </description>
-      <pubDate><xsl:value-of select="$timestamp"/></pubDate>
-      <guid>
-        http://localhost:8983/solr/select?q=id:<xsl:value-of select="$id"/>
-      </guid>
-    </item>
-  </xsl:template>
-</xsl:stylesheet>
diff --git a/solr/example/example-DIH/solr/solr/conf/xslt/luke.xsl b/solr/example/example-DIH/solr/solr/conf/xslt/luke.xsl
deleted file mode 100644
index 05fb5bf..0000000
--- a/solr/example/example-DIH/solr/solr/conf/xslt/luke.xsl
+++ /dev/null
@@ -1,337 +0,0 @@
-<?xml version="1.0" encoding="UTF-8"?>
-<!--
-    Licensed to the Apache Software Foundation (ASF) under one or more
-    contributor license agreements.  See the NOTICE file distributed with
-    this work for additional information regarding copyright ownership.
-    The ASF licenses this file to You under the Apache License, Version 2.0
-    (the "License"); you may not use this file except in compliance with
-    the License.  You may obtain a copy of the License at
-    
-    http://www.apache.org/licenses/LICENSE-2.0
-    
-    Unless required by applicable law or agreed to in writing, software
-    distributed under the License is distributed on an "AS IS" BASIS,
-    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-    See the License for the specific language governing permissions and
-    limitations under the License.
--->
-
-
-<!-- 
-  Display the luke request handler with graphs
- -->
-<xsl:stylesheet
-    xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
-    xmlns="http://www.w3.org/1999/xhtml"
-    version="1.0"
-    >
-    <xsl:output
-        method="html"
-        encoding="UTF-8"
-        media-type="text/html"
-        doctype-public="-//W3C//DTD XHTML 1.0 Strict//EN"
-        doctype-system="http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd"
-    />
-
-    <xsl:variable name="title">Solr Luke Request Handler Response</xsl:variable>
-
-    <xsl:template match="/">
-        <html xmlns="http://www.w3.org/1999/xhtml">
-            <head>
-                <link rel="stylesheet" type="text/css" href="solr-admin.css"/>
-                <link rel="icon" href="favicon.ico" type="image/x-icon"/>
-                <link rel="shortcut icon" href="favicon.ico" type="image/x-icon"/>
-                <title>
-                    <xsl:value-of select="$title"/>
-                </title>
-                <xsl:call-template name="css"/>
-
-            </head>
-            <body>
-                <h1>
-                    <xsl:value-of select="$title"/>
-                </h1>
-                <div class="doc">
-                    <ul>
-                        <xsl:if test="response/lst[@name='index']">
-                            <li>
-                                <a href="#index">Index Statistics</a>
-                            </li>
-                        </xsl:if>
-                        <xsl:if test="response/lst[@name='fields']">
-                            <li>
-                                <a href="#fields">Field Statistics</a>
-                                <ul>
-                                    <xsl:for-each select="response/lst[@name='fields']/lst">
-                                        <li>
-                                            <a href="#{@name}">
-                                                <xsl:value-of select="@name"/>
-                                            </a>
-                                        </li>
-                                    </xsl:for-each>
-                                </ul>
-                            </li>
-                        </xsl:if>
-                        <xsl:if test="response/lst[@name='doc']">
-                            <li>
-                                <a href="#doc">Document statistics</a>
-                            </li>
-                        </xsl:if>
-                    </ul>
-                </div>
-                <xsl:if test="response/lst[@name='index']">
-                    <h2><a name="index"/>Index Statistics</h2>
-                    <xsl:apply-templates select="response/lst[@name='index']"/>
-                </xsl:if>
-                <xsl:if test="response/lst[@name='fields']">
-                    <h2><a name="fields"/>Field Statistics</h2>
-                    <xsl:apply-templates select="response/lst[@name='fields']"/>
-                </xsl:if>
-                <xsl:if test="response/lst[@name='doc']">
-                    <h2><a name="doc"/>Document statistics</h2>
-                    <xsl:apply-templates select="response/lst[@name='doc']"/>
-                </xsl:if>
-            </body>
-        </html>
-    </xsl:template>
-
-    <xsl:template match="lst">
-        <xsl:if test="parent::lst">
-            <tr>
-                <td colspan="2">
-                    <div class="doc">
-                        <xsl:call-template name="list"/>
-                    </div>
-                </td>
-            </tr>
-        </xsl:if>
-        <xsl:if test="not(parent::lst)">
-            <div class="doc">
-                <xsl:call-template name="list"/>
-            </div>
-        </xsl:if>
-    </xsl:template>
-
-    <xsl:template name="list">
-        <xsl:if test="count(child::*)>0">
-            <table>
-                <thead>
-                    <tr>
-                        <th colspan="2">
-                            <p>
-                                <a name="{@name}"/>
-                            </p>
-                            <xsl:value-of select="@name"/>
-                        </th>
-                    </tr>
-                </thead>
-                <tbody>
-                    <xsl:choose>
-                        <xsl:when
-                            test="@name='histogram'">
-                            <tr>
-                                <td colspan="2">
-                                    <xsl:call-template name="histogram"/>
-                                </td>
-                            </tr>
-                        </xsl:when>
-                        <xsl:otherwise>
-                            <xsl:apply-templates/>
-                        </xsl:otherwise>
-                    </xsl:choose>
-                </tbody>
-            </table>
-        </xsl:if>
-    </xsl:template>
-
-    <xsl:template name="histogram">
-        <div class="doc">
-            <xsl:call-template name="barchart">
-                <xsl:with-param name="max_bar_width">50</xsl:with-param>
-                <xsl:with-param name="iwidth">800</xsl:with-param>
-                <xsl:with-param name="iheight">160</xsl:with-param>
-                <xsl:with-param name="fill">blue</xsl:with-param>
-            </xsl:call-template>
-        </div>
-    </xsl:template>
-
-    <xsl:template name="barchart">
-        <xsl:param name="max_bar_width"/>
-        <xsl:param name="iwidth"/>
-        <xsl:param name="iheight"/>
-        <xsl:param name="fill"/>
-        <xsl:variable name="max">
-            <xsl:for-each select="int">
-                <xsl:sort data-type="number" order="descending"/>
-                <xsl:if test="position()=1">
-                    <xsl:value-of select="."/>
-                </xsl:if>
-            </xsl:for-each>
-        </xsl:variable>
-        <xsl:variable name="bars">
-           <xsl:value-of select="count(int)"/>
-        </xsl:variable>
-        <xsl:variable name="bar_width">
-           <xsl:choose>
-             <xsl:when test="$max_bar_width &lt; ($iwidth div $bars)">
-               <xsl:value-of select="$max_bar_width"/>
-             </xsl:when>
-             <xsl:otherwise>
-               <xsl:value-of select="$iwidth div $bars"/>
-             </xsl:otherwise>
-           </xsl:choose>
-        </xsl:variable>
-        <table class="histogram">
-           <tbody>
-              <tr>
-                <xsl:for-each select="int">
-                   <td>
-                 <xsl:value-of select="."/>
-                 <div class="histogram">
-                  <xsl:attribute name="style">background-color: <xsl:value-of select="$fill"/>; width: <xsl:value-of select="$bar_width"/>px; height: <xsl:value-of select="($iheight*number(.)) div $max"/>px;</xsl:attribute>
-                 </div>
-                   </td> 
-                </xsl:for-each>
-              </tr>
-              <tr>
-                <xsl:for-each select="int">
-                   <td>
-                       <xsl:value-of select="@name"/>
-                   </td>
-                </xsl:for-each>
-              </tr>
-           </tbody>
-        </table>
-    </xsl:template>
-
-    <xsl:template name="keyvalue">
-        <xsl:choose>
-            <xsl:when test="@name">
-                <tr>
-                    <td class="name">
-                        <xsl:value-of select="@name"/>
-                    </td>
-                    <td class="value">
-                        <xsl:value-of select="."/>
-                    </td>
-                </tr>
-            </xsl:when>
-            <xsl:otherwise>
-                <xsl:value-of select="."/>
-            </xsl:otherwise>
-        </xsl:choose>
-    </xsl:template>
-
-    <xsl:template match="int|bool|long|float|double|uuid|date">
-        <xsl:call-template name="keyvalue"/>
-    </xsl:template>
-
-    <xsl:template match="arr">
-        <tr>
-            <td class="name">
-                <xsl:value-of select="@name"/>
-            </td>
-            <td class="value">
-                <ul>
-                    <xsl:for-each select="child::*">
-                        <li>
-                            <xsl:apply-templates/>
-                        </li>
-                    </xsl:for-each>
-                </ul>
-            </td>
-        </tr>
-    </xsl:template>
-
-    <xsl:template match="str">
-        <xsl:choose>
-            <xsl:when test="@name='schema' or @name='index' or @name='flags'">
-                <xsl:call-template name="schema"/>
-            </xsl:when>
-            <xsl:otherwise>
-                <xsl:call-template name="keyvalue"/>
-            </xsl:otherwise>
-        </xsl:choose>
-    </xsl:template>
-
-    <xsl:template name="schema">
-        <tr>
-            <td class="name">
-                <xsl:value-of select="@name"/>
-            </td>
-            <td class="value">
-                <xsl:if test="contains(.,'unstored')">
-                    <xsl:value-of select="."/>
-                </xsl:if>
-                <xsl:if test="not(contains(.,'unstored'))">
-                    <xsl:call-template name="infochar2string">
-                        <xsl:with-param name="charList">
-                            <xsl:value-of select="."/>
-                        </xsl:with-param>
-                    </xsl:call-template>
-                </xsl:if>
-            </td>
-        </tr>
-    </xsl:template>
-
-    <xsl:template name="infochar2string">
-        <xsl:param name="i">1</xsl:param>
-        <xsl:param name="charList"/>
-
-        <xsl:variable name="char">
-            <xsl:value-of select="substring($charList,$i,1)"/>
-        </xsl:variable>
-        <xsl:choose>
-            <xsl:when test="$char='I'">
-                <xsl:value-of select="/response/lst[@name='info']/lst/str[@name='I']"/> - </xsl:when>
-            <xsl:when test="$char='T'">
-                <xsl:value-of select="/response/lst[@name='info']/lst/str[@name='T']"/> - </xsl:when>
-            <xsl:when test="$char='S'">
-                <xsl:value-of select="/response/lst[@name='info']/lst/str[@name='S']"/> - </xsl:when>
-            <xsl:when test="$char='M'">
-                <xsl:value-of select="/response/lst[@name='info']/lst/str[@name='M']"/> - </xsl:when>
-            <xsl:when test="$char='V'">
-                <xsl:value-of select="/response/lst[@name='info']/lst/str[@name='V']"/> - </xsl:when>
-            <xsl:when test="$char='o'">
-                <xsl:value-of select="/response/lst[@name='info']/lst/str[@name='o']"/> - </xsl:when>
-            <xsl:when test="$char='p'">
-                <xsl:value-of select="/response/lst[@name='info']/lst/str[@name='p']"/> - </xsl:when>
-            <xsl:when test="$char='O'">
-                <xsl:value-of select="/response/lst[@name='info']/lst/str[@name='O']"/> - </xsl:when>
-            <xsl:when test="$char='L'">
-                <xsl:value-of select="/response/lst[@name='info']/lst/str[@name='L']"/> - </xsl:when>
-            <xsl:when test="$char='B'">
-                <xsl:value-of select="/response/lst[@name='info']/lst/str[@name='B']"/> - </xsl:when>
-            <xsl:when test="$char='C'">
-                <xsl:value-of select="/response/lst[@name='info']/lst/str[@name='C']"/> - </xsl:when>
-            <xsl:when test="$char='f'">
-                <xsl:value-of select="/response/lst[@name='info']/lst/str[@name='f']"/> - </xsl:when>
-            <xsl:when test="$char='l'">
-                <xsl:value-of select="/response/lst[@name='info']/lst/str[@name='l']"/> -
-            </xsl:when>
-        </xsl:choose>
-
-        <xsl:if test="not($i>=string-length($charList))">
-            <xsl:call-template name="infochar2string">
-                <xsl:with-param name="i">
-                    <xsl:value-of select="$i+1"/>
-                </xsl:with-param>
-                <xsl:with-param name="charList">
-                    <xsl:value-of select="$charList"/>
-                </xsl:with-param>
-            </xsl:call-template>
-        </xsl:if>
-    </xsl:template>
-    <xsl:template name="css">
-        <style type="text/css">
-            <![CDATA[
-            td.name {font-style: italic; font-size:80%; }
-            .doc { margin: 0.5em; border: solid grey 1px; }
-            .exp { display: none; font-family: monospace; white-space: pre; }
-            div.histogram { background: none repeat scroll 0%; -moz-background-clip: -moz-initial; -moz-background-origin: -moz-initial; -moz-background-inline-policy: -moz-initial;}
-            table.histogram { width: auto; vertical-align: bottom; }
-            table.histogram td, table.histogram th { text-align: center; vertical-align: bottom; border-bottom: 1px solid #ff9933; width: auto; }
-            ]]>
-        </style>
-    </xsl:template>
-</xsl:stylesheet>
diff --git a/solr/example/example-DIH/solr/solr/conf/xslt/updateXml.xsl b/solr/example/example-DIH/solr/solr/conf/xslt/updateXml.xsl
deleted file mode 100644
index a96e1d0..0000000
--- a/solr/example/example-DIH/solr/solr/conf/xslt/updateXml.xsl
+++ /dev/null
@@ -1,70 +0,0 @@
-<!-- 
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- -->
-
-<!--
-  Simple transform of Solr query response into Solr Update XML compliant XML.
-  When used in the xslt response writer you will get Update XML as output.
-  But you can also store a query response XML to disk and feed this XML to
-  the XSLTUpdateRequestHandler to index the content. Provided as example only.
-  See http://wiki.apache.org/solr/XsltUpdateRequestHandler for more info
- -->
-<xsl:stylesheet version='1.0' xmlns:xsl='http://www.w3.org/1999/XSL/Transform'>
-  <xsl:output media-type="text/xml" method="xml" indent="yes"/>
-
-  <xsl:template match='/'>
-    <add>
-        <xsl:apply-templates select="response/result/doc"/>
-    </add>
-  </xsl:template>
-  
-  <!-- Ignore score (makes no sense to index) -->
-  <xsl:template match="doc/*[@name='score']" priority="100">
-  </xsl:template>
-
-  <xsl:template match="doc">
-    <xsl:variable name="pos" select="position()"/>
-    <doc>
-        <xsl:apply-templates>
-          <xsl:with-param name="pos"><xsl:value-of select="$pos"/></xsl:with-param>
-        </xsl:apply-templates>
-    </doc>
-  </xsl:template>
-
-  <!-- Flatten arrays to duplicate field lines -->
-  <xsl:template match="doc/arr" priority="100">
-      <xsl:variable name="fn" select="@name"/>
-      
-      <xsl:for-each select="*">
-        <xsl:element name="field">
-          <xsl:attribute name="name"><xsl:value-of select="$fn"/></xsl:attribute>
-          <xsl:value-of select="."/>
-        </xsl:element>
-      </xsl:for-each>
-  </xsl:template>
-
-
-  <xsl:template match="doc/*">
-      <xsl:variable name="fn" select="@name"/>
-
-      <xsl:element name="field">
-        <xsl:attribute name="name"><xsl:value-of select="$fn"/></xsl:attribute>
-        <xsl:value-of select="."/>
-      </xsl:element>
-  </xsl:template>
-
-  <xsl:template match="*"/>
-</xsl:stylesheet>
diff --git a/solr/example/example-DIH/solr/solr/core.properties b/solr/example/example-DIH/solr/solr/core.properties
deleted file mode 100644
index e69de29..0000000
--- a/solr/example/example-DIH/solr/solr/core.properties
+++ /dev/null
diff --git a/solr/example/example-DIH/solr/tika/conf/managed-schema b/solr/example/example-DIH/solr/tika/conf/managed-schema
deleted file mode 100644
index 196cdb3..0000000
--- a/solr/example/example-DIH/solr/tika/conf/managed-schema
+++ /dev/null
@@ -1,54 +0,0 @@
-<?xml version="1.0" encoding="UTF-8" ?>
-<!--
- Licensed to the Apache Software Foundation (ASF) under one or more
- contributor license agreements.  See the NOTICE file distributed with
- this work for additional information regarding copyright ownership.
- The ASF licenses this file to You under the Apache License, Version 2.0
- (the "License"); you may not use this file except in compliance with
- the License.  You may obtain a copy of the License at
-
-     http://www.apache.org/licenses/LICENSE-2.0
-
- Unless required by applicable law or agreed to in writing, software
- distributed under the License is distributed on an "AS IS" BASIS,
- WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- See the License for the specific language governing permissions and
- limitations under the License.
--->
-
-<schema name="example-DIH-tika" version="1.6">
-
-  <uniqueKey>id</uniqueKey>
-
-  <field name="id" type="string" indexed="true" stored="true"/>
-  <field name="author" type="text_simple" indexed="true" stored="true"/>
-  <field name="title" type="text_simple" indexed="true" stored="true" multiValued="true"/>
-  <field name="format" type="string" indexed="true" stored="true"/>
-
-  <!-- field "text" is searchable but it is not stored to save space -->
-  <field name="text" type="text_simple" indexed="true" stored="false" multiValued="true"/>
-
-
-  <!-- Uncomment the dynamicField definition to catch any other fields
-   that may have been declared in the DIH configuration.
-   This helps speed up prototyping.
-  -->
-  <!-- <dynamicField name="*" type="string" indexed="true" stored="true" multiValued="true"/> -->
-
-  <!-- The StrField type is not analyzed, but is indexed/stored verbatim. -->
-  <fieldType name="string" class="solr.StrField" sortMissingLast="true"/>
-
-
-  <!-- A basic text field that has reasonable, generic
-   cross-language defaults: it tokenizes with StandardTokenizer,
-   and lowercases. It does not deal with stopwords or other issues.
-   See other examples for alternative definitions.
-  -->
-  <fieldType name="text_simple" class="solr.TextField" positionIncrementGap="100">
-    <analyzer>
-      <tokenizer name="standard"/>
-      <filter name="lowercase"/>
-    </analyzer>
-  </fieldType>
-
-</schema>
\ No newline at end of file
diff --git a/solr/example/example-DIH/solr/tika/conf/solrconfig.xml b/solr/example/example-DIH/solr/tika/conf/solrconfig.xml
deleted file mode 100644
index 500ee19..0000000
--- a/solr/example/example-DIH/solr/tika/conf/solrconfig.xml
+++ /dev/null
@@ -1,61 +0,0 @@
-<?xml version="1.0" encoding="UTF-8" ?>
-<!--
- Licensed to the Apache Software Foundation (ASF) under one or more
- contributor license agreements.  See the NOTICE file distributed with
- this work for additional information regarding copyright ownership.
- The ASF licenses this file to You under the Apache License, Version 2.0
- (the "License"); you may not use this file except in compliance with
- the License.  You may obtain a copy of the License at
-
-     http://www.apache.org/licenses/LICENSE-2.0
-
- Unless required by applicable law or agreed to in writing, software
- distributed under the License is distributed on an "AS IS" BASIS,
- WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- See the License for the specific language governing permissions and
- limitations under the License.
--->
-
-<!--
- This is a DEMO configuration highlighting elements
- specifically needed to get this example running
- such as libraries and request handler specifics.
-
- It uses defaults or does not define most of production-level settings
- such as various caches or auto-commit policies.
-
- See Solr Reference Guide and other examples for
- more details on a well configured solrconfig.xml
- https://lucene.apache.org/solr/guide/the-well-configured-solr-instance.html
--->
-
-<config>
-  <!-- Controls what version of Lucene various components of Solr
-   adhere to.  Generally, you want to use the latest version to
-   get all bug fixes and improvements. It is highly recommended
-   that you fully re-index after changing this setting as it can
-   affect both how text is indexed and queried.
-  -->
-  <luceneMatchVersion>9.0.0</luceneMatchVersion>
-
-  <!-- Load Data Import Handler and Apache Tika (extraction) libraries -->
-  <lib dir="${solr.install.dir:../../../..}/dist/" regex="solr-dataimporthandler-.*\.jar"/>
-  <lib dir="${solr.install.dir:../../../..}/contrib/extraction/lib" regex=".*\.jar"/>
-
-  <requestHandler name="/select" class="solr.SearchHandler">
-    <lst name="defaults">
-      <str name="echoParams">explicit</str>
-      <str name="df">text</str>
-       <!-- Change from JSON to XML format (the default prior to Solr 7.0)
-          <str name="wt">xml</str> 
-         -->
-    </lst>
-  </requestHandler>
-
-  <requestHandler name="/dataimport" class="solr.DataImportHandler">
-    <lst name="defaults">
-      <str name="config">tika-data-config.xml</str>
-    </lst>
-  </requestHandler>
-
-</config>
diff --git a/solr/example/example-DIH/solr/tika/conf/tika-data-config.xml b/solr/example/example-DIH/solr/tika/conf/tika-data-config.xml
deleted file mode 100644
index 5286fc4..0000000
--- a/solr/example/example-DIH/solr/tika/conf/tika-data-config.xml
+++ /dev/null
@@ -1,26 +0,0 @@
-<dataConfig>
-  <dataSource type="BinFileDataSource"/>
-  <document>
-    <entity name="file" processor="FileListEntityProcessor" dataSource="null"
-            baseDir="${solr.install.dir}/example/exampledocs" fileName=".*pdf"
-            rootEntity="false">
-
-      <field column="file" name="id"/>
-
-      <entity name="pdf" processor="TikaEntityProcessor"
-              url="${file.fileAbsolutePath}" format="text">
-
-        <field column="Author" name="author" meta="true"/>
-        <!-- in the original PDF, the Author meta-field name is upper-cased,
-          but in Solr schema it is lower-cased
-         -->
-
-        <field column="title" name="title" meta="true"/>
-        <field column="dc:format" name="format" meta="true"/>
-
-        <field column="text" name="text"/>
-
-      </entity>
-    </entity>
-  </document>
-</dataConfig>
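The two files above formed the complete DIH pipeline for this example: the /dataimport handler read tika-data-config.xml, which walked example/exampledocs for PDFs and pushed each file through TikaEntityProcessor, mapping the extracted Author, title, and dc:format metadata onto schema fields. Before this removal the example could be exercised with DIH's standard full-import command; a sketch, assuming the default port and the tika core name:

    curl "http://localhost:8983/solr/tika/dataimport?command=full-import"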
diff --git a/solr/example/example-DIH/solr/tika/core.properties b/solr/example/example-DIH/solr/tika/core.properties
deleted file mode 100644
index e69de29..0000000
--- a/solr/example/example-DIH/solr/tika/core.properties
+++ /dev/null
diff --git a/solr/example/files/conf/solrconfig.xml b/solr/example/files/conf/solrconfig.xml
index a6cce10..5d7bedd 100644
--- a/solr/example/files/conf/solrconfig.xml
+++ b/solr/example/files/conf/solrconfig.xml
@@ -550,7 +550,7 @@
      Circuit Breaker Section - This section consists of configurations for
      circuit breakers
      ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ -->
-  <circuitBreaker>
+  <circuitBreaker class="solr.CircuitBreakerManager" enabled="true">
     <!-- Enable Circuit Breakers
 
     Circuit breakers are designed to allow stability and predictable query
@@ -563,7 +563,12 @@
     they are free to add their specific configuration but need to ensure that this flag is always
     respected - this should have veto over all independent configuration flags.
     -->
-    <useCircuitBreakers>false</useCircuitBreakers>
+
+    <!-- Memory Circuit Breaker Control Flag
+
+    Use the following flag to control the behaviour of this circuit breaker
+    -->
+    <str name="memEnabled">true</str>
 
     <!-- Memory Circuit Breaker Threshold In Percentage
 
@@ -580,7 +585,20 @@
     in logs and the corresponding error message should tell you what transpired (if the failure
     was caused by tripped circuit breakers).
     -->
-    <memoryCircuitBreakerThresholdPct>100</memoryCircuitBreakerThresholdPct>
+    <str name="memThreshold">75</str>
+
+    <!-- CPU Based Circuit Breaker Control Flag
+
+    Use the following flag to control the behaviour of this circuit breaker
+    -->
+    <str name="cpuEnabled">true</str>
+
+    <!-- CPU Based Circuit Breaker Triggering Threshold
+
+    The triggering threshold is defined in units of CPU utilization. The configuration that controls it is shown below:
+    -->
+    <str name="cpuThreshold">75</str>
+
 
   </circuitBreaker>
 
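Net effect of this hunk: the old useCircuitBreakers/memoryCircuitBreakerThresholdPct pair is replaced by per-breaker flags under an explicitly enabled CircuitBreakerManager. A minimal sketch of the resulting section (the 75% thresholds are just this example's values, not recommendations):

    <circuitBreaker class="solr.CircuitBreakerManager" enabled="true">
      <str name="memEnabled">true</str>
      <str name="memThreshold">75</str>
      <str name="cpuEnabled">true</str>
      <str name="cpuThreshold">75</str>
    </circuitBreaker>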
diff --git a/solr/licenses/activation-1.1.1.jar.sha1 b/solr/licenses/activation-1.1.1.jar.sha1
deleted file mode 100644
index 7b2295c..0000000
--- a/solr/licenses/activation-1.1.1.jar.sha1
+++ /dev/null
@@ -1 +0,0 @@
-485de3a253e23f645037828c07f1d7f1af40763a
diff --git a/solr/licenses/activation-LICENSE-CDDL.txt b/solr/licenses/activation-LICENSE-CDDL.txt
deleted file mode 100644
index 1154e0a..0000000
--- a/solr/licenses/activation-LICENSE-CDDL.txt
+++ /dev/null
@@ -1,119 +0,0 @@
-COMMON DEVELOPMENT AND DISTRIBUTION LICENSE (CDDL) Version 1.0
-
-1. Definitions.
-
-1.1. Contributor means each individual or entity that creates or contributes to the creation of Modifications.
-
-1.2. Contributor Version means the combination of the Original Software, prior Modifications used by a Contributor (if any), and the Modifications made by that particular Contributor.
-
-1.3. Covered Software means (a) the Original Software, or (b) Modifications, or (c) the combination of files containing Original Software with files containing Modifications, in each case including portions thereof.
-
-1.4. Executable means the Covered Software in any form other than Source Code.
-
-1.5. Initial Developer means the individual or entity that first makes Original Software available under this License.
-
-1.6. Larger Work means a work which combines Covered Software or portions thereof with code not governed by the terms of this License.
-
-1.7. License means this document.
-
-1.8. Licensable means having the right to grant, to the maximum extent possible, whether at the time of the initial grant or subsequently acquired, any and all of the rights conveyed herein.
-
-1.9. Modifications means the Source Code and Executable form of any of the following:
-
-A. Any file that results from an addition to, deletion from or modification of the contents of a file containing Original Software or previous Modifications;
-
-B. Any new file that contains any part of the Original Software or previous Modification; or
-
-C. Any new file that is contributed or otherwise made available under the terms of this License.
-
-1.10. Original Software means the Source Code and Executable form of computer software code that is originally released under this License.
-
-1.11. Patent Claims means any patent claim(s), now owned or hereafter acquired, including without limitation, method, process, and apparatus claims, in any patent Licensable by grantor.
-
-1.12. Source Code means (a) the common form of computer software code in which modifications are made and (b) associated documentation included in or with such code.
-
-1.13. You (or Your) means an individual or a legal entity exercising rights under, and complying with all of the terms of, this License. For legal entities, You includes any entity which controls, is controlled by, or is under common control with You. For purposes of this definition, control means (a) the power, direct or indirect, to cause the direction or management of such entity, whether by contract or otherwise, or (b) ownership of more than fifty percent (50%) of the outstanding shares or beneficial ownership of such entity.
-
-2. License Grants.
-
-2.1. The Initial Developer Grant.
-Conditioned upon Your compliance with Section 3.1 below and subject to third party intellectual property claims, the Initial Developer hereby grants You a world-wide, royalty-free, non-exclusive license:
-(a) under intellectual property rights (other than patent or trademark) Licensable by Initial Developer, to use, reproduce, modify, display, perform, sublicense and distribute the Original Software (or portions thereof), with or without Modifications, and/or as part of a Larger Work; and
-(b) under Patent Claims infringed by the making, using or selling of Original Software, to make, have made, use, practice, sell, and offer for sale, and/or otherwise dispose of the Original Software (or portions thereof).
-(c) The licenses granted in Sections 2.1(a) and (b) are effective on the date Initial Developer first distributes or otherwise makes the Original Software available to a third party under the terms of this License.
-(d) Notwithstanding Section 2.1(b) above, no patent license is granted: (1) for code that You delete from the Original Software, or (2) for infringements caused by: (i) the modification of the Original Software, or (ii) the combination of the Original Software with other software or devices.
-
-2.2. Contributor Grant.
-Conditioned upon Your compliance with Section 3.1 below and subject to third party intellectual property claims, each Contributor hereby grants You a world-wide, royalty-free, non-exclusive license:
-(a) under intellectual property rights (other than patent or trademark) Licensable by Contributor to use, reproduce, modify, display, perform, sublicense and distribute the Modifications created by such Contributor (or portions thereof), either on an unmodified basis, with other Modifications, as Covered Software and/or as part of a Larger Work; and
-(b) under Patent Claims infringed by the making, using, or selling of Modifications made by that Contributor either alone and/or in combination with its Contributor Version (or portions of such combination), to make, use, sell, offer for sale, have made, and/or otherwise dispose of: (1) Modifications made by that Contributor (or portions thereof); and (2) the combination of Modifications made by that Contributor with its Contributor Version (or portions of such combination).
-(c) The licenses granted in Sections 2.2(a) and 2.2(b) are effective on the date Contributor first distributes or otherwise makes the Modifications available to a third party.
-(d) Notwithstanding Section 2.2(b) above, no patent license is granted: (1) for any code that Contributor has deleted from the Contributor Version; (2) for infringements caused by: (i) third party modifications of Contributor Version, or (ii) the combination of Modifications made by that Contributor with other software (except as part of the Contributor Version) or other devices; or (3) under Patent Claims infringed by Covered Software in the absence of Modifications made by that Contributor.
-
-3. Distribution Obligations.
-
-3.1. Availability of Source Code.
-
-Any Covered Software that You distribute or otherwise make available in Executable form must also be made available in Source Code form and that Source Code form must be distributed only under the terms of this License. You must include a copy of this License with every copy of the Source Code form of the Covered Software You distribute or otherwise make available. You must inform recipients of any such Covered Software in Executable form as to how they can obtain such Covered Software in Source Code form in a reasonable manner on or through a medium customarily used for software exchange.
-
-3.2. Modifications.
-
-The Modifications that You create or to which You contribute are governed by the terms of this License. You represent that You believe Your Modifications are Your original creation(s) and/or You have sufficient rights to grant the rights conveyed by this License.
-
-3.3. Required Notices.
-You must include a notice in each of Your Modifications that identifies You as the Contributor of the Modification. You may not remove or alter any copyright, patent or trademark notices contained within the Covered Software, or any notices of licensing or any descriptive text giving attribution to any Contributor or the Initial Developer.
-
-3.4. Application of Additional Terms.
-You may not offer or impose any terms on any Covered Software in Source Code form that alters or restricts the applicable version of this License or the recipient's rights hereunder. You may choose to offer, and to charge a fee for, warranty, support, indemnity or liability obligations to one or more recipients of Covered Software. However, you may do so only on Your own behalf, and not on behalf of the Initial Developer or any Contributor. You must make it absolutely clear that any such warranty, support, indemnity or liability obligation is offered by You alone, and You hereby agree to indemnify the Initial Developer and every Contributor for any liability incurred by the Initial Developer or such Contributor as a result of warranty, support, indemnity or liability terms You offer.
-
-3.5. Distribution of Executable Versions.
-You may distribute the Executable form of the Covered Software under the terms of this License or under the terms of a license of Your choice, which may contain terms different from this License, provided that You are in compliance with the terms of this License and that the license for the Executable form does not attempt to limit or alter the recipient's rights in the Source Code form from the rights set forth in this License. If You distribute the Covered Software in Executable form under a different license, You must make it absolutely clear that any terms which differ from this License are offered by You alone, not by the Initial Developer or Contributor. You hereby agree to indemnify the Initial Developer and every Contributor for any liability incurred by the Initial Developer or such Contributor as a result of any such terms You offer.
-
-3.6. Larger Works.
-You may create a Larger Work by combining Covered Software with other code not governed by the terms of this License and distribute the Larger Work as a single product. In such a case, You must make sure the requirements of this License are fulfilled for the Covered Software.
-
-4. Versions of the License.
-
-4.1. New Versions.
-Sun Microsystems, Inc. is the initial license steward and may publish revised and/or new versions of this License from time to time. Each version will be given a distinguishing version number. Except as provided in Section 4.3, no one other than the license steward has the right to modify this License.
-
-4.2. Effect of New Versions.
-
-You may always continue to use, distribute or otherwise make the Covered Software available under the terms of the version of the License under which You originally received the Covered Software. If the Initial Developer includes a notice in the Original Software prohibiting it from being distributed or otherwise made available under any subsequent version of the License, You must distribute and make the Covered Software available under the terms of the version of the License under which You originally received the Covered Software. Otherwise, You may also choose to use, distribute or otherwise make the Covered Software available under the terms of any subsequent version of the License published by the license steward.
-4.3. Modified Versions.
-
-When You are an Initial Developer and You want to create a new license for Your Original Software, You may create and use a modified version of this License if You: (a) rename the license and remove any references to the name of the license steward (except to note that the license differs from this License); and (b) otherwise make it clear that the license contains terms which differ from this License.
-
-5. DISCLAIMER OF WARRANTY.
-
-COVERED SOFTWARE IS PROVIDED UNDER THIS LICENSE ON AN "AS IS" BASIS, WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, WITHOUT LIMITATION, WARRANTIES THAT THE COVERED SOFTWARE IS FREE OF DEFECTS, MERCHANTABLE, FIT FOR A PARTICULAR PURPOSE OR NON-INFRINGING. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE COVERED SOFTWARE IS WITH YOU. SHOULD ANY COVERED SOFTWARE PROVE DEFECTIVE IN ANY RESPECT, YOU (NOT THE INITIAL DEVELOPER OR ANY OTHER CONTRIBUTOR) ASSUME THE COST OF ANY NECESSARY SERVICING, REPAIR OR CORRECTION. THIS DISCLAIMER OF WARRANTY CONSTITUTES AN ESSENTIAL PART OF THIS LICENSE. NO USE OF ANY COVERED SOFTWARE IS AUTHORIZED HEREUNDER EXCEPT UNDER THIS DISCLAIMER.
-
-6. TERMINATION.
-
-6.1. This License and the rights granted hereunder will terminate automatically if You fail to comply with terms herein and fail to cure such breach within 30 days of becoming aware of the breach. Provisions which, by their nature, must remain in effect beyond the termination of this License shall survive.
-
-6.2. If You assert a patent infringement claim (excluding declaratory judgment actions) against Initial Developer or a Contributor (the Initial Developer or Contributor against whom You assert such claim is referred to as Participant) alleging that the Participant Software (meaning the Contributor Version where the Participant is a Contributor or the Original Software where the Participant is the Initial Developer) directly or indirectly infringes any patent, then any and all rights granted directly or indirectly to You by such Participant, the Initial Developer (if the Initial Developer is not the Participant) and all Contributors under Sections 2.1 and/or 2.2 of this License shall, upon 60 days notice from Participant terminate prospectively and automatically at the expiration of such 60 day notice period, unless if within such 60 day period You withdraw Your claim with respect to the Participant Software against such Participant either unilaterally or pursuant to a written agreement with Participant.
-
-6.3. In the event of termination under Sections 6.1 or 6.2 above, all end user licenses that have been validly granted by You or any distributor hereunder prior to termination (excluding licenses granted to You by any distributor) shall survive termination.
-
-7. LIMITATION OF LIABILITY.
-
-UNDER NO CIRCUMSTANCES AND UNDER NO LEGAL THEORY, WHETHER TORT (INCLUDING NEGLIGENCE), CONTRACT, OR OTHERWISE, SHALL YOU, THE INITIAL DEVELOPER, ANY OTHER CONTRIBUTOR, OR ANY DISTRIBUTOR OF COVERED SOFTWARE, OR ANY SUPPLIER OF ANY OF SUCH PARTIES, BE LIABLE TO ANY PERSON FOR ANY INDIRECT, SPECIAL, INCIDENTAL, OR CONSEQUENTIAL DAMAGES OF ANY CHARACTER INCLUDING, WITHOUT LIMITATION, DAMAGES FOR LOST PROFITS, LOSS OF GOODWILL, WORK STOPPAGE, COMPUTER FAILURE OR MALFUNCTION, OR ANY AND ALL OTHER COMMERCIAL DAMAGES OR LOSSES, EVEN IF SUCH PARTY SHALL HAVE BEEN INFORMED OF THE POSSIBILITY OF SUCH DAMAGES. THIS LIMITATION OF LIABILITY SHALL NOT APPLY TO LIABILITY FOR DEATH OR PERSONAL INJURY RESULTING FROM SUCH PARTY'S NEGLIGENCE TO THE EXTENT APPLICABLE LAW PROHIBITS SUCH LIMITATION. SOME JURISDICTIONS DO NOT ALLOW THE EXCLUSION OR LIMITATION OF INCIDENTAL OR CONSEQUENTIAL DAMAGES, SO THIS EXCLUSION AND LIMITATION MAY NOT APPLY TO YOU.
-
-8. U.S. GOVERNMENT END USERS.
-
-The Covered Software is a commercial item, as that term is defined in 48 C.F.R. 2.101 (Oct. 1995), consisting of commercial computer software (as that term is defined at 48 C.F.R. § 252.227-7014(a)(1)) and commercial computer software documentation as such terms are used in 48 C.F.R. 12.212 (Sept. 1995). Consistent with 48 C.F.R. 12.212 and 48 C.F.R. 227.7202-1 through 227.7202-4 (June 1995), all U.S. Government End Users acquire Covered Software with only those rights set forth herein. This U.S. Government Rights clause is in lieu of, and supersedes, any other FAR, DFAR, or other clause or provision that addresses Government rights in computer software under this License.
-
-9. MISCELLANEOUS.
-
-This License represents the complete agreement concerning subject matter hereof. If any provision of this License is held to be unenforceable, such provision shall be reformed only to the extent necessary to make it enforceable. This License shall be governed by the law of the jurisdiction specified in a notice contained within the Original Software (except to the extent applicable law, if any, provides otherwise), excluding such jurisdiction's conflict-of-law provisions. Any litigation relating to this License shall be subject to the jurisdiction of the courts located in the jurisdiction and venue specified in a notice contained within the Original Software, with the losing party responsible for costs, including, without limitation, court costs and reasonable attorneys' fees and expenses. The application of the United Nations Convention on Contracts for the International Sale of Goods is expressly excluded. Any law or regulation which provides that the language of a contract shall be construed against the drafter shall not apply to this License. You agree that You alone are responsible for compliance with the United States export administration regulations (and the export control laws and regulation of any other countries) when You use, distribute or otherwise make available any Covered Software.
-
-10. RESPONSIBILITY FOR CLAIMS.
-
-As between Initial Developer and the Contributors, each party is responsible for claims and damages arising, directly or indirectly, out of its utilization of rights under this License and You agree to work with Initial Developer and Contributors to distribute such responsibility on an equitable basis. Nothing herein is intended or shall be deemed to constitute any admission of liability.
-
-NOTICE PURSUANT TO SECTION 9 OF THE COMMON DEVELOPMENT AND DISTRIBUTION LICENSE (CDDL)
-The GlassFish code released under the CDDL shall be governed by the laws of the State of California (excluding conflict-of-law provisions). Any litigation relating to this License shall be subject to the jurisdiction of the Federal Courts of the Northern District of California and the state courts of the State of California, with venue lying in Santa Clara County, California. 
-
-
-
diff --git a/solr/licenses/ant-1.8.2.jar.sha1 b/solr/licenses/ant-1.8.2.jar.sha1
deleted file mode 100644
index 564db78..0000000
--- a/solr/licenses/ant-1.8.2.jar.sha1
+++ /dev/null
@@ -1 +0,0 @@
-fc33bf7cd8c5309dd7b81228e8626515ee42efd9
diff --git a/solr/licenses/ant-LICENSE-ASL.txt b/solr/licenses/ant-LICENSE-ASL.txt
deleted file mode 100644
index ab3182e..0000000
--- a/solr/licenses/ant-LICENSE-ASL.txt
+++ /dev/null
@@ -1,272 +0,0 @@
-/*
- *                                 Apache License
- *                           Version 2.0, January 2004
- *                        http://www.apache.org/licenses/
- *
- *   TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
- *
- *   1. Definitions.
- *
- *      "License" shall mean the terms and conditions for use, reproduction,
- *      and distribution as defined by Sections 1 through 9 of this document.
- *
- *      "Licensor" shall mean the copyright owner or entity authorized by
- *      the copyright owner that is granting the License.
- *
- *      "Legal Entity" shall mean the union of the acting entity and all
- *      other entities that control, are controlled by, or are under common
- *      control with that entity. For the purposes of this definition,
- *      "control" means (i) the power, direct or indirect, to cause the
- *      direction or management of such entity, whether by contract or
- *      otherwise, or (ii) ownership of fifty percent (50%) or more of the
- *      outstanding shares, or (iii) beneficial ownership of such entity.
- *
- *      "You" (or "Your") shall mean an individual or Legal Entity
- *      exercising permissions granted by this License.
- *
- *      "Source" form shall mean the preferred form for making modifications,
- *      including but not limited to software source code, documentation
- *      source, and configuration files.
- *
- *      "Object" form shall mean any form resulting from mechanical
- *      transformation or translation of a Source form, including but
- *      not limited to compiled object code, generated documentation,
- *      and conversions to other media types.
- *
- *      "Work" shall mean the work of authorship, whether in Source or
- *      Object form, made available under the License, as indicated by a
- *      copyright notice that is included in or attached to the work
- *      (an example is provided in the Appendix below).
- *
- *      "Derivative Works" shall mean any work, whether in Source or Object
- *      form, that is based on (or derived from) the Work and for which the
- *      editorial revisions, annotations, elaborations, or other modifications
- *      represent, as a whole, an original work of authorship. For the purposes
- *      of this License, Derivative Works shall not include works that remain
- *      separable from, or merely link (or bind by name) to the interfaces of,
- *      the Work and Derivative Works thereof.
- *
- *      "Contribution" shall mean any work of authorship, including
- *      the original version of the Work and any modifications or additions
- *      to that Work or Derivative Works thereof, that is intentionally
- *      submitted to Licensor for inclusion in the Work by the copyright owner
- *      or by an individual or Legal Entity authorized to submit on behalf of
- *      the copyright owner. For the purposes of this definition, "submitted"
- *      means any form of electronic, verbal, or written communication sent
- *      to the Licensor or its representatives, including but not limited to
- *      communication on electronic mailing lists, source code control systems,
- *      and issue tracking systems that are managed by, or on behalf of, the
- *      Licensor for the purpose of discussing and improving the Work, but
- *      excluding communication that is conspicuously marked or otherwise
- *      designated in writing by the copyright owner as "Not a Contribution."
- *
- *      "Contributor" shall mean Licensor and any individual or Legal Entity
- *      on behalf of whom a Contribution has been received by Licensor and
- *      subsequently incorporated within the Work.
- *
- *   2. Grant of Copyright License. Subject to the terms and conditions of
- *      this License, each Contributor hereby grants to You a perpetual,
- *      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
- *      copyright license to reproduce, prepare Derivative Works of,
- *      publicly display, publicly perform, sublicense, and distribute the
- *      Work and such Derivative Works in Source or Object form.
- *
- *   3. Grant of Patent License. Subject to the terms and conditions of
- *      this License, each Contributor hereby grants to You a perpetual,
- *      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
- *      (except as stated in this section) patent license to make, have made,
- *      use, offer to sell, sell, import, and otherwise transfer the Work,
- *      where such license applies only to those patent claims licensable
- *      by such Contributor that are necessarily infringed by their
- *      Contribution(s) alone or by combination of their Contribution(s)
- *      with the Work to which such Contribution(s) was submitted. If You
- *      institute patent litigation against any entity (including a
- *      cross-claim or counterclaim in a lawsuit) alleging that the Work
- *      or a Contribution incorporated within the Work constitutes direct
- *      or contributory patent infringement, then any patent licenses
- *      granted to You under this License for that Work shall terminate
- *      as of the date such litigation is filed.
- *
- *   4. Redistribution. You may reproduce and distribute copies of the
- *      Work or Derivative Works thereof in any medium, with or without
- *      modifications, and in Source or Object form, provided that You
- *      meet the following conditions:
- *
- *      (a) You must give any other recipients of the Work or
- *          Derivative Works a copy of this License; and
- *
- *      (b) You must cause any modified files to carry prominent notices
- *          stating that You changed the files; and
- *
- *      (c) You must retain, in the Source form of any Derivative Works
- *          that You distribute, all copyright, patent, trademark, and
- *          attribution notices from the Source form of the Work,
- *          excluding those notices that do not pertain to any part of
- *          the Derivative Works; and
- *
- *      (d) If the Work includes a "NOTICE" text file as part of its
- *          distribution, then any Derivative Works that You distribute must
- *          include a readable copy of the attribution notices contained
- *          within such NOTICE file, excluding those notices that do not
- *          pertain to any part of the Derivative Works, in at least one
- *          of the following places: within a NOTICE text file distributed
- *          as part of the Derivative Works; within the Source form or
- *          documentation, if provided along with the Derivative Works; or,
- *          within a display generated by the Derivative Works, if and
- *          wherever such third-party notices normally appear. The contents
- *          of the NOTICE file are for informational purposes only and
- *          do not modify the License. You may add Your own attribution
- *          notices within Derivative Works that You distribute, alongside
- *          or as an addendum to the NOTICE text from the Work, provided
- *          that such additional attribution notices cannot be construed
- *          as modifying the License.
- *
- *      You may add Your own copyright statement to Your modifications and
- *      may provide additional or different license terms and conditions
- *      for use, reproduction, or distribution of Your modifications, or
- *      for any such Derivative Works as a whole, provided Your use,
- *      reproduction, and distribution of the Work otherwise complies with
- *      the conditions stated in this License.
- *
- *   5. Submission of Contributions. Unless You explicitly state otherwise,
- *      any Contribution intentionally submitted for inclusion in the Work
- *      by You to the Licensor shall be under the terms and conditions of
- *      this License, without any additional terms or conditions.
- *      Notwithstanding the above, nothing herein shall supersede or modify
- *      the terms of any separate license agreement you may have executed
- *      with Licensor regarding such Contributions.
- *
- *   6. Trademarks. This License does not grant permission to use the trade
- *      names, trademarks, service marks, or product names of the Licensor,
- *      except as required for reasonable and customary use in describing the
- *      origin of the Work and reproducing the content of the NOTICE file.
- *
- *   7. Disclaimer of Warranty. Unless required by applicable law or
- *      agreed to in writing, Licensor provides the Work (and each
- *      Contributor provides its Contributions) on an "AS IS" BASIS,
- *      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
- *      implied, including, without limitation, any warranties or conditions
- *      of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
- *      PARTICULAR PURPOSE. You are solely responsible for determining the
- *      appropriateness of using or redistributing the Work and assume any
- *      risks associated with Your exercise of permissions under this License.
- *
- *   8. Limitation of Liability. In no event and under no legal theory,
- *      whether in tort (including negligence), contract, or otherwise,
- *      unless required by applicable law (such as deliberate and grossly
- *      negligent acts) or agreed to in writing, shall any Contributor be
- *      liable to You for damages, including any direct, indirect, special,
- *      incidental, or consequential damages of any character arising as a
- *      result of this License or out of the use or inability to use the
- *      Work (including but not limited to damages for loss of goodwill,
- *      work stoppage, computer failure or malfunction, or any and all
- *      other commercial damages or losses), even if such Contributor
- *      has been advised of the possibility of such damages.
- *
- *   9. Accepting Warranty or Additional Liability. While redistributing
- *      the Work or Derivative Works thereof, You may choose to offer,
- *      and charge a fee for, acceptance of support, warranty, indemnity,
- *      or other liability obligations and/or rights consistent with this
- *      License. However, in accepting such obligations, You may act only
- *      on Your own behalf and on Your sole responsibility, not on behalf
- *      of any other Contributor, and only if You agree to indemnify,
- *      defend, and hold each Contributor harmless for any liability
- *      incurred by, or claims asserted against, such Contributor by reason
- *      of your accepting any such warranty or additional liability.
- *
- *   END OF TERMS AND CONDITIONS
- *
- *   APPENDIX: How to apply the Apache License to your work.
- *
- *      To apply the Apache License to your work, attach the following
- *      boilerplate notice, with the fields enclosed by brackets "[]"
- *      replaced with your own identifying information. (Don't include
- *      the brackets!)  The text should be enclosed in the appropriate
- *      comment syntax for the file format. We also recommend that a
- *      file or class name and description of purpose be included on the
- *      same "printed page" as the copyright notice for easier
- *      identification within third-party archives.
- *
- *   Copyright [yyyy] [name of copyright owner]
- *
- *   Licensed under the Apache License, Version 2.0 (the "License");
- *   you may not use this file except in compliance with the License.
- *   You may obtain a copy of the License at
- *
- *       http://www.apache.org/licenses/LICENSE-2.0
- *
- *   Unless required by applicable law or agreed to in writing, software
- *   distributed under the License is distributed on an "AS IS" BASIS,
- *   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- *   See the License for the specific language governing permissions and
- *   limitations under the License.
- */
-
-W3C® SOFTWARE NOTICE AND LICENSE
-http://www.w3.org/Consortium/Legal/2002/copyright-software-20021231
-
-This work (and included software, documentation such as READMEs, or other
-related items) is being provided by the copyright holders under the following
-license. By obtaining, using and/or copying this work, you (the licensee) agree
-that you have read, understood, and will comply with the following terms and
-conditions.
-
-Permission to copy, modify, and distribute this software and its documentation,
-with or without modification, for any purpose and without fee or royalty is
-hereby granted, provided that you include the following on ALL copies of the
-software and documentation or portions thereof, including modifications:
-
-  1. The full text of this NOTICE in a location viewable to users of the
-     redistributed or derivative work. 
-  2. Any pre-existing intellectual property disclaimers, notices, or terms
-     and conditions. If none exist, the W3C Software Short Notice should be
-     included (hypertext is preferred, text is permitted) within the body
-     of any redistributed or derivative code.
-  3. Notice of any changes or modifications to the files, including the date
-     changes were made. (We recommend you provide URIs to the location from
-     which the code is derived.)
-     
-THIS SOFTWARE AND DOCUMENTATION IS PROVIDED "AS IS," AND COPYRIGHT HOLDERS MAKE
-NO REPRESENTATIONS OR WARRANTIES, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED
-TO, WARRANTIES OF MERCHANTABILITY OR FITNESS FOR ANY PARTICULAR PURPOSE OR THAT
-THE USE OF THE SOFTWARE OR DOCUMENTATION WILL NOT INFRINGE ANY THIRD PARTY
-PATENTS, COPYRIGHTS, TRADEMARKS OR OTHER RIGHTS.
-
-COPYRIGHT HOLDERS WILL NOT BE LIABLE FOR ANY DIRECT, INDIRECT, SPECIAL OR
-CONSEQUENTIAL DAMAGES ARISING OUT OF ANY USE OF THE SOFTWARE OR DOCUMENTATION.
-
-The name and trademarks of copyright holders may NOT be used in advertising or
-publicity pertaining to the software without specific, written prior permission.
-Title to copyright in this software and any associated documentation will at
-all times remain with copyright holders.
-
-____________________________________
-
-This formulation of W3C's notice and license became active on December 31 2002.
-This version removes the copyright ownership notice such that this license can
-be used with materials other than those owned by the W3C, reflects that ERCIM
-is now a host of the W3C, includes references to this specific dated version of
-the license, and removes the ambiguous grant of "use". Otherwise, this version
-is the same as the previous version and is written so as to preserve the Free
-Software Foundation's assessment of GPL compatibility and OSI's certification
-under the Open Source Definition. Please see our Copyright FAQ for common
-questions about using materials from our site, including specific terms and
-conditions for packages like libwww, Amaya, and Jigsaw. Other questions about
-this notice can be directed to site-policy@w3.org.
- 
-Joseph Reagle <site-policy@w3.org> 
-
-This license came from: http://www.megginson.com/SAX/copying.html
-  However please note future versions of SAX may be covered 
-  under http://saxproject.org/?selected=pd
-
-SAX2 is Free!
-
-I hereby abandon any property rights to SAX 2.0 (the Simple API for
-XML), and release all of the SAX 2.0 source code, compiled code, and
-documentation contained in this distribution into the Public Domain.
-SAX comes with NO WARRANTY or guarantee of fitness for any
-purpose.
-
-David Megginson, david@megginson.com
-2000-05-05
diff --git a/solr/licenses/ant-NOTICE.txt b/solr/licenses/ant-NOTICE.txt
deleted file mode 100644
index 4c88cc6..0000000
--- a/solr/licenses/ant-NOTICE.txt
+++ /dev/null
@@ -1,26 +0,0 @@
-   =========================================================================
-   ==  NOTICE file corresponding to the section 4 d of                    ==
-   ==  the Apache License, Version 2.0,                                   ==
-   ==  in this case for the Apache Ant distribution.                      ==
-   =========================================================================
-
-   Apache Ant
-   Copyright 1999-2008 The Apache Software Foundation
-
-   This product includes software developed by
-   The Apache Software Foundation (http://www.apache.org/).
-
-   This product includes also software developed by :
-     - the W3C consortium (http://www.w3c.org) ,
-     - the SAX project (http://www.saxproject.org)
-
-   The <sync> task is based on code Copyright (c) 2002, Landmark
-   Graphics Corp that has been kindly donated to the Apache Software
-   Foundation.
-
-   Portions of this software were originally based on the following:
-     - software copyright (c) 1999, IBM Corporation., http://www.ibm.com.
-     - software copyright (c) 1999, Sun Microsystems., http://www.sun.com.
-     - voluntary contributions made by Paul Eng on behalf of the 
-       Apache Software Foundation that were originally developed at iClick, Inc.,
-       software copyright (c) 1999.
diff --git a/solr/licenses/asciidoctor-ant-1.6.2.jar.sha1 b/solr/licenses/asciidoctor-ant-1.6.2.jar.sha1
deleted file mode 100644
index 558a01f..0000000
--- a/solr/licenses/asciidoctor-ant-1.6.2.jar.sha1
+++ /dev/null
@@ -1 +0,0 @@
-c5ba599e3918e7a3316e6bf110cadd5aeb2a026b
diff --git a/solr/licenses/asciidoctor-ant-LICENSE-ASL.txt b/solr/licenses/asciidoctor-ant-LICENSE-ASL.txt
deleted file mode 100644
index d645695..0000000
--- a/solr/licenses/asciidoctor-ant-LICENSE-ASL.txt
+++ /dev/null
@@ -1,202 +0,0 @@
-
-                                 Apache License
-                           Version 2.0, January 2004
-                        http://www.apache.org/licenses/
-
-   TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
-
-   1. Definitions.
-
-      "License" shall mean the terms and conditions for use, reproduction,
-      and distribution as defined by Sections 1 through 9 of this document.
-
-      "Licensor" shall mean the copyright owner or entity authorized by
-      the copyright owner that is granting the License.
-
-      "Legal Entity" shall mean the union of the acting entity and all
-      other entities that control, are controlled by, or are under common
-      control with that entity. For the purposes of this definition,
-      "control" means (i) the power, direct or indirect, to cause the
-      direction or management of such entity, whether by contract or
-      otherwise, or (ii) ownership of fifty percent (50%) or more of the
-      outstanding shares, or (iii) beneficial ownership of such entity.
-
-      "You" (or "Your") shall mean an individual or Legal Entity
-      exercising permissions granted by this License.
-
-      "Source" form shall mean the preferred form for making modifications,
-      including but not limited to software source code, documentation
-      source, and configuration files.
-
-      "Object" form shall mean any form resulting from mechanical
-      transformation or translation of a Source form, including but
-      not limited to compiled object code, generated documentation,
-      and conversions to other media types.
-
-      "Work" shall mean the work of authorship, whether in Source or
-      Object form, made available under the License, as indicated by a
-      copyright notice that is included in or attached to the work
-      (an example is provided in the Appendix below).
-
-      "Derivative Works" shall mean any work, whether in Source or Object
-      form, that is based on (or derived from) the Work and for which the
-      editorial revisions, annotations, elaborations, or other modifications
-      represent, as a whole, an original work of authorship. For the purposes
-      of this License, Derivative Works shall not include works that remain
-      separable from, or merely link (or bind by name) to the interfaces of,
-      the Work and Derivative Works thereof.
-
-      "Contribution" shall mean any work of authorship, including
-      the original version of the Work and any modifications or additions
-      to that Work or Derivative Works thereof, that is intentionally
-      submitted to Licensor for inclusion in the Work by the copyright owner
-      or by an individual or Legal Entity authorized to submit on behalf of
-      the copyright owner. For the purposes of this definition, "submitted"
-      means any form of electronic, verbal, or written communication sent
-      to the Licensor or its representatives, including but not limited to
-      communication on electronic mailing lists, source code control systems,
-      and issue tracking systems that are managed by, or on behalf of, the
-      Licensor for the purpose of discussing and improving the Work, but
-      excluding communication that is conspicuously marked or otherwise
-      designated in writing by the copyright owner as "Not a Contribution."
-
-      "Contributor" shall mean Licensor and any individual or Legal Entity
-      on behalf of whom a Contribution has been received by Licensor and
-      subsequently incorporated within the Work.
-
-   2. Grant of Copyright License. Subject to the terms and conditions of
-      this License, each Contributor hereby grants to You a perpetual,
-      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
-      copyright license to reproduce, prepare Derivative Works of,
-      publicly display, publicly perform, sublicense, and distribute the
-      Work and such Derivative Works in Source or Object form.
-
-   3. Grant of Patent License. Subject to the terms and conditions of
-      this License, each Contributor hereby grants to You a perpetual,
-      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
-      (except as stated in this section) patent license to make, have made,
-      use, offer to sell, sell, import, and otherwise transfer the Work,
-      where such license applies only to those patent claims licensable
-      by such Contributor that are necessarily infringed by their
-      Contribution(s) alone or by combination of their Contribution(s)
-      with the Work to which such Contribution(s) was submitted. If You
-      institute patent litigation against any entity (including a
-      cross-claim or counterclaim in a lawsuit) alleging that the Work
-      or a Contribution incorporated within the Work constitutes direct
-      or contributory patent infringement, then any patent licenses
-      granted to You under this License for that Work shall terminate
-      as of the date such litigation is filed.
-
-   4. Redistribution. You may reproduce and distribute copies of the
-      Work or Derivative Works thereof in any medium, with or without
-      modifications, and in Source or Object form, provided that You
-      meet the following conditions:
-
-      (a) You must give any other recipients of the Work or
-          Derivative Works a copy of this License; and
-
-      (b) You must cause any modified files to carry prominent notices
-          stating that You changed the files; and
-
-      (c) You must retain, in the Source form of any Derivative Works
-          that You distribute, all copyright, patent, trademark, and
-          attribution notices from the Source form of the Work,
-          excluding those notices that do not pertain to any part of
-          the Derivative Works; and
-
-      (d) If the Work includes a "NOTICE" text file as part of its
-          distribution, then any Derivative Works that You distribute must
-          include a readable copy of the attribution notices contained
-          within such NOTICE file, excluding those notices that do not
-          pertain to any part of the Derivative Works, in at least one
-          of the following places: within a NOTICE text file distributed
-          as part of the Derivative Works; within the Source form or
-          documentation, if provided along with the Derivative Works; or,
-          within a display generated by the Derivative Works, if and
-          wherever such third-party notices normally appear. The contents
-          of the NOTICE file are for informational purposes only and
-          do not modify the License. You may add Your own attribution
-          notices within Derivative Works that You distribute, alongside
-          or as an addendum to the NOTICE text from the Work, provided
-          that such additional attribution notices cannot be construed
-          as modifying the License.
-
-      You may add Your own copyright statement to Your modifications and
-      may provide additional or different license terms and conditions
-      for use, reproduction, or distribution of Your modifications, or
-      for any such Derivative Works as a whole, provided Your use,
-      reproduction, and distribution of the Work otherwise complies with
-      the conditions stated in this License.
-
-   5. Submission of Contributions. Unless You explicitly state otherwise,
-      any Contribution intentionally submitted for inclusion in the Work
-      by You to the Licensor shall be under the terms and conditions of
-      this License, without any additional terms or conditions.
-      Notwithstanding the above, nothing herein shall supersede or modify
-      the terms of any separate license agreement you may have executed
-      with Licensor regarding such Contributions.
-
-   6. Trademarks. This License does not grant permission to use the trade
-      names, trademarks, service marks, or product names of the Licensor,
-      except as required for reasonable and customary use in describing the
-      origin of the Work and reproducing the content of the NOTICE file.
-
-   7. Disclaimer of Warranty. Unless required by applicable law or
-      agreed to in writing, Licensor provides the Work (and each
-      Contributor provides its Contributions) on an "AS IS" BASIS,
-      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
-      implied, including, without limitation, any warranties or conditions
-      of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
-      PARTICULAR PURPOSE. You are solely responsible for determining the
-      appropriateness of using or redistributing the Work and assume any
-      risks associated with Your exercise of permissions under this License.
-
-   8. Limitation of Liability. In no event and under no legal theory,
-      whether in tort (including negligence), contract, or otherwise,
-      unless required by applicable law (such as deliberate and grossly
-      negligent acts) or agreed to in writing, shall any Contributor be
-      liable to You for damages, including any direct, indirect, special,
-      incidental, or consequential damages of any character arising as a
-      result of this License or out of the use or inability to use the
-      Work (including but not limited to damages for loss of goodwill,
-      work stoppage, computer failure or malfunction, or any and all
-      other commercial damages or losses), even if such Contributor
-      has been advised of the possibility of such damages.
-
-   9. Accepting Warranty or Additional Liability. While redistributing
-      the Work or Derivative Works thereof, You may choose to offer,
-      and charge a fee for, acceptance of support, warranty, indemnity,
-      or other liability obligations and/or rights consistent with this
-      License. However, in accepting such obligations, You may act only
-      on Your own behalf and on Your sole responsibility, not on behalf
-      of any other Contributor, and only if You agree to indemnify,
-      defend, and hold each Contributor harmless for any liability
-      incurred by, or claims asserted against, such Contributor by reason
-      of your accepting any such warranty or additional liability.
-
-   END OF TERMS AND CONDITIONS
-
-   APPENDIX: How to apply the Apache License to your work.
-
-      To apply the Apache License to your work, attach the following
-      boilerplate notice, with the fields enclosed by brackets "[]"
-      replaced with your own identifying information. (Don't include
-      the brackets!)  The text should be enclosed in the appropriate
-      comment syntax for the file format. We also recommend that a
-      file or class name and description of purpose be included on the
-      same "printed page" as the copyright notice for easier
-      identification within third-party archives.
-
-   Copyright [yyyy] [name of copyright owner]
-
-   Licensed under the Apache License, Version 2.0 (the "License");
-   you may not use this file except in compliance with the License.
-   You may obtain a copy of the License at
-
-       http://www.apache.org/licenses/LICENSE-2.0
-
-   Unless required by applicable law or agreed to in writing, software
-   distributed under the License is distributed on an "AS IS" BASIS,
-   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-   See the License for the specific language governing permissions and
-   limitations under the License.
diff --git a/solr/licenses/asciidoctor-ant-NOTICE.txt b/solr/licenses/asciidoctor-ant-NOTICE.txt
deleted file mode 100644
index 4cec135..0000000
--- a/solr/licenses/asciidoctor-ant-NOTICE.txt
+++ /dev/null
@@ -1,5 +0,0 @@
-Apache [asciidoctor-ant]
-Copyright [2013] The Apache Software Foundation
-
-This product includes software developed at
-The Apache Software Foundation (http://www.apache.org/).
\ No newline at end of file
diff --git a/solr/licenses/derby-10.9.1.0.jar.sha1 b/solr/licenses/derby-10.9.1.0.jar.sha1
deleted file mode 100644
index 2a69e42..0000000
--- a/solr/licenses/derby-10.9.1.0.jar.sha1
+++ /dev/null
@@ -1 +0,0 @@
-4538cf5564ab3c262eec65c55fdb13965625589c
diff --git a/solr/licenses/derby-LICENSE-ASL.txt b/solr/licenses/derby-LICENSE-ASL.txt
deleted file mode 100644
index d645695..0000000
--- a/solr/licenses/derby-LICENSE-ASL.txt
+++ /dev/null
@@ -1,202 +0,0 @@
-
-                                 Apache License
-                           Version 2.0, January 2004
-                        http://www.apache.org/licenses/
-
-   TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
-
-   1. Definitions.
-
-      "License" shall mean the terms and conditions for use, reproduction,
-      and distribution as defined by Sections 1 through 9 of this document.
-
-      "Licensor" shall mean the copyright owner or entity authorized by
-      the copyright owner that is granting the License.
-
-      "Legal Entity" shall mean the union of the acting entity and all
-      other entities that control, are controlled by, or are under common
-      control with that entity. For the purposes of this definition,
-      "control" means (i) the power, direct or indirect, to cause the
-      direction or management of such entity, whether by contract or
-      otherwise, or (ii) ownership of fifty percent (50%) or more of the
-      outstanding shares, or (iii) beneficial ownership of such entity.
-
-      "You" (or "Your") shall mean an individual or Legal Entity
-      exercising permissions granted by this License.
-
-      "Source" form shall mean the preferred form for making modifications,
-      including but not limited to software source code, documentation
-      source, and configuration files.
-
-      "Object" form shall mean any form resulting from mechanical
-      transformation or translation of a Source form, including but
-      not limited to compiled object code, generated documentation,
-      and conversions to other media types.
-
-      "Work" shall mean the work of authorship, whether in Source or
-      Object form, made available under the License, as indicated by a
-      copyright notice that is included in or attached to the work
-      (an example is provided in the Appendix below).
-
-      "Derivative Works" shall mean any work, whether in Source or Object
-      form, that is based on (or derived from) the Work and for which the
-      editorial revisions, annotations, elaborations, or other modifications
-      represent, as a whole, an original work of authorship. For the purposes
-      of this License, Derivative Works shall not include works that remain
-      separable from, or merely link (or bind by name) to the interfaces of,
-      the Work and Derivative Works thereof.
-
-      "Contribution" shall mean any work of authorship, including
-      the original version of the Work and any modifications or additions
-      to that Work or Derivative Works thereof, that is intentionally
-      submitted to Licensor for inclusion in the Work by the copyright owner
-      or by an individual or Legal Entity authorized to submit on behalf of
-      the copyright owner. For the purposes of this definition, "submitted"
-      means any form of electronic, verbal, or written communication sent
-      to the Licensor or its representatives, including but not limited to
-      communication on electronic mailing lists, source code control systems,
-      and issue tracking systems that are managed by, or on behalf of, the
-      Licensor for the purpose of discussing and improving the Work, but
-      excluding communication that is conspicuously marked or otherwise
-      designated in writing by the copyright owner as "Not a Contribution."
-
-      "Contributor" shall mean Licensor and any individual or Legal Entity
-      on behalf of whom a Contribution has been received by Licensor and
-      subsequently incorporated within the Work.
-
-   2. Grant of Copyright License. Subject to the terms and conditions of
-      this License, each Contributor hereby grants to You a perpetual,
-      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
-      copyright license to reproduce, prepare Derivative Works of,
-      publicly display, publicly perform, sublicense, and distribute the
-      Work and such Derivative Works in Source or Object form.
-
-   3. Grant of Patent License. Subject to the terms and conditions of
-      this License, each Contributor hereby grants to You a perpetual,
-      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
-      (except as stated in this section) patent license to make, have made,
-      use, offer to sell, sell, import, and otherwise transfer the Work,
-      where such license applies only to those patent claims licensable
-      by such Contributor that are necessarily infringed by their
-      Contribution(s) alone or by combination of their Contribution(s)
-      with the Work to which such Contribution(s) was submitted. If You
-      institute patent litigation against any entity (including a
-      cross-claim or counterclaim in a lawsuit) alleging that the Work
-      or a Contribution incorporated within the Work constitutes direct
-      or contributory patent infringement, then any patent licenses
-      granted to You under this License for that Work shall terminate
-      as of the date such litigation is filed.
-
-   4. Redistribution. You may reproduce and distribute copies of the
-      Work or Derivative Works thereof in any medium, with or without
-      modifications, and in Source or Object form, provided that You
-      meet the following conditions:
-
-      (a) You must give any other recipients of the Work or
-          Derivative Works a copy of this License; and
-
-      (b) You must cause any modified files to carry prominent notices
-          stating that You changed the files; and
-
-      (c) You must retain, in the Source form of any Derivative Works
-          that You distribute, all copyright, patent, trademark, and
-          attribution notices from the Source form of the Work,
-          excluding those notices that do not pertain to any part of
-          the Derivative Works; and
-
-      (d) If the Work includes a "NOTICE" text file as part of its
-          distribution, then any Derivative Works that You distribute must
-          include a readable copy of the attribution notices contained
-          within such NOTICE file, excluding those notices that do not
-          pertain to any part of the Derivative Works, in at least one
-          of the following places: within a NOTICE text file distributed
-          as part of the Derivative Works; within the Source form or
-          documentation, if provided along with the Derivative Works; or,
-          within a display generated by the Derivative Works, if and
-          wherever such third-party notices normally appear. The contents
-          of the NOTICE file are for informational purposes only and
-          do not modify the License. You may add Your own attribution
-          notices within Derivative Works that You distribute, alongside
-          or as an addendum to the NOTICE text from the Work, provided
-          that such additional attribution notices cannot be construed
-          as modifying the License.
-
-      You may add Your own copyright statement to Your modifications and
-      may provide additional or different license terms and conditions
-      for use, reproduction, or distribution of Your modifications, or
-      for any such Derivative Works as a whole, provided Your use,
-      reproduction, and distribution of the Work otherwise complies with
-      the conditions stated in this License.
-
-   5. Submission of Contributions. Unless You explicitly state otherwise,
-      any Contribution intentionally submitted for inclusion in the Work
-      by You to the Licensor shall be under the terms and conditions of
-      this License, without any additional terms or conditions.
-      Notwithstanding the above, nothing herein shall supersede or modify
-      the terms of any separate license agreement you may have executed
-      with Licensor regarding such Contributions.
-
-   6. Trademarks. This License does not grant permission to use the trade
-      names, trademarks, service marks, or product names of the Licensor,
-      except as required for reasonable and customary use in describing the
-      origin of the Work and reproducing the content of the NOTICE file.
-
-   7. Disclaimer of Warranty. Unless required by applicable law or
-      agreed to in writing, Licensor provides the Work (and each
-      Contributor provides its Contributions) on an "AS IS" BASIS,
-      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
-      implied, including, without limitation, any warranties or conditions
-      of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
-      PARTICULAR PURPOSE. You are solely responsible for determining the
-      appropriateness of using or redistributing the Work and assume any
-      risks associated with Your exercise of permissions under this License.
-
-   8. Limitation of Liability. In no event and under no legal theory,
-      whether in tort (including negligence), contract, or otherwise,
-      unless required by applicable law (such as deliberate and grossly
-      negligent acts) or agreed to in writing, shall any Contributor be
-      liable to You for damages, including any direct, indirect, special,
-      incidental, or consequential damages of any character arising as a
-      result of this License or out of the use or inability to use the
-      Work (including but not limited to damages for loss of goodwill,
-      work stoppage, computer failure or malfunction, or any and all
-      other commercial damages or losses), even if such Contributor
-      has been advised of the possibility of such damages.
-
-   9. Accepting Warranty or Additional Liability. While redistributing
-      the Work or Derivative Works thereof, You may choose to offer,
-      and charge a fee for, acceptance of support, warranty, indemnity,
-      or other liability obligations and/or rights consistent with this
-      License. However, in accepting such obligations, You may act only
-      on Your own behalf and on Your sole responsibility, not on behalf
-      of any other Contributor, and only if You agree to indemnify,
-      defend, and hold each Contributor harmless for any liability
-      incurred by, or claims asserted against, such Contributor by reason
-      of your accepting any such warranty or additional liability.
-
-   END OF TERMS AND CONDITIONS
-
-   APPENDIX: How to apply the Apache License to your work.
-
-      To apply the Apache License to your work, attach the following
-      boilerplate notice, with the fields enclosed by brackets "[]"
-      replaced with your own identifying information. (Don't include
-      the brackets!)  The text should be enclosed in the appropriate
-      comment syntax for the file format. We also recommend that a
-      file or class name and description of purpose be included on the
-      same "printed page" as the copyright notice for easier
-      identification within third-party archives.
-
-   Copyright [yyyy] [name of copyright owner]
-
-   Licensed under the Apache License, Version 2.0 (the "License");
-   you may not use this file except in compliance with the License.
-   You may obtain a copy of the License at
-
-       http://www.apache.org/licenses/LICENSE-2.0
-
-   Unless required by applicable law or agreed to in writing, software
-   distributed under the License is distributed on an "AS IS" BASIS,
-   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-   See the License for the specific language governing permissions and
-   limitations under the License.
diff --git a/solr/licenses/derby-NOTICE.txt b/solr/licenses/derby-NOTICE.txt
deleted file mode 100644
index f22595f..0000000
--- a/solr/licenses/derby-NOTICE.txt
+++ /dev/null
@@ -1,182 +0,0 @@
-=========================================================================
-==  NOTICE file corresponding to section 4(d) of the Apache License,
-==  Version 2.0, in this case for the Apache Derby distribution.
-==
-==  DO NOT EDIT THIS FILE DIRECTLY. IT IS GENERATED
-==  BY THE buildnotice TARGET IN THE TOP LEVEL build.xml FILE.
-==
-=========================================================================
-
-Apache Derby
-Copyright 2004-2012 The Apache Software Foundation
-
-This product includes software developed by
-The Apache Software Foundation (http://www.apache.org/).
-
-
-=========================================================================
-
-Portions of Derby were originally developed by
-International Business Machines Corporation and are
-licensed to the Apache Software Foundation under the
-"Software Grant and Corporate Contribution License Agreement",
-informally known as the "Derby CLA".
-The following copyright notice(s) were affixed to portions of the code
-with which this file is now or was at one time distributed
-and are placed here unaltered.
-
-(C) Copyright 1997,2004 International Business Machines Corporation.  All rights reserved.
-
-(C) Copyright IBM Corp. 2003. 
-
-
-=========================================================================
-
-
-The portion of the functionTests under 'nist' was originally 
-developed by the National Institute of Standards and Technology (NIST), 
-an agency of the United States Department of Commerce, and adapted by
-International Business Machines Corporation in accordance with the NIST
-Software Acknowledgment and Redistribution document at
-http://www.itl.nist.gov/div897/ctg/sql_form.htm
-
-
-
-=========================================================================
-
-
-The JDBC apis for small devices and JDBC3 (under java/stubs/jsr169 and
-java/stubs/jdbc3) were produced by trimming sources supplied by the
-Apache Harmony project. In addition, the Harmony SerialBlob and
-SerialClob implementations are used. The following notice covers the Harmony sources:
-
-Portions of Harmony were originally developed by
-Intel Corporation and are licensed to the Apache Software
-Foundation under the "Software Grant and Corporate Contribution
-License Agreement", informally known as the "Intel Harmony CLA".
-
-
-=========================================================================
-
-
-The Derby build relies on source files supplied by the Apache Felix
-project. The following notice covers the Felix files:
-
-  Apache Felix Main
-  Copyright 2008 The Apache Software Foundation
-
-
-  I. Included Software
-
-  This product includes software developed at
-  The Apache Software Foundation (http://www.apache.org/).
-  Licensed under the Apache License 2.0.
-
-  This product includes software developed at
-  The OSGi Alliance (http://www.osgi.org/).
-  Copyright (c) OSGi Alliance (2000, 2007).
-  Licensed under the Apache License 2.0.
-
-  This product includes software from http://kxml.sourceforge.net.
-  Copyright (c) 2002,2003, Stefan Haustein, Oberhausen, Rhld., Germany.
-  Licensed under BSD License.
-
-  II. Used Software
-
-  This product uses software developed at
-  The OSGi Alliance (http://www.osgi.org/).
-  Copyright (c) OSGi Alliance (2000, 2007).
-  Licensed under the Apache License 2.0.
-
-
-  III. License Summary
-  - Apache License 2.0
-  - BSD License
-
-
-=========================================================================
-
-
-The Derby build relies on jar files supplied by the Apache Xalan
-project. The following notice covers the Xalan jar files:
-
-   =========================================================================
-   ==  NOTICE file corresponding to section 4(d) of the Apache License,   ==
-   ==  Version 2.0, in this case for the Apache Xalan Java distribution.  ==
-   =========================================================================
-
-   Apache Xalan (Xalan XSLT processor)
-   Copyright 1999-2006 The Apache Software Foundation
-
-   Apache Xalan (Xalan serializer)
-   Copyright 1999-2006 The Apache Software Foundation
-
-   This product includes software developed at
-   The Apache Software Foundation (http://www.apache.org/).
-
-   =========================================================================
-   Portions of this software was originally based on the following:
-     - software copyright (c) 1999-2002, Lotus Development Corporation.,
-       http://www.lotus.com.
-     - software copyright (c) 2001-2002, Sun Microsystems.,
-       http://www.sun.com.
-     - software copyright (c) 2003, IBM Corporation., 
-       http://www.ibm.com.
-       
-   =========================================================================
-   The binary distribution package (ie. jars, samples and documentation) of
-   this product includes software developed by the following:
-       
-     - The Apache Software Foundation 
-         - Xerces Java - see LICENSE.txt 
-         - JAXP 1.3 APIs - see LICENSE.txt
-         - Bytecode Engineering Library - see LICENSE.txt
-         - Regular Expression - see LICENSE.txt
-       
-     - Scott Hudson, Frank Flannery, C. Scott Ananian 
-         - CUP Parser Generator runtime (javacup\runtime) - see LICENSE.txt 
- 
-   ========================================================================= 
-   The source distribution package (ie. all source and tools required to build
-   Xalan Java) of this product includes software developed by the following:
-       
-     - The Apache Software Foundation
-         - Xerces Java - see LICENSE.txt 
-         - JAXP 1.3 APIs - see LICENSE.txt
-         - Bytecode Engineering Library - see LICENSE.txt
-         - Regular Expression - see LICENSE.txt
-         - Ant - see LICENSE.txt
-         - Stylebook doc tool - see LICENSE.txt    
-       
-     - Elliot Joel Berk and C. Scott Ananian 
-         - Lexical Analyzer Generator (JLex) - see LICENSE.txt
-
-   =========================================================================       
-   Apache Xerces Java
-   Copyright 1999-2006 The Apache Software Foundation
-
-   This product includes software developed at
-   The Apache Software Foundation (http://www.apache.org/).
-
-   Portions of Apache Xerces Java in xercesImpl.jar and xml-apis.jar
-   were originally based on the following:
-     - software copyright (c) 1999, IBM Corporation., http://www.ibm.com.
-     - software copyright (c) 1999, Sun Microsystems., http://www.sun.com.
-     - voluntary contributions made by Paul Eng on behalf of the 
-       Apache Software Foundation that were originally developed at iClick, Inc.,
-       software copyright (c) 1999.    
-
-   =========================================================================   
-   Apache xml-commons xml-apis (redistribution of xml-apis.jar)
-
-   Apache XML Commons
-   Copyright 2001-2003,2006 The Apache Software Foundation.
-
-   This product includes software developed at
-   The Apache Software Foundation (http://www.apache.org/).
-
-   Portions of this software were originally based on the following:
-     - software copyright (c) 1999, IBM Corporation., http://www.ibm.com.
-     - software copyright (c) 1999, Sun Microsystems., http://www.sun.com.
-     - software copyright (c) 2000 World Wide Web Consortium, http://www.w3.org
-
diff --git a/solr/licenses/gimap-1.5.1.jar.sha1 b/solr/licenses/gimap-1.5.1.jar.sha1
deleted file mode 100644
index 41c9dbf..0000000
--- a/solr/licenses/gimap-1.5.1.jar.sha1
+++ /dev/null
@@ -1 +0,0 @@
-3a4ccd3aa6ce33ec701893c3ee632eeb0e012c89
diff --git a/solr/licenses/gimap-LICENSE-CDDL.txt b/solr/licenses/gimap-LICENSE-CDDL.txt
deleted file mode 100644
index d6e03ec..0000000
--- a/solr/licenses/gimap-LICENSE-CDDL.txt
+++ /dev/null
@@ -1,135 +0,0 @@
-COMMON DEVELOPMENT AND DISTRIBUTION LICENSE (CDDL) Version 1.1
-
-1. Definitions.
-
-1.1. "Contributor" means each individual or entity that creates or contributes to the creation of Modifications.
-
-1.2. "Contributor Version" means the combination of the Original Software, prior Modifications used by a Contributor (if any), and the Modifications made by that particular Contributor.
-
-1.3. "Covered Software" means (a) the Original Software, or (b) Modifications, or (c) the combination of files containing Original Software with files containing Modifications, in each case including portions thereof.
-
-1.4. "Executable" means the Covered Software in any form other than Source Code.
-
-1.5. "Initial Developer" means the individual or entity that first makes Original Software available under this License.
-
-1.6. "Larger Work" means a work which combines Covered Software or portions thereof with code not governed by the terms of this License.
-
-1.7. "License" means this document.
-
-1.8. "Licensable" means having the right to grant, to the maximum extent possible, whether at the time of the initial grant or subsequently acquired, any and all of the rights conveyed herein.
-
-1.9. "Modifications" means the Source Code and Executable form of any of the following:
-
-  A. Any file that results from an addition to, deletion from or modification of the contents of a file containing Original Software or previous Modifications;
-  
-  B. Any new file that contains any part of the Original Software or previous Modification; or
-  
-  C. Any new file that is contributed or otherwise made available under the terms of this License.
-
-1.10. "Original Software" means the Source Code and Executable form of computer software code that is originally released under this License.
-
-1.11. "Patent Claims" means any patent claim(s), now owned or hereafter acquired, including without limitation, method, process, and apparatus claims, in any patent Licensable by grantor.
-
-1.12. "Source Code" means (a) the common form of computer software code in which modifications are made and (b) associated documentation included in or with such code.
-
-1.13. "You" (or "Your") means an individual or a legal entity exercising rights under, and complying with all of the terms of, this License. For legal entities, "You" includes any entity which controls, is controlled by, or is under common control with You. For purposes of this definition, "control" means (a) the power, direct or indirect, to cause the direction or management of such entity, whether by contract or otherwise, or (b) ownership of more than fifty percent (50%) of the outstanding shares or beneficial ownership of such entity.
-
-2. License Grants.
-
-2.1. The Initial Developer Grant.
-
-Conditioned upon Your compliance with Section 3.1 below and subject to third party intellectual property claims, the Initial Developer hereby grants You a world-wide, royalty-free, non-exclusive license:
-
-(a) under intellectual property rights (other than patent or trademark) Licensable by Initial Developer, to use, reproduce, modify, display, perform, sublicense and distribute the Original Software (or portions thereof), with or without Modifications, and/or as part of a Larger Work; and
-
-(b) under Patent Claims infringed by the making, using or selling of Original Software, to make, have made, use, practice, sell, and offer for sale, and/or otherwise dispose of the Original Software (or portions thereof).
-
-(c) The licenses granted in Sections 2.1(a) and (b) are effective on the date Initial Developer first distributes or otherwise makes the Original Software available to a third party under the terms of this License.
-
-(d) Notwithstanding Section 2.1(b) above, no patent license is granted: (1) for code that You delete from the Original Software, or (2) for infringements caused by: (i) the modification of the Original Software, or (ii) the combination of the Original Software with other software or devices.
-
-2.2. Contributor Grant.
-
-Conditioned upon Your compliance with Section 3.1 below and subject to third party intellectual property claims, each Contributor hereby grants You a world-wide, royalty-free, non-exclusive license:
-
-(a) under intellectual property rights (other than patent or trademark) Licensable by Contributor to use, reproduce, modify, display, perform, sublicense and distribute the Modifications created by such Contributor (or portions thereof), either on an unmodified basis, with other Modifications, as Covered Software and/or as part of a Larger Work; and
-
-(b) under Patent Claims infringed by the making, using, or selling of Modifications made by that Contributor either alone and/or in combination with its Contributor Version (or portions of such combination), to make, use, sell, offer for sale, have made, and/or otherwise dispose of: (1) Modifications made by that Contributor (or portions thereof); and (2) the combination of Modifications made by that Contributor with its Contributor Version (or portions of such combination).
-
-(c) The licenses granted in Sections 2.2(a) and 2.2(b) are effective on the date Contributor first distributes or otherwise makes the Modifications available to a third party.
-
-(d) Notwithstanding Section 2.2(b) above, no patent license is granted: (1) for any code that Contributor has deleted from the Contributor Version; (2) for infringements caused by: (i) third party modifications of Contributor Version, or (ii) the combination of Modifications made by that Contributor with other software (except as part of the Contributor Version) or other devices; or (3) under Patent Claims infringed by Covered Software in the absence of Modifications made by that Contributor.
-
-3. Distribution Obligations.
-
-3.1. Availability of Source Code.
-
-Any Covered Software that You distribute or otherwise make available in Executable form must also be made available in Source Code form and that Source Code form must be distributed only under the terms of this License. You must include a copy of this License with every copy of the Source Code form of the Covered Software You distribute or otherwise make available. You must inform recipients of any such Covered Software in Executable form as to how they can obtain such Covered Software in Source Code form in a reasonable manner on or through a medium customarily used for software exchange.
-
-3.2. Modifications.
-
-The Modifications that You create or to which You contribute are governed by the terms of this License. You represent that You believe Your Modifications are Your original creation(s) and/or You have sufficient rights to grant the rights conveyed by this License.
-
-3.3. Required Notices.
-
-You must include a notice in each of Your Modifications that identifies You as the Contributor of the Modification. You may not remove or alter any copyright, patent or trademark notices contained within the Covered Software, or any notices of licensing or any descriptive text giving attribution to any Contributor or the Initial Developer.
-
-3.4. Application of Additional Terms.
-
-You may not offer or impose any terms on any Covered Software in Source Code form that alters or restricts the applicable version of this License or the recipients' rights hereunder. You may choose to offer, and to charge a fee for, warranty, support, indemnity or liability obligations to one or more recipients of Covered Software. However, you may do so only on Your own behalf, and not on behalf of the Initial Developer or any Contributor. You must make it absolutely clear that any such warranty, support, indemnity or liability obligation is offered by You alone, and You hereby agree to indemnify the Initial Developer and every Contributor for any liability incurred by the Initial Developer or such Contributor as a result of warranty, support, indemnity or liability terms You offer.
-
-3.5. Distribution of Executable Versions.
-
-You may distribute the Executable form of the Covered Software under the terms of this License or under the terms of a license of Your choice, which may contain terms different from this License, provided that You are in compliance with the terms of this License and that the license for the Executable form does not attempt to limit or alter the recipient's rights in the Source Code form from the rights set forth in this License. If You distribute the Covered Software in Executable form under a different license, You must make it absolutely clear that any terms which differ from this License are offered by You alone, not by the Initial Developer or Contributor. You hereby agree to indemnify the Initial Developer and every Contributor for any liability incurred by the Initial Developer or such Contributor as a result of any such terms You offer.
-
-3.6. Larger Works.
-
-You may create a Larger Work by combining Covered Software with other code not governed by the terms of this License and distribute the Larger Work as a single product. In such a case, You must make sure the requirements of this License are fulfilled for the Covered Software.
-
-4. Versions of the License.
-
-4.1. New Versions.
-
-Oracle is the initial license steward and may publish revised and/or new versions of this License from time to time. Each version will be given a distinguishing version number. Except as provided in Section 4.3, no one other than the license steward has the right to modify this License.
-
-4.2. Effect of New Versions.
-
-You may always continue to use, distribute or otherwise make the Covered Software available under the terms of the version of the License under which You originally received the Covered Software. If the Initial Developer includes a notice in the Original Software prohibiting it from being distributed or otherwise made available under any subsequent version of the License, You must distribute and make the Covered Software available under the terms of the version of the License under which You originally received the Covered Software. Otherwise, You may also choose to use, distribute or otherwise make the Covered Software available under the terms of any subsequent version of the License published by the license steward.
-
-4.3. Modified Versions.
-
-When You are an Initial Developer and You want to create a new license for Your Original Software, You may create and use a modified version of this License if You: (a) rename the license and remove any references to the name of the license steward (except to note that the license differs from this License); and (b) otherwise make it clear that the license contains terms which differ from this License.
-
-5. DISCLAIMER OF WARRANTY.
-
-COVERED SOFTWARE IS PROVIDED UNDER THIS LICENSE ON AN "AS IS" BASIS, WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, WITHOUT LIMITATION, WARRANTIES THAT THE COVERED SOFTWARE IS FREE OF DEFECTS, MERCHANTABLE, FIT FOR A PARTICULAR PURPOSE OR NON-INFRINGING. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE COVERED SOFTWARE IS WITH YOU. SHOULD ANY COVERED SOFTWARE PROVE DEFECTIVE IN ANY RESPECT, YOU (NOT THE INITIAL DEVELOPER OR ANY OTHER CONTRIBUTOR) ASSUME THE COST OF ANY NECESSARY SERVICING, REPAIR OR CORRECTION. THIS DISCLAIMER OF WARRANTY CONSTITUTES AN ESSENTIAL PART OF THIS LICENSE. NO USE OF ANY COVERED SOFTWARE IS AUTHORIZED HEREUNDER EXCEPT UNDER THIS DISCLAIMER.
-
-6. TERMINATION.
-
-6.1. This License and the rights granted hereunder will terminate automatically if You fail to comply with terms herein and fail to cure such breach within 30 days of becoming aware of the breach. Provisions which, by their nature, must remain in effect beyond the termination of this License shall survive.
-
-6.2. If You assert a patent infringement claim (excluding declaratory judgment actions) against Initial Developer or a Contributor (the Initial Developer or Contributor against whom You assert such claim is referred to as "Participant") alleging that the Participant Software (meaning the Contributor Version where the Participant is a Contributor or the Original Software where the Participant is the Initial Developer) directly or indirectly infringes any patent, then any and all rights granted directly or indirectly to You by such Participant, the Initial Developer (if the Initial Developer is not the Participant) and all Contributors under Sections 2.1 and/or 2.2 of this License shall, upon 60 days notice from Participant terminate prospectively and automatically at the expiration of such 60 day notice period, unless if within such 60 day period You withdraw Your claim with respect to the Participant Software against such Participant either unilaterally or pursuant to a written agreement with Participant.
-
-6.3. If You assert a patent infringement claim against Participant alleging that the Participant Software directly or indirectly infringes any patent where such claim is resolved (such as by license or settlement) prior to the initiation of patent infringement litigation, then the reasonable value of the licenses granted by such Participant under Sections 2.1 or 2.2 shall be taken into account in determining the amount or value of any payment or license.
-
-6.4. In the event of termination under Sections 6.1 or 6.2 above, all end user licenses that have been validly granted by You or any distributor hereunder prior to termination (excluding licenses granted to You by any distributor) shall survive termination.
-
-7. LIMITATION OF LIABILITY.
-
-UNDER NO CIRCUMSTANCES AND UNDER NO LEGAL THEORY, WHETHER TORT (INCLUDING NEGLIGENCE), CONTRACT, OR OTHERWISE, SHALL YOU, THE INITIAL DEVELOPER, ANY OTHER CONTRIBUTOR, OR ANY DISTRIBUTOR OF COVERED SOFTWARE, OR ANY SUPPLIER OF ANY OF SUCH PARTIES, BE LIABLE TO ANY PERSON FOR ANY INDIRECT, SPECIAL, INCIDENTAL, OR CONSEQUENTIAL DAMAGES OF ANY CHARACTER INCLUDING, WITHOUT LIMITATION, DAMAGES FOR LOSS OF GOODWILL, WORK STOPPAGE, COMPUTER FAILURE OR MALFUNCTION, OR ANY AND ALL OTHER COMMERCIAL DAMAGES OR LOSSES, EVEN IF SUCH PARTY SHALL HAVE BEEN INFORMED OF THE POSSIBILITY OF SUCH DAMAGES. THIS LIMITATION OF LIABILITY SHALL NOT APPLY TO LIABILITY FOR DEATH OR PERSONAL INJURY RESULTING FROM SUCH PARTY'S NEGLIGENCE TO THE EXTENT APPLICABLE LAW PROHIBITS SUCH LIMITATION. SOME JURISDICTIONS DO NOT ALLOW THE EXCLUSION OR LIMITATION OF INCIDENTAL OR CONSEQUENTIAL DAMAGES, SO THIS EXCLUSION AND LIMITATION MAY NOT APPLY TO YOU.
-
-8. U.S. GOVERNMENT END USERS.
-
-The Covered Software is a "commercial item," as that term is defined in 48 C.F.R. 2.101 (Oct. 1995), consisting of "commercial computer software" (as that term is defined at 48 C.F.R. § 252.227-7014(a)(1)) and "commercial computer software documentation" as such terms are used in 48 C.F.R. 12.212 (Sept. 1995). Consistent with 48 C.F.R. 12.212 and 48 C.F.R. 227.7202-1 through 227.7202-4 (June 1995), all U.S. Government End Users acquire Covered Software with only those rights set forth herein. This U.S. Government Rights clause is in lieu of, and supersedes, any other FAR, DFAR, or other clause or provision that addresses Government rights in computer software under this License.
-
-9. MISCELLANEOUS.
-
-This License represents the complete agreement concerning subject matter hereof. If any provision of this License is held to be unenforceable, such provision shall be reformed only to the extent necessary to make it enforceable. This License shall be governed by the law of the jurisdiction specified in a notice contained within the Original Software (except to the extent applicable law, if any, provides otherwise), excluding such jurisdiction's conflict-of-law provisions. Any litigation relating to this License shall be subject to the jurisdiction of the courts located in the jurisdiction and venue specified in a notice contained within the Original Software, with the losing party responsible for costs, including, without limitation, court costs and reasonable attorneys' fees and expenses. The application of the United Nations Convention on Contracts for the International Sale of Goods is expressly excluded. Any law or regulation which provides that the language of a contract shall be construed against the drafter shall not apply to this License. You agree that You alone are responsible for compliance with the United States export administration regulations (and the export control laws and regulation of any other countries) when You use, distribute or otherwise make available any Covered Software.
-
-10. RESPONSIBILITY FOR CLAIMS.
-
-As between Initial Developer and the Contributors, each party is responsible for claims and damages arising, directly or indirectly, out of its utilization of rights under this License and You agree to work with Initial Developer and Contributors to distribute such responsibility on an equitable basis. Nothing herein is intended or shall be deemed to constitute any admission of liability.
-
-NOTICE PURSUANT TO SECTION 9 OF THE COMMON DEVELOPMENT AND DISTRIBUTION LICENSE (CDDL)
-
-The code released under the CDDL shall be governed by the laws of the State of California (excluding conflict-of-law provisions). Any litigation relating to this License shall be subject to the jurisdiction of the Federal Courts of the Northern District of California and the state courts of the State of California, with venue lying in Santa Clara County, California.
\ No newline at end of file
diff --git a/solr/licenses/hamcrest-2.2.jar.sha1 b/solr/licenses/hamcrest-2.2.jar.sha1
new file mode 100644
index 0000000..820b1fb
--- /dev/null
+++ b/solr/licenses/hamcrest-2.2.jar.sha1
@@ -0,0 +1 @@
+1820c0968dba3a11a1b30669bb1f01978a91dedc
diff --git a/solr/licenses/hamcrest-core-LICENSE-BSD.txt b/solr/licenses/hamcrest-LICENSE-BSD.txt
similarity index 100%
rename from solr/licenses/hamcrest-core-LICENSE-BSD.txt
rename to solr/licenses/hamcrest-LICENSE-BSD.txt
diff --git a/solr/licenses/hamcrest-core-NOTICE.txt b/solr/licenses/hamcrest-NOTICE.txt
similarity index 100%
rename from solr/licenses/hamcrest-core-NOTICE.txt
rename to solr/licenses/hamcrest-NOTICE.txt
diff --git a/solr/licenses/hamcrest-core-1.3.jar.sha1 b/solr/licenses/hamcrest-core-1.3.jar.sha1
deleted file mode 100644
index 67add77..0000000
--- a/solr/licenses/hamcrest-core-1.3.jar.sha1
+++ /dev/null
@@ -1 +0,0 @@
-42a25dc3219429f0e5d060061f71acb49bf010a0
diff --git a/solr/licenses/javax.mail-1.5.1.jar.sha1 b/solr/licenses/javax.mail-1.5.1.jar.sha1
deleted file mode 100644
index e7a0a83..0000000
--- a/solr/licenses/javax.mail-1.5.1.jar.sha1
+++ /dev/null
@@ -1 +0,0 @@
-9724dd44f1abbba99c9858aa05fc91d53f59e7a5
diff --git a/solr/licenses/javax.mail-LICENSE-CDDL.txt b/solr/licenses/javax.mail-LICENSE-CDDL.txt
deleted file mode 100644
index d6e03ec..0000000
--- a/solr/licenses/javax.mail-LICENSE-CDDL.txt
+++ /dev/null
@@ -1,135 +0,0 @@
-COMMON DEVELOPMENT AND DISTRIBUTION LICENSE (CDDL) Version 1.1
-
-1. Definitions.
-
-1.1. "Contributor" means each individual or entity that creates or contributes to the creation of Modifications.
-
-1.2. "Contributor Version" means the combination of the Original Software, prior Modifications used by a Contributor (if any), and the Modifications made by that particular Contributor.
-
-1.3. "Covered Software" means (a) the Original Software, or (b) Modifications, or (c) the combination of files containing Original Software with files containing Modifications, in each case including portions thereof.
-
-1.4. "Executable" means the Covered Software in any form other than Source Code.
-
-1.5. "Initial Developer" means the individual or entity that first makes Original Software available under this License.
-
-1.6. "Larger Work" means a work which combines Covered Software or portions thereof with code not governed by the terms of this License.
-
-1.7. "License" means this document.
-
-1.8. "Licensable" means having the right to grant, to the maximum extent possible, whether at the time of the initial grant or subsequently acquired, any and all of the rights conveyed herein.
-
-1.9. "Modifications" means the Source Code and Executable form of any of the following:
-
-  A. Any file that results from an addition to, deletion from or modification of the contents of a file containing Original Software or previous Modifications;
-  
-  B. Any new file that contains any part of the Original Software or previous Modification; or
-  
-  C. Any new file that is contributed or otherwise made available under the terms of this License.
-
-1.10. "Original Software" means the Source Code and Executable form of computer software code that is originally released under this License.
-
-1.11. "Patent Claims" means any patent claim(s), now owned or hereafter acquired, including without limitation, method, process, and apparatus claims, in any patent Licensable by grantor.
-
-1.12. "Source Code" means (a) the common form of computer software code in which modifications are made and (b) associated documentation included in or with such code.
-
-1.13. "You" (or "Your") means an individual or a legal entity exercising rights under, and complying with all of the terms of, this License. For legal entities, "You" includes any entity which controls, is controlled by, or is under common control with You. For purposes of this definition, "control" means (a) the power, direct or indirect, to cause the direction or management of such entity, whether by contract or otherwise, or (b) ownership of more than fifty percent (50%) of the outstanding shares or beneficial ownership of such entity.
-
-2. License Grants.
-
-2.1. The Initial Developer Grant.
-
-Conditioned upon Your compliance with Section 3.1 below and subject to third party intellectual property claims, the Initial Developer hereby grants You a world-wide, royalty-free, non-exclusive license:
-
-(a) under intellectual property rights (other than patent or trademark) Licensable by Initial Developer, to use, reproduce, modify, display, perform, sublicense and distribute the Original Software (or portions thereof), with or without Modifications, and/or as part of a Larger Work; and
-
-(b) under Patent Claims infringed by the making, using or selling of Original Software, to make, have made, use, practice, sell, and offer for sale, and/or otherwise dispose of the Original Software (or portions thereof).
-
-(c) The licenses granted in Sections 2.1(a) and (b) are effective on the date Initial Developer first distributes or otherwise makes the Original Software available to a third party under the terms of this License.
-
-(d) Notwithstanding Section 2.1(b) above, no patent license is granted: (1) for code that You delete from the Original Software, or (2) for infringements caused by: (i) the modification of the Original Software, or (ii) the combination of the Original Software with other software or devices.
-
-2.2. Contributor Grant.
-
-Conditioned upon Your compliance with Section 3.1 below and subject to third party intellectual property claims, each Contributor hereby grants You a world-wide, royalty-free, non-exclusive license:
-
-(a) under intellectual property rights (other than patent or trademark) Licensable by Contributor to use, reproduce, modify, display, perform, sublicense and distribute the Modifications created by such Contributor (or portions thereof), either on an unmodified basis, with other Modifications, as Covered Software and/or as part of a Larger Work; and
-
-(b) under Patent Claims infringed by the making, using, or selling of Modifications made by that Contributor either alone and/or in combination with its Contributor Version (or portions of such combination), to make, use, sell, offer for sale, have made, and/or otherwise dispose of: (1) Modifications made by that Contributor (or portions thereof); and (2) the combination of Modifications made by that Contributor with its Contributor Version (or portions of such combination).
-
-(c) The licenses granted in Sections 2.2(a) and 2.2(b) are effective on the date Contributor first distributes or otherwise makes the Modifications available to a third party.
-
-(d) Notwithstanding Section 2.2(b) above, no patent license is granted: (1) for any code that Contributor has deleted from the Contributor Version; (2) for infringements caused by: (i) third party modifications of Contributor Version, or (ii) the combination of Modifications made by that Contributor with other software (except as part of the Contributor Version) or other devices; or (3) under Patent Claims infringed by Covered Software in the absence of Modifications made by that Contributor.
-
-3. Distribution Obligations.
-
-3.1. Availability of Source Code.
-
-Any Covered Software that You distribute or otherwise make available in Executable form must also be made available in Source Code form and that Source Code form must be distributed only under the terms of this License. You must include a copy of this License with every copy of the Source Code form of the Covered Software You distribute or otherwise make available. You must inform recipients of any such Covered Software in Executable form as to how they can obtain such Covered Software in Source Code form in a reasonable manner on or through a medium customarily used for software exchange.
-
-3.2. Modifications.
-
-The Modifications that You create or to which You contribute are governed by the terms of this License. You represent that You believe Your Modifications are Your original creation(s) and/or You have sufficient rights to grant the rights conveyed by this License.
-
-3.3. Required Notices.
-
-You must include a notice in each of Your Modifications that identifies You as the Contributor of the Modification. You may not remove or alter any copyright, patent or trademark notices contained within the Covered Software, or any notices of licensing or any descriptive text giving attribution to any Contributor or the Initial Developer.
-
-3.4. Application of Additional Terms.
-
-You may not offer or impose any terms on any Covered Software in Source Code form that alters or restricts the applicable version of this License or the recipients' rights hereunder. You may choose to offer, and to charge a fee for, warranty, support, indemnity or liability obligations to one or more recipients of Covered Software. However, you may do so only on Your own behalf, and not on behalf of the Initial Developer or any Contributor. You must make it absolutely clear that any such warranty, support, indemnity or liability obligation is offered by You alone, and You hereby agree to indemnify the Initial Developer and every Contributor for any liability incurred by the Initial Developer or such Contributor as a result of warranty, support, indemnity or liability terms You offer.
-
-3.5. Distribution of Executable Versions.
-
-You may distribute the Executable form of the Covered Software under the terms of this License or under the terms of a license of Your choice, which may contain terms different from this License, provided that You are in compliance with the terms of this License and that the license for the Executable form does not attempt to limit or alter the recipient's rights in the Source Code form from the rights set forth in this License. If You distribute the Covered Software in Executable form under a different license, You must make it absolutely clear that any terms which differ from this License are offered by You alone, not by the Initial Developer or Contributor. You hereby agree to indemnify the Initial Developer and every Contributor for any liability incurred by the Initial Developer or such Contributor as a result of any such terms You offer.
-
-3.6. Larger Works.
-
-You may create a Larger Work by combining Covered Software with other code not governed by the terms of this License and distribute the Larger Work as a single product. In such a case, You must make sure the requirements of this License are fulfilled for the Covered Software.
-
-4. Versions of the License.
-
-4.1. New Versions.
-
-Oracle is the initial license steward and may publish revised and/or new versions of this License from time to time. Each version will be given a distinguishing version number. Except as provided in Section 4.3, no one other than the license steward has the right to modify this License.
-
-4.2. Effect of New Versions.
-
-You may always continue to use, distribute or otherwise make the Covered Software available under the terms of the version of the License under which You originally received the Covered Software. If the Initial Developer includes a notice in the Original Software prohibiting it from being distributed or otherwise made available under any subsequent version of the License, You must distribute and make the Covered Software available under the terms of the version of the License under which You originally received the Covered Software. Otherwise, You may also choose to use, distribute or otherwise make the Covered Software available under the terms of any subsequent version of the License published by the license steward.
-
-4.3. Modified Versions.
-
-When You are an Initial Developer and You want to create a new license for Your Original Software, You may create and use a modified version of this License if You: (a) rename the license and remove any references to the name of the license steward (except to note that the license differs from this License); and (b) otherwise make it clear that the license contains terms which differ from this License.
-
-5. DISCLAIMER OF WARRANTY.
-
-COVERED SOFTWARE IS PROVIDED UNDER THIS LICENSE ON AN "AS IS" BASIS, WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, WITHOUT LIMITATION, WARRANTIES THAT THE COVERED SOFTWARE IS FREE OF DEFECTS, MERCHANTABLE, FIT FOR A PARTICULAR PURPOSE OR NON-INFRINGING. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE COVERED SOFTWARE IS WITH YOU. SHOULD ANY COVERED SOFTWARE PROVE DEFECTIVE IN ANY RESPECT, YOU (NOT THE INITIAL DEVELOPER OR ANY OTHER CONTRIBUTOR) ASSUME THE COST OF ANY NECESSARY SERVICING, REPAIR OR CORRECTION. THIS DISCLAIMER OF WARRANTY CONSTITUTES AN ESSENTIAL PART OF THIS LICENSE. NO USE OF ANY COVERED SOFTWARE IS AUTHORIZED HEREUNDER EXCEPT UNDER THIS DISCLAIMER.
-
-6. TERMINATION.
-
-6.1. This License and the rights granted hereunder will terminate automatically if You fail to comply with terms herein and fail to cure such breach within 30 days of becoming aware of the breach. Provisions which, by their nature, must remain in effect beyond the termination of this License shall survive.
-
-6.2. If You assert a patent infringement claim (excluding declaratory judgment actions) against Initial Developer or a Contributor (the Initial Developer or Contributor against whom You assert such claim is referred to as "Participant") alleging that the Participant Software (meaning the Contributor Version where the Participant is a Contributor or the Original Software where the Participant is the Initial Developer) directly or indirectly infringes any patent, then any and all rights granted directly or indirectly to You by such Participant, the Initial Developer (if the Initial Developer is not the Participant) and all Contributors under Sections 2.1 and/or 2.2 of this License shall, upon 60 days notice from Participant terminate prospectively and automatically at the expiration of such 60 day notice period, unless if within such 60 day period You withdraw Your claim with respect to the Participant Software against such Participant either unilaterally or pursuant to a written agreement with Participant.
-
-6.3. If You assert a patent infringement claim against Participant alleging that the Participant Software directly or indirectly infringes any patent where such claim is resolved (such as by license or settlement) prior to the initiation of patent infringement litigation, then the reasonable value of the licenses granted by such Participant under Sections 2.1 or 2.2 shall be taken into account in determining the amount or value of any payment or license.
-
-6.4. In the event of termination under Sections 6.1 or 6.2 above, all end user licenses that have been validly granted by You or any distributor hereunder prior to termination (excluding licenses granted to You by any distributor) shall survive termination.
-
-7. LIMITATION OF LIABILITY.
-
-UNDER NO CIRCUMSTANCES AND UNDER NO LEGAL THEORY, WHETHER TORT (INCLUDING NEGLIGENCE), CONTRACT, OR OTHERWISE, SHALL YOU, THE INITIAL DEVELOPER, ANY OTHER CONTRIBUTOR, OR ANY DISTRIBUTOR OF COVERED SOFTWARE, OR ANY SUPPLIER OF ANY OF SUCH PARTIES, BE LIABLE TO ANY PERSON FOR ANY INDIRECT, SPECIAL, INCIDENTAL, OR CONSEQUENTIAL DAMAGES OF ANY CHARACTER INCLUDING, WITHOUT LIMITATION, DAMAGES FOR LOSS OF GOODWILL, WORK STOPPAGE, COMPUTER FAILURE OR MALFUNCTION, OR ANY AND ALL OTHER COMMERCIAL DAMAGES OR LOSSES, EVEN IF SUCH PARTY SHALL HAVE BEEN INFORMED OF THE POSSIBILITY OF SUCH DAMAGES. THIS LIMITATION OF LIABILITY SHALL NOT APPLY TO LIABILITY FOR DEATH OR PERSONAL INJURY RESULTING FROM SUCH PARTY'S NEGLIGENCE TO THE EXTENT APPLICABLE LAW PROHIBITS SUCH LIMITATION. SOME JURISDICTIONS DO NOT ALLOW THE EXCLUSION OR LIMITATION OF INCIDENTAL OR CONSEQUENTIAL DAMAGES, SO THIS EXCLUSION AND LIMITATION MAY NOT APPLY TO YOU.
-
-8. U.S. GOVERNMENT END USERS.
-
-The Covered Software is a "commercial item," as that term is defined in 48 C.F.R. 2.101 (Oct. 1995), consisting of "commercial computer software" (as that term is defined at 48 C.F.R. § 252.227-7014(a)(1)) and "commercial computer software documentation" as such terms are used in 48 C.F.R. 12.212 (Sept. 1995). Consistent with 48 C.F.R. 12.212 and 48 C.F.R. 227.7202-1 through 227.7202-4 (June 1995), all U.S. Government End Users acquire Covered Software with only those rights set forth herein. This U.S. Government Rights clause is in lieu of, and supersedes, any other FAR, DFAR, or other clause or provision that addresses Government rights in computer software under this License.
-
-9. MISCELLANEOUS.
-
-This License represents the complete agreement concerning subject matter hereof. If any provision of this License is held to be unenforceable, such provision shall be reformed only to the extent necessary to make it enforceable. This License shall be governed by the law of the jurisdiction specified in a notice contained within the Original Software (except to the extent applicable law, if any, provides otherwise), excluding such jurisdiction's conflict-of-law provisions. Any litigation relating to this License shall be subject to the jurisdiction of the courts located in the jurisdiction and venue specified in a notice contained within the Original Software, with the losing party responsible for costs, including, without limitation, court costs and reasonable attorneys' fees and expenses. The application of the United Nations Convention on Contracts for the International Sale of Goods is expressly excluded. Any law or regulation which provides that the language of a contract shall be construed against the drafter shall not apply to this License. You agree that You alone are responsible for compliance with the United States export administration regulations (and the export control laws and regulation of any other countries) when You use, distribute or otherwise make available any Covered Software.
-
-10. RESPONSIBILITY FOR CLAIMS.
-
-As between Initial Developer and the Contributors, each party is responsible for claims and damages arising, directly or indirectly, out of its utilization of rights under this License and You agree to work with Initial Developer and Contributors to distribute such responsibility on an equitable basis. Nothing herein is intended or shall be deemed to constitute any admission of liability.
-
-NOTICE PURSUANT TO SECTION 9 OF THE COMMON DEVELOPMENT AND DISTRIBUTION LICENSE (CDDL)
-
-The code released under the CDDL shall be governed by the laws of the State of California (excluding conflict-of-law provisions). Any litigation relating to this License shall be subject to the jurisdiction of the Federal Courts of the Northern District of California and the state courts of the State of California, with venue lying in Santa Clara County, California.
\ No newline at end of file
diff --git a/solr/licenses/junit4-ant-2.7.6.jar.sha1 b/solr/licenses/junit4-ant-2.7.6.jar.sha1
deleted file mode 100644
index 5f47480..0000000
--- a/solr/licenses/junit4-ant-2.7.6.jar.sha1
+++ /dev/null
@@ -1 +0,0 @@
-38416b709b9d7604cd2b65e5e032b61b5d32e9f2
diff --git a/solr/licenses/junit4-ant-LICENSE-ASL.txt b/solr/licenses/junit4-ant-LICENSE-ASL.txt
deleted file mode 100644
index 7a4a3ea..0000000
--- a/solr/licenses/junit4-ant-LICENSE-ASL.txt
+++ /dev/null
@@ -1,202 +0,0 @@
-
-                                 Apache License
-                           Version 2.0, January 2004
-                        http://www.apache.org/licenses/
-
-   TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
-
-   1. Definitions.
-
-      "License" shall mean the terms and conditions for use, reproduction,
-      and distribution as defined by Sections 1 through 9 of this document.
-
-      "Licensor" shall mean the copyright owner or entity authorized by
-      the copyright owner that is granting the License.
-
-      "Legal Entity" shall mean the union of the acting entity and all
-      other entities that control, are controlled by, or are under common
-      control with that entity. For the purposes of this definition,
-      "control" means (i) the power, direct or indirect, to cause the
-      direction or management of such entity, whether by contract or
-      otherwise, or (ii) ownership of fifty percent (50%) or more of the
-      outstanding shares, or (iii) beneficial ownership of such entity.
-
-      "You" (or "Your") shall mean an individual or Legal Entity
-      exercising permissions granted by this License.
-
-      "Source" form shall mean the preferred form for making modifications,
-      including but not limited to software source code, documentation
-      source, and configuration files.
-
-      "Object" form shall mean any form resulting from mechanical
-      transformation or translation of a Source form, including but
-      not limited to compiled object code, generated documentation,
-      and conversions to other media types.
-
-      "Work" shall mean the work of authorship, whether in Source or
-      Object form, made available under the License, as indicated by a
-      copyright notice that is included in or attached to the work
-      (an example is provided in the Appendix below).
-
-      "Derivative Works" shall mean any work, whether in Source or Object
-      form, that is based on (or derived from) the Work and for which the
-      editorial revisions, annotations, elaborations, or other modifications
-      represent, as a whole, an original work of authorship. For the purposes
-      of this License, Derivative Works shall not include works that remain
-      separable from, or merely link (or bind by name) to the interfaces of,
-      the Work and Derivative Works thereof.
-
-      "Contribution" shall mean any work of authorship, including
-      the original version of the Work and any modifications or additions
-      to that Work or Derivative Works thereof, that is intentionally
-      submitted to Licensor for inclusion in the Work by the copyright owner
-      or by an individual or Legal Entity authorized to submit on behalf of
-      the copyright owner. For the purposes of this definition, "submitted"
-      means any form of electronic, verbal, or written communication sent
-      to the Licensor or its representatives, including but not limited to
-      communication on electronic mailing lists, source code control systems,
-      and issue tracking systems that are managed by, or on behalf of, the
-      Licensor for the purpose of discussing and improving the Work, but
-      excluding communication that is conspicuously marked or otherwise
-      designated in writing by the copyright owner as "Not a Contribution."
-
-      "Contributor" shall mean Licensor and any individual or Legal Entity
-      on behalf of whom a Contribution has been received by Licensor and
-      subsequently incorporated within the Work.
-
-   2. Grant of Copyright License. Subject to the terms and conditions of
-      this License, each Contributor hereby grants to You a perpetual,
-      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
-      copyright license to reproduce, prepare Derivative Works of,
-      publicly display, publicly perform, sublicense, and distribute the
-      Work and such Derivative Works in Source or Object form.
-
-   3. Grant of Patent License. Subject to the terms and conditions of
-      this License, each Contributor hereby grants to You a perpetual,
-      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
-      (except as stated in this section) patent license to make, have made,
-      use, offer to sell, sell, import, and otherwise transfer the Work,
-      where such license applies only to those patent claims licensable
-      by such Contributor that are necessarily infringed by their
-      Contribution(s) alone or by combination of their Contribution(s)
-      with the Work to which such Contribution(s) was submitted. If You
-      institute patent litigation against any entity (including a
-      cross-claim or counterclaim in a lawsuit) alleging that the Work
-      or a Contribution incorporated within the Work constitutes direct
-      or contributory patent infringement, then any patent licenses
-      granted to You under this License for that Work shall terminate
-      as of the date such litigation is filed.
-
-   4. Redistribution. You may reproduce and distribute copies of the
-      Work or Derivative Works thereof in any medium, with or without
-      modifications, and in Source or Object form, provided that You
-      meet the following conditions:
-
-      (a) You must give any other recipients of the Work or
-          Derivative Works a copy of this License; and
-
-      (b) You must cause any modified files to carry prominent notices
-          stating that You changed the files; and
-
-      (c) You must retain, in the Source form of any Derivative Works
-          that You distribute, all copyright, patent, trademark, and
-          attribution notices from the Source form of the Work,
-          excluding those notices that do not pertain to any part of
-          the Derivative Works; and
-
-      (d) If the Work includes a "NOTICE" text file as part of its
-          distribution, then any Derivative Works that You distribute must
-          include a readable copy of the attribution notices contained
-          within such NOTICE file, excluding those notices that do not
-          pertain to any part of the Derivative Works, in at least one
-          of the following places: within a NOTICE text file distributed
-          as part of the Derivative Works; within the Source form or
-          documentation, if provided along with the Derivative Works; or,
-          within a display generated by the Derivative Works, if and
-          wherever such third-party notices normally appear. The contents
-          of the NOTICE file are for informational purposes only and
-          do not modify the License. You may add Your own attribution
-          notices within Derivative Works that You distribute, alongside
-          or as an addendum to the NOTICE text from the Work, provided
-          that such additional attribution notices cannot be construed
-          as modifying the License.
-
-      You may add Your own copyright statement to Your modifications and
-      may provide additional or different license terms and conditions
-      for use, reproduction, or distribution of Your modifications, or
-      for any such Derivative Works as a whole, provided Your use,
-      reproduction, and distribution of the Work otherwise complies with
-      the conditions stated in this License.
-
-   5. Submission of Contributions. Unless You explicitly state otherwise,
-      any Contribution intentionally submitted for inclusion in the Work
-      by You to the Licensor shall be under the terms and conditions of
-      this License, without any additional terms or conditions.
-      Notwithstanding the above, nothing herein shall supersede or modify
-      the terms of any separate license agreement you may have executed
-      with Licensor regarding such Contributions.
-
-   6. Trademarks. This License does not grant permission to use the trade
-      names, trademarks, service marks, or product names of the Licensor,
-      except as required for reasonable and customary use in describing the
-      origin of the Work and reproducing the content of the NOTICE file.
-
-   7. Disclaimer of Warranty. Unless required by applicable law or
-      agreed to in writing, Licensor provides the Work (and each
-      Contributor provides its Contributions) on an "AS IS" BASIS,
-      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
-      implied, including, without limitation, any warranties or conditions
-      of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
-      PARTICULAR PURPOSE. You are solely responsible for determining the
-      appropriateness of using or redistributing the Work and assume any
-      risks associated with Your exercise of permissions under this License.
-
-   8. Limitation of Liability. In no event and under no legal theory,
-      whether in tort (including negligence), contract, or otherwise,
-      unless required by applicable law (such as deliberate and grossly
-      negligent acts) or agreed to in writing, shall any Contributor be
-      liable to You for damages, including any direct, indirect, special,
-      incidental, or consequential damages of any character arising as a
-      result of this License or out of the use or inability to use the
-      Work (including but not limited to damages for loss of goodwill,
-      work stoppage, computer failure or malfunction, or any and all
-      other commercial damages or losses), even if such Contributor
-      has been advised of the possibility of such damages.
-
-   9. Accepting Warranty or Additional Liability. While redistributing
-      the Work or Derivative Works thereof, You may choose to offer,
-      and charge a fee for, acceptance of support, warranty, indemnity,
-      or other liability obligations and/or rights consistent with this
-      License. However, in accepting such obligations, You may act only
-      on Your own behalf and on Your sole responsibility, not on behalf
-      of any other Contributor, and only if You agree to indemnify,
-      defend, and hold each Contributor harmless for any liability
-      incurred by, or claims asserted against, such Contributor by reason
-      of your accepting any such warranty or additional liability.
-
-   END OF TERMS AND CONDITIONS
-
-   APPENDIX: How to apply the Apache License to your work.
-
-      To apply the Apache License to your work, attach the following
-      boilerplate notice, with the fields enclosed by brackets "[]"
-      replaced with your own identifying information. (Don't include
-      the brackets!)  The text should be enclosed in the appropriate
-      comment syntax for the file format. We also recommend that a
-      file or class name and description of purpose be included on the
-      same "printed page" as the copyright notice for easier
-      identification within third-party archives.
-
-   Copyright [yyyy] [name of copyright owner]
-
-   Licensed under the Apache License, Version 2.0 (the "License");
-   you may not use this file except in compliance with the License.
-   You may obtain a copy of the License at
-
-       http://www.apache.org/licenses/LICENSE-2.0
-
-   Unless required by applicable law or agreed to in writing, software
-   distributed under the License is distributed on an "AS IS" BASIS,
-   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-   See the License for the specific language governing permissions and
-   limitations under the License.
\ No newline at end of file
diff --git a/solr/licenses/junit4-ant-NOTICE.txt b/solr/licenses/junit4-ant-NOTICE.txt
deleted file mode 100644
index 3c321aa..0000000
--- a/solr/licenses/junit4-ant-NOTICE.txt
+++ /dev/null
@@ -1,12 +0,0 @@
-
-JUnit4, parallel JUnit execution for ANT
-Copyright 2011-2012 Carrot Search s.c.
-http://labs.carrotsearch.com/randomizedtesting.html
-
-This product includes software developed at
-The Apache Software Foundation (http://www.apache.org/).
-
-This product includes asm (asmlib), BSD license
-This product includes Google Guava, ASL license
-This product includes simple-xml,   ASL license
-This product includes Google GSON,  ASL license
diff --git a/solr/packaging/build.gradle b/solr/packaging/build.gradle
index 55b78ca..a75ea19 100644
--- a/solr/packaging/build.gradle
+++ b/solr/packaging/build.gradle
@@ -46,8 +46,6 @@
    ":solr:contrib:analytics",
    ":solr:contrib:extraction",
    ":solr:contrib:clustering",
-   ":solr:contrib:dataimporthandler",
-   ":solr:contrib:dataimporthandler-extras",
    ":solr:contrib:jaegertracer-configurator",
    ":solr:contrib:langid",
    ":solr:contrib:ltr",
diff --git a/solr/server/README.md b/solr/server/README.md
index 6686c4f..3760227 100644
--- a/solr/server/README.md
+++ b/solr/server/README.md
@@ -98,8 +98,8 @@
 this directory for loading "contrib" plugins via relative paths.  
 
 If you make a copy of this example server and wish to use the 
-ExtractingRequestHandler (SolrCell), DataImportHandler (DIH), the 
-clustering component, or any other modules in "contrib", you will need to 
+ExtractingRequestHandler (SolrCell), the clustering component,
+or any other modules in "contrib", you will need to
 copy the required jars or update the paths to those jars in your 
 solrconfig.xml.
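
(For illustration only: in the sample configsets, such references take the form of `<lib>` directives in solrconfig.xml; the directories and regexes below are examples and depend on your install layout.)

[source,xml]
----
<!-- illustrative <lib> directives; adjust dir/regex to your layout -->
<lib dir="${solr.install.dir:../../../..}/contrib/extraction/lib" regex=".*\.jar" />
<lib dir="${solr.install.dir:../../../..}/dist/" regex="solr-cell-\d.*\.jar" />
----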
 
diff --git a/solr/server/build.xml b/solr/server/build.xml
deleted file mode 100644
index d3d7af6..0000000
--- a/solr/server/build.xml
+++ /dev/null
@@ -1,54 +0,0 @@
-<?xml version="1.0"?>
-<!--
-    Licensed to the Apache Software Foundation (ASF) under one or more
-    contributor license agreements.  See the NOTICE file distributed with
-    this work for additional information regarding copyright ownership.
-    The ASF licenses this file to You under the Apache License, Version 2.0
-    (the "License"); you may not use this file except in compliance with
-    the License.  You may obtain a copy of the License at
-
-        http://www.apache.org/licenses/LICENSE-2.0
-
-    Unless required by applicable law or agreed to in writing, software
-    distributed under the License is distributed on an "AS IS" BASIS,
-    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-    See the License for the specific language governing permissions and
-    limitations under the License.
- -->
-<project name="solr-server" default="resolve" xmlns:ivy="antlib:org.apache.ivy.ant">
-  <description>Solr Server</description>
-
-  <import file="../common-build.xml"/>
-
-  <!-- example tests are currently elsewhere -->
-  <target name="test"/>
-  <target name="test-nocompile"/>
-
-  <!-- this module has no javadocs -->
-  <target name="javadocs"/>
-
-  <!-- this module has no jar either -->
-  <target name="jar-core"/>
-
-  <!-- nothing to compile -->
-  <target name="compile-core"/>
-  <target name="compile-test"/>
-
-  <!-- nothing to cover -->
-  <target name="pitest"/>
-
-  <target name="resolve" depends="ivy-availability-check,ivy-fail,ivy-configure">
-    <sequential>
-    <!-- jetty libs in lib/ -->
-    <ivy:retrieve conf="jetty,servlet,metrics" type="jar,bundle" log="download-only" symlink="${ivy.symlink}"
-                  pattern="lib/[artifact]-[revision].[ext]" sync="${ivy.sync}"/>
-    <ivy:retrieve conf="logging" type="jar,bundle" log="download-only" symlink="${ivy.symlink}"
-                  pattern="lib/ext/[artifact]-[revision].[ext]" sync="${ivy.sync}"/>
-    <!-- start.jar - we don't use sync=true here, we don't own the dir, but
-         it's one jar with a constant name and we don't need it -->
-    <ivy:retrieve conf="start" type="jar" log="download-only" symlink="${ivy.symlink}" 
-                  pattern="start.jar"/>
-    </sequential>
-  </target>
-
-</project>
diff --git a/solr/server/etc/security.policy b/solr/server/etc/security.policy
index 57229f0..1030347 100644
--- a/solr/server/etc/security.policy
+++ b/solr/server/etc/security.policy
@@ -114,7 +114,7 @@
   // needed by hadoop htrace
   permission java.net.NetPermission "getNetworkInformation";
 
-  // needed by DIH
+  // needed by DIH - possibly even after DIH is a package
   permission java.sql.SQLPermission "deregisterDriver";
 
   permission java.util.logging.LoggingPermission "control";
diff --git a/solr/server/ivy.xml b/solr/server/ivy.xml
deleted file mode 100644
index 29a66e9..0000000
--- a/solr/server/ivy.xml
+++ /dev/null
@@ -1,74 +0,0 @@
-<!--
-   Licensed to the Apache Software Foundation (ASF) under one
-   or more contributor license agreements.  See the NOTICE file
-   distributed with this work for additional information
-   regarding copyright ownership.  The ASF licenses this file
-   to you under the Apache License, Version 2.0 (the
-   "License"); you may not use this file except in compliance
-   with the License.  You may obtain a copy of the License at
-
-     http://www.apache.org/licenses/LICENSE-2.0
-
-   Unless required by applicable law or agreed to in writing,
-   software distributed under the License is distributed on an
-   "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-   KIND, either express or implied.  See the License for the
-   specific language governing permissions and limitations
-   under the License.    
--->
-<ivy-module version="2.0" xmlns:maven="http://ant.apache.org/ivy/maven">
-  <info organisation="org.apache.solr" module="server"/>
-  <configurations defaultconfmapping="metrics->master;jetty->master;start->master;servlet->master;logging->master">
-    <conf name="metrics" description="metrics jars" transitive="true"/>
-    <conf name="jetty" description="jetty jars" transitive="false"/>
-    <conf name="start" description="jetty start jar" transitive="false"/>
-    <conf name="servlet" description="servlet-api jar" transitive="false"/>
-    <conf name="logging" description="logging setup" transitive="false"/>
-  </configurations>
-
-  <dependencies>
-    <dependency org="org.apache.logging.log4j" name="log4j-api" rev="${/org.apache.logging.log4j/log4j-api}" conf="logging"/>
-    <dependency org="org.apache.logging.log4j" name="log4j-core" rev="${/org.apache.logging.log4j/log4j-core}" conf="logging"/>
-    <dependency org="org.apache.logging.log4j" name="log4j-web" rev="${/org.apache.logging.log4j/log4j-web}" conf="logging"/>
-    <dependency org="org.apache.logging.log4j" name="log4j-slf4j-impl" rev="${/org.apache.logging.log4j/log4j-slf4j-impl}" conf="logging"/>
-    <dependency org="org.apache.logging.log4j" name="log4j-1.2-api" rev="${/org.apache.logging.log4j/log4j-1.2-api}" conf="logging"/>
-    <dependency org="com.lmax" name="disruptor" rev="${/com.lmax/disruptor}" conf="logging"/>
-    <dependency org="org.slf4j" name="slf4j-api" rev="${/org.slf4j/slf4j-api}" conf="logging"/>
-    <dependency org="org.slf4j" name="jcl-over-slf4j" rev="${/org.slf4j/jcl-over-slf4j}" conf="logging"/>
-    <dependency org="org.slf4j" name="jul-to-slf4j" rev="${/org.slf4j/jul-to-slf4j}" conf="logging"/>
-
-    <dependency org="io.dropwizard.metrics" name="metrics-core" rev="${/io.dropwizard.metrics/metrics-core}" conf="metrics" />
-    <dependency org="io.dropwizard.metrics" name="metrics-jetty9" rev="${/io.dropwizard.metrics/metrics-jetty9}" conf="metrics" />
-    <dependency org="io.dropwizard.metrics" name="metrics-jmx" rev="${/io.dropwizard.metrics/metrics-jmx}" conf="metrics" />
-    <dependency org="io.dropwizard.metrics" name="metrics-jvm" rev="${/io.dropwizard.metrics/metrics-jvm}" conf="metrics" />
-    <dependency org="io.dropwizard.metrics" name="metrics-graphite" rev="${/io.dropwizard.metrics/metrics-graphite}" conf="metrics" />
-
-    <dependency org="org.eclipse.jetty" name="jetty-continuation" rev="${/org.eclipse.jetty/jetty-continuation}" conf="jetty"/>
-    <dependency org="org.eclipse.jetty" name="jetty-deploy" rev="${/org.eclipse.jetty/jetty-deploy}" conf="jetty"/>
-    <dependency org="org.eclipse.jetty" name="jetty-http" rev="${/org.eclipse.jetty/jetty-http}" conf="jetty"/>
-    <dependency org="org.eclipse.jetty" name="jetty-io" rev="${/org.eclipse.jetty/jetty-io}" conf="jetty"/>
-    <dependency org="org.eclipse.jetty" name="jetty-jmx" rev="${/org.eclipse.jetty/jetty-jmx}" conf="jetty"/>
-    <dependency org="org.eclipse.jetty" name="jetty-rewrite" rev="${/org.eclipse.jetty/jetty-rewrite}" conf="jetty"/>
-    <dependency org="org.eclipse.jetty" name="jetty-security" rev="${/org.eclipse.jetty/jetty-security}" conf="jetty"/>
-    <dependency org="org.eclipse.jetty" name="jetty-server" rev="${/org.eclipse.jetty/jetty-server}" conf="jetty"/>
-    <dependency org="org.eclipse.jetty" name="jetty-servlet" rev="${/org.eclipse.jetty/jetty-servlet}" conf="jetty"/>
-    <dependency org="org.eclipse.jetty" name="jetty-servlets" rev="${/org.eclipse.jetty/jetty-servlets}" conf="jetty"/>
-    <dependency org="org.eclipse.jetty" name="jetty-util" rev="${/org.eclipse.jetty/jetty-util}" conf="jetty"/>
-    <dependency org="org.eclipse.jetty" name="jetty-webapp" rev="${/org.eclipse.jetty/jetty-webapp}" conf="jetty"/>
-    <dependency org="org.eclipse.jetty" name="jetty-xml" rev="${/org.eclipse.jetty/jetty-xml}" conf="jetty"/>
-    <dependency org="org.eclipse.jetty" name="jetty-alpn-java-server" rev="${/org.eclipse.jetty/jetty-alpn-java-server}" conf="jetty"/>
-    <dependency org="org.eclipse.jetty" name="jetty-alpn-server" rev="${/org.eclipse.jetty/jetty-alpn-server}" conf="jetty"/>
-
-    <dependency org="org.eclipse.jetty.http2" name="http2-server" rev="${/org.eclipse.jetty.http2/http2-server}" conf="jetty"/>
-    <dependency org="org.eclipse.jetty.http2" name="http2-common" rev="${/org.eclipse.jetty.http2/http2-common}" conf="jetty"/>
-    <dependency org="org.eclipse.jetty.http2" name="http2-hpack" rev="${/org.eclipse.jetty.http2/http2-hpack}" conf="jetty"/>
-
-    <dependency org="javax.servlet" name="javax.servlet-api" rev="${/javax.servlet/javax.servlet-api}" conf="jetty"/>
-
-    <dependency org="org.eclipse.jetty" name="jetty-start" rev="${/org.eclipse.jetty/jetty-start}" conf="start">
-      <artifact name="jetty-start" type="jar" maven:classifier="shaded"/>
-    </dependency>
-
-    <exclude org="*" ext="*" matcher="regexp" type="${ivy.exclude.types}"/> 
-  </dependencies>
-</ivy-module>
diff --git a/solr/server/solr/configsets/_default/conf/solrconfig.xml b/solr/server/solr/configsets/_default/conf/solrconfig.xml
index 33b6cd5..a69e46c 100644
--- a/solr/server/solr/configsets/_default/conf/solrconfig.xml
+++ b/solr/server/solr/configsets/_default/conf/solrconfig.xml
@@ -582,27 +582,24 @@
      Circuit Breaker Section - This section consists of configurations for
      circuit breakers
      ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ -->
-  <circuitBreaker>
-    <!-- Enable Circuit Breakers
+
+    <!-- Circuit Breakers
 
     Circuit breakers are designed to provide stability and predictable query
      execution. They prevent operations that can take down the node and cause
      noisy neighbour issues.
 
      This flag is the uber control switch which controls the activation/deactivation of all circuit
-     breakers. At the moment, the only circuit breaker (max JVM circuit breaker) does not have its
-     own specific configuration. However, if a circuit breaker wishes to be independently configurable,
+     breakers. If a circuit breaker wishes to be independently configurable,
     it is free to add its specific configuration but needs to ensure that this flag is always
     respected - this flag has veto over all independent configuration flags.
     -->
-    <!--
-   <useCircuitBreakers>true</useCircuitBreakers>
-    -->
+    <circuitBreakers enabled="true">
 
-    <!-- Memory Circuit Breaker Threshold In Percentage
+    <!-- Memory Circuit Breaker Configuration
 
-     Specific configuration for max JVM heap usage circuit breaker. This configuration defines the
-     threshold percentage of maximum heap allocated beyond which queries will be rejected until the
+     Specific configuration for the max JVM heap usage circuit breaker. This configuration defines
+     whether the circuit breaker is enabled and the threshold percentage of maximum heap allocated
+     beyond which queries will be rejected until the
      current JVM usage goes below the threshold. The valid range for this value is 50-95.
 
      Consider a scenario where the max heap allocated is 4 GB and memoryCircuitBreakerThreshold is
@@ -613,12 +610,31 @@
      If you see queries getting rejected with 503 error code, check for "Circuit Breakers tripped"
      in logs and the corresponding error message should tell you what transpired (if the failure
      was caused by tripped circuit breakers).
+
+     If, at any point, the current JVM heap usage goes above 3 GB, queries will be rejected until
+     the heap usage goes below 3 GB again.
     -->
     <!--
-   <memoryCircuitBreakerThresholdPct>100</memoryCircuitBreakerThresholdPct>
+   <memBreaker enabled="true" threshold="75"/>
     -->
 
-  </circuitBreaker>
+      <!-- CPU Circuit Breaker Configuration
+
+     Specific configuration for the CPU utilization based circuit breaker. This configuration
+     defines whether the circuit breaker is enabled and the average system load over the last
+     minute beyond which the circuit breaker should start rejecting queries.
+    -->
+
+      <!--
+       <cpuBreaker enabled="true" threshold="75"/>
+      -->
+
+  </circuitBreakers>
 
 
   <!-- Request Dispatcher
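
Putting the pieces above together, a fully specified circuit breaker section would look like the following sketch (element and attribute names are taken from the hunk above; the threshold values are illustrative, not recommendations):

[source,xml]
----
<!-- sketch: both breakers enabled, with example thresholds -->
<circuitBreakers enabled="true">
  <!-- reject queries once JVM heap usage exceeds 75% of max heap -->
  <memBreaker enabled="true" threshold="75"/>
  <!-- reject queries once the one-minute average CPU load exceeds the threshold -->
  <cpuBreaker enabled="true" threshold="75"/>
</circuitBreakers>
----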
diff --git a/solr/solr-ref-guide/README.adoc b/solr/solr-ref-guide/README.adoc
index b8112c1..628da06 100644
--- a/solr/solr-ref-guide/README.adoc
+++ b/solr/solr-ref-guide/README.adoc
@@ -39,24 +39,26 @@
 
 
 == Building the Guide
-For details on building the ref guide, see `ant -p`.
+For details on building the ref guide, run `gradlew tasks` and see the `Documentation Tasks` section.
 
-There are currently four available targets:
+There are currently five available targets:
 
-* `ant default`: builds the HTML versions of the Solr Ref Guide.
-* `ant build-site`: also builds the HTML version.
-* `ant clean`: removes the `../build/solr-ref-guide` directory.
+* `gradlew buildSite`: Builds an HTML site with Jekyll and verifies that anchors and links are valid.
+* `gradlew documentation`: Generates all documentation.
+* `gradlew javadoc`: Generates Javadoc API documentation for the main source code.
+* `gradlew renderJavadoc`: Generates Javadoc API documentation for the main source code by directly invoking the javadoc tool.
+* `gradlew renderSiteJavadoc`: Generates Javadoc API documentation for the site (relative links).
 
-The output of all builds will be located in `../build/solr-ref-guide`.
+The output of all builds will be located in `../solr-ref-guide/build`.
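
For example, assuming a checkout root as the working directory, a local build might look like:

[source,bash]
----
# list available tasks; the ref guide targets appear under "Documentation Tasks"
./gradlew tasks
# build the HTML site with Jekyll and verify anchors/links
./gradlew buildSite
----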
 
 == Key Directories
 Key directories to be aware of:
 
 * `src` - where all human edited `*.adoc` files related to the Guide live, as well as various configuration, theme, and template files.
 * `tools` - custom Java code for parsing metadata in our `src/*.adoc` files to produce some `_data/` files for site navigation purposes.
-* `../build/solr-ref-guide/content` - a copy of the `src` dir generated by ant where:
-** `*.template` files are processed to replace ant properties with their runtime values
-** some `../build/solr-ref-guide/content/_data` files are generated by our java tools based header attributes from each of the `*.adoc` files
-* `../build/solr-ref-guide/html-site` - HTML generated version of the ref guide
+* `../solr-ref-guide/build/content` - a copy of the `src` dir generated by gradle where:
+** `*.template` files are processed to replace gradle (TODO CHECK!) properties with their runtime values
+** some `../solr-ref-guide/build/content/_data` files are generated by our java tools based on header attributes from each of the `*.adoc` files
+* `../solr-ref-guide/build/html-site` - HTML generated version of the ref guide
 
 See the additional documentation in `src/metadocs` for more information about how to edit files, build for releases, or modifying any Jekyll templates.
diff --git a/solr/solr-ref-guide/build.gradle b/solr/solr-ref-guide/build.gradle
index f61a9e0..ac48516 100644
--- a/solr/solr-ref-guide/build.gradle
+++ b/solr/solr-ref-guide/build.gradle
@@ -108,8 +108,6 @@
     main {
         java {
             srcDirs = ['tools']
-            exclude "**/CustomizedAsciidoctorAntTask.java"
-            exclude "**/asciidoctor-antlib.xml"
         }
     }
 
diff --git a/solr/solr-ref-guide/build.xml b/solr/solr-ref-guide/build.xml
deleted file mode 100644
index 6ee2396..0000000
--- a/solr/solr-ref-guide/build.xml
+++ /dev/null
@@ -1,305 +0,0 @@
-<?xml version="1.0"?>
-<!--
-   Licensed to the Apache Software Foundation (ASF) under one
-   or more contributor license agreements.  See the NOTICE file
-   distributed with this work for additional information
-   regarding copyright ownership.  The ASF licenses this file
-   to you under the Apache License, Version 2.0 (the
-   "License"); you may not use this file except in compliance
-   with the License.  You may obtain a copy of the License at
-
-     http://www.apache.org/licenses/LICENSE-2.0
-
-   Unless required by applicable law or agreed to in writing,
-   software distributed under the License is distributed on an
-   "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-   KIND, either express or implied.  See the License for the
-   specific language governing permissions and limitations
-   under the License.
--->
-<project name="solr-ref-guide" default="default" xmlns:asciidoctor="antlib:org.asciidoctor.ant" >
-
-  <import file="../common-build.xml"/>
-  <!-- properties to use in our docs -->
-  <loadresource property="solr-docs-version">
-    <!-- NOTE: this is specifically only the "major.minor", it does not include the ".bugfix"
-         This is because we (currently) only release the guide for minor versions.
-    -->
-    <propertyresource name="version"/>
-    <filterchain>
-      <tokenfilter>
-        <filetokenizer/>
-        <replaceregex pattern="^(\d+\.\d+)(|\..*)$" replace="\1" flags="s"/>
-      </tokenfilter>
-    </filterchain>
-  </loadresource>
-  <loadresource property="solr-docs-version-path">
-    <!-- NOTE: This is the ${solr-docs-version} as a path suitable for linking to javadocs -->
-    <propertyresource name="solr-docs-version"/>
-    <filterchain>
-      <tokenfilter>
-        <filetokenizer/>
-        <replaceregex pattern="^(\d+)\.(\d+)(|\..*)$" replace="\1_\2_0" flags="s"/>
-      </tokenfilter>
-    </filterchain>
-  </loadresource>
-  <!-- NOTE: ${solr-guide-version} is the version of this ref-guide.
-
-       By default, we assume this is the same as ${solr-docs-version} with a "-DRAFT" suffix
-       When releasing, specify an explicit value of this property on the command line.
-
-       NOTE: the ${solr-guide-version} used *may* be different from the version of Solr the guide
-       covers if we decide to do a bug-fix release of the ref-guide
-
-       Examples: (assume branch_6_1 where version=6.1.SOMETHING)
-
-       Basic nightly/local build of the 6.1 guide...
-
-       => ant build-site
-
-       Official build of the 6.1 guide...
-
-       => ant build-site -Dsolr-guide-version=6.1
-
-       Release of a "6.1.1" ref guide, correcting some serious error in the docs
-       (even if there is no 6.1.1 version - or if we've already released up to 6.1.5 - of Solr itself)
-
-       => ant build-site -Dsolr-guide-version=6.1.1
-
-  -->
-  <property name="solr-guide-version" value="${solr-docs-version}-DRAFT" />
-  <condition property="solr-guide-draft-status" value="" else="DRAFT">
-    <matches pattern="^\d+\.\d+(|\.\d+)$" string="${solr-guide-version}" />
-  </condition>
-
-  <loadresource property="solr-guide-version-path">
-    <!-- NOTE: This is the ${solr-guide-version} as a path suitable for use publishing the HTML -->
-    <propertyresource name="solr-guide-version"/>
-    <filterchain>
-      <tokenfilter>
-        <filetokenizer/>
-        <replaceregex pattern="^(\d+)\.(\d+)(-DRAFT)?.*" replace="\1_\2\3" flags="s"/>
-      </tokenfilter>
-    </filterchain>
-  </loadresource>
-
-  <!-- where we link to javadocs from the html guide, and if we validate them, is all dependent
-       on the 'local.javadocs' sysprop -->
-  <condition property="check-all-relative-links" value="-check-all-relative-links" else="">
-    <isset property="local.javadocs" />
-  </condition>
-  <condition property="html-solr-javadocs"
-             value="link:../../docs/"
-             else="https://lucene.apache.org/solr/${solr-docs-version-path}/">
-    <isset property="local.javadocs" />
-  </condition>
-  <condition property="html-lucene-javadocs"
-             value="link:../../../../lucene/build/docs/"
-             else="https://lucene.apache.org/core/${solr-docs-version-path}/">
-    <isset property="local.javadocs" />
-  </condition>
-
-  <property name="build.content.dir" location="${build.dir}/content" />
-  <property name="main-page" value="index" />
-  <!-- for pulling in versions of major deps -->
-  <property prefix="ivyversions" file="${common.dir}/ivy-versions.properties"/>
-
-  <!-- ====== TOOLS FOR GENERATING/VALIDATING BITS OF THE SITE ======= -->
-  <property name="tools-jar-name" value="solr-ref-guide-tools.jar" />
-  <path id="tools-compile-classpath">
-    <fileset dir="lib">
-      <include name="**/*.jar"/>
-      <exclude name="**/${tools-jar-name}" />
-    </fileset>
-  </path>
-  <path id="tools-run-classpath">
-    <fileset dir="lib">
-      <include name="**/*.jar"/>
-    </fileset>
-    <fileset dir="${build.dir}">
-      <include name="**/*.jar"/>
-    </fileset>
-  </path>
-
-  <target name="clean">
-    <delete dir="${build.dir}"/>
-  </target>
-
-  <target name="build-tools-jar" depends="resolve" description="Builds the custom java tools we use for generating some data files from page metadata">
-    <mkdir dir="${build.dir}/classes"/>
-    <!-- NOTE: we include the ant runtime so we can compile our customized version of the asciidoctor ant task -->
-    <javac debug="yes"
-           debuglevel="source,lines,vars"
-           destdir="${build.dir}/classes"
-           includeantruntime="true">
-      <compilerarg value="-Xlint:all"/>
-      <classpath refid="tools-compile-classpath"/>
-      <src path="tools/"/>
-    </javac>
-    <copy todir="${build.dir}/classes" file="tools/asciidoctor-antlib.xml" />
-    <jar destfile="${build.dir}/${tools-jar-name}">
-      <fileset dir="${build.dir}/classes"
-               includes="**/*.class,**/*.xml"/>
-    </jar>
-  </target>
-
-  <target name="build-init" description="Prepares the build's 'content' dir, copying over src files and transforming *.template files in the process">
-    <delete dir="${build.content.dir}" />
-    <mkdir dir="${build.content.dir}" />
-
-    <echo>Copying all non template files from src ...</echo>
-    <copy todir="${build.content.dir}">
-      <fileset dir="src">
-        <exclude name="**/*.template"/>
-      </fileset>
-    </copy>
-    <echo>Copy (w/prop replacement) any template files from src...</echo>
-    <copy todir="${build.content.dir}">
-      <fileset dir="src">
-        <include name="**/*.template"/>
-      </fileset>
-      <mapper type="glob" from="*.template" to="*"/>
-      <filterchain>
-        <expandproperties/>
-      </filterchain>
-    </copy>
-  </target>
-
-  <target name="build-nav-data-files" depends="build-init,build-tools-jar" description="creates nav based data files">
-    <mkdir dir="${build.content.dir}/_data"/>
-    <java classname="BuildNavDataFiles"
-          failonerror="true"
-          fork="true">
-      <classpath refid="tools-run-classpath"/>
-      <arg value="${build.content.dir}"/>
-      <arg value="${main-page}"/>
-    </java>
-  </target>
-
-  <macrodef name="asciidoctor-convert">
-    <!-- custom macro that fills in all the defaults we care about when running asciidoctor-ant
-         The primary purpose for this is to build a bare-bones HTML version for validating the
-         document structure (ie: duplicate anchors, links all point to valid anchors,
-         etc...) that we want to be able to validate even if the current user doesn't have jekyll installed
-    -->
-    <attribute name="sourceDirectory"/>
-    <attribute name="sourceDocumentName"/>
-    <attribute name="outputDirectory"/>
-    <attribute name="backend"/>
-    <attribute name="solr-javadocs" default="${html-solr-javadocs}" />
-    <attribute name="lucene-javadocs" default="${html-lucene-javadocs}" />
-    <attribute name="headerFooter" default="true" />
-    <sequential>
-      <!-- NOTE: we have our own variant on the asciidoctor-ant task, so that sourceDocumentName=""
-           is treated the same as if it's unset (ie: null)
-      -->
-      <taskdef uri="antlib:org.asciidoctor.ant" resource="asciidoctor-antlib.xml"
-               classpathref="tools-run-classpath"/>
-      <asciidoctor:convert
-                   sourceDirectory="@{sourceDirectory}"
-                   sourceDocumentName="@{sourceDocumentName}"
-                   baseDir="${build.content.dir}"
-                   outputDirectory="@{outputDirectory}"
-                   preserveDirectories="true"
-                   backend="@{backend}"
-                   headerFooter="@{headerFooter}"
-                   extensions="adoc"
-                   sourceHighlighter="coderay"
-                   imagesDir="${build.content.dir}"
-                   doctype="book"
-                   safemode="unsafe">
-        <attribute key="attribute-missing" value="warn" />
-        <attribute key="icons" value="font" />
-        <attribute key="icon-set" value="fa" />
-        <attribute key="figure-caption!" value='' />
-        <attribute key="idprefix" value='' />
-        <attribute key="idseparator" value='-' />
-        <!-- attributes used in adoc files -->
-        <!-- NOTE: If you add any attributes here for use in adoc files, you almost certainly need to also add
-             them to the _config.yml.template file for building the jekyll site as well
-        -->
-        <attribute key="solr-root-path" value="../../../" />
-        <attribute key="solr-guide-draft-status" value="${solr-guide-draft-status}" />
-        <attribute key="solr-guide-version" value="${solr-guide-version}" />
-        <attribute key="solr-docs-version" value="${solr-docs-version}" />
-        <attribute key="java-javadocs" value="${javadoc.link}" />
-        <attribute key="solr-javadocs" value="@{solr-javadocs}" />
-        <attribute key="lucene-javadocs" value="@{lucene-javadocs}" />
-        <attribute key="build-date" value="${DSTAMP}" />
-        <attribute key="build-year" value="${current.year}" />
-        <attribute key="ivy-commons-codec-version" value="${ivyversions./commons-codec/commons-codec}" />
-        <attribute key="ivy-dropwizard-version" value="${ivyversions.io.dropwizard.metrics.version}" />
-        <attribute key="ivy-log4j-version" value="${ivyversions.org.log4j.major.version}" />
-        <attribute key="ivy-opennlp-version" value="${ivyversions./org.apache.opennlp/opennlp-tools}" />
-        <attribute key="ivy-tika-version" value="${ivyversions.org.apache.tika.version}" />
-        <attribute key="ivy-velocity-tools-version" value="${ivyversions.org.apache.velocity.tools.version}" />
-        <attribute key="ivy-zookeeper-version" value="${ivyversions.org.apache.zookeeper.version}" />
-      </asciidoctor:convert>
-    </sequential>
-  </macrodef>
-
-
-  <!-- ======= HTML Site Build =======
-       Builds site with Jekyll.
-       This (for now) assumes that Jekyll (http://jekyllrb.com) is installed locally. -->
-  <target name="build-site"
-          depends="-build-site"
-          description="Builds an HTML Site w/Jekyll and verifies the anchors+links are valid" >
-    <java classname="CheckLinksAndAnchors"
-          failonerror="true"
-          fork="true">
-      <classpath refid="tools-run-classpath"/>
-      <arg value="${build.dir}/html-site"/>
-      <arg value="${check-all-relative-links}" />
-    </java>
-    <echo>Ready to browse site: ${build.dir}/html-site/${main-page}.html</echo>
-  </target>
-  <target name="-build-site"
-          depends="build-init,build-nav-data-files" >
-    <echo>Running Jekyll...</echo>
-    <exec executable="jekyll" dir="${build.content.dir}" failonerror="true">
-      <arg value="build" />
-      <arg value="--verbose"/>
-    </exec>
-  </target>
-
-  <!-- ======= HTML Bare Bones Conversion =======
-       Does a very raw conversion of the adoc files to HTML for the purpose of link & anchor checking
-
-       Unlike the "HTML Site Build" above, this does *NOT* require Jekyll, and can be done entirely
-       with ivy deps fetched automatically.
-       -->
-  <target name="bare-bones-html-validation" depends="build-init,build-nav-data-files"
-          description="Builds (w/o Jekyll) a very simple html version of the guide and runs link/anchor validation on it">
-
-    <delete dir="${build.dir}/bare-bones-html"/>
-    <mkdir dir="${build.dir}/bare-bones-html"/>
-    <asciidoctor-convert sourceDirectory="${build.content.dir}"
-                         sourceDocumentName=""
-                         outputDirectory="${build.dir}/bare-bones-html"
-                         headerFooter="false"
-                         backend="html5"
-                         solr-javadocs="${html-solr-javadocs}"
-                         lucene-javadocs="${html-lucene-javadocs}"
-                         />
-
-    <java classname="CheckLinksAndAnchors"
-          failonerror="true"
-          fork="true">
-      <classpath refid="tools-run-classpath"/>
-      <arg value="${build.dir}/bare-bones-html"/>
-      <arg value="-bare-bones" />
-      <arg value="${check-all-relative-links}" />
-    </java>
-    <echo>Validated Links &amp; Anchors via: ${build.dir}/bare-bones-html/</echo>
-  </target>
-
-  <target name="default"
-          description="Builds an HTML version of the ref guide"
-          depends="build-site">
-    <echo>SITE: ${build.dir}/html-site/${main-page}.html</echo>
-  </target>
-
-
-
-</project>
diff --git a/solr/solr-ref-guide/ivy.xml b/solr/solr-ref-guide/ivy.xml
deleted file mode 100644
index 390eee0..0000000
--- a/solr/solr-ref-guide/ivy.xml
+++ /dev/null
@@ -1,35 +0,0 @@
-<!--
-   Licensed to the Apache Software Foundation (ASF) under one
-   or more contributor license agreements.  See the NOTICE file
-   distributed with this work for additional information
-   regarding copyright ownership.  The ASF licenses this file
-   to you under the Apache License, Version 2.0 (the
-   "License"); you may not use this file except in compliance
-   with the License.  You may obtain a copy of the License at
-
-     http://www.apache.org/licenses/LICENSE-2.0
-
-   Unless required by applicable law or agreed to in writing,
-   software distributed under the License is distributed on an
-   "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-   KIND, either express or implied.  See the License for the
-   specific language governing permissions and limitations
-   under the License.    
--->
-<ivy-module version="2.0">
-  <info organisation="org.apache.solr" module="ref-guide-tools"/>
-  <configurations defaultconfmapping="compile->master">
-    <conf name="compile" transitive="false" />
-  </configurations>
-  <dependencies>
-    <dependency org="org.asciidoctor" name="asciidoctor-ant" rev="${/org.asciidoctor/asciidoctor-ant}" conf="compile" />
-    <dependency org="com.vaadin.external.google" name="android-json" rev="${/com.vaadin.external.google/android-json}" conf="compile" />
-    <dependency org="org.jsoup" name="jsoup" rev="${/org.jsoup/jsoup}" conf="compile" />
-    <dependency org="org.slf4j" name="jcl-over-slf4j" rev="${/org.slf4j/jcl-over-slf4j}" conf="compile"/>
-    <dependency org="org.slf4j" name="slf4j-api" rev="${/org.slf4j/slf4j-api}" conf="compile"/>
-    <dependency org="org.slf4j" name="slf4j-simple" rev="${/org.slf4j/slf4j-simple}" conf="compile"/>
-    <dependency org="org.apache.logging.log4j" name="log4j-api" rev="${/org.apache.logging.log4j/log4j-api}" conf="compile"/>
-    <dependency org="org.apache.logging.log4j" name="log4j-core" rev="${/org.apache.logging.log4j/log4j-core}" conf="compile"/>
-
-  </dependencies>
-</ivy-module>
diff --git a/solr/solr-ref-guide/src/adding-custom-plugins-in-solrcloud-mode.adoc b/solr/solr-ref-guide/src/adding-custom-plugins-in-solrcloud-mode.adoc
deleted file mode 100644
index 1d950d2..0000000
--- a/solr/solr-ref-guide/src/adding-custom-plugins-in-solrcloud-mode.adoc
+++ /dev/null
@@ -1,333 +0,0 @@
-= Adding Custom Plugins in SolrCloud Mode
-// Licensed to the Apache Software Foundation (ASF) under one
-// or more contributor license agreements.  See the NOTICE file
-// distributed with this work for additional information
-// regarding copyright ownership.  The ASF licenses this file
-// to you under the Apache License, Version 2.0 (the
-// "License"); you may not use this file except in compliance
-// with the License.  You may obtain a copy of the License at
-//
-//   http://www.apache.org/licenses/LICENSE-2.0
-//
-// Unless required by applicable law or agreed to in writing,
-// software distributed under the License is distributed on an
-// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-// KIND, either express or implied.  See the License for the
-// specific language governing permissions and limitations
-// under the License.
-
-In SolrCloud mode, custom plugins need to be shared across all nodes of the cluster.
-
-.Deprecated
-[IMPORTANT]
-====
-The functionality here is a subset of the <<package-manager.adoc#package-manager,Package Management>> system.  It will no longer be supported in Solr 9.
-====
-
-When you run Solr in SolrCloud mode and want to use custom code (such as custom analyzers, tokenizers, query parsers, and other plugins), it can be cumbersome to add jars to the classpath on all nodes in your cluster. Using the <<blob-store-api.adoc#blob-store-api,Blob Store API>> and special commands with the <<config-api.adoc#config-api,Config API>>, you can upload jars to a special system-level collection and dynamically load plugins from them at runtime without needing to restart any nodes.
-
-.This Feature is Disabled By Default
-[IMPORTANT]
-====
-In addition to requiring that Solr is running in <<solrcloud.adoc#solrcloud,SolrCloud>> mode, this feature is also disabled by default unless all Solr nodes are run with the `-Denable.runtime.lib=true` option on startup.
-
-Before enabling this feature, users should carefully consider the issues discussed in the <<Securing Runtime Libraries>> section below.
-====
-
-== Uploading Jar Files
-
-You can use your own service to host the jars, or you can use Solr itself to host them.
-
-Use the <<blob-store-api.adoc#blob-store-api,Blob Store API>> to upload your jar files to Solr. This will put your jars in the `.system` collection and distribute them across your SolrCloud nodes. These jars are added to a separate classloader and are only accessible to components that are configured with the property `runtimeLib=true`. These components are loaded lazily because the `.system` collection may not be loaded when a particular core is loaded.
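
(For reference, the upload itself is a plain POST of the jar bytes to the `.system` blob handler, using the same placeholders as Step 5 of the signing instructions below:)

[source,bash]
----
# upload a jar to the blob store; {filename} and {blobname} are placeholders
curl -X POST -H 'Content-Type: application/octet-stream' \
  --data-binary @{filename} \
  http://localhost:8983/solr/.system/blob/{blobname}
----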
-
-== Config API Commands to use Jars as Runtime Libraries
-
-The runtime library feature uses a special set of <<config-api.adoc#config-api,Config API>> commands to manage the list of runtime libraries, adding, updating, or removing jar files that are currently available in the blob store.
-
-The following commands are used to manage runtime libs:
-
-* `add-runtimelib`
-* `update-runtimelib`
-* `delete-runtimelib`
-
-[.dynamic-tabs]
---
-[example.tab-pane#v1manage-libs]
-====
-[.tab-label]*V1 API*
-
-[source,bash]
-----
-curl http://localhost:8983/solr/techproducts/config -H 'Content-type:application/json' -d '{
-   "add-runtimelib": { "name":"jarblobname", "version":2 },
-   "update-runtimelib": { "name":"jarblobname", "version":3 },
-   "delete-runtimelib": "jarblobname"
-}'
-----
-====
-
-[example.tab-pane#v2manage-libs]
-====
-[.tab-label]*V2 API*
-
-[source,bash]
-----
-curl http://localhost:8983/api/collections/techproducts/config -H 'Content-type:application/json' -d '{
-   "add-runtimelib": { "name":"jarblobname", "version":2 },
-   "update-runtimelib": { "name":"jarblobname", "version":3 },
-   "delete-runtimelib": "jarblobname"
-}'
-----
-====
---
-
-The name to use is the name of the blob that you specified when you uploaded your jar to the blob store. You should also include the version of the jar found in the blob store that you want to use. These details are added to `configoverlay.json`.
-
-The default `SolrResourceLoader` does not have visibility to the jars that have been defined as runtime libraries. A separate classloader that can access these jars is made available only to those components that are specially annotated.
-
-Every pluggable component can have an optional extra attribute called `runtimeLib=true`, which means that the component is not loaded at core load time. Instead, it will be loaded on demand. If all the dependent jars are not available when the component is loaded, an error is thrown.
-
-This example shows creating a ValueSourceParser using a jar that has been loaded to the Blob store.
-
-[.dynamic-tabs]
---
-[example.tab-pane#v1add-jar]
-====
-[.tab-label]*V1 API*
-
-[source,bash]
-----
-curl http://localhost:8983/solr/techproducts/config -H 'Content-type:application/json' -d '{
-  "create-valuesourceparser": {
-    "name": "nvl",
-    "runtimeLib": true,
-    "class": "solr.org.apache.solr.search.function.NvlValueSourceParser",
-    "nvlFloatValue": 0.0 }
-}'
-----
-====
-
-[example.tab-pane#v2add-jar]
-====
-[.tab-label]*V2 API*
-
-[source,bash]
-----
-curl http://localhost:8983/api/collections/techproducts/config -H 'Content-type:application/json' -d '{
-  "create-valuesourceparser": {
-    "name": "nvl",
-    "runtimeLib": true,
-    "class": "solr.org.apache.solr.search.function.NvlValueSourceParser",
-    "nvlFloatValue": 0.0 }
-}'
-----
-====
---
-
-== Example: Using an external service to host your jars
-
-Hosting your jars externally is more convenient if you have a reliable server available to host them. There is no need to create and manage a `.system` collection.
-
-Step 1: Download a jar from github to the current directory
-
-[source,bash]
-----
- curl -L -o runtimelibs.jar 'https://github.com/apache/lucene-solr/blob/master/solr/core/src/test-files/runtimecode/runtimelibs.jar.bin?raw=true'
-----
-Step 2: Get the `sha512` hash of the jar
-
-[source,bash]
-----
- openssl dgst -sha512 runtimelibs.jar
-----
-
-Step 3:  Start Solr with runtime lib enabled
-
-[source,bash]
-----
- bin/solr start -e cloud -a "-Denable.runtime.lib=true" -noprompt
-----
-
-Step 4: Run a local server. Skip this step if you have another place to host your jars. Ensure that the URL is set appropriately.
-
-[source,bash]
-----
- python -m SimpleHTTPServer 8000 &
-----
-
-Step 5: Add the jar to your collection `gettingstarted`
-
-[source,bash]
-----
- curl http://localhost:8983/solr/gettingstarted/config -H 'Content-type:application/json' -d '{
-    "add-runtimelib": { "name" : "testjar",
-    "url":"http://localhost:8000/runtimelibs.jar" ,
-    "sha512" : "d01b51de67ae1680a84a813983b1de3b592fc32f1a22b662fc9057da5953abd1b72476388ba342cad21671cd0b805503c78ab9075ff2f3951fdf75fa16981420"}
-    }'
-----
-
-Step 6: Create a new request handler '/test' for the collection 'gettingstarted' from the jar we just added
-
-[source,bash]
-----
-curl http://localhost:8983/solr/gettingstarted/config -H 'Content-type:application/json' -d '{
-    "create-requesthandler": { "name" : "/test",
-    "class": "org.apache.solr.core.RuntimeLibReqHandler", "runtimeLib": true}
-    }'
-----
-
-Step 7:  Test your request handler
-
-[source,bash]
-----
-curl  http://localhost:8983/solr/gettingstarted/test
-----
-
-output:
-[source,json]
-----
-{
-  "responseHeader":{
-    "status":0,
-    "QTime":0},
-  "params":{},
-  "context":{
-    "webapp":"/solr",
-    "path":"/test",
-    "httpMethod":"GET"},
-  "class":"org.apache.solr.core.RuntimeLibReqHandler",
-  "loader":"org.apache.solr.core.MemClassLoader"}
-----
-
-=== Updating Remote Jars
-
-Example:
-
-* Host the new jar to a new url, e.g., http://localhost:8000/runtimelibs_v2.jar
-* Get the `sha512` hash of the new jar.
-* Run the `update-runtimelib` command.
-
-[source,bash]
-----
- curl http://localhost:8983/solr/gettingstarted/config -H 'Content-type:application/json' -d '{
-    "update-runtimelib": { "name" : "testjar",
-    "url":"http://localhost:8000/runtimelibs_v2.jar" ,
-    "sha512" : "<replace-the-new-sha512-digest-here>"}
-    }'
-----
-
-NOTE: Always upload your jar to a new url as the Solr cluster is still referring to the old jar. If the existing jar is modified it can cause errors as the hash may not match.
-
-== Securing Runtime Libraries
-
-A drawback of this feature is that it could be used to load malicious executable code into the system. However, it is possible to restrict the system to load only trusted jars using http://en.wikipedia.org/wiki/Public_key_infrastructure[PKI] to verify that the executables loaded into the system are trustworthy.
-
-The following steps will allow you to enable security for this feature. The instructions assume you have started all your Solr nodes with the `-Denable.runtime.lib=true` option.
-
-=== Step 1: Generate an RSA Private Key
-
-The first step is to generate an RSA private key. The example below uses a 512-bit key, but you should use the strength appropriate to your needs.
-
-[source,bash]
-----
-$ openssl genrsa -out priv_key.pem 512
-----
-
-=== Step 2: Output the Public Key
-
-The public portion of the key should be output in DER format so Java can read it.
-
-[source,bash]
-----
-$ openssl rsa -in priv_key.pem -pubout -outform DER -out pub_key.der
-----
-
-=== Step 3: Load the Key to ZooKeeper
-
-The `.der` files that are output from Step 2 should then be loaded to ZooKeeper under a node `/keys/exe` so they are available to every node. You can load any number of public keys to that node and all are valid. If a key is removed from the directory, the signatures made with that key will cease to be valid. So, before removing a key, make sure to update your runtime library configurations with valid signatures using the `update-runtimelib` command.
-
-At the current time, you can only use the ZooKeeper `zkCli.sh` (or `zkCli.cmd` on Windows) script to issue these commands (the Solr version has the same name, but is not the same). If you have your own ZooKeeper ensemble running already, you can find the script in `$ZK_INSTALL/bin/zkCli.sh` (or `zkCli.cmd` if you are using Windows).
-
-NOTE: If you are running the embedded ZooKeeper that is included with Solr, you *do not* have this script already; in order to use it, you will need to download a copy of ZooKeeper v{ivy-zookeeper-version} from http://zookeeper.apache.org/. Don't worry about configuring the download, you're just trying to get the command line utility script. When you start the script, you will connect to the embedded ZooKeeper.
-
-To load the keys, you will need to connect to ZooKeeper with `zkCli.sh`, create the directories, and then create the key file, as in the following example.
-
-[source,bash]
-----
-# Connect to ZooKeeper
-# Replace the server location below with the correct ZooKeeper connect string for your installation.
-$ ./bin/zkCli.sh -server localhost:9983
-
-# After connection, you will interact with the ZK prompt.
-# Create the directories
-[zk: localhost:9983(CONNECTED) 5] create /keys
-[zk: localhost:9983(CONNECTED) 5] create /keys/exe
-
-# Now create the public key file in ZooKeeper
-# The second path is the path to the .der file on your local machine
-[zk: localhost:9983(CONNECTED) 5] create /keys/exe/pub_key.der /myLocal/pathTo/pub_key.der
-----
-
-After this, any attempt to load a jar will fail. All your jars must be signed with one of your private keys for Solr to trust them. The process to sign your jars and use the signature is outlined in Steps 4-6.
-
-=== Step 4: Sign the jar File
-
-Next you need to sign the sha1 digest of your jar file and get the base64 string.
-
-[source,bash]
-----
-$ openssl dgst -sha1 -sign priv_key.pem myjar.jar | openssl enc -base64
-----
-
-The output of this step will be a string that you will need in Step 6 below when adding the jar to your classpath.
-
-=== Step 5: Load the jar to the Blob Store
-
-Load your jar to the Blob store, using the <<blob-store-api.adoc#blob-store-api,Blob Store API>>. This step does not require a signature; you will need the signature in Step 6 to add it to your classpath.
-
-[source,bash]
-----
-curl -X POST -H 'Content-Type: application/octet-stream' --data-binary @{filename} \
-  http://localhost:8983/solr/.system/blob/{blobname}
-----
-
-The blob name that you give the jar file in this step will be used as the name in the next step.
-
-=== Step 6: Add the jar to the Classpath
-
-Finally, add the jar to the classpath using the Config API as detailed above. In this step, you will need to provide the signature of the jar that you got in Step 4.
-
-[.dynamic-tabs]
---
-[example.tab-pane#v1add-jar2]
-====
-[.tab-label]*V1 API*
-
-[source,bash]
-----
-curl http://localhost:8983/solr/techproducts/config -H 'Content-type:application/json'  -d '{
-  "add-runtimelib": {
-    "name":"blobname",
-    "version":2,
-    "sig":"mW1Gwtz2QazjfVdrLFHfbGwcr8xzFYgUOLu68LHqWRDvLG0uLcy1McQ+AzVmeZFBf1yLPDEHBWJb5KXr8bdbHN/
-           PYgUB1nsr9pk4EFyD9KfJ8TqeH/ijQ9waa/vjqyiKEI9U550EtSzruLVZ32wJ7smvV0fj2YYhrUaaPzOn9g0=" }
-}'
-----
-====
-
-[example.tab-pane#v2add-jar2]
-====
-[.tab-label]*V2 API*
-
-[source,bash]
-----
-curl http://localhost:8983/api/collections/techproducts/config -H 'Content-type:application/json'  -d '{
-  "add-runtimelib": {
-    "name":"blobname",
-    "version":2,
-    "sig":"mW1Gwtz2QazjfVdrLFHfbGwcr8xzFYgUOLu68LHqWRDvLG0uLcy1McQ+AzVmeZFBf1yLPDEHBWJb5KXr8bdbHN/
-           PYgUB1nsr9pk4EFyD9KfJ8TqeH/ijQ9waa/vjqyiKEI9U550EtSzruLVZ32wJ7smvV0fj2YYhrUaaPzOn9g0=" }
-}'
-----
-====
---
diff --git a/solr/solr-ref-guide/src/aliases.adoc b/solr/solr-ref-guide/src/aliases.adoc
index 8b2c0a6..1c18b54 100644
--- a/solr/solr-ref-guide/src/aliases.adoc
+++ b/solr/solr-ref-guide/src/aliases.adoc
@@ -390,8 +390,6 @@
 * Collections might be constrained by their size instead of or in addition to time or category value.
   This might be implemented as another type of routed alias, or possibly as an option on the existing routed aliases
 
-* Compatibility with CDCR.
-
 * Option for deletion of aliases that also deletes the underlying collections in one step. Routed Aliases may quickly
   create more collections than expected during initial testing. Removing them after such events is overly tedious.
 
diff --git a/solr/solr-ref-guide/src/cdcr-api.adoc b/solr/solr-ref-guide/src/cdcr-api.adoc
deleted file mode 100644
index f5c26dc..0000000
--- a/solr/solr-ref-guide/src/cdcr-api.adoc
+++ /dev/null
@@ -1,321 +0,0 @@
-= CDCR API
-// Licensed to the Apache Software Foundation (ASF) under one
-// or more contributor license agreements.  See the NOTICE file
-// distributed with this work for additional information
-// regarding copyright ownership.  The ASF licenses this file
-// to you under the Apache License, Version 2.0 (the
-// "License"); you may not use this file except in compliance
-// with the License.  You may obtain a copy of the License at
-//
-//   http://www.apache.org/licenses/LICENSE-2.0
-//
-// Unless required by applicable law or agreed to in writing,
-// software distributed under the License is distributed on an
-// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-// KIND, either express or implied.  See the License for the
-// specific language governing permissions and limitations
-// under the License.
-
-The CDCR API is used to control and monitor the replication process. Control actions are performed at a collection level, i.e., by using the following base URL for API calls: `\http://localhost:8983/solr/<collection>/cdcr`.
-
-[WARNING]
-.CDCR is deprecated
-====
-This feature (in its current form) is deprecated and will likely be removed in 9.0.
-
-See <<cross-data-center-replication-cdcr.adoc#cross-data-center-replication-cdcr,Cross Data Center Replication>> for more details.
-====
-
-Monitor actions are performed at a core level, i.e., by using the following base URL for API calls: `\http://localhost:8983/solr/<core>/cdcr`.
-
-Currently, none of the CDCR API calls have parameters.
-
-== API Entry Points
-
-*Control*
-
-* `<collection>/cdcr?action=STATUS`: <<CDCR STATUS,Returns the current state>> of CDCR.
-* `<collection>/cdcr?action=START`: <<CDCR START,Starts CDCR>> replication.
-* `<collection>/cdcr?action=STOP`: <<CDCR STOP,Stops CDCR>> replication.
-* `<collection>/cdcr?action=ENABLEBUFFER`: <<ENABLEBUFFER,Enables the buffering>> of updates.
-* `<collection>/cdcr?action=DISABLEBUFFER`: <<DISABLEBUFFER,Disables the buffering>> of updates.
-
-*Monitoring*
-
-* `core/cdcr?action=QUEUES`: <<QUEUES,Fetches statistics about the queue>> for each replica and about the update logs.
-* `core/cdcr?action=OPS`: <<OPS,Fetches statistics about the replication performance>> (operations per second) for each replica.
-* `core/cdcr?action=ERRORS`: <<ERRORS,Fetches statistics and other information about replication errors>> for each replica.
-
-== Control Commands
-
-=== CDCR STATUS
-
-`solr/<collection>/cdcr?action=STATUS`
-
-==== CDCR Status Example
-
-*Input*
-
-[source,text]
-----
-http://localhost:8983/solr/techproducts/cdcr?action=STATUS
-----
-
-*Output*
-
-[source,json]
-----
-{
-  "responseHeader": {
-  "status": 0,
-  "QTime": 0
-  },
-  "status": {
-  "process": "stopped",
-  "buffer": "enabled"
-  }
-}
-----
-
-=== ENABLEBUFFER
-
-`solr/<collection>/cdcr?action=ENABLEBUFFER`
-
-==== Enable Buffer Example
-
-*Input*
-
-[source,text]
-----
-http://localhost:8983/solr/techproducts/cdcr?action=ENABLEBUFFER
-----
-
-*Output*
-
-[source,json]
-----
-{
-  "responseHeader": {
-  "status": 0,
-  "QTime": 0
-  },
-  "status": {
-  "process": "started",
-  "buffer": "enabled"
-  }
-}
-----
-
-=== DISABLEBUFFER
-
-`solr/<collection>/cdcr?action=DISABLEBUFFER`
-
-==== Disable Buffer Example
-
-*Input*
-
-[source,text]
-----
-http://localhost:8983/solr/techproducts/cdcr?action=DISABLEBUFFER
-----
-
-*Output*
-
-[source,json]
-----
-{
-  "responseHeader": {
-  "status": 0,
-  "QTime": 0
-  },
-  "status": {
-  "process": "started",
-  "buffer": "disabled"
-  }
-}
-----
-
-=== CDCR START
-
-`solr/<collection>/cdcr?action=START`
-
-==== CDCR Start Examples
-
-*Input*
-
-[source,text]
-----
-http://localhost:8983/solr/techproducts/cdcr?action=START
-----
-
-*Output*
-
-[source,json]
-----
-{
-  "responseHeader": {
-  "status": 0,
-  "QTime": 0
-  },
-  "status": {
-  "process": "started",
-  "buffer": "enabled"
-  }
-}
-----
-
-=== CDCR STOP
-
-`solr/<collection>/cdcr?action=STOP`
-
-==== CDCR Stop Examples
-
-*Input*
-
-[source,text]
-----
-http://localhost:8983/solr/techproducts/cdcr?action=STOP
-----
-
-*Output*
-
-[source,json]
-----
-{
-  "responseHeader": {
-    "status": 0,
-    "QTime": 0
-  },
-  "status": {
-    "process": "stopped",
-    "buffer": "enabled"
-  }
-}
-----
-
-
-== CDCR Monitoring Commands
-
-=== QUEUES
-
-`solr/<core>/cdcr?action=QUEUES`
-
-==== QUEUES Response
-
-The output is composed of a list “queues” which contains a list of (ZooKeeper) Target hosts, themselves containing a list of Target collections. For each collection, the current size of the queue and the timestamp of the last update operation successfully processed are provided. The timestamp of the update operation is the original timestamp, i.e., the time this operation was processed on the Source SolrCloud. This allows one to estimate the latency of the replication process.
-
-The “queues” object also contains information about the update logs, such as the size (in bytes) of the update logs on disk (`tlogTotalSize`), the number of transaction log files (`tlogTotalCount`) and the status of the update logs synchronizer (`updateLogSynchronizer`).
-
-==== QUEUES Examples
-
-*Input*
-
-[source,text]
-----
-http://localhost:8983/solr/<replica_name>/cdcr?action=QUEUES
-----
-
-*Output*
-
-[source,json]
-----
-{
-  "responseHeader": {
-    "status": 0,
-    "QTime": 1
-  },
-  "queues": {
-    "127.0.0.1:40342/solr": {
-      "Target_collection": {
-        "queueSize": 104,
-        "lastTimestamp": "2014-12-02T10:32:15.879Z"
-      }
-    }
-  },
-  "tlogTotalSize": 3817,
-  "tlogTotalCount": 1,
-  "updateLogSynchronizer": "stopped"
-}
-----
-
-=== OPS
-
-`solr/<core>/cdcr?action=OPS`
-
-
-==== OPS Response
-
-Provides the average number of operations per second, both as a total and broken down into adds and deletes.
-
-==== OPS Examples
-
-*Input*
-
-[source,text]
-----
-http://localhost:8983/solr/<replica_name>/cdcr?action=OPS
-----
-
-*Output*
-
-[source,json]
-----
-{
-  "responseHeader": {
-    "status": 0,
-    "QTime": 1
-  },
-  "operationsPerSecond": {
-    "127.0.0.1:59661/solr": {
-      "Target_collection": {
-        "all": 297.102944952749052,
-        "adds": 297.102944952749052,
-        "deletes": 0.0
-      }
-    }
-  }
-}
-----
-
-=== ERRORS
-
-`solr/<core>/cdcr?action=ERRORS`
-
-==== ERRORS Response
-
-Provides the number of consecutive errors encountered by the replicator thread, the number of bad requests or internal errors since the start of the replication process, and a list of the last errors encountered ordered by timestamp.
-
-==== ERRORS Examples
-
-*Input*
-
-[source,text]
-----
-http://localhost:8983/solr/<replica_name>/cdcr?action=ERRORS
-----
-
-*Output*
-
-[source,json]
-----
-{
-  "responseHeader": {
-    "status": 0,
-    "QTime": 2
-  },
-  "errors": {
-    "127.0.0.1:36872/solr": {
-      "Target_collection": {
-        "consecutiveErrors": 3,
-        "bad_request": 0,
-        "internal": 3,
-        "last": {
-          "2014-12-02T11:04:42.523Z": "internal",
-          "2014-12-02T11:04:39.223Z": "internal",
-          "2014-12-02T11:04:38.22Z": "internal"
-        }
-      }
-    }
-  }
-}
-----
diff --git a/solr/solr-ref-guide/src/cdcr-architecture.adoc b/solr/solr-ref-guide/src/cdcr-architecture.adoc
deleted file mode 100644
index b479f3d..0000000
--- a/solr/solr-ref-guide/src/cdcr-architecture.adoc
+++ /dev/null
@@ -1,167 +0,0 @@
-= CDCR Architecture
-// Licensed to the Apache Software Foundation (ASF) under one
-// or more contributor license agreements.  See the NOTICE file
-// distributed with this work for additional information
-// regarding copyright ownership.  The ASF licenses this file
-// to you under the Apache License, Version 2.0 (the
-// "License"); you may not use this file except in compliance
-// with the License.  You may obtain a copy of the License at
-//
-//   http://www.apache.org/licenses/LICENSE-2.0
-//
-// Unless required by applicable law or agreed to in writing,
-// software distributed under the License is distributed on an
-// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-// KIND, either express or implied.  See the License for the
-// specific language governing permissions and limitations
-// under the License.
-
-== CDCR Architecture Overview
-
-With CDCR, Source and Target data centers can each serve search queries when CDCR is operating. The Target data center will lag somewhat behind the Source cluster due to propagation delays.
-
-[WARNING]
-.CDCR is deprecated
-====
-This feature (in its current form) is deprecated and will likely be removed in 9.0.
-
-See <<cross-data-center-replication-cdcr.adoc#cross-data-center-replication-cdcr,Cross Data Center Replication>> for more details.
-====
-
-Data changes on the Source data center are replicated to the Target data center only after they are persisted to disk. The data changes can be replicated in near real-time (with a small delay) or could be scheduled to be sent at longer intervals to the Target data center. CDCR can "bootstrap" the collection to the Target data center. Since this is a full copy of the entire index, network bandwidth should be considered. Of course both Source and Target collections may be empty to start.
-
-Each shard leader in the Source data center will be responsible for replicating its updates to the corresponding leader in the Target data center. When receiving updates from the Source data center, shard leaders in the Target data center will replicate the changes to their own replicas as normal SolrCloud updates.
-
-This replication model is designed to tolerate some degradation in connectivity, accommodate limited bandwidth, and support batch updates to optimize communication.
-
-Replication supports both a new empty index and pre-built indexes. In the scenario where the replication is set up on a pre-built index in the Source cluster and nothing on the Target cluster, CDCR will replicate the _entire_ index from the Source to Target.
-
-The directional nature of the implementation implies a "push" model from the Source collection to the Target collection. Therefore, the Source configuration must be able to "see" the ZooKeeper ensemble in the Target cluster. The Target ZooKeeper ensemble is configured in the Source's `solrconfig.xml` file.
-
-CDCR is configured to replicate from collections in the Source cluster to collections in the Target cluster on a collection-by-collection basis. Since CDCR is configured in `solrconfig.xml` (on both Source and Target clusters), the settings can be tailored for the needs of each collection.
-
-CDCR can be configured to replicate from one collection to a second collection _within the same cluster_. That is a specialized scenario not covered in this Guide.
-
-=== Uni-Directional Architecture
-
-When uni-directional updates are configured, updates and deletes are first written to the Source cluster, then forwarded to one or more Target data centers, as illustrated in this graphic:
-
-.Uni-Directional Data Flow
-image::images/cross-data-center-replication-cdcr-/CDCR_arch.png[image,width=700,height=525]
-
-With uni-directional updates, the Target data center(s) will not propagate updates such as adds, updates, or deletes to the Source data center, and updates should not be sent to any of the Target data center(s).
-
-The data flow sequence is:
-
-. A shard leader receives a new update that is processed by its update processor chain.
-. The data update is first applied to the local index.
-. Upon successful application of the data update on the local index, the data update is added to CDCR's Update Logs queue.
-. After the data update is persisted to disk, the data update is sent to the replicas within the data center.
-. After Step 4 is successful, CDCR reads the data update from the Update Logs and pushes it to the corresponding collection in the Target data center. This is necessary in order to ensure consistency between the Source and Target data centers.
-. The leader on the Target data center writes the data locally and forwards it to all its followers.
-
-Steps 1, 2, 3 and 4 are performed synchronously by SolrCloud; Step 5 is performed asynchronously by a background thread. Given that CDCR replication is performed asynchronously, it becomes possible to push batch updates in order to minimize network communication overhead. Also, if CDCR is unable to push the update at a given time, for example, due to a degradation in connectivity, it can retry later without any impact on the Source data center.
-
-One implication of the architecture is that the leaders in the Source cluster must be able to "see" the leaders in the Target cluster. Since leaders may change in both Source and Target collections, all nodes in the Source cluster must be able to "see" all Solr nodes in the Target cluster. Firewalls, ACL rules, etc., must be configured to allow this.
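-
-A quick way to sanity-check this connectivity (an illustrative example only, not part of CDCR itself; host names are placeholders) is to probe a Target Solr node and its ZooKeeper ensemble from a Source node:
-
-[source,bash]
-----
-# from a Source node: verify that a Target Solr node and ZooKeeper are reachable
-curl -s "http://<target_solr_host>:8983/solr/admin/info/system" > /dev/null && echo "solr ok"
-echo ruok | nc <target_zk_host> 2181   # ZooKeeper answers "imok" if healthy
-----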
-
-This design works most robustly if both the Source and Target clusters have the same number of shards. There is no requirement that the shards in the Source and Target collection have the same number of replicas.
-
-Having different numbers of shards on the Source and Target cluster is possible, but it is an "expert" configuration; that option imposes certain constraints and is not generally recommended. Most scenarios that contemplate differing numbers of shards are better served by hosting multiple shards on each Solr instance.
-
-=== Bi-Directional Architecture
-
-When bi-directional updates are configured, either cluster can act as a Source or a Target, and that role can shift between the clusters, as illustrated in this graphic:
-
-.Bi-Directional Data Flow
-image::images/cross-data-center-replication-cdcr-/CDCR_bidir.png[image,width=700,height=525]
-
-With bi-directional updates, indexing and querying must be done on a single cluster at a time to maintain consistency. The second cluster is used when the first cluster is down. Put simply, one cluster can act as Source and the other as Target, but both roles cannot be assigned to a single cluster at the same time. Failover is handled smoothly without any configuration changes. Updates sent from the Source data center to the Target are not propagated back to the Source when bi-directional updates are configured.
-
-The data flow sequence is similar to Steps 1 through 6 above, with one additional step:
-
-[start=7]
-. When bi-directional updates are configured, the updates received from Source are flagged on Target and not forwarded further.
-
-All the behaviors and constraints explained for the uni-directional data flow apply to the respective Source and Target clusters in this scenario.
-
-== Major Components of CDCR
-
-What follows is a discussion of the key features and components in CDCR’s architecture:
-
-=== CDCR Configuration
-
-In order to configure CDCR, the Source data center requires the host address of the ZooKeeper cluster associated with the Target data center. The ZooKeeper host address is the only information needed by CDCR to instantiate the communication with the Target Solr cluster. The CDCR configuration section of the `solrconfig.xml` file on the Source cluster will therefore contain a list of ZooKeeper hosts. The CDCR configuration section of `solrconfig.xml` might also contain secondary/optional configuration, such as the number of CDC Replicator threads, batch-update-related settings, etc.
-
-=== CDCR Initialization
-
-CDCR supports incremental updates to either new or existing collections. CDCR may not be able to keep up with very high volume updates, especially if there are significant communications latencies due to a slow "pipe" between the data centers. Some scenarios:
-
-* There is an initial bulk load of a corpus followed by lower volume incremental updates. In this case, one can do the initial bulk load and then enable CDCR. See the section <<cdcr-config.adoc#initial-startup,Initial Startup>> for more information.
-* The index is being built up from scratch, without a significant initial bulk load. CDCR can be set up on empty collections and keep them synchronized from the start.
-* The index is always being updated at a volume too high for CDCR to keep up. This is especially possible in situations where the connection between the Source and Target data centers is poor. This scenario is unsuitable for CDCR in its current form.
-
-=== Inter-Data Center Communication
-
-The CDCR REST API is the primary form of end-user communication for admin commands.
-
-A SolrJ client is used internally for CDCR operations. The SolrJ client gets its configuration information from the `solrconfig.xml` file. Users of CDCR will not interact directly with the internal SolrJ implementation and will interact with CDCR exclusively through the REST API.
-
-=== Updates Tracking & Pushing
-
-CDCR replicates data updates from the Source to the Target data center by leveraging Update Logs. These logs will replace SolrCloud's transaction log.
-
-A background thread regularly checks the Update Logs for new entries, and then forwards them to the Target data center. The thread therefore needs to keep a checkpoint in the form of a pointer to the last update successfully processed in the Update Logs. Upon acknowledgement from the Target data center that updates have been successfully processed, the Update Logs pointer is updated to reflect the current checkpoint.
-
-This pointer must be synchronized across all the replicas. In the case where the leader goes down and a new leader is elected, the new leader will be able to resume replication from the last update by using this synchronized pointer. The strategy to synchronize such a pointer across replicas will be explained next.
-
-If, for some reason, the Target data center is offline or fails to process the updates, the thread will periodically try to contact the Target data center and push the updates while buffering updates on the Source cluster. One implication of this is that the Source Update Logs directory should be monitored periodically, as the updates will continue to accumulate and will not be purged until the connection to the Target data center is restored.
-
-=== Synchronization of Update Checkpoints
-
-A reliable synchronization of the update checkpoints between the shard leader and shard replicas is critical to avoid introducing inconsistency between the Source and Target data centers. Another important requirement is that the synchronization must be performed with minimal network traffic to maximize scalability.
-
-In order to achieve this, the strategy is to:
-
-* Uniquely identify each update operation. This unique identifier will serve as the pointer.
-* Rely on two kinds of storage: an ephemeral storage on the Source shard leader, and a persistent storage on the Target cluster.
-
-The shard leader in the Source cluster will be in charge of generating a unique identifier for each update operation, and will keep a copy of the identifier of the last processed update in memory. The identifier will be sent to the Target cluster as part of the update request. On the Target data center side, the shard leader will receive the update request, store it along with the unique identifier in the Update Logs, and replicate it to the other shards.
-
-SolrCloud already provides a unique identifier for each update operation, i.e., a “version” number. This version number is generated using a time-based Lamport clock which is incremented for each update operation sent. This provides a “happened-before” ordering of the update operations that will be leveraged in (1) the initialization of the update checkpoint on the Source cluster, and in (2) the maintenance strategy of the Update Logs.
-
-The persistent storage on the Target cluster is used only during the election of a new shard leader on the Source cluster. If a shard leader goes down on the Source cluster and a new leader is elected, the new leader will contact the Target cluster to retrieve the last update checkpoint and instantiate its ephemeral pointer. On such a request, the Target cluster will retrieve the latest identifier received across all the shards, and send it back to the Source cluster. To retrieve the latest identifier, every shard leader will look up the identifier of the first entry in its Update Logs and send it back to a coordinator. The coordinator will have to select the highest among them.
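-
-In sketch form (illustrative names only, not Solr internals), the coordinator's selection amounts to taking the maximum of the reported versions:
-
-[source,java]
-----
-// Illustrative only: pick the highest first-entry version reported by
-// the shard leaders as the restored checkpoint.
-long selectCheckpoint(java.util.List<Long> versionsFromShardLeaders) {
-  long checkpoint = Long.MIN_VALUE;
-  for (long v : versionsFromShardLeaders) {
-    checkpoint = Math.max(checkpoint, v);
-  }
-  return checkpoint;
-}
-----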
-
-This strategy does not require any additional network traffic and ensures reliable pointer synchronization. Consistency is principally achieved by leveraging SolrCloud. The update workflow of SolrCloud ensures that every update is applied to the leader and also to any of the replicas. If the leader goes down, a new leader is elected. During the leader election, a synchronization is performed between the new leader and the other replicas. This ensures that the new leader has Update Logs consistent with the previous leader's. Having consistent Update Logs means that:
-
-* On the Source cluster, the update checkpoint can be reused by the new leader.
-* On the Target cluster, the update checkpoint will be consistent between the previous and new leader. This ensures the correctness of the update checkpoint sent by a newly elected leader from the Target cluster.
-
-=== Maintenance of Update Logs
-
-The CDCR replication logic requires modification to the maintenance logic of Update Logs on the Source data center. Initially, the Update Logs act as a fixed-size queue, limited to 100 update entries by default. In CDCR, the Update Logs must act as a queue of variable size, as they need to keep track of all the updates through the last update processed by the Target data center. Entries in the Update Logs are removed only when all pointers (one pointer per Target data center) are after them.
-
-If the communication with one of the Target data centers is slow, the Update Logs on the Source data center can grow to a substantial size. In such a scenario, the Update Logs must be able to efficiently locate a given update operation by its identifier. Since the identifier is an incremental number, an efficient search strategy is possible. Each transaction log file contains as part of its filename the version number of the first element. This is used to quickly traverse all the transaction log files and find the transaction log file containing one specific version number, as sketched below.
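-
-A minimal sketch of this lookup (the `tlog.<version>` naming and zero-padded file names below are illustrative assumptions, not Solr's actual file layout):
-
-[source,java]
-----
-import java.io.File;
-import java.util.Arrays;
-
-public class TlogLookup {
-  // Returns the log file whose first-entry version is the highest one
-  // not exceeding the requested version, i.e., the file that would
-  // contain the requested update operation.
-  static File findLogFor(File ulogDir, long version) {
-    File[] logs = ulogDir.listFiles((dir, name) -> name.startsWith("tlog."));
-    if (logs == null) return null;
-    Arrays.sort(logs); // zero-padded names sort in version order
-    File candidate = null;
-    for (File f : logs) {
-      long first = Long.parseLong(f.getName().substring("tlog.".length()));
-      if (first > version) break;
-      candidate = f;
-    }
-    return candidate;
-  }
-}
-----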
-
-=== Monitoring Operations
-
-CDCR provides the following monitoring capabilities over the replication operations:
-
-* Monitoring of the outgoing and incoming replications, with information such as the Source and Target nodes, their status, etc.
-* Statistics about the replication, with information such as operations (add/delete) per second, number of documents in the queue, etc.
-
-Information about the lifecycle and statistics will be provided on a per-shard basis by the CDC Replicator thread. The CDCR API can then aggregate this information at a collection level.
-
-=== CDC Replicator
-
-The CDC Replicator is a background thread that is responsible for replicating updates from a Source data center to one or more Target data centers. It is responsible for providing monitoring information on a per-shard basis. As there can be a large number of collections and shards in a cluster, we will use a fixed-size pool of CDC Replicator threads that will be shared across shards.
-
-=== CDCR Limitations
-
-The current design of CDCR has some limitations. CDCR will continue to evolve over time and many of these limitations will be addressed. Among them are:
-
-* CDCR is unlikely to be satisfactory for bulk-load situations where the update rate is high, especially if the bandwidth between the Source and Target clusters is restricted. In this scenario, the initial bulk load should be performed, the Source and Target data centers synchronized, and CDCR then used for incremental updates.
-* CDCR works most robustly with the same number of shards in the Source and Target collection. The shards in the two collections may have different numbers of replicas.
-* Running CDCR with the indexes on HDFS is not currently supported, see the https://issues.apache.org/jira/browse/SOLR-9861[Solr CDCR over HDFS] JIRA issue.
-* Configuration files (`solrconfig.xml`, `managed-schema`, etc.) are not automatically synchronized between the Source and Target clusters. This means that when the Source schema or `solrconfig.xml` files are changed, those changes must be replicated manually to the Target cluster. This includes adding fields by the <<schema-api.adoc#schema-api,Schema API>> or <<managed-resources.adoc#managed-resources,Managed Resources>> as well as hand editing those files.
-* CDCR doesn't support <<basic-authentication-plugin.adoc#basic-authentication-plugin,Basic Authentication features>> across clusters.
-* CDCR does not yet support TLOG or PULL replica types.
diff --git a/solr/solr-ref-guide/src/cdcr-config.adoc b/solr/solr-ref-guide/src/cdcr-config.adoc
deleted file mode 100644
index 25add10..0000000
--- a/solr/solr-ref-guide/src/cdcr-config.adoc
+++ /dev/null
@@ -1,376 +0,0 @@
-= CDCR Configuration
-// Licensed to the Apache Software Foundation (ASF) under one
-// or more contributor license agreements.  See the NOTICE file
-// distributed with this work for additional information
-// regarding copyright ownership.  The ASF licenses this file
-// to you under the Apache License, Version 2.0 (the
-// "License"); you may not use this file except in compliance
-// with the License.  You may obtain a copy of the License at
-//
-//   http://www.apache.org/licenses/LICENSE-2.0
-//
-// Unless required by applicable law or agreed to in writing,
-// software distributed under the License is distributed on an
-// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-// KIND, either express or implied.  See the License for the
-// specific language governing permissions and limitations
-// under the License.
-
-The Source and Target configurations differ in the case of the data centers being in separate clusters. "Cluster" here means separate ZooKeeper ensembles controlling disjoint Solr instances. Whether these data centers are physically separated or not is immaterial for this discussion.
-
-[WARNING]
-.CDCR is deprecated
-====
-This feature (in its current form) is deprecated and will likely be removed in 9.0.
-
-See <<cross-data-center-replication-cdcr.adoc#cross-data-center-replication-cdcr,Cross Data Center Replication>> for more details.
-====
-
-As described in the section <<cdcr-architecture.adoc#cdcr-architecture,CDCR Architecture>>, two approaches are supported: uni-directional updates and bi-directional updates.
-
-All CDCR configuration is done in the `solrconfig.xml` file. Because this is a per-collection configuration file, all CDCR configuration is done for each collection.
-
-== Uni-Directional Updates
-
-=== Source Configuration
-
-Here is a sample of a Source configuration file, a section in `solrconfig.xml`. The presence of the `<replica>` section causes CDCR to use this cluster as the Source and it should not be present in the Target collections. Details about each setting are after the two examples. The Source example has buffering disabled; the default is enabled:
-
-[source,xml]
-----
-<requestHandler name="/cdcr" class="solr.CdcrRequestHandler">
-  <lst name="replica">
-    <str name="zkHost">10.240.18.211:2181,10.240.18.212:2181</str>
-    <!--
-    If you have chrooted your Solr information at the target, you must include the chroot, for example:
-    <str name="zkHost">10.240.18.211:2181,10.240.18.212:2181/solr</str>
-    -->
-    <str name="source">collection1</str>
-    <str name="target">collection1</str>
-  </lst>
-
-  <lst name="buffer">
-    <str name="defaultState">disabled</str>
-  </lst>
-
-  <lst name="replicator">
-    <str name="threadPoolSize">8</str>
-    <str name="schedule">1000</str>
-    <str name="batchSize">128</str>
-  </lst>
-
-  <lst name="updateLogSynchronizer">
-    <str name="schedule">1000</str>
-  </lst>
-
-</requestHandler>
-
-<!-- Modify the <updateLog> section of your existing <updateHandler>
-     in your config as below -->
-<updateHandler class="solr.DirectUpdateHandler2">
-  <updateLog class="solr.CdcrUpdateLog">
-    <str name="dir">${solr.ulog.dir:}</str>
-    <!--Any parameters from the original <updateLog> section -->
-  </updateLog>
-
-  <!-- Other configuration options such as autoCommit should still be present -->
-</updateHandler>
-----
-
-=== Target Configuration
-
-Here is a typical Target configuration.
-
-The Target instance must configure an update processor chain that is specific to CDCR. The update processor chain must include the `CdcrUpdateProcessorFactory`. The task of this processor is to ensure that the version numbers attached to update requests coming from a CDCR Source SolrCloud are reused and not overwritten by the Target. A properly configured Target looks similar to this:
-
-[source,xml]
-----
-<requestHandler name="/cdcr" class="solr.CdcrRequestHandler">
-  <!-- recommended for Target clusters -->
-  <lst name="buffer">
-    <str name="defaultState">disabled</str>
-  </lst>
-</requestHandler>
-
-<requestHandler name="/update" class="solr.UpdateRequestHandler">
-  <lst name="defaults">
-    <str name="update.chain">cdcr-processor-chain</str>
-  </lst>
-</requestHandler>
-
-<updateRequestProcessorChain name="cdcr-processor-chain">
-  <processor class="solr.CdcrUpdateProcessorFactory"/>
-  <processor class="solr.RunUpdateProcessorFactory"/>
-</updateRequestProcessorChain>
-
-<!-- Modify the <updateLog> section of your existing <updateHandler> in your
-    config as below -->
-<updateHandler class="solr.DirectUpdateHandler2">
-  <updateLog class="solr.CdcrUpdateLog">
-    <str name="dir">${solr.ulog.dir:}</str>
-    <!--Any parameters from the original <updateLog> section -->
-  </updateLog>
-
-  <!-- Other configuration options such as autoCommit should still be present -->
-
-</updateHandler>
-----
-
-== Bi-Directional Updates
-
-The configurations in Cluster 1 and Cluster 2 are identical, apart from the respective `zkHost` strings specified in each cluster's `solrconfig.xml`.
-
-TIP: Both Cluster 1 and Cluster 2 can act as Source and Target at any given point in time, but a cluster cannot be both Source and Target at the same time.
-
-=== Cluster 1 Configuration
-
-Here is a sample of a Cluster 1 configuration file, a section in `solrconfig.xml`. The Cluster 2 `zkHost` string is specified in the `CdcrRequestHandler` declaration:
-
-[source,xml]
-----
-<requestHandler name="/update" class="solr.UpdateRequestHandler">
-  <lst name="defaults">
-    <str name="update.chain">cdcr-processor-chain</str>
-  </lst>
-</requestHandler>
-
-<updateRequestProcessorChain name="cdcr-processor-chain">
-  <processor class="solr.CdcrUpdateProcessorFactory"/>
-  <processor class="solr.RunUpdateProcessorFactory"/>
-</updateRequestProcessorChain>
-
-<requestHandler name="/cdcr" class="solr.CdcrRequestHandler">
-  <lst name="replica">
-    <str name="zkHost">10.240.19.241:2181,10.240.19.242:2181</str>
-    <!--
-    If you have chrooted your Solr information at the target, you must include the chroot, for example:
-    <str name="zkHost">10.240.19.241:2181,10.240.19.242:2181/solr</str>
-    -->
-    <str name="source">collection1</str>
-    <str name="target">collection1</str>
-  </lst>
-
-  <lst name="replicator">
-    <str name="threadPoolSize">8</str>
-    <str name="schedule">1000</str>
-    <str name="batchSize">128</str>
-  </lst>
-
-  <lst name="updateLogSynchronizer">
-    <str name="schedule">1000</str>
-  </lst>
-
-</requestHandler>
-
-<!-- Modify the <updateLog> section of your existing <updateHandler>
-     in your config as below -->
-<updateHandler class="solr.DirectUpdateHandler2">
-  <updateLog class="solr.CdcrUpdateLog">
-    <str name="dir">${solr.ulog.dir:}</str>
-    <!--Any parameters from the original <updateLog> section -->
-  </updateLog>
-</updateHandler>
-----
-
-=== Cluster 2 Configuration
-
-The configuration of the second cluster is identical to the configuration of Cluster 1, with the Cluster 1 `zkHost` string specified in the `CdcrRequestHandler` definition:
-
-[source,xml]
-----
-<requestHandler name="/update" class="solr.UpdateRequestHandler">
-  <lst name="defaults">
-    <str name="update.chain">cdcr-processor-chain</str>
-  </lst>
-</requestHandler>
-
-<updateRequestProcessorChain name="cdcr-processor-chain">
-  <processor class="solr.CdcrUpdateProcessorFactory"/>
-  <processor class="solr.RunUpdateProcessorFactory"/>
-</updateRequestProcessorChain>
-
-<requestHandler name="/cdcr" class="solr.CdcrRequestHandler">
-  <lst name="replica">
-    <str name="zkHost">10.250.18.211:2181,10.250.18.212:2181</str>
-    <!--
-    If you have chrooted your Solr information at the target, you must include the chroot, for example:
-    <str name="zkHost">10.250.18.211:2181,10.250.18.212:2181/solr</str>
-    -->
-    <str name="source">collection1</str>
-    <str name="target">collection1</str>
-  </lst>
-
-  <lst name="replicator">
-    <str name="threadPoolSize">8</str>
-    <str name="schedule">1000</str>
-    <str name="batchSize">128</str>
-  </lst>
-
-  <lst name="updateLogSynchronizer">
-    <str name="schedule">1000</str>
-  </lst>
-
-</requestHandler>
-
-<!-- Modify the <updateLog> section of your existing <updateHandler>
-     in your config as below -->
-<updateHandler class="solr.DirectUpdateHandler2">
-  <updateLog class="solr.CdcrUpdateLog">
-    <str name="dir">${solr.ulog.dir:}</str>
-    <!--Any parameters from the original <updateLog> section -->
-  </updateLog>
-</updateHandler>
-----
-
-== CDCR Configuration Parameters
-
-The configuration details, defaults and options are as follows:
-
-=== The Replica Element
-
-CDCR can be configured to forward update requests to one or more Target collections. A Target collection is defined with a “replica” list as follows:
-
-`zkHost`::
-The host address for ZooKeeper of the Target SolrCloud. Usually this is a comma-separated list of addresses to each node in the Target ZooKeeper ensemble. This parameter is required.
-
-`source`::
-The name of the collection on the Source SolrCloud to be replicated. This parameter is required.
-
-`target`::
-The name of the collection on the Target SolrCloud to which updates will be forwarded. This parameter is required.
-
-=== The Replicator Element
-
-The CDC Replicator is the component in charge of forwarding updates to the replicas. The replicator will monitor the update logs of the Source collection and will forward any new updates to the Target collection.
-
-The replicator uses a fixed thread pool to forward updates to multiple replicas in parallel. If more than one replica is configured, one thread will forward a batch of updates to one replica at a time, in a round-robin fashion. The replicator can be configured with a “replicator” list as follows:
-
-`threadPoolSize`::
-The number of threads to use for forwarding updates. One thread per replica is recommended. The default is `2`.
-
-`schedule`::
-The delay in milliseconds for monitoring the update log(s). The default is `10`.
-
-`batchSize`::
-The number of updates to send in one batch. The optimal size depends on the size of the documents. Large batches of large documents can increase your memory usage significantly. The default is `128`.
-
-=== The updateLogSynchronizer Element
-
-Expert: Non-leader nodes need to synchronize their update logs with their leader node from time to time in order to clean up obsolete transaction log files. By default, such a synchronization process is performed every minute. The schedule of the synchronization can be modified with an “updateLogSynchronizer” list as follows:
-
-TIP: If the updateLogSynchronizer element is omitted from the Source cluster, transaction logs may accumulate on non-leaders.
-
-`schedule`::
-The delay in milliseconds for synchronizing the update logs. The default is `60000`.
-
-=== The Buffer Element
-
-When buffering updates, the update logs will store all the updates indefinitely. It is best to disable buffering on both the Source and Target clusters during normal operation, as the Update Logs will otherwise grow without limit. Enabling buffering is intended for special maintenance periods. Buffering can be disabled at startup with a “buffer” list and the parameter “defaultState”, as follows:
-
-`defaultState`::
-The state of the buffer at startup. The default is `enabled`.
-
-[TIP]
-.Buffering should be enabled only for maintenance windows
-====
-Buffering is designed to augment maintenance windows. The following points should be kept in mind:
-
- * When buffering is enabled, the Update Logs will grow without limit; they will never be purged.
- * During normal operation, the Update Logs will automatically accrue on the Source data center if the Target data center is unavailable; it is not necessary to enable buffering for CDCR to handle routine network disruptions.
- ** For this reason, monitoring disk usage on the Source data center is recommended as an additional check that the Target data center is receiving updates.
- * For uni-directional updates, buffering should _not_ be enabled on the Target data center as Update Logs would accrue without limit.
- * If buffering is enabled and then disabled, the Update Logs will be removed when their contents have been sent to the Target data center. This process may take some time and is only triggered by additional updates sent to the Source cluster.
- ** Update Log cleanup is not triggered until a new update is sent to the Source data center.
-====
-
-== Initial Startup
-
-=== Uni-Directional Approach
-
-This is a general approach for initializing CDCR in a production environment. It's based upon an approach taken by the initial working installation of CDCR and generously contributed to illustrate a "real world" scenario.
-
-* CDCR is used to keep a remote disaster-recovery instance available for production backup.
-* This example has 26 clouds with 200 million assets per cloud (15GB indexes). Total document count is over 4.8 billion.
-** Source and Target clouds were synched in 2-3 hour maintenance windows to establish the base index for the Targets.
-
-As usual, it is good to start small. Sync a single cloud and monitor for a period of time before doing the others. You may need to adjust your settings several times before finding the right balance.
-
-* Before starting, stop or pause the indexers. This is best done during a small maintenance window.
-* Stop the SolrCloud instances at the Source.
-* Upload the modified `solrconfig.xml` to ZooKeeper on both Source and Target as appropriate; see the examples above.
-* Sync the index directories from the Source collection to the Target collection on the corresponding shard nodes. `rsync` works well for this; an illustrative command follows the table below.
-+
-For example, if there are two shards on collection1 with 2 replicas for each shard, copy the corresponding index directories from:
-+
-[width="75%",cols="45,10,45"]
-|===
-|shard1replica1Source |to |shard1replica1Target
-|shard1replica2Source |to |shard1replica2Target
-|shard2replica1Source |to |shard2replica1Target
-|shard2replica2Source |to |shard2replica2Target
-|===
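-+
-For illustration only (host names and index paths below are assumptions that must be adapted to your installation), a sync of one replica's index, run on the Source node, might look like:
-+
-[source,bash]
-----
-# copy one replica's index from the Source node to the matching Target node
-rsync -av --delete /var/solr/data/collection1_shard1_replica1/data/index/ \
-  targethost:/var/solr/data/collection1_shard1_replica1/data/index/
-----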
-
-* Start ZooKeeper on the Target (DR).
-* Start SolrCloud on the Target (DR).
-* Start ZooKeeper on the Source.
-* Start SolrCloud on the Source. As a general rule, the Target (DR) should be started before the Source.
-* Activate CDCR on the Source instance using the CDCR API:
-+
-[source,text]
-http://host:port/solr/<collection_name>/cdcr?action=START
-+
-There is no need to run the `/cdcr?action=START` command on the Target.
-* Disable the buffer on the Target and Source:
-+
-[source,text]
-http://host:port/solr/collection_name/cdcr?action=DISABLEBUFFER
-+
-* Re-enable indexing.
-
-=== Bi-Directional Approach
-
-[TIP]
-====
-When using the bi-directional approach, it is highly recommended to enable CDCR on both cluster-collections before any indexing has taken place.
-====
-
-Based on the same example from the uni-directional solution, let's walk through the necessary steps:
-
-* Before you begin, stop or pause any indexing processes. This is best done during a small maintenance window.
-* Stop the SolrCloud instances in both Cluster 1 and Cluster 2.
-* Upload the modified `solrconfig.xml` to ZooKeeper on both Cluster 1 and Cluster 2 as appropriate; see the examples above in the section <<Bi-Directional Updates>>.
-* If documents were indexed prior to this exercise, sync the index directories from the Cluster 1 collection to the Cluster 2 collection on the corresponding shard nodes, or vice versa. The `rsync` utility works well for this if it's available on your server. Check to be sure the updated index is copied across.
-+
-For example, if there are 2 shards on collection 'cluster1' (the updated collection) with 2 replicas for each shard, copy the corresponding index directories from:
-+
-[width="75%",cols="45,10,45"]
-|===
-|shard1replica1cluster1 |to |shard1replica1cluster2
-|shard1replica2cluster1 |to |shard1replica2cluster2
-|shard2replica1cluster1 |to |shard2replica1cluster2
-|shard2replica2cluster1 |to |shard2replica2cluster2
-|===
-
-* Start ZooKeeper on Cluster 1.
-* Start ZooKeeper on Cluster 2.
-* Start SolrCloud on Cluster 1.
-* Start SolrCloud on Cluster 2.
-* If not present, create respective collections in both Cluster 1 and Cluster 2.
-* Activate CDCR on the Cluster 1 and Cluster 2 instances using the CDCR API:
-+
-[source,text]
-http://host:port/solr/<collection_name>/cdcr?action=START
-+
-* Disable the buffer on Cluster 1 and Cluster 2:
-+
-[source,text]
-http://host:port/solr/collection_name/cdcr?action=DISABLEBUFFER
-+
-* Re-enable indexing.
-
-== ZooKeeper Settings
-
-With CDCR, the Target ZooKeepers will have connections from the Target clouds and the Source clouds. You may need to increase the `maxClientCnxns` setting in `zoo.cfg`.
-
-[source,text]
-----
-## set the number of client connections to 800
-## (a setting of maxClientCnxns=0 means no limit)
-maxClientCnxns=800
-----
diff --git a/solr/solr-ref-guide/src/cdcr-operations.adoc b/solr/solr-ref-guide/src/cdcr-operations.adoc
deleted file mode 100644
index af4d0d9..0000000
--- a/solr/solr-ref-guide/src/cdcr-operations.adoc
+++ /dev/null
@@ -1,49 +0,0 @@
-= Cross Data Center Replication Operations
-// Licensed to the Apache Software Foundation (ASF) under one
-// or more contributor license agreements.  See the NOTICE file
-// distributed with this work for additional information
-// regarding copyright ownership.  The ASF licenses this file
-// to you under the Apache License, Version 2.0 (the
-// "License"); you may not use this file except in compliance
-// with the License.  You may obtain a copy of the License at
-//
-//   http://www.apache.org/licenses/LICENSE-2.0
-//
-// Unless required by applicable law or agreed to in writing,
-// software distributed under the License is distributed on an
-// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-// KIND, either express or implied.  See the License for the
-// specific language governing permissions and limitations
-// under the License.
-
-[WARNING]
-.CDCR is deprecated
-====
-This feature (in its current form) is deprecated and will likely be removed in 9.0.
-
-See <<cross-data-center-replication-cdcr.adoc#cross-data-center-replication-cdcr,Cross Data Center Replication>> for more details.
-====
-
-== Monitoring
-
-. Network and disk space monitoring are essential. Ensure that the system has plenty of available storage to queue up changes if there is a disconnect between the Source and Target. A network outage between the two data centers can cause your disk usage to grow. Some tips:
-.. Set a monitor for your disks to send alerts when the disk gets over a certain percentage (e.g., 70%).
-.. Run a test. With moderate indexing, how long can the system queue changes before you run out of disk space?
-. Create a simple way to check the counts between the Source and the Target.
-.. Keep in mind that if indexing is running, the Source and Target may not match document for document. Set an alert to fire if the difference is greater than some percentage of the overall cloud size.
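-
-As a simple illustration of such a check (the placeholders and the use of `jq` here are assumptions, not part of CDCR):
-
-[source,bash]
-----
-# compare document counts between the Source and the Target
-SRC=$(curl -s "http://<Source>:8983/solr/<collection_name>/select?q=*:*&rows=0" | jq '.response.numFound')
-TGT=$(curl -s "http://<Target>:8983/solr/<collection_name>/select?q=*:*&rows=0" | jq '.response.numFound')
-echo "source=$SRC target=$TGT difference=$((SRC - TGT))"
-----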
-
-== Upgrading and Patching Production
-
-When rolling in upgrades to your indexer or application, you should shut down the Source and the Target. Depending on your setup, you may want to pause/stop indexing, deploy the release or patch, then re-enable indexing. Then start the Target last.
-
-* There is no need to reissue the DISABLEBUFFER or START commands. These are persisted.
-* After starting the Target, run a simple test. Add a test document to each of the Source clouds. Then check for it on the Target.
-
-[source,bash]
-----
-# send a test document to the Source (commit so it becomes searchable)
-curl "http://<Source>:8983/solr/cloud1/update?commit=true" -H 'Content-type:application/json' -d '[{"SKU":"ABC"}]'
-
-# check for it on the Target
-curl "http://<Target>:8983/solr/cloud1/select?q=SKU:ABC&indent=true"
-----
diff --git a/solr/solr-ref-guide/src/circuit-breakers.adoc b/solr/solr-ref-guide/src/circuit-breakers.adoc
index 1629c32..b28bda6 100644
--- a/solr/solr-ref-guide/src/circuit-breakers.adoc
+++ b/solr/solr-ref-guide/src/circuit-breakers.adoc
@@ -27,14 +27,24 @@
 It is up to the client to handle this error and potentially build retry logic, as this should ideally be a transient situation.
 
 == Circuit Breaker Configurations
-The following flag controls the global activation/deactivation of circuit breakers. If this flag is disabled, all circuit breakers
-will be disabled globally. Per circuit breaker configurations are specified in their respective sections later.
+All circuit breaker configurations are listed within the `<circuitBreaker>` tag in `solrconfig.xml`, as shown below:
 
 [source,xml]
 ----
-<useCircuitBreakers>false</useCircuitBreakers>
+<circuitBreaker class="solr.CircuitBreakerManager" enabled="true">
+  <!-- All specific configs in this section -->
+</circuitBreaker>
 ----
 
+The "enabled" attribute controls the global activation/deactivation of circuit breakers. If this flag is disabled, all circuit breakers
+will be disabled globally. Per circuit breaker configurations are specified in their respective sections later.
+
+This attribute acts as the highest authority and global controller of circuit breakers. For using specific circuit breakers, each one
+needs to be individually enabled in addition to this flag being enabled.
+
+CircuitBreakerManager is the default manager for all circuit breakers that should be defined in the tag unless the user wishes to use
+a custom implementation.
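+
+For example, a configuration that enables the manager globally and turns on the JVM heap usage breaker described below might look
+like this (a sketch assembled from the settings documented in this section):
+
+[source,xml]
+----
+<circuitBreaker class="solr.CircuitBreakerManager" enabled="true">
+  <str name="memEnabled">true</str>
+  <str name="memThreshold">75</str>
+</circuitBreaker>
+----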
+
 == Currently Supported Circuit Breakers
 
 === JVM Heap Usage Based Circuit Breaker
@@ -42,26 +52,57 @@
 exceeds a configured percentage of maximum heap allocated to the JVM (-Xmx). The main configuration for this circuit breaker is
 controlling the threshold percentage at which the breaker will trip.
 
-It does not logically make sense to have a threshold below 50% and above 95% of the max heap allocated to the JVM. Hence, the range
-of valid values for this parameter is [50, 95], both inclusive.
+Configuration for JVM heap usage based circuit breaker:
 
 [source,xml]
 ----
-<memoryCircuitBreakerThresholdPct>75</memoryCircuitBreakerThresholdPct>
+<str name="memEnabled">true</str>
 ----
 
+Note that this configuration will be overridden by the global circuit breaker flag -- if circuit breakers are disabled, this flag
+will not help you.
+
+The triggering threshold is defined as a percentage of the max heap allocated to the JVM. It is controlled by the below configuration:
+
+[source,xml]
+----
+<str name="memThreshold">75</str>
+----
+
+It does not logically make sense to have a threshold below 50% and above 95% of the max heap allocated to the JVM. Hence, the range
+of valid values for this parameter is [50, 95], both inclusive.
+
 Consider the following example:
 
 JVM has been allocated a maximum heap of 5GB (-Xmx) and memThreshold is set to 75. In this scenario, the heap usage
 at which the circuit breaker will trip is 3.75GB.
 
-Note that this circuit breaker is checked for each incoming search request and considers the current heap usage of the node, i.e every search
-request will get the live heap usage and compare it against the set memory threshold. The check does not impact performance,
-but any performance regressions that are suspected to be caused by this feature should be reported to the dev list.
 
+=== CPU Utilization Based Circuit Breaker
+This circuit breaker tracks CPU utilization and triggers if the average CPU utilization over the last minute
+exceeds a configurable threshold. Note that the value used in the computation is a one-minute average -- a sudden
+spike in traffic that then subsides might still cause the circuit breaker to trigger for a short while before the
+average updates and the breaker resolves. For more details of the calculation, please see https://en.wikipedia.org/wiki/Load_(computing).
+
+Configuration for CPU utilization based circuit breaker:
+
+[source,xml]
+----
+<str name="cpuEnabled">true</str>
+----
+
+Note that this configuration will be overridden by the global circuit breaker flag -- if circuit breakers are disabled, this flag
+will not help you.
+
+The triggering threshold is defined in units of CPU utilization. It is controlled by the below configuration:
+
+[source,xml]
+----
+<str name="cpuThreshold">75</str>
+----
 
 == Performance Considerations
-It is worth noting that while JVM circuit breaker does not add any noticeable overhead per query, having too many
+It is worth noting that while JVM or CPU circuit breakers do not add any noticeable overhead per query, having too many
 circuit breakers checked for a single request can cause a performance overhead.
 
 In addition, it is a good practice to exponentially back off while retrying requests on a busy node.
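+
+A minimal client-side sketch of such a retry policy (illustrative only; the status-code check, attempt limit, and initial delay are assumptions, not a prescribed Solr API):
+
+[source,java]
+----
+// Retry a query with exponential backoff when the node rejects it
+// because a circuit breaker has tripped (HTTP 503 Service Unavailable).
+QueryResponse queryWithBackoff(SolrClient client, SolrQuery q) throws Exception {
+  long delayMs = 100;
+  for (int attempt = 0; attempt < 5; attempt++) {
+    try {
+      return client.query(q);        // succeeds once the node's load subsides
+    } catch (SolrException e) {
+      if (e.code() != 503) throw e;  // only back off on overload responses
+      Thread.sleep(delayMs);
+      delayMs *= 2;                  // exponential backoff
+    }
+  }
+  throw new SolrException(SolrException.ErrorCode.SERVICE_UNAVAILABLE,
+      "node still overloaded after retries");
+}
+----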
diff --git a/solr/solr-ref-guide/src/cloud-screens.adoc b/solr/solr-ref-guide/src/cloud-screens.adoc
index f603e39..26aa2b5 100644
--- a/solr/solr-ref-guide/src/cloud-screens.adoc
+++ b/solr/solr-ref-guide/src/cloud-screens.adoc
@@ -23,7 +23,7 @@
 .Only Visible When using SolrCloud
 [NOTE]
 ====
-The "Cloud" menu option is only available on Solr instances running in <<getting-started-with-solrcloud.adoc#getting-started-with-solrcloud,SolrCloud mode>>. Single node or master/slave replication instances of Solr will not display this option.
+The "Cloud" menu option is only available on Solr instances running in <<getting-started-with-solrcloud.adoc#getting-started-with-solrcloud,SolrCloud mode>>. Single node or leader/follower replication instances of Solr will not display this option.
 ====
 
 Click on the "Cloud" option in the left-hand navigation, and a small sub-menu appears with options called "Nodes", "Tree", "ZK Status" and "Graph". The sub-view selected by default is "Nodes".
diff --git a/solr/solr-ref-guide/src/collection-specific-tools.adoc b/solr/solr-ref-guide/src/collection-specific-tools.adoc
index e3ae1c5..a927ae5 100644
--- a/solr/solr-ref-guide/src/collection-specific-tools.adoc
+++ b/solr/solr-ref-guide/src/collection-specific-tools.adoc
@@ -1,5 +1,5 @@
 = Collection-Specific Tools
-:page-children: analysis-screen, dataimport-screen, documents-screen, files-screen, query-screen, stream-screen, schema-browser-screen
+:page-children: analysis-screen, documents-screen, files-screen, query-screen, stream-screen, schema-browser-screen
 // Licensed to the Apache Software Foundation (ASF) under one
 // or more contributor license agreements.  See the NOTICE file
 // distributed with this work for additional information
@@ -24,7 +24,7 @@
 ====
 The "Collection Selector" pull-down menu is only available on Solr instances running in <<solrcloud.adoc#solrcloud,SolrCloud mode>>.
 
-Single node or master/slave replication instances of Solr will not display this menu, instead the Collection specific UI pages described in this section will be available in the <<core-specific-tools.adoc#core-specific-tools,Core Selector pull-down menu>>.
+Single node or leader/follower replication instances of Solr will not display this menu, instead the Collection specific UI pages described in this section will be available in the <<core-specific-tools.adoc#core-specific-tools,Core Selector pull-down menu>>.
 ====
 
 Clicking on the Collection Selector pull-down menu will show a list of the collections in your Solr cluster, with a search box that can be used to find a specific collection by name. When you select a collection from the pull-down, the main display of the page will display some basic metadata about the collection, and a secondary menu will appear in the left nav with links to additional collection specific administration screens.
@@ -35,7 +35,6 @@
 
 // TODO: SOLR-10655 BEGIN: refactor this into a 'collection-screens-list.include.adoc' file for reuse
 * <<analysis-screen.adoc#analysis-screen,Analysis>> - lets you analyze the data found in specific fields.
-* <<dataimport-screen.adoc#dataimport-screen,Dataimport>> - shows you information about the current status of the Data Import Handler.
 * <<documents-screen.adoc#documents-screen,Documents>> - provides a simple form allowing you to execute various Solr indexing commands directly from the browser.
 * <<files-screen.adoc#files-screen,Files>> - shows the current core configuration files such as `solrconfig.xml`.
 * <<query-screen.adoc#query-screen,Query>> - lets you submit a structured query about various elements of a core.
diff --git a/solr/solr-ref-guide/src/combining-distribution-and-replication.adoc b/solr/solr-ref-guide/src/combining-distribution-and-replication.adoc
index 17cae7d..0c5e3c3 100644
--- a/solr/solr-ref-guide/src/combining-distribution-and-replication.adoc
+++ b/solr/solr-ref-guide/src/combining-distribution-and-replication.adoc
@@ -18,16 +18,16 @@
 
 When your index is too large for a single machine and you have a query volume that single shards cannot keep up with, it's time to replicate each shard in your distributed search setup.
 
-The idea is to combine distributed search with replication. As shown in the figure below, a combined distributed-replication configuration features a master server for each shard and then 1-_n_ slaves that are replicated from the master. As in a standard replicated configuration, the master server handles updates and optimizations without adversely affecting query handling performance.
+The idea is to combine distributed search with replication. As shown in the figure below, a combined distributed-replication configuration features a leader server for each shard and then 1-_n_ followers that are replicated from the leader. As in a standard replicated configuration, the leader server handles updates and optimizations without adversely affecting query handling performance.
 
-Query requests should be load balanced across each of the shard slaves. This gives you both increased query handling capacity and fail-over backup if a server goes down.
+Query requests should be load balanced across each of the shard followers. This gives you both increased query handling capacity and fail-over backup if a server goes down.
 
-.A Solr configuration combining both replication and master-slave distribution.
+.A Solr configuration combining both replication and leader-follower distribution.
 image::images/combining-distribution-and-replication/worddav4101c16174820e932b44baa22abcfcd1.png[image,width=312,height=344]
 
 
-None of the master shards in this configuration know about each other. You index to each master, the index is replicated to each slave, and then searches are distributed across the slaves, using one slave from each master/slave shard.
+None of the leader shards in this configuration know about each other. You index to each leader, the index is replicated to each follower, and then searches are distributed across the followers, using one follower from each leader/follower shard.
 
-For high availability you can use a load balancer to set up a virtual IP for each shard's set of slaves. If you are new to load balancing, HAProxy (http://haproxy.1wt.eu/) is a good open source software load-balancer. If a slave server goes down, a good load-balancer will detect the failure using some technique (generally a heartbeat system), and forward all requests to the remaining live slaves that served with the failed slave. A single virtual IP should then be set up so that requests can hit a single IP, and get load balanced to each of the virtual IPs for the search slaves.
+For high availability you can use a load balancer to set up a virtual IP for each shard's set of followers. If you are new to load balancing, HAProxy (http://haproxy.1wt.eu/) is a good open source software load-balancer. If a follower server goes down, a good load-balancer will detect the failure using some technique (generally a heartbeat system), and forward all requests to the remaining live followers that served alongside the failed follower. A single virtual IP should then be set up so that requests can hit a single IP, and get load balanced to each of the virtual IPs for the search followers.
 
-With this configuration you will have a fully load balanced, search-side fault-tolerant system (Solr does not yet support fault-tolerant indexing). Incoming searches will be handed off to one of the functioning slaves, then the slave will distribute the search request across a slave for each of the shards in your configuration. The slave will issue a request to each of the virtual IPs for each shard, and the load balancer will choose one of the available slaves. Finally, the results will be combined into a single results set and returned. If any of the slaves go down, they will be taken out of rotation and the remaining slaves will be used. If a shard master goes down, searches can still be served from the slaves until you have corrected the problem and put the master back into production.
+With this configuration you will have a fully load balanced, search-side fault-tolerant system (Solr does not yet support fault-tolerant indexing). Incoming searches will be handed off to one of the functioning followers, then the follower will distribute the search request across a follower for each of the shards in your configuration. The follower will issue a request to each of the virtual IPs for each shard, and the load balancer will choose one of the available followers. Finally, the results will be combined into a single results set and returned. If any of the followers go down, they will be taken out of rotation and the remaining followers will be used. If a shard leader goes down, searches can still be served from the followers until you have corrected the problem and put the leader back into production.
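+
+As a rough illustration of the virtual-IP-per-shard idea (the host names, ports, and health-check path below are placeholders, not a recommended configuration), an HAProxy frontend/backend for one shard's followers might look like:
+
+[source,text]
+----
+frontend shard1_vip
+    bind 10.0.0.100:8983
+    default_backend shard1_followers
+
+backend shard1_followers
+    balance roundrobin
+    option httpchk GET /solr/<core_name>/admin/ping
+    server follower1 10.0.0.11:8983 check
+    server follower2 10.0.0.12:8983 check
+----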
diff --git a/solr/solr-ref-guide/src/config-api.adoc b/solr/solr-ref-guide/src/config-api.adoc
index 6e5d30b..529d271 100644
--- a/solr/solr-ref-guide/src/config-api.adoc
+++ b/solr/solr-ref-guide/src/config-api.adoc
@@ -373,9 +373,6 @@
 * `add-listener`
 * `update-listener`
 * `delete-listener`
-* `add-runtimelib`
-* `update-runtimelib`
-* `delete-runtimelib`
 * `add-expressible`
 * `update-expressible`
 * `delete-expressible`
diff --git a/solr/solr-ref-guide/src/config-sets.adoc b/solr/solr-ref-guide/src/config-sets.adoc
index f892517..8acb03a 100644
--- a/solr/solr-ref-guide/src/config-sets.adoc
+++ b/solr/solr-ref-guide/src/config-sets.adoc
@@ -16,7 +16,7 @@
 // specific language governing permissions and limitations
 // under the License.
 
-Configsets are a set of configuration files used in a Solr installation: `solrconfig.xml`, the schema, and then <<resource-loading.adoc#resource-loading,resources>> like language files, `synonyms.txt`, DIH-related configuration, and others.
+Configsets are a set of configuration files used in a Solr installation: `solrconfig.xml`, the schema, and then <<resource-loading.adoc#resource-loading,resources>> like language files, `synonyms.txt`, and others.
 
 Such configuration, _configsets_, can be named and then referenced by collections or cores, possibly with the intent to share them to avoid duplication.
 
diff --git a/solr/solr-ref-guide/src/configsets-api.adoc b/solr/solr-ref-guide/src/configsets-api.adoc
index 2ce4839..9b0cf26 100644
--- a/solr/solr-ref-guide/src/configsets-api.adoc
+++ b/solr/solr-ref-guide/src/configsets-api.adoc
@@ -19,7 +19,7 @@
 
 The Configsets API enables you to upload new configsets to ZooKeeper, create, and delete configsets when Solr is running SolrCloud mode.
 
-Configsets are a collection of configuration files such as `solrconfig.xml`, `synonyms.txt`, the schema, language-specific files, DIH-related configuration, and other collection-level configuration files (everything that normally lives in the `conf` directory). Solr ships with two example configsets (`_default` and `sample_techproducts_configs`) which can be used when creating collections. Using the same concept, you can create your own configsets and make them available when creating collections.
+Configsets are a collection of configuration files such as `solrconfig.xml`, `synonyms.txt`, the schema, language-specific files, and other collection-level configuration files (everything that normally lives in the `conf` directory). Solr ships with two example configsets (`_default` and `sample_techproducts_configs`) which can be used when creating collections. Using the same concept, you can create your own configsets and make them available when creating collections.
 
 This API provides a way to upload configuration files to ZooKeeper and share the same set of configuration files between two or more collections.
 
@@ -86,7 +86,6 @@
 
 A configset is uploaded in a "trusted" mode if authentication is enabled and the upload operation is performed as an authenticated request. Without authentication, a configset is uploaded in an "untrusted" mode. Upon creation of a collection using an "untrusted" configset, the following functionality will not work:
 
-* If specified in the configset, the DataImportHandler's ScriptTransformer will not initialize.
 * The XSLT transformer (`tr` parameter) cannot be used at request processing time.
 * If specified in the configset, the StatelessScriptUpdateProcessor will not initialize.
 * Collections won't initialize if <lib> directives are used in the configset. (Note: Libraries added to Solr's classpath don't need the <lib> directive)
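+
+For example, an authenticated upload that would be treated as "trusted" might look like the following sketch (credentials, file, and configset names are placeholders):
+
+[source,bash]
+----
+curl -u solr:SolrRocks -X POST --header "Content-Type:application/octet-stream" --data-binary @myconfigset.zip "http://localhost:8983/solr/admin/configs?action=UPLOAD&name=myConfigSet"
+----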
diff --git a/solr/solr-ref-guide/src/configuring-solrconfig-xml.adoc b/solr/solr-ref-guide/src/configuring-solrconfig-xml.adoc
index fccd9d2..f1f0fa2 100644
--- a/solr/solr-ref-guide/src/configuring-solrconfig-xml.adoc
+++ b/solr/solr-ref-guide/src/configuring-solrconfig-xml.adoc
@@ -93,10 +93,15 @@
 
 [source,json]
 ----
-{"userProps": {
-    "dih.db.url": "jdbc:oracle:thin:@localhost:1521",
-    "dih.db.user": "username",
-    "dih.db.pass": "password"}}
+{
+  "userProps":{"update.autoCreateFields":"false"},
+  "requestHandler":{"/myterms":{
+      "name":"/myterms",
+      "class":"solr.SearchHandler",
+      "defaults":{
+        "terms":true,
+        "distrib":false},
+      "components":["terms"]}}}
 ----
 
 For more details, see the section <<config-api.adoc#config-api,Config API>>.
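+
+For illustration, a user property like the one above could be set at runtime with a request along these lines (host and collection name are placeholders):
+
+[source,bash]
+----
+curl http://localhost:8983/solr/mycollection/config -H 'Content-type:application/json' -d '{"set-user-property": {"update.autoCreateFields": "false"}}'
+----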
diff --git a/solr/solr-ref-guide/src/core-specific-tools.adoc b/solr/solr-ref-guide/src/core-specific-tools.adoc
index 16c31c4..ab02c11 100644
--- a/solr/solr-ref-guide/src/core-specific-tools.adoc
+++ b/solr/solr-ref-guide/src/core-specific-tools.adoc
@@ -39,7 +39,6 @@
 
 // TODO: SOLR-10655 BEGIN: refactor this into a 'collection-screens-list.include.adoc' file for reuse
 * <<analysis-screen.adoc#analysis-screen,Analysis>> - lets you analyze the data found in specific fields.
-* <<dataimport-screen.adoc#dataimport-screen,Dataimport>> - shows you information about the current status of the Data Import Handler.
 * <<documents-screen.adoc#documents-screen,Documents>> - provides a simple form allowing you to execute various Solr indexing commands directly from the browser.
 * <<files-screen.adoc#files-screen,Files>> - shows the current core configuration files such as `solrconfig.xml`.
 * <<query-screen.adoc#query-screen,Query>> - lets you submit a structured query about various elements of a core.
diff --git a/solr/solr-ref-guide/src/coreadmin-api.adoc b/solr/solr-ref-guide/src/coreadmin-api.adoc
index 90f0ab8..7d390f6 100644
--- a/solr/solr-ref-guide/src/coreadmin-api.adoc
+++ b/solr/solr-ref-guide/src/coreadmin-api.adoc
@@ -19,7 +19,7 @@
 
 The Core Admin API is primarily used under the covers by the <<collections-api.adoc#collections-api,Collections API>> when running a <<solrcloud.adoc#solrcloud,SolrCloud>> cluster.
 
-SolrCloud users should not typically use the CoreAdmin API directly, but the API may be useful for users of single-node or master/slave Solr installations for core maintenance operations.
+SolrCloud users should not typically use the CoreAdmin API directly, but the API may be useful for users of single-node or leader/follower Solr installations for core maintenance operations.
 
 The CoreAdmin API is implemented by the CoreAdminHandler, which is a special purpose <<requesthandlers-and-searchcomponents-in-solrconfig.adoc#requesthandlers-and-searchcomponents-in-solrconfig,request handler>> that is used to manage Solr cores. Unlike other request handlers, the CoreAdminHandler is not attached to a single core. Instead, there is a single instance of the CoreAdminHandler in each Solr node that manages all the cores running in that node and is accessible at the `/solr/admin/cores` path.
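+
+For example, the status of a single core can be checked with a request such as this (host and core name are placeholders):
+
+[source,bash]
+----
+curl "http://localhost:8983/solr/admin/cores?action=STATUS&core=mycore"
+----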
 
diff --git a/solr/solr-ref-guide/src/cross-data-center-replication-cdcr.adoc b/solr/solr-ref-guide/src/cross-data-center-replication-cdcr.adoc
deleted file mode 100644
index f19b2c4..0000000
--- a/solr/solr-ref-guide/src/cross-data-center-replication-cdcr.adoc
+++ /dev/null
@@ -1,63 +0,0 @@
-= Cross Data Center Replication (CDCR)
-:page-children: cdcr-architecture, cdcr-config, cdcr-operations, cdcr-api
-// Licensed to the Apache Software Foundation (ASF) under one
-// or more contributor license agreements.  See the NOTICE file
-// distributed with this work for additional information
-// regarding copyright ownership.  The ASF licenses this file
-// to you under the Apache License, Version 2.0 (the
-// "License"); you may not use this file except in compliance
-// with the License.  You may obtain a copy of the License at
-//
-//   http://www.apache.org/licenses/LICENSE-2.0
-//
-// Unless required by applicable law or agreed to in writing,
-// software distributed under the License is distributed on an
-// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-// KIND, either express or implied.  See the License for the
-// specific language governing permissions and limitations
-// under the License.
-
-Cross Data Center Replication (CDCR) allows you to create multiple SolrCloud data centers and keep them in sync.
-
-[WARNING]
-.CDCR is deprecated
-====
-This feature (in its current form) is deprecated and will likely be removed in 9.0.
-
-Anyone currently using CDCR should consider migrating away from it.
-There are several open issues which make CDCR complex to maintain and generally unstable.
-
-The simplest alternative to CDCR is to manually maintain two Solr implementations in separate data centers and manage them as completely separate installations.
-While this may require more initial setup, it will be more stable in the long run.
-
-There are other alternatives, and the Solr community is working to identify the best recommended replacement in time for 9.0.
-====
-
-== What is CDCR?
-
-The <<solrcloud.adoc#solrcloud,SolrCloud>> architecture is designed to support <<near-real-time-searching.adoc#near-real-time-searching,Near Real Time (NRT)>> searches on a Solr collection that usual consists of multiple nodes in a single data center. CDCR augments this model by forwarding updates from a Solr collection in one data center to a parallel Solr collection in another data center where the network latencies are greater than the SolrCloud model was designed to accommodate.
-
-For more information about CDCR, see the following sections:
-
-* *<<cdcr-architecture.adoc#cdcr-architecture,CDCR Architecture>>*: A detailed overview of how CDCR works.
-* *<<cdcr-config.adoc#cdcr-config,CDCR Configuration>>*: How to set up and initialize CDCR for your cluster.
-* *<<cdcr-operations.adoc#cdcr-operations,CDCR Operations>>*: Information on monitoring CDCR and upgrading your cluster when using CDCR.
-* *<<cdcr-api.adoc#cdcr-api,CDCR API>>*: Reference for the CDCR API.
-
-
-// Are there any terms here that are new? If not, I think we should remove this.
-== CDCR Glossary
-
-For the purposes of discussing CDCR, the following terminology is used. If you are already familiar with SolrCloud, many of these terms will already be familiar to you.
-
-[glossary]
-Node:: A JVM instance running Solr; a server.
-Cluster:: A set of Solr nodes managed as a single unit by a ZooKeeper ensemble hosting one or more Collections.
-Data Center:: A group of networked servers hosting a Solr cluster. For CDCR, the terms _Cluster_ and _Data Center_ are interchangeable as we assume that each Solr cluster is hosted in a different group of networked servers.
-Shard:: A sub-index of a single logical collection. This may be spread across multiple nodes of the cluster. Each shard can have 1-N replicas.
-Leader:: Each shard has replica identified as its leader. All the writes for documents belonging to a shard are routed through the leader.
-Replica:: A copy of a shard for use in failover or load balancing. Replicas comprising a shard can either be leaders or non-leaders.
-Follower:: A convenience term for a replica that is _not_ the leader of a shard.
-Collection:: A logical index, consisting of one or more shards. A cluster can have multiple collections.
-Update:: An operation that changes the collection's index in any way. This could be adding a new document, deleting documents or changing a document.
-Update Log(s):: An append-only log of write operations maintained by each node.
diff --git a/solr/solr-ref-guide/src/dataimport-screen.adoc b/solr/solr-ref-guide/src/dataimport-screen.adoc
deleted file mode 100644
index 1f28cd5..0000000
--- a/solr/solr-ref-guide/src/dataimport-screen.adoc
+++ /dev/null
@@ -1,28 +0,0 @@
-= Dataimport Screen
-// Licensed to the Apache Software Foundation (ASF) under one
-// or more contributor license agreements.  See the NOTICE file
-// distributed with this work for additional information
-// regarding copyright ownership.  The ASF licenses this file
-// to you under the Apache License, Version 2.0 (the
-// "License"); you may not use this file except in compliance
-// with the License.  You may obtain a copy of the License at
-//
-//   http://www.apache.org/licenses/LICENSE-2.0
-//
-// Unless required by applicable law or agreed to in writing,
-// software distributed under the License is distributed on an
-// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-// KIND, either express or implied.  See the License for the
-// specific language governing permissions and limitations
-// under the License.
-
-WARNING: The Data Import Handler is deprecated as of v8.6 and is scheduled to be removed in 9.0.
-
-The Dataimport screen shows the configuration of the DataImportHandler (DIH) and allows you start, and monitor the status of, import commands as defined by the options selected on the screen and defined in the configuration file.
-
-.The Dataimport Screen
-image::images/dataimport-screen/dataimport.png[image,width=485,height=250]
-
-This screen also lets you adjust various options to control how the data is imported to Solr, and view the data import configuration file that controls the import.
-
-For more information about data importing with DIH, see the section on <<uploading-structured-data-store-data-with-the-data-import-handler.adoc#uploading-structured-data-store-data-with-the-data-import-handler,Uploading Structured Data Store Data with the Data Import Handler>>.
diff --git a/solr/solr-ref-guide/src/distributed-search-with-index-sharding.adoc b/solr/solr-ref-guide/src/distributed-search-with-index-sharding.adoc
index f2c745f..d51c142 100644
--- a/solr/solr-ref-guide/src/distributed-search-with-index-sharding.adoc
+++ b/solr/solr-ref-guide/src/distributed-search-with-index-sharding.adoc
@@ -61,7 +61,7 @@
 
 === Shards Whitelist
 
-The nodes allowed in the `shards` parameter is configurable through the `shardsWhitelist` property in `solr.xml`. This whitelist is automatically configured for SolrCloud but needs explicit configuration for master/slave mode. Read more details in the section <<distributed-requests.adoc#configuring-the-shardhandlerfactory,Configuring the ShardHandlerFactory>>.
+The list of nodes allowed in the `shards` parameter is configurable through the `shardsWhitelist` property in `solr.xml`. This whitelist is automatically configured for SolrCloud but needs explicit configuration for leader/follower mode. Read more details in the section <<distributed-requests.adoc#configuring-the-shardhandlerfactory,Configuring the ShardHandlerFactory>>.
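+
+For example, since the default `solr.xml` wires this property to a system property, a whitelist could be sketched at startup like so (host names are placeholders, and this assumes the default `solr.xml` is in use):
+
+[source,bash]
+----
+bin/solr start -Dsolr.shardsWhitelist=host1:8983/solr,host2:8983/solr
+----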
 
 == Limitations to Distributed Search
 
diff --git a/solr/solr-ref-guide/src/documents-fields-and-schema-design.adoc b/solr/solr-ref-guide/src/documents-fields-and-schema-design.adoc
index 8e7baff..e3aa3ea 100644
--- a/solr/solr-ref-guide/src/documents-fields-and-schema-design.adoc
+++ b/solr/solr-ref-guide/src/documents-fields-and-schema-design.adoc
@@ -1,5 +1,5 @@
 = Documents, Fields, and Schema Design
-:page-children: overview-of-documents-fields-and-schema-design, solr-field-types, defining-fields, copying-fields, dynamic-fields, other-schema-elements, schema-api, putting-the-pieces-together, docvalues, schemaless-mode
+:page-children: overview-of-documents-fields-and-schema-design, solr-field-types, defining-fields, copying-fields, dynamic-fields, other-schema-elements, schema-api, putting-the-pieces-together, docvalues, schemaless-mode, luke-request-handler
 // Licensed to the Apache Software Foundation (ASF) under one
 // or more contributor license agreements.  See the NOTICE file
 // distributed with this work for additional information
diff --git a/solr/solr-ref-guide/src/images/dataimport-screen/dataimport.png b/solr/solr-ref-guide/src/images/dataimport-screen/dataimport.png
deleted file mode 100644
index 7444c27..0000000
--- a/solr/solr-ref-guide/src/images/dataimport-screen/dataimport.png
+++ /dev/null
Binary files differ
diff --git a/solr/solr-ref-guide/src/implicit-requesthandlers.adoc b/solr/solr-ref-guide/src/implicit-requesthandlers.adoc
index d37fc53..cd00c1b 100644
--- a/solr/solr-ref-guide/src/implicit-requesthandlers.adoc
+++ b/solr/solr-ref-guide/src/implicit-requesthandlers.adoc
@@ -44,11 +44,9 @@
 |API Endpoints |Class & Javadocs |Paramset
 |v1: `solr/admin/info/health`
 
-v2: `api/node/health` |{solr-javadocs}/solr-core/org/apache/solr/handler/admin/HealthCheckHandler.html[HealthCheckHandler] |`_ADMIN_HEALTH`
+v2: `api/node/health` |{solr-javadocs}/solr-core/org/apache/solr/handler/admin/HealthCheckHandler.html[HealthCheckHandler] |
 |===
 +
-This endpoint can also take the collection or core name in the path (`solr/<collection>/admin/health` or `solr/<core>/admin/health`).
-+
 This endpoint also accepts additional request parameters. Please see {solr-javadocs}/solr-core/org/apache/solr/handler/admin/HealthCheckHandler.html[Javadocs] for details.
 
 Logging:: Retrieve and modify registered loggers.
@@ -63,7 +61,7 @@
 
 Luke:: Expose the internal Lucene index. This handler must have a collection name in the path to the endpoint.
 +
-*Documentation*: https://cwiki.apache.org/confluence/display/solr/LukeRequestHandler
+*Documentation*: <<luke-request-handler.adoc#luke-request-handler,Luke Request Handler>>
 +
 [cols="3*.",frame=none,grid=cols,options="header"]
 |===
@@ -184,7 +182,7 @@
 |`solr/debug/dump` |{solr-javadocs}/solr-core/org/apache/solr/handler/DumpRequestHandler.html[DumpRequestHandler] |`_DEBUG_DUMP`
 |===
 
-Replication:: Replicate indexes for SolrCloud recovery and Master/Slave index distribution. This handler must have a core name in the path to the endpoint.
+Replication:: Replicate indexes for SolrCloud recovery and Leader/Follower index distribution. This handler must have a core name in the path to the endpoint.
 +
 [cols="3*.",frame=none,grid=cols,options="header"]
 |===
diff --git a/solr/solr-ref-guide/src/index-replication.adoc b/solr/solr-ref-guide/src/index-replication.adoc
index ce7d66e..e635ca1 100644
--- a/solr/solr-ref-guide/src/index-replication.adoc
+++ b/solr/solr-ref-guide/src/index-replication.adoc
@@ -16,11 +16,11 @@
 // specific language governing permissions and limitations
 // under the License.
 
-Index Replication distributes complete copies of a master index to one or more slave servers. The master server continues to manage updates to the index. All querying is handled by the slaves. This division of labor enables Solr to scale to provide adequate responsiveness to queries against large search volumes.
+Index Replication distributes complete copies of a leader index to one or more follower servers. The leader server continues to manage updates to the index. All querying is handled by the followers. This division of labor enables Solr to scale to provide adequate responsiveness to queries against large search volumes.
 
-The figure below shows a Solr configuration using index replication. The master server's index is replicated on the slaves.
+The figure below shows a Solr configuration using index replication. The leader server's index is replicated on the followers.
 
-.A Solr index can be replicated across multiple slave servers, which then process requests.
+.A Solr index can be replicated across multiple follower servers, which then process requests.
 image::images/index-replication/worddav2b7e14725d898b4104cdd9c502fc77cd.png[image,width=159,height=235]
 
 
@@ -38,9 +38,9 @@
 .Replication In SolrCloud
 [NOTE]
 ====
-Although there is no explicit concept of "master/slave" nodes in a <<solrcloud.adoc#solrcloud,SolrCloud>> cluster, the `ReplicationHandler` discussed on this page is still used by SolrCloud as needed to support "shard recovery" – but this is done in a peer to peer manner.
+Although there is no explicit concept of "leader/follower" nodes in a <<solrcloud.adoc#solrcloud,SolrCloud>> cluster, the `ReplicationHandler` discussed on this page is still used by SolrCloud as needed to support "shard recovery", but this is done in a peer-to-peer manner.
 
-When using SolrCloud, the `ReplicationHandler` must be available via the `/replication` path. Solr does this implicitly unless overridden explicitly in your `solrconfig.xml`, but if you wish to override the default behavior, make certain that you do not explicitly set any of the "master" or "slave" configuration options mentioned below, or they will interfere with normal SolrCloud operation.
+When using SolrCloud, the `ReplicationHandler` must be available via the `/replication` path. Solr does this implicitly unless overridden explicitly in your `solrconfig.xml`, but if you wish to override the default behavior, make certain that you do not explicitly set any of the "leader" or "follower" configuration options mentioned below, or they will interfere with normal SolrCloud operation.
 ====
 
 == Replication Terminology
@@ -51,19 +51,19 @@
 A Lucene index is a directory of files. These files make up the searchable and returnable data of a Solr Core.
 
 Distribution::
-The copying of an index from the master server to all slaves. The distribution process takes advantage of Lucene's index file structure.
+The copying of an index from the leader server to all followers. The distribution process takes advantage of Lucene's index file structure.
 
 Inserts and Deletes::
 As inserts and deletes occur in the index, the directory remains unchanged. Documents are always inserted into newly created segment files. Documents that are deleted are not removed from the segment files. They are flagged in the file as deletable, and are not removed from the segments until the segment is merged as part of normal index updates.
 
-Master and Slave::
-A Solr replication master is a single node which receives all updates initially and keeps everything organized. Solr replication slave nodes receive no updates directly, instead all changes (such as inserts, updates, deletes, etc.) are made against the single master node. Changes made on the master are distributed to all the slave nodes which service all query requests from the clients.
+Leader and Follower::
+A Solr replication leader is a single node which receives all updates initially and keeps everything organized. Solr replication follower nodes receive no updates directly; instead, all changes (such as inserts, updates, deletes, etc.) are made against the single leader node. Changes made on the leader are distributed to all the follower nodes which service all query requests from the clients.
 
 Update::
 An update is a single change request against a single Solr instance. It may be a request to delete a document, add a new document, change a document, delete all documents matching a query, etc. Updates are handled synchronously within an individual Solr instance.
 
 Optimization::
-A process that compacts the index and merges segments in order to improve query performance. Optimization should only be run on the master nodes. An optimized index may give query performance gains compared to an index that has become fragmented over a period of time with many updates. Distributing an optimized index requires a much longer time than the distribution of new segments to an un-optimized index.
+A process that compacts the index and merges segments in order to improve query performance. Optimization should only be run on the leader nodes. An optimized index may give query performance gains compared to an index that has become fragmented over a period of time with many updates. Distributing an optimized index requires a much longer time than the distribution of new segments to an un-optimized index.
 
 WARNING: optimizing is not recommended unless it can be performed regularly as it may lead to a significantly larger portion of the index consisting of deleted documents than would normally be the case.
 
@@ -74,17 +74,17 @@
 A parameter that controls the number of segments in an index. For example, when mergeFactor is set to 3, Solr will fill one segment with documents until the limit maxBufferedDocs is met, then it will start a new segment. When the number of segments specified by mergeFactor is reached (in this example, 3) then Solr will merge all the segments into a single index file, then begin writing new documents to a new segment.
 
 Snapshot::
-A directory containing hard links to the data files of an index. Snapshots are distributed from the master nodes when the slaves pull them, "smart copying" any segments the slave node does not have in snapshot directory that contains the hard links to the most recent index data files.
+A directory containing hard links to the data files of an index. Snapshots are distributed from the leader nodes when the followers pull them, "smart copying" any segments the follower node does not already have. The snapshot directory contains the hard links to the most recent index data files.
 
 
 == Configuring the ReplicationHandler
 
-In addition to `ReplicationHandler` configuration options specific to the master/slave roles, there are a few special configuration options that are generally supported (even when using SolrCloud).
+In addition to `ReplicationHandler` configuration options specific to the leader/follower roles, there are a few special configuration options that are generally supported (even when using SolrCloud).
 
 * `maxNumberOfBackups` an integer value dictating the maximum number of backups this node will keep on disk as it receives `backup` commands.
 * Similar to most other request handlers in Solr you may configure a set of <<requesthandlers-and-searchcomponents-in-solrconfig.adoc#searchhandlers,defaults, invariants, and/or appends>> parameters corresponding with any request parameters supported by the `ReplicationHandler` when <<HTTP API Commands for the ReplicationHandler,processing commands>>.
 
-=== Configuring the Replication RequestHandler on a Master Server
+=== Configuring the Replication RequestHandler on a Leader Server
 
 Before running a replication, you should set the following parameters on initialization of the handler:
 
@@ -103,12 +103,12 @@
 `commitReserveDuration`::
 If your commits are very frequent and your network is slow, you can tweak this parameter to increase the amount of time expected to be required to transfer data. The default is `00:00:10` i.e., 10 seconds.
 
-The example below shows a possible 'master' configuration for the `ReplicationHandler`, including a fixed number of backups and an invariant setting for the `maxWriteMBPerSec` request parameter to prevent slaves from saturating its network interface
+The example below shows a possible 'leader' configuration for the `ReplicationHandler`, including a fixed number of backups and an invariant setting for the `maxWriteMBPerSec` request parameter to prevent followers from saturating the leader's network interface.
 
 [source,xml]
 ----
 <requestHandler name="/replication" class="solr.ReplicationHandler">
-  <lst name="master">
+  <lst name="leader">
     <str name="replicateAfter">optimize</str>
     <str name="backupAfter">optimize</str>
     <str name="confFiles">schema.xml,stopwords.txt,elevate.xml</str>
@@ -123,32 +123,32 @@
 
 ==== Replicating solrconfig.xml
 
-In the configuration file on the master server, include a line like the following:
+In the configuration file on the leader server, include a line like the following:
 
 [source,xml]
 ----
-<str name="confFiles">solrconfig_slave.xml:solrconfig.xml,x.xml,y.xml</str>
+<str name="confFiles">solrconfig_follower.xml:solrconfig.xml,x.xml,y.xml</str>
 ----
 
-This ensures that the local configuration `solrconfig_slave.xml` will be saved as `solrconfig.xml` on the slave. All other files will be saved with their original names.
+This ensures that the local configuration `solrconfig_follower.xml` will be saved as `solrconfig.xml` on the follower. All other files will be saved with their original names.
 
-On the master server, the file name of the slave configuration file can be anything, as long as the name is correctly identified in the `confFiles` string; then it will be saved as whatever file name appears after the colon ':'.
+On the leader server, the file name of the follower configuration file can be anything, as long as the name is correctly identified in the `confFiles` string; then it will be saved as whatever file name appears after the colon ':'.
 
-=== Configuring the Replication RequestHandler on a Slave Server
+=== Configuring the Replication RequestHandler on a Follower Server
 
-The code below shows how to configure a ReplicationHandler on a slave.
+The code below shows how to configure a ReplicationHandler on a follower.
 
 [source,xml]
 ----
 <requestHandler name="/replication" class="solr.ReplicationHandler">
-  <lst name="slave">
+  <lst name="follower">
 
-    <!-- fully qualified url for the replication handler of master. It is
+    <!-- fully qualified url for the replication handler of leader. It is
          possible to pass on this as a request param for the fetchindex command -->
-    <str name="masterUrl">http://remote_host:port/solr/core_name/replication</str>
+    <str name="leaderUrl">http://remote_host:port/solr/core_name/replication</str>
 
-    <!-- Interval in which the slave should poll master.  Format is HH:mm:ss .
-         If this is absent slave does not poll automatically.
+    <!-- Interval in which the follower should poll leader.  Format is HH:mm:ss .
+         If this is absent follower does not poll automatically.
 
          But a fetchindex can be triggered from the admin or the http API -->
 
@@ -158,13 +158,13 @@
 
     <!-- To use compression while transferring the index files. The possible
          values are internal|external.  If the value is 'external' make sure
-         that your master Solr has the settings to honor the accept-encoding header.
+         that your leader Solr has the settings to honor the accept-encoding header.
          If it is 'internal' everything will be taken care of automatically.
          USE THIS ONLY IF YOUR BANDWIDTH IS LOW.
          THIS CAN ACTUALLY SLOW DOWN REPLICATION IN A LAN -->
     <str name="compression">internal</str>
 
-    <!-- The following values are used when the slave connects to the master to
+    <!-- The following values are used when the follower connects to the leader to
          download the index files.  Default values implicitly set as 5000ms and
          10000ms respectively. The user DOES NOT need to specify these unless the
          bandwidth is extremely low or if there is an extremely high latency -->
@@ -172,7 +172,7 @@
     <str name="httpConnTimeout">5000</str>
     <str name="httpReadTimeout">10000</str>
 
-    <!-- If HTTP Basic authentication is enabled on the master, then the slave
+    <!-- If HTTP Basic authentication is enabled on the leader, then the follower
          can be configured with the following -->
 
     <str name="httpBasicAuthUser">username</str>
@@ -183,23 +183,23 @@
 
 == Setting Up a Repeater with the ReplicationHandler
 
-A master may be able to serve only so many slaves without affecting performance. Some organizations have deployed slave servers across multiple data centers. If each slave downloads the index from a remote data center, the resulting download may consume too much network bandwidth. To avoid performance degradation in cases like this, you can configure one or more slaves as repeaters. A repeater is simply a node that acts as both a master and a slave.
+A leader may be able to serve only so many followers without affecting performance. Some organizations have deployed follower servers across multiple data centers. If each follower downloads the index from a remote data center, the resulting download may consume too much network bandwidth. To avoid performance degradation in cases like this, you can configure one or more followers as repeaters. A repeater is simply a node that acts as both a leader and a follower.
 
-* To configure a server as a repeater, the definition of the Replication `requestHandler` in the `solrconfig.xml` file must include file lists of use for both masters and slaves.
-* Be sure to set the `replicateAfter` parameter to commit, even if `replicateAfter` is set to optimize on the main master. This is because on a repeater (or any slave), a commit is called only after the index is downloaded. The optimize command is never called on slaves.
-* Optionally, one can configure the repeater to fetch compressed files from the master through the compression parameter to reduce the index download time.
+* To configure a server as a repeater, the definition of the Replication `requestHandler` in the `solrconfig.xml` file must include the configuration lists used for both leaders and followers.
+* Be sure to set the `replicateAfter` parameter to `commit`, even if `replicateAfter` is set to `optimize` on the main leader. This is because on a repeater (or any follower), a commit is called only after the index is downloaded. The optimize command is never called on followers.
+* Optionally, one can configure the repeater to fetch compressed files from the leader through the `compression` parameter to reduce the index download time.
 
 Here is an example of a ReplicationHandler configuration for a repeater:
 
 [source,xml]
 ----
 <requestHandler name="/replication" class="solr.ReplicationHandler">
-  <lst name="master">
+  <lst name="leader">
     <str name="replicateAfter">commit</str>
     <str name="confFiles">schema.xml,stopwords.txt,synonyms.txt</str>
   </lst>
-  <lst name="slave">
-    <str name="masterUrl">http://master.solr.company.com:8983/solr/core_name/replication</str>
+  <lst name="follower">
+    <str name="leaderUrl">http://leader.solr.company.com:8983/solr/core_name/replication</str>
     <str name="pollInterval">00:00:60</str>
   </lst>
 </requestHandler>
@@ -207,13 +207,13 @@
 
 == Commit and Optimize Operations
 
-When a commit or optimize operation is performed on the master, the RequestHandler reads the list of file names which are associated with each commit point. This relies on the `replicateAfter` parameter in the configuration to decide which types of events should trigger replication.
+When a commit or optimize operation is performed on the leader, the RequestHandler reads the list of file names which are associated with each commit point. This relies on the `replicateAfter` parameter in the configuration to decide which types of events should trigger replication.
 
 These operations are supported:
 
-* `commit`: Triggers replication whenever a commit is performed on the master index.
-* `optimize`: Triggers replication whenever the master index is optimized.
-* `startup`: Triggers replication whenever the master index starts up.
+* `commit`: Triggers replication whenever a commit is performed on the leader index.
+* `optimize`: Triggers replication whenever the leader index is optimized.
+* `startup`: Triggers replication whenever the leader index starts up.
 
 The `replicateAfter` parameter can accept multiple arguments. For example:
 
@@ -224,91 +224,91 @@
 <str name="replicateAfter">optimize</str>
 ----
 
-== Slave Replication
+== Follower Replication
 
-The master is totally unaware of the slaves.
+The leader is totally unaware of the followers.
 
-The slave continuously keeps polling the master (depending on the `pollInterval` parameter) to check the current index version of the master. If the slave finds out that the master has a newer version of the index it initiates a replication process. The steps are as follows:
+The follower continually polls the leader (at the interval set by the `pollInterval` parameter) to check the current index version of the leader. If the follower finds that the leader has a newer version of the index, it initiates a replication process. The steps are as follows (see the example after this list):
 
-* The slave issues a `filelist` command to get the list of the files. This command returns the names of the files as well as some metadata (for example, size, a lastmodified timestamp, an alias if any).
-* The slave checks with its own index if it has any of those files in the local index. It then runs the filecontent command to download the missing files. This uses a custom format (akin to the HTTP chunked encoding) to download the full content or a part of each file. If the connection breaks in between, the download resumes from the point it failed. At any point, the slave tries 5 times before giving up a replication altogether.
-* The files are downloaded into a temp directory, so that if either the slave or the master crashes during the download process, no files will be corrupted. Instead, the current replication will simply abort.
-* After the download completes, all the new files are moved to the live index directory and the file's timestamp is same as its counterpart on the master.
-* A commit command is issued on the slave by the Slave's ReplicationHandler and the new index is loaded.
+* The follower issues a `filelist` command to get the list of the files. This command returns the names of the files as well as some metadata (for example, size, a last-modified timestamp, an alias if any).
+* The follower checks its own index to see which of those files it already has locally. It then runs the `filecontent` command to download the missing files. This uses a custom format (akin to HTTP chunked encoding) to download the full content or a part of each file. If the connection breaks in between, the download resumes from the point it failed. At any point, the follower tries 5 times before giving up on the replication altogether.
+* The files are downloaded into a temp directory, so that if either the follower or the leader crashes during the download process, no files will be corrupted. Instead, the current replication will simply abort.
+* After the download completes, all the new files are moved to the live index directory and each file's timestamp is the same as its counterpart on the leader.
+* A commit command is issued on the follower by the follower's ReplicationHandler and the new index is loaded.
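+
+The same cycle can be walked through manually with the HTTP API described later on this page (host names are placeholders):
+
+[source,bash]
+----
+# Ask the leader for the version of its latest replicatable index
+curl "http://leader_host:8983/solr/core_name/replication?command=indexversion"
+
+# Tell the follower to fetch the newer index immediately
+curl "http://follower_host:8983/solr/core_name/replication?command=fetchindex"
+----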
 
 === Replicating Configuration Files
 
-To replicate configuration files, list them using using the `confFiles` parameter. Only files found in the `conf` directory of the master's Solr instance will be replicated.
+To replicate configuration files, list them using the `confFiles` parameter. Only files found in the `conf` directory of the leader's Solr instance will be replicated.
 
-Solr replicates configuration files only when the index itself is replicated. That means even if a configuration file is changed on the master, that file will be replicated only after there is a new commit/optimize on master's index.
+Solr replicates configuration files only when the index itself is replicated. That means even if a configuration file is changed on the leader, that file will be replicated only after there is a new commit/optimize on the leader's index.
 
-Unlike the index files, where the timestamp is good enough to figure out if they are identical, configuration files are compared against their checksum. The `schema.xml` files (on master and slave) are judged to be identical if their checksums are identical.
+Unlike the index files, where the timestamp is good enough to figure out if they are identical, configuration files are compared against their checksum. The `schema.xml` files (on leader and follower) are judged to be identical if their checksums are identical.
 
 As a precaution when replicating configuration files, Solr copies configuration files to a temporary directory before moving them into their ultimate location in the conf directory. The old configuration files are then renamed and kept in the same `conf/` directory. The ReplicationHandler does not automatically clean up these old files.
 
 If a replication involves downloading at least one configuration file, the ReplicationHandler issues a core-reload command instead of a commit command.
 
-=== Resolving Corruption Issues on Slave Servers
+=== Resolving Corruption Issues on Follower Servers
 
-If documents are added to the slave, then the slave is no longer in sync with its master. However, the slave will not undertake any action to put itself in sync, until the master has new index data.
+If documents are added to the follower, then the follower is no longer in sync with its leader. However, the follower will not undertake any action to put itself in sync until the leader has new index data.
 
-When a commit operation takes place on the master, the index version of the master becomes different from that of the slave. The slave then fetches the list of files and finds that some of the files present on the master are also present in the local index but with different sizes and timestamps. This means that the master and slave have incompatible indexes.
+When a commit operation takes place on the leader, the index version of the leader becomes different from that of the follower. The follower then fetches the list of files and finds that some of the files present on the leader are also present in the local index but with different sizes and timestamps. This means that the leader and follower have incompatible indexes.
 
-To correct this problem, the slave then copies all the index files from master to a new index directory and asks the core to load the fresh index from the new directory.
+To correct this problem, the follower then copies all the index files from the leader to a new index directory and asks the core to load the fresh index from the new directory.
 
 == HTTP API Commands for the ReplicationHandler
 
 You can use the HTTP commands below to control the ReplicationHandler's operations.
 
 `enablereplication`::
-Enable replication on the "master" for all its slaves.
+Enable replication on the "leader" for all its followers.
 +
 [source,bash]
-http://_master_host:port_/solr/_core_name_/replication?command=enablereplication
+http://_leader_host:port_/solr/_core_name_/replication?command=enablereplication
 
 `disablereplication`::
-Disable replication on the master for all its slaves.
+Disable replication on the leader for all its followers.
 +
 [source,bash]
-http://_master_host:port_/solr/_core_name_/replication?command=disablereplication
+http://_leader_host:port_/solr/_core_name_/replication?command=disablereplication
 
 `indexversion`::
-Return the version of the latest replicatable index on the specified master or slave.
+Return the version of the latest replicatable index on the specified leader or follower.
 +
 [source,bash]
 http://_host:port_/solr/_core_name_/replication?command=indexversion
 
 `fetchindex`::
-Force the specified slave to fetch a copy of the index from its master.
+Force the specified follower to fetch a copy of the index from its leader.
 +
 [source,bash]
-http://_slave_host:port_/solr/_core_name_/replication?command=fetchindex
+http://_follower_host:port_/solr/_core_name_/replication?command=fetchindex
 +
-If you like, you can pass an extra attribute such as `masterUrl` or `compression` (or any other parameter which is specified in the `<lst name="slave">` tag) to do a one time replication from a master. This obviates the need for hard-coding the master in the slave.
+If you like, you can pass an extra attribute such as `leaderUrl` or `compression` (or any other parameter which is specified in the `<lst name="follower">` tag) to do a one time replication from a leader. This obviates the need for hard-coding the leader in the follower.
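++
+For example, a one-time pull from an explicitly named leader (placeholder hosts):
++
+[source,bash]
+http://_follower_host:port_/solr/_core_name_/replication?command=fetchindex&leaderUrl=http://_leader_host:port_/solr/_core_name_/replication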
 
 `abortfetch`::
-Abort copying an index from a master to the specified slave.
+Abort copying an index from a leader to the specified follower.
 +
 [source,bash]
-http://_slave_host:port_/solr/_core_name_/replication?command=abortfetch
+http://_follower_host:port_/solr/_core_name_/replication?command=abortfetch
 
 `enablepoll`::
-Enable the specified slave to poll for changes on the master.
+Enable the specified follower to poll for changes on the leader.
 +
 [source,bash]
-http://_slave_host:port_/solr/_core_name_/replication?command=enablepoll
+http://_follower_host:port_/solr/_core_name_/replication?command=enablepoll
 
 `disablepoll`::
-Disable the specified slave from polling for changes on the master.
+Disable the specified follower from polling for changes on the leader.
 +
 [source,bash]
-http://_slave_host:port_/solr/_core_name_/replication?command=disablepoll
+http://_follower_host:port_/solr/_core_name_/replication?command=disablepoll
 
 `details`::
 Retrieve configuration details and current status.
 +
 [source,bash]
-http://_slave_host:port_/solr/_core_name_/replication?command=details
+http://_follower_host:port_/solr/_core_name_/replication?command=details
 
 `filelist`::
 Retrieve a list of Lucene files present in the specified host's index.
@@ -319,10 +319,10 @@
 You can discover the generation number of the index by running the `indexversion` command.
 
 `backup`::
-Create a backup on master if there are committed index data in the server; otherwise, does nothing.
+Create a backup on the leader if there is committed index data in the server; otherwise, it does nothing.
 +
 [source,bash]
-http://_master_host:port_/solr/_core_name_/replication?command=backup
+http://_leader_host:port_/solr/_core_name_/replication?command=backup
 +
 This command is useful for making periodic backups. There are several supported request parameters:
 +
@@ -335,7 +335,7 @@
 Restore a backup from a backup repository.
 +
 [source,bash]
-http://_master_host:port_/solr/_core_name_/replication?command=restore
+http://_leader_host:port_/solr/_core_name_/replication?command=restore
 +
 This command is used to restore a backup. There are several supported request parameters:
 +
@@ -347,7 +347,7 @@
 Check the status of a running restore operation.
 +
 [source,bash]
-http://_master_host:port_/solr/_core_name_/replication?command=restorestatus
+http://_leader_host:port_/solr/_core_name_/replication?command=restorestatus
 +
 This command is used to check the status of a restore operation. This command takes no parameters.
 +
@@ -357,7 +357,7 @@
 Delete any backup created using the `backup` command.
 +
 [source,bash]
-http://_master_host:port_ /solr/_core_name_/replication?command=deletebackup
+http://_leader_host:port_/solr/_core_name_/replication?command=deletebackup
 +
 There are two supported parameters:
 
@@ -369,15 +369,15 @@
 
 Optimizing an index is not something most users should generally worry about, but users should be aware of the impacts of optimizing an index when using the `ReplicationHandler`.
 
-The time required to optimize a master index can vary dramatically. A small index may be optimized in minutes. A very large index may take hours. The variables include the size of the index and the speed of the hardware.
+The time required to optimize a leader index can vary dramatically. A small index may be optimized in minutes. A very large index may take hours. The variables include the size of the index and the speed of the hardware.
 
-Distributing a newly optimized index may take only a few minutes or up to an hour or more, again depending on the size of the index and the performance capabilities of network connections and disks. During optimization the machine is under load and does not process queries very well. Given a schedule of updates being driven a few times an hour to the slaves, we cannot run an optimize with every committed snapshot.
+Distributing a newly optimized index may take only a few minutes or up to an hour or more, again depending on the size of the index and the performance capabilities of network connections and disks. During optimization the machine is under load and does not process queries very well. Given a schedule of updates pushed to the followers a few times an hour, we cannot run an optimize with every committed snapshot.
 
 Copying an optimized index means that the *entire* index will need to be transferred during the next `snappull`. This is a large expense, but not nearly as huge as running the optimize everywhere.
 
-Consider this example: on a three-slave one-master configuration, distributing a newly-optimized index takes approximately 80 seconds _total_. Rolling the change across a tier would require approximately ten minutes per machine (or machine group). If this optimize were rolled across the query tier, and if each slave node being optimized were disabled and not receiving queries, a rollout would take at least twenty minutes and potentially as long as an hour and a half. Additionally, the files would need to be synchronized so that the _following_ the optimize, `snappull` would not think that the independently optimized files were different in any way. This would also leave the door open to independent corruption of indexes instead of each being a perfect copy of the master.
+Consider this example: on a three-follower one-leader configuration, distributing a newly-optimized index takes approximately 80 seconds _total_. Rolling the change across a tier would require approximately ten minutes per machine (or machine group). If this optimize were rolled across the query tier, and if each follower node being optimized were disabled and not receiving queries, a rollout would take at least twenty minutes and potentially as long as an hour and a half. Additionally, the files would need to be synchronized so that, _following_ the optimize, `snappull` would not think that the independently optimized files were different in any way. This would also leave the door open to independent corruption of indexes instead of each being a perfect copy of the leader.
 
-Optimizing on the master allows for a straight-forward optimization operation. No query slaves need to be taken out of service. The optimized index can be distributed in the background as queries are being normally serviced. The optimization can occur at any time convenient to the application providing index updates.
+Optimizing on the leader allows for a straightforward optimization operation. No query followers need to be taken out of service. The optimized index can be distributed in the background as queries are being serviced normally. The optimization can occur at any time convenient to the application providing index updates.
 
 While optimizing may have some benefits in some situations, a rapidly changing index will not retain those benefits for long, and since optimization is an intensive process, it may be better to consider other options, such as lowering the merge factor (discussed in the section on <<indexconfig-in-solrconfig.adoc#merge-factors,Index Configuration>>).
 
diff --git a/solr/solr-ref-guide/src/index.adoc b/solr/solr-ref-guide/src/index.adoc
index 6a750e9..eccd002 100644
--- a/solr/solr-ref-guide/src/index.adoc
+++ b/solr/solr-ref-guide/src/index.adoc
@@ -11,6 +11,7 @@
     solrcloud, \
     legacy-scaling-and-distribution, \
     circuit-breakers, \
+    rate-limiters, \
     solr-plugins, \
     the-well-configured-solr-instance, \
     monitoring-solr, \
@@ -124,6 +125,8 @@
 *<<legacy-scaling-and-distribution.adoc#legacy-scaling-and-distribution,Legacy Scaling and Distribution>>*: This section tells you how to grow a Solr distribution by dividing a large index into sections called shards, which are then distributed across multiple servers, or by replicating a single index across multiple servers.
 
 *<<circuit-breakers.adoc#circuit-breakers,Circuit Breakers>>*: This section talks about circuit breakers, a way of allowing a higher stability of Solr nodes and increased service level guarantees of requests that are accepted by Solr.
+
+*<<rate-limiters.adoc#rate-limiters,Request Rate Limiters>>*: This section talks about request rate limiters, a way of guaranteeing throughput per request type and dedicating resource quotas by resource type. Rate limiter configurations are per instance/JVM and apply to the entire JVM, not at the core/collection level.
 ****
 
 .Advanced Configuration
diff --git a/solr/solr-ref-guide/src/indexconfig-in-solrconfig.adoc b/solr/solr-ref-guide/src/indexconfig-in-solrconfig.adoc
index 7759674..8615a4b8 100644
--- a/solr/solr-ref-guide/src/indexconfig-in-solrconfig.adoc
+++ b/solr/solr-ref-guide/src/indexconfig-in-solrconfig.adoc
@@ -82,6 +82,9 @@
 <mergePolicyFactory class="org.apache.solr.index.TieredMergePolicyFactory">
   <int name="maxMergeAtOnce">10</int>
   <int name="segmentsPerTier">10</int>
+  <double name="forceMergeDeletesPctAllowed">10.0</double>
+  <double name="deletesPctAllowed">33.0</double>
 </mergePolicyFactory>
 ----
 
@@ -104,6 +107,13 @@
 
 Conversely, keeping more segments can accelerate indexing, because merges happen less often, making an update less likely to trigger a merge. But searches become more computationally expensive and will likely be slower, because search terms must be looked up in more index segments. Faster index updates also mean shorter commit turnaround times, which means more timely search results.
 
+=== Controlling Deleted Document Percentages
+
+When a document is deleted or updated, the document is marked as deleted but is not removed from the index until its segment is merged. There are two parameters that can be adjusted when using the default TieredMergePolicy that influence the number of deleted documents in an index.
+
+* `forceMergeDeletesPctAllowed` (default 10.0): When the external `expungeDeletes` command is issued (see the example after this list), any segment that has more than this percentage of deleted documents will be merged into a new segment and the data associated with the deleted documents will be purged. A value of 0.0 will make `expungeDeletes` behave essentially identically to `optimize`.
+* `deletesPctAllowed` (default 33.0): During normal segment merging, a "best effort" is made to ensure that the total percentage of deleted documents in the index is below this threshold. Valid settings are between 20% and 50%. 33% was chosen as the default because as this setting approaches 20%, considerable load is added to the system.
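+
+For example, an `expungeDeletes` can be requested as part of a commit (host and collection name are placeholders):
+
+[source,bash]
+----
+curl "http://localhost:8983/solr/mycollection/update?commit=true&expungeDeletes=true"
+----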
+
 === Customizing Merge Policies
 
 If the configuration options for the built-in merge policies do not fully suit your use case, you can customize them: either by creating a custom merge policy factory that you specify in your configuration, or by configuring a {solr-javadocs}/solr-core/org/apache/solr/index/WrapperMergePolicyFactory.html[merge policy wrapper] which uses a `wrapped.prefix` configuration option to control how the factory it wraps will be configured:
diff --git a/solr/solr-ref-guide/src/indexing-and-basic-data-operations.adoc b/solr/solr-ref-guide/src/indexing-and-basic-data-operations.adoc
index e805bff..993a4a2 100644
--- a/solr/solr-ref-guide/src/indexing-and-basic-data-operations.adoc
+++ b/solr/solr-ref-guide/src/indexing-and-basic-data-operations.adoc
@@ -4,7 +4,6 @@
   uploading-data-with-index-handlers, +
   indexing-nested-documents, +
   uploading-data-with-solr-cell-using-apache-tika, +
-  uploading-structured-data-store-data-with-the-data-import-handler, +
   updating-parts-of-documents, +
   detecting-languages-during-indexing, +
   de-duplication, +
@@ -42,8 +41,6 @@
 
 * *<<uploading-data-with-solr-cell-using-apache-tika.adoc#uploading-data-with-solr-cell-using-apache-tika,Uploading Data with Solr Cell using Apache Tika>>*: Information about using the Solr Cell framework to upload data for indexing.
 
-* *<<uploading-structured-data-store-data-with-the-data-import-handler.adoc#uploading-structured-data-store-data-with-the-data-import-handler,Uploading Structured Data Store Data with the Data Import Handler>>*: Information about uploading and indexing data from a structured data store.
-
 * *<<updating-parts-of-documents.adoc#updating-parts-of-documents,Updating Parts of Documents>>*: Information about how to use atomic updates and optimistic concurrency with Solr.
 
 * *<<detecting-languages-during-indexing.adoc#detecting-languages-during-indexing,Detecting Languages During Indexing>>*: Information about using language identification during the indexing process.
diff --git a/solr/solr-ref-guide/src/installing-solr.adoc b/solr/solr-ref-guide/src/installing-solr.adoc
index 4e3872a..962375e 100644
--- a/solr/solr-ref-guide/src/installing-solr.adoc
+++ b/solr/solr-ref-guide/src/installing-solr.adoc
@@ -109,9 +109,6 @@
 exampledocs::
 This is a small set of simple CSV, XML, and JSON files that can be used with `bin/post` when first getting started with Solr. For more information about using `bin/post` with these files, see <<post-tool.adoc#post-tool,Post Tool>>.
 
-example-DIH::
-This directory includes a few example DataImport Handler (DIH) configurations to help you get started with importing structured content in a database, an email server, or even an Atom feed. Each example will index a different set of data; see the README there for more details about these examples.
-
 files::
 The `files` directory provides a basic search UI for documents such as Word or PDF that you may have stored locally. See the README there for details on how to use this example.
 
@@ -151,7 +148,7 @@
 bin/solr -e techproducts
 ----
 
-Currently, the available examples you can run are: techproducts, dih, schemaless, and cloud. See the section <<solr-control-script-reference.adoc#running-with-example-configurations,Running with Example Configurations>> for details on each example.
+Currently, the available examples you can run are: techproducts, schemaless, and cloud. See the section <<solr-control-script-reference.adoc#running-with-example-configurations,Running with Example Configurations>> for details on each example.
 
 .Getting Started with SolrCloud
 NOTE: Running the `cloud` example starts Solr in <<solrcloud.adoc#solrcloud,SolrCloud>> mode. For more information on starting Solr in SolrCloud mode, see the section <<getting-started-with-solrcloud.adoc#getting-started-with-solrcloud,Getting Started with SolrCloud>>.
diff --git a/solr/solr-ref-guide/src/json-faceting-domain-changes.adoc b/solr/solr-ref-guide/src/json-faceting-domain-changes.adoc
index e44b65f..97478ee 100644
--- a/solr/solr-ref-guide/src/json-faceting-domain-changes.adoc
+++ b/solr/solr-ref-guide/src/json-faceting-domain-changes.adoc
@@ -77,7 +77,7 @@
 ** When no values are specified, no filter is applied and no error is thrown.
 ** When many values are specified, each value is parsed and used as filters in conjunction.
 
-Here is the example of referencing DSL queries: 
+Here is an example of referencing DSL queries:
 
 [source,bash]
 ----
@@ -216,7 +216,7 @@
 
 When a collection contains <<indexing-nested-documents.adoc#indexing-nested-documents, Nested Documents>>, the `blockChildren` or `blockParent` domain options can be used to transform an existing domain containing one type of document, into a domain containing the documents with the specified relationship (child or parent of) to the documents from the original domain.
 
-Both of these options work similarly to the corresponding <<other-parsers.adoc#block-join-query-parsers,Block Join Query Parsers>> by taking in a single String query that exclusively matches all parent documents in the collection.  If `blockParent` is used, then the resulting domain will contain all parent documents of the children from the original domain.  If `blockChildren` is used, then the resulting domain will contain all child documents of the parents from the original domain. Quite often facets over child documents needs to be counted in parent documents, this can be done by `uniqueBlock(\_root_)` as described in <<json-facet-api#uniqueblock-and-block-join-counts, Block Join Facet Counts>>.      
+Both of these options work similarly to the corresponding <<other-parsers.adoc#block-join-query-parsers,Block Join Query Parsers>> by taking in a single String query that exclusively matches all parent documents in the collection.  If `blockParent` is used, then the resulting domain will contain all parent documents of the children from the original domain.  If `blockChildren` is used, then the resulting domain will contain all child documents of the parents from the original domain. Quite often facets over child documents need to be counted in parent documents; this can be done with `uniqueBlock(\_root_)` as described in <<json-facet-api#uniqueblock-and-block-join-counts, Block Join Facet Counts>>.
 
 [source,json,subs="verbatim,callouts"]
 ----
@@ -241,7 +241,7 @@
 
 A `join` domain change option can be used to specify arbitrary `from` and `to` fields to use in transforming from the existing domain to a related set of documents.
 
-This works very similar to the <<other-parsers.adoc#join-query-parser,Join Query Parser>>, and has the same limitations when dealing with multi-shard collections.
+This works similarly to the <<other-parsers.adoc#join-query-parser,Join Query Parser>>, and has the same limitations when dealing with multi-shard collections.
 
 Example:
 [source,json]
@@ -268,6 +268,8 @@
 
 ----
 
+`join` domain changes support an optional `method` parameter, which allows users to specify the join implementation to use in this domain transform.  Solr offers several join implementations, each with different performance characteristics.  For more information on these implementations and their tradeoffs, see the `method` parameter documentation <<other-parsers.adoc#parameters,here>>.  Join domain changes support all `method` values except `crossCollection`.
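+
+For example, a hypothetical sketch selecting a specific implementation (the field names and the `method` value are illustrative assumptions):
+
+[source,json]
+----
+{
+  "categories": {
+    "type": "terms",
+    "field": "cat",
+    "domain": {
+      "join": {
+        "from": "manu_id_s",
+        "to": "id",
+        "method": "index"
+      }
+    }
+  }
+}
+----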
+
 == Graph Traversal Domain Changes
 
 A `graph` domain change option works similarly to the `join` domain option, but can do traversal multiple hops `from` the existing domain `to` other documents.
diff --git a/solr/solr-ref-guide/src/luke-request-handler.adoc b/solr/solr-ref-guide/src/luke-request-handler.adoc
new file mode 100644
index 0000000..0414af0
--- /dev/null
+++ b/solr/solr-ref-guide/src/luke-request-handler.adoc
@@ -0,0 +1,77 @@
+= Luke Request Handler
+// Licensed to the Apache Software Foundation (ASF) under one
+// or more contributor license agreements.  See the NOTICE file
+// distributed with this work for additional information
+// regarding copyright ownership.  The ASF licenses this file
+// to you under the Apache License, Version 2.0 (the
+// "License"); you may not use this file except in compliance
+// with the License.  You may obtain a copy of the License at
+//
+//   http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing,
+// software distributed under the License is distributed on an
+// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+// KIND, either express or implied.  See the License for the
+// specific language governing permissions and limitations
+// under the License.
+
+The Luke Request Handler offers programmatic access to the information provided on the <<schema-browser-screen#schema-browser-screen,Schema Browser>> page of the Admin UI.
+It is modeled after Luke, the Lucene Index Browser by Andrzej Bialecki.  It is an implicit handler, so you don't need to define it.
+
+The Luke Request Handler accepts the following parameters:
+
+`show`::
+The data about the index to include in the response.  Options are `schema`, `index`, `doc`, or `all`.  `index` describes the high-level details about the index.  `schema` returns details about the schema plus the `index` data.  `doc` works in conjunction with the `docId` or `id` parameters and returns details about a specific document plus the `index` data.
+
+`id`::
+Get a document using the uniqueKeyField specified in schema.xml.
+
+`docId`::
+Get a document using a Lucene documentID.
+
+`fl`::
+Limit the returned values to a set of fields. This is useful if you want to increase the `numTerms` and don't want a massive response.
+
+`numTerms`::
+The number of top terms to return for each field. The default is 10.
+
+`includeIndexFieldFlags`::
+Whether /luke should return the index flags for each field. Fetching and returning the index flags for each field in the index has a non-zero cost and can slow down requests to /luke.
+
+
+== LukeRequestHandler Examples
+
+The following examples assume you are running Solr's `techproducts` example configuration:
+
+[source,bash]
+----
+bin/solr start -e techproducts
+----
+
+To return summary information about the index:
+
+[source,text]
+http://localhost:8983/solr/techproducts/admin/luke?numTerms=0
+
+To return schema details about the index:
+
+[source,text]
+http://localhost:8983/solr/techproducts/admin/luke?show=schema
+
+To drill into a specific field such as `manu`, drop the `show` parameter and add the `fl` parameter:
+
+[source,text]
+http://localhost:8983/solr/techproducts/admin/luke?fl=manu
+
+To see the specifics of a document using the Solr uniqueKeyField:
+
+[source,text]
+http://localhost:8983/solr/techproducts/admin/luke?fl=manu&id=TWINX2048-3200PRO
+
+Alternatively, to look up a document by its native Lucene document id:
+
+[source,text]
+http://localhost:8983/solr/techproducts/admin/luke?fl=manu&docId=0
+
+From SolrJ, you can access /luke using the {solr-javadocs}/solr-solrj/org/apache/solr/client/solrj/request/LukeRequest.html[`LukeRequest`] object.
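+
+A minimal sketch (assuming an existing `SolrClient` instance named `client` and the `techproducts` collection):
+
+[source,java]
+----
+LukeRequest luke = new LukeRequest();
+luke.setNumTerms(5);  // return the top 5 terms for each field
+LukeResponse rsp = luke.process(client, "techproducts");
+System.out.println(rsp.getFieldInfo().keySet());  // names of the fields in the index
+----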
diff --git a/solr/solr-ref-guide/src/major-changes-from-solr-5-to-solr-6.adoc b/solr/solr-ref-guide/src/major-changes-from-solr-5-to-solr-6.adoc
index dbc2c91..7676f4f 100644
--- a/solr/solr-ref-guide/src/major-changes-from-solr-5-to-solr-6.adoc
+++ b/solr/solr-ref-guide/src/major-changes-from-solr-5-to-solr-6.adoc
@@ -44,7 +44,7 @@
 
 === Cross Data Center Replication
 
-Replication across data centers is now possible with <<cross-data-center-replication-cdcr.adoc#cross-data-center-replication-cdcr,Cross Data Center Replication>>. Using an active-passive model, a SolrCloud cluster can be replicated to another data center, and monitored with a new API.
+Replication across data centers is now possible with Cross Data Center Replication. Using an active-passive model, a SolrCloud cluster can be replicated to another data center, and monitored with a new API.
 
 === Graph QueryParser
 
diff --git a/solr/solr-ref-guide/src/major-changes-in-solr-8.adoc b/solr/solr-ref-guide/src/major-changes-in-solr-8.adoc
index 13184f8..643c578 100644
--- a/solr/solr-ref-guide/src/major-changes-in-solr-8.adoc
+++ b/solr/solr-ref-guide/src/major-changes-in-solr-8.adoc
@@ -390,7 +390,7 @@
 
 *Legacy Scaling (non-SolrCloud)*
 
-* In the <<index-replication.adoc#index-replication,master-slave model>> of scaling Solr, a slave no longer commits an empty index when a completely new index is detected on master during replication. To return to the previous behavior pass `false` to `skipCommitOnMasterVersionZero` in the slave section of replication handler configuration, or pass it to the `fetchindex` command.
+* In the <<index-replication.adoc#index-replication,leader-follower model>> of scaling Solr, a follower no longer commits an empty index when a completely new index is detected on leader during replication. To return to the previous behavior pass `false` to `skipCommitOnLeaderVersionZero` in the follower section of replication handler configuration, or pass it to the `fetchindex` command.
 
 If you are upgrading from a version earlier than Solr 7.3, please see previous version notes below.
 
@@ -493,7 +493,7 @@
 
 *ReplicationHandler*
 
-* In the ReplicationHandler, the `master.commitReserveDuration` sub-element is deprecated. Instead please configure a direct `commitReserveDuration` element for use in all modes (master, slave, cloud).
+* In the ReplicationHandler, the `leader.commitReserveDuration` sub-element is deprecated. Instead please configure a direct `commitReserveDuration` element for use in all modes (leader, follower, cloud).
 
 *RunExecutableListener*
 
diff --git a/solr/solr-ref-guide/src/major-changes-in-solr-9.adoc b/solr/solr-ref-guide/src/major-changes-in-solr-9.adoc
index ff02e8f..4deea75 100644
--- a/solr/solr-ref-guide/src/major-changes-in-solr-9.adoc
+++ b/solr/solr-ref-guide/src/major-changes-in-solr-9.adoc
@@ -118,6 +118,13 @@
     Reference guide pages for autoscaling
     autoAddReplicas feature
 
+* SOLR-14702: All references to "master" and "slave" were replaced in the code with "leader"
+  and "follower". This includes API calls for the replication handler and metrics. For rolling
+  upgrades into 9.0, you need to be on Solr version 8.7 or greater. Some metric names also changed;
+  alerts and monitors on Solr KPIs that mention "master" or "slave" must be updated to use "leader" and "follower".
+
+* SOLR-14783: Data Import Handler (DIH) has been removed from Solr. The community package is available at: https://github.com/rohitbemax/dataimporthandler (Alexandre Rafalovitch)
+
 === Upgrade Prerequisites in Solr 9
 
 * Upgrade all collections in stateFormat=1 to stateFormat=2 *before* upgrading to Solr 9, as Solr 9 does not support the
diff --git a/solr/solr-ref-guide/src/meta-docs/asciidoc-syntax.adoc b/solr/solr-ref-guide/src/meta-docs/asciidoc-syntax.adoc
index 87cf318..36f9b1a 100644
--- a/solr/solr-ref-guide/src/meta-docs/asciidoc-syntax.adoc
+++ b/solr/solr-ref-guide/src/meta-docs/asciidoc-syntax.adoc
@@ -88,7 +88,7 @@
 ----
 --
 
-When building the Guide, the `solr-root-path` attribute will be automatically set correctly for the (temporary) `build/solr-ref-guide/content` directory used by Ant.
+When building the Guide, the `solr-root-path` attribute will be automatically set correctly for the (temporary) `solr-ref-guide/build/content` directory used by Gradle.
 
 In order for editors (such as ATOM) to be able to offer "live preview" of the `*.adoc` files using these includes, the `solr-root-path` attribute must also be set as a document level attribute in each file, with the correct relative path.
 
diff --git a/solr/solr-ref-guide/src/meta-docs/jekyll.adoc b/solr/solr-ref-guide/src/meta-docs/jekyll.adoc
index 2253607..6c89238 100644
--- a/solr/solr-ref-guide/src/meta-docs/jekyll.adoc
+++ b/solr/solr-ref-guide/src/meta-docs/jekyll.adoc
@@ -206,14 +206,12 @@
 
 == Building the HTML Site
 
-An Ant target `ant build-site` when run from the `solr/solr-ref-guide` directory will build the full HTML site (found in `solr/build/solr-ref-guide/html-site`).
+The Gradle task `gradlew buildSite`, when run from the `installation` directory, will build the full HTML site (found in `solr/solr-ref-guide/build/html-site`).
 
 This target builds the navigation for the left-hand menu, and converts all `.adoc` files to `.html`, including navigation and inter-document links.
 
 Building the HTML has several dependencies that will need to be installed on your local machine. Review the `README.adoc` file in the `solr/solr-ref-guide` directory for specific details.
 
-Using the Gradle build does not require any local dependencies. Simply use `./gradlew buildSite` to generate the HTML files using Gradle (these will be found in `solr/solr-ref-guide/build/html-site`).
-
 === Build Validation
 
-When you run `ant build-site` to build the HTML, several additional validations occur during that process. See `solr-ref-guide/tools/CheckLinksAndAnchors.java` for details of what that tool does to validate content.
+When you run `gradlew buildSite` to build the HTML, several additional validations occur during that process. See `solr-ref-guide/tools/CheckLinksAndAnchors.java` for details of what that tool does to validate content.
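+
+For example (run from the directory containing the `gradlew` script):
+
+[source,bash]
+----
+./gradlew buildSite
+----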
diff --git a/solr/solr-ref-guide/src/meta-docs/publish.adoc b/solr/solr-ref-guide/src/meta-docs/publish.adoc
index 6072e6b..010f66a 100644
--- a/solr/solr-ref-guide/src/meta-docs/publish.adoc
+++ b/solr/solr-ref-guide/src/meta-docs/publish.adoc
@@ -45,7 +45,7 @@
 . Run:
 +
 [source,bash]
-$ ant clean default
+$ gradlew clean buildSite
 +
 This will produce pages with a DRAFT watermark across them. While these are fine for initial DRAFT publication, see the section <<Publish the Final Guide>> for steps to produce final production-ready HTML pages.
 . The resulting Guide will be in `solr/build/solr-ref-guide`. The HTML files themselves will be in `solr/build/solr-ref-guide/html-site`.
@@ -73,7 +73,7 @@
 Build the Guide locally with a parameter for the Guide version. This requires the same <<Pre-Requisites,pre-requisites>> from above.
 
 [source,bash]
-$ant clean default -Dsolr-guide-version=X.Y
+$ gradlew clean buildSite -Dsolr-guide-version=X.Y
 
 where `X.Y` is the version you want to publish (i.e., `7.7`).
 
@@ -110,7 +110,7 @@
 
 *Publish the changes to production*
 
-You can use your favourite git client to merge master into branch `production`. Or you can use GitHub website:
+You can use your favourite git client to merge `master` into branch `production`. Or you can use the GitHub website:
 
 . Make a new pull request from https://github.com/apache/lucene-site/compare/production%2E%2E%2Emaster
 . Note: If there are other changes staged, you will see those as well if you merge `master` into `production`
diff --git a/solr/solr-ref-guide/src/near-real-time-searching.adoc b/solr/solr-ref-guide/src/near-real-time-searching.adoc
index ff632a3..ba453c8 100644
--- a/solr/solr-ref-guide/src/near-real-time-searching.adoc
+++ b/solr/solr-ref-guide/src/near-real-time-searching.adoc
@@ -16,7 +16,7 @@
 // specific language governing permissions and limitations
 // under the License.
 
-Near Real Time (NRT) search means that documents are available for search soon after being indexed. NRT searching is one of the main features of SolrCloud and is rarely attempted in master/slave configurations.
+Near Real Time (NRT) search means that documents are available for search soon after being indexed. NRT searching is one of the main features of SolrCloud and is rarely attempted in leader/follower configurations.
 
 Document durability and searchability are controlled by `commits`. The "Near" in "Near Real Time" is configurable to meet the needs of your application. Commits are either "hard" or "soft" and can be issued by a client (say SolrJ), via a REST call or configured to occur automatically in `solrconfig.xml`. The usual recommendation is to configure your commit strategy in `solrconfig.xml` (see below) and avoid issuing commits externally.
 
diff --git a/solr/solr-ref-guide/src/package-manager.adoc b/solr/solr-ref-guide/src/package-manager.adoc
index 89da4aa..7f9b55f 100644
--- a/solr/solr-ref-guide/src/package-manager.adoc
+++ b/solr/solr-ref-guide/src/package-manager.adoc
@@ -176,6 +176,15 @@
 $ bin/solr package undeploy <package-name> -collections <collection1>[,<collection2>,...]
 ----
 
+=== Uninstall a Package
+
+If a package has been undeployed or was never deployed, then it can be uninstalled as follows:
+
+[source,bash]
+----
+$ bin/solr package uninstall <package-name>:<package-version>
+----
+
 or
 
 [source,bash]
diff --git a/solr/solr-ref-guide/src/query-settings-in-solrconfig.adoc b/solr/solr-ref-guide/src/query-settings-in-solrconfig.adoc
index 7df0c97..f209233 100644
--- a/solr/solr-ref-guide/src/query-settings-in-solrconfig.adoc
+++ b/solr/solr-ref-guide/src/query-settings-in-solrconfig.adoc
@@ -172,24 +172,49 @@
 <queryResultMaxDocsCached>200</queryResultMaxDocsCached>
 ----
 
-=== useCircuitBreakers
+=== circuitBreaker
 
-Global control flag for enabling circuit breakers.
+This set of configurations controls the behaviour of circuit breakers.
 
 [source,xml]
 ----
-<useCircuitBreakers>true</useCircuitBreakers>
+<circuitBreaker class="solr.CircuitBreakerManager" enabled="true">
+  <!-- All specific configs in this section -->
+</circuitBreaker>
 ----
 
-=== memoryCircuitBreakerThresholdPct
+To control whether circuit breakers are globally enabled, use the `enabled` attribute.
+
+=== Memory Circuit Breaker Settings
+
+To turn the memory circuit breaker on or off, use the following flag:
+[source,xml]
+----
+<str name="memEnabled">true</str>
+----
 
 The memory threshold for JVM heap usage, defined as a percentage of the maximum heap allocated
 to the JVM (-Xmx). Ideally, this value should be in the range of 75-80% of the maximum heap allocated
-to the JVM.
+to the JVM. The `memEnabled` flag toggles this specific circuit breaker.
 
 [source,xml]
 ----
-<memoryCircuitBreakerThresholdPct>75</memoryCircuitBreakerThresholdPct>
+<str name="memThreshold">75</str>
+----
+
+=== CPU Circuit Breaker Settings
+
+To turn the CPU circuit breaker on or off, use the following flag:
+[source,xml]
+----
+<str name="cpuEnabled">true</str>
+----
+
+Defines the trigger threshold as the average per-minute CPU load. The `cpuEnabled` flag toggles this specific circuit breaker.
+
+[source,xml]
+----
+<str name="cpuThreshold">75</str>
 ----
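+
+Putting it all together, a complete circuit breaker configuration might look like the following sketch (the threshold values shown are illustrative, not recommendations):
+
+[source,xml]
+----
+<circuitBreaker class="solr.CircuitBreakerManager" enabled="true">
+  <!-- memory circuit breaker: trip at 75% of the maximum heap -->
+  <str name="memEnabled">true</str>
+  <str name="memThreshold">75</str>
+  <!-- CPU circuit breaker: trip at an average per-minute CPU load of 75 -->
+  <str name="cpuEnabled">true</str>
+  <str name="cpuThreshold">75</str>
+</circuitBreaker>
+----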
 
 === useColdSearcher
@@ -203,7 +228,7 @@
 
 === maxWarmingSearchers
 
-This parameter sets the maximum number of searchers that may be warming up in the background at any given time. Exceeding this limit will raise an error. For read-only slaves, a value of two is reasonable. Masters should probably be set a little higher.
+This parameter sets the maximum number of searchers that may be warming up in the background at any given time. Exceeding this limit will raise an error. For read-only followers, a value of two is reasonable. For leaders, the value should probably be set a little higher.
 
 [source,xml]
 ----
diff --git a/solr/solr-ref-guide/src/rate-limiters.adoc b/solr/solr-ref-guide/src/rate-limiters.adoc
new file mode 100644
index 0000000..6c0d75a
--- /dev/null
+++ b/solr/solr-ref-guide/src/rate-limiters.adoc
@@ -0,0 +1,131 @@
+= Request Rate Limiters
+// Licensed to the Apache Software Foundation (ASF) under one
+// or more contributor license agreements.  See the NOTICE file
+// distributed with this work for additional information
+// regarding copyright ownership.  The ASF licenses this file
+// to you under the Apache License, Version 2.0 (the
+// "License"); you may not use this file except in compliance
+// with the License.  You may obtain a copy of the License at
+//
+//   http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing,
+// software distributed under the License is distributed on an
+// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+// KIND, either express or implied.  See the License for the
+// specific language governing permissions and limitations
+// under the License.
+
+Solr allows rate limiting per request type. Each request type can be allocated a maximum number of concurrent
+active requests. Default rate limiting is implemented for updates and searches.
+
+If a request exceeds the request quota, further incoming requests are rejected with HTTP error code 429 (Too Many Requests).
+
+Note that rate limiting works at an instance (JVM) level, not at a core or collection level. Consider that when planning capacity.
+There is future work planned to provide finer-grained control here (https://issues.apache.org/jira/browse/SOLR-14710[SOLR-14710]).
+
+== When To Use Rate Limiters
+Rate limiters should be used when you wish to allocate a guaranteed share of the request threadpool to a specific
+request type. Indexing and search requests compete with each other for CPU resources, and this becomes especially
+pronounced under high stress in production workloads. The current implementation provides a query rate limiter, which can free up
+resources for indexing.
+
+== Rate Limiter Configurations
+The default rate limiter is the search rate limiter. It is configured in `web.xml` under `initParams` for the
+`SolrRequestFilter`.
+
+[source,xml]
+----
+<filter-name>SolrRequestFilter</filter-name>
+----
+
+=== Enable Query Rate Limiter
+Controls whether the query rate limiter is enabled. The default value is `false`.
+
+[source,xml]
+----
+<param-name>isQueryRateLimiterEnabled</param-name>
+----
+[source,xml]
+----
+<param-value>true</param-value>
+----
+
+=== Maximum Number Of Concurrent Requests
+Sets the maximum number of concurrent search requests allowed at any point in time. The default value is the number of cores * 3.
+
+[source,xml]
+----
+<param-name>maxQueryRequests</param-name>
+----
+[source,xml]
+----
+<param-value>15</param-value>
+----
+
+=== Request Slot Allocation Wait Time
+The time in ms that a request will wait for a slot to become available when all slots are full,
+before the request is put into the wait queue. This gives requests a chance to proceed if
+the unavailability of slots for this rate limiter is a transient phenomenon. The default value
+is -1, indicating no wait; 0 means the same -- no wait. Note that higher slot allocation wait times
+can lead to larger queue times and can potentially lead to longer wait times for queries.
+[source,xml]
+----
+<param-name>queryWaitForSlotAllocationInMS</param-name>
+----
+[source,xml]
+----
+<param-value>100</param-value>
+----
+
+=== Slot Borrowing Enabled
+Whether slot borrowing (described below) is enabled. The default value is `false`.
+
+NOTE: This feature is experimental and can cause slots to be blocked if the
+borrowing request is long lived.
+
+[source,xml]
+----
+<param-name>queryAllowSlotBorrowing</param-name>
+----
+[source,xml]
+----
+<param-value>true</param-value>
+----
+
+=== Guaranteed Slots
+The number of guaranteed slots that the query rate limiter will reserve irrespective
+of the query request load. This is used only if slot borrowing is enabled, and acts
+as the threshold beyond which the query rate limiter will not allow other request types to
+borrow slots from its quota. The default value is half the allowed number of concurrent requests.
+
+NOTE: This feature is experimental and can cause slots to be blocked if the
+borrowing request is long lived.
+
+[source,xml]
+----
+<param-name>queryGuaranteedSlots</param-name>
+----
+[source,xml]
+----
+<param-value>200</param-value>
+----
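+
+Taken together, a hypothetical `web.xml` fragment wiring these parameters into the `SolrRequestFilter` might look like the following sketch (the filter class and all values are illustrative assumptions):
+
+[source,xml]
+----
+<filter>
+  <filter-name>SolrRequestFilter</filter-name>
+  <filter-class>org.apache.solr.servlet.SolrDispatchFilter</filter-class>
+  <!-- enable the query rate limiter -->
+  <init-param>
+    <param-name>isQueryRateLimiterEnabled</param-name>
+    <param-value>true</param-value>
+  </init-param>
+  <!-- allow at most 15 concurrent search requests -->
+  <init-param>
+    <param-name>maxQueryRequests</param-name>
+    <param-value>15</param-value>
+  </init-param>
+  <!-- wait up to 100 ms for a slot before queueing -->
+  <init-param>
+    <param-name>queryWaitForSlotAllocationInMS</param-name>
+    <param-value>100</param-value>
+  </init-param>
+</filter>
+----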
+
+== Salient Points
+
+These are some of the things to keep in mind when using rate limiters.
+
+=== Over Subscribing
+It is possible to define a quota for a request type which exceeds the size
+of the available threadpool. Solr does not enforce rules on the size of the quota that
+can be defined for a request type. This is intentional, to allow users full
+control over their quota allocation. However, if the quota exceeds the available threadpool's
+size, the standard queuing policies of the threadpool will kick in.
+
+=== Slot Borrowing
+If a quota does not have a backlog but other quotas do, then the less busy quota can
+"borrow" slots from the busier quotas. This is done on a round-robin basis today, with a
+pending task to make it a priority-based model (https://issues.apache.org/jira/browse/SOLR-14709[SOLR-14709]).
+
+NOTE: This feature is experimental and gives no guarantee of borrowed slots being
+returned in time.
diff --git a/solr/solr-ref-guide/src/replication-screen.adoc b/solr/solr-ref-guide/src/replication-screen.adoc
index 2637bd9..757f156 100644
--- a/solr/solr-ref-guide/src/replication-screen.adoc
+++ b/solr/solr-ref-guide/src/replication-screen.adoc
@@ -16,11 +16,11 @@
 // specific language governing permissions and limitations
 // under the License.
 
-The Replication screen shows you the current replication state for the core you have specified. <<solrcloud.adoc#solrcloud,SolrCloud>> has supplanted much of this functionality, but if you are still using Master-Slave index replication, you can use this screen to:
+The Replication screen shows you the current replication state for the core you have specified. <<solrcloud.adoc#solrcloud,SolrCloud>> has supplanted much of this functionality, but if you are still using Leader-Follower index replication, you can use this screen to:
 
-. View the replicatable index state. (on a master node)
-. View the current replication status (on a slave node)
-. Disable replication. (on a master node)
+. View the replicatable index state (on a leader node).
+. View the current replication status (on a follower node).
+. Disable replication (on a leader node).
 
 .Caution When Using SolrCloud
 [IMPORTANT]
diff --git a/solr/solr-ref-guide/src/request-parameters-api.adoc b/solr/solr-ref-guide/src/request-parameters-api.adoc
index 94be9b0..2f26f8d 100644
--- a/solr/solr-ref-guide/src/request-parameters-api.adoc
+++ b/solr/solr-ref-guide/src/request-parameters-api.adoc
@@ -208,4 +208,4 @@
 
 == Examples Using the Request Parameters API
 
-The Solr "films" example demonstrates the use of the parameters API. You can use this example in your Solr installation (in the `example/films` directory) or view the files in the Apache GitHub mirror at https://github.com/apache/lucene-solr/tree/master/solr/example/films.
+The Solr "films" example demonstrates the use of the parameters API. You can use this example in your Solr installation (in the `example/films` directory) or view the files in the Apache GitHub mirror at https://github.com/apache/lucene-solr/tree/leader/solr/example/films.
diff --git a/solr/solr-ref-guide/src/schema-browser-screen.adoc b/solr/solr-ref-guide/src/schema-browser-screen.adoc
index b5d0acd..5cd2fcf 100644
--- a/solr/solr-ref-guide/src/schema-browser-screen.adoc
+++ b/solr/solr-ref-guide/src/schema-browser-screen.adoc
@@ -33,3 +33,5 @@
 ====
 Term Information is loaded from single arbitrarily selected core from the collection, to provide a representative sample for the collection. Full <<faceting.adoc#faceting,Field Facet>> query results are needed to see precise term counts across the entire collection.
 ====
+
+For programmatic access to the underlying information in this screen, see the <<luke-request-handler.adoc#luke-request-handler,Luke Request Handler>>.
diff --git a/solr/solr-ref-guide/src/shards-and-indexing-data-in-solrcloud.adoc b/solr/solr-ref-guide/src/shards-and-indexing-data-in-solrcloud.adoc
index 3aa07cb..ca0c3ef 100644
--- a/solr/solr-ref-guide/src/shards-and-indexing-data-in-solrcloud.adoc
+++ b/solr/solr-ref-guide/src/shards-and-indexing-data-in-solrcloud.adoc
@@ -32,7 +32,7 @@
 
 == Leaders and Replicas
 
-In SolrCloud there are no masters or slaves. Instead, every shard consists of at least one physical *replica*, exactly one of which is a *leader*. Leaders are automatically elected, initially on a first-come-first-served basis, and then based on the ZooKeeper process described at http://zookeeper.apache.org/doc/r{ivy-zookeeper-version}/recipes.html#sc_leaderElection.
+In SolrCloud there are no fixed leader or follower roles as in legacy replication. Instead, every shard consists of at least one physical *replica*, exactly one of which is a *leader*. Leaders are automatically elected, initially on a first-come-first-served basis, and then based on the ZooKeeper process described at http://zookeeper.apache.org/doc/r{ivy-zookeeper-version}/recipes.html#sc_leaderElection.
 
 If a leader goes down, one of the other replicas is automatically elected as the new leader.
 
@@ -122,6 +122,8 @@
 
 In most cases, when running in SolrCloud mode, indexing client applications should not send explicit commit requests. Rather, you should configure auto commits with `openSearcher=false` and auto soft-commits to make recent updates visible in search requests. This ensures that auto commits occur on a regular schedule in the cluster.
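+
+A minimal `solrconfig.xml` sketch of this pattern (the time values are illustrative assumptions):
+
+[source,xml]
+----
+<updateHandler class="solr.DirectUpdateHandler2">
+  <!-- hard commit for durability, without opening a new searcher -->
+  <autoCommit>
+    <maxTime>60000</maxTime>
+    <openSearcher>false</openSearcher>
+  </autoCommit>
+  <!-- soft commit to make recent updates visible to searches -->
+  <autoSoftCommit>
+    <maxTime>5000</maxTime>
+  </autoSoftCommit>
+</updateHandler>
+----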
 
+NOTE: Using auto soft commit or commitWithin requires the client app to embrace the realities of "eventual consistency". Solr will make documents searchable at _roughly_ the same time across replicas of a collection, but there are no hard guarantees. Consequently, in rare cases, it's possible for a document to show up in one search but not in a subsequent search that occurs immediately afterwards, if the second search is routed to a different replica. Also, with sharding, documents added in a particular order (even in the same batch) might become searchable out of submission order. A document becomes visible on all replicas of a shard after the next auto commit or commitWithin interval expires.
+
 To enforce a policy where client applications should not send explicit commits, you should update all client applications that index data into SolrCloud. However, that is not always feasible, so Solr provides the `IgnoreCommitOptimizeUpdateProcessorFactory`, which allows you to ignore explicit commits and/or optimize requests from client applications without having to refactor your client application code.
 
 To activate this request processor you'll need to add the following to your `solrconfig.xml`:
diff --git a/solr/solr-ref-guide/src/solr-control-script-reference.adoc b/solr/solr-ref-guide/src/solr-control-script-reference.adoc
index b192f9c..88d7c45 100644
--- a/solr/solr-ref-guide/src/solr-control-script-reference.adoc
+++ b/solr/solr-ref-guide/src/solr-control-script-reference.adoc
@@ -77,7 +77,6 @@
 
 * cloud
 * techproducts
-* dih
 * schemaless
 +
 See the section <<Running with Example Configurations>> below for more details on the example configurations.
@@ -206,11 +205,6 @@
 * *techproducts*: This example starts Solr in standalone mode with a schema designed for the sample documents included in the `$SOLR_HOME/example/exampledocs` directory.
 +
 The configset used can be found in `$SOLR_HOME/server/solr/configsets/sample_techproducts_configs`.
-* *dih*: This example starts Solr in standalone mode with the DataImportHandler (DIH) enabled and several example `dataconfig.xml` files pre-configured for different types of data supported with DIH (such as, database contents, email, RSS feeds, etc.).
-+
-The configset used is customized for DIH, and is found in `$SOLR_HOME/example/example-DIH/solr/conf`.
-+
-For more information about DIH, see the section <<uploading-structured-data-store-data-with-the-data-import-handler.adoc#uploading-structured-data-store-data-with-the-data-import-handler,Uploading Structured Data Store Data with the Data Import Handler>>.
 * *schemaless*: This example starts Solr in standalone mode using a managed schema, as described in the section <<schema-factory-definition-in-solrconfig.adoc#schema-factory-definition-in-solrconfig,Schema Factory Definition in SolrConfig>>, and provides a very minimal pre-defined schema. Solr will run in <<schemaless-mode.adoc#schemaless-mode,Schemaless Mode>> with this configuration, where Solr will create fields in the schema on the fly and will guess field types used in incoming documents.
 +
 The configset used can be found in `$SOLR_HOME/server/solr/configsets/_default`.
diff --git a/solr/solr-ref-guide/src/solr-glossary.adoc b/solr/solr-ref-guide/src/solr-glossary.adoc
index 5c471af..27f1c7e 100644
--- a/solr/solr-ref-guide/src/solr-glossary.adoc
+++ b/solr/solr-ref-guide/src/solr-glossary.adoc
@@ -144,7 +144,7 @@
 
 [[replication]]<<index-replication.adoc#index-replication,Replication>>::
 
-A method of copying a master index from one server to one or more "slave" or "child" servers.
+A method of copying a leader index from one server to one or more "follower" or "child" servers.
 
 [[requesthandler]]<<requesthandlers-and-searchcomponents-in-solrconfig.adoc#requesthandlers-and-searchcomponents-in-solrconfig,RequestHandler>>::
 Logic and configuration parameters that tell Solr how to handle incoming "requests", whether the requests are to return search results, to index documents, or to handle other custom situations.
diff --git a/solr/solr-ref-guide/src/solr-plugins.adoc b/solr/solr-ref-guide/src/solr-plugins.adoc
index 115bbd1..26b5ece 100644
--- a/solr/solr-ref-guide/src/solr-plugins.adoc
+++ b/solr/solr-ref-guide/src/solr-plugins.adoc
@@ -1,8 +1,6 @@
 = Solr Plugins
 :page-children: libs, \
-    package-manager, \
-    adding-custom-plugins-in-solrcloud-mode
-
+    package-manager
 // Licensed to the Apache Software Foundation (ASF) under one
 // or more contributor license agreements.  See the NOTICE file
 // distributed with this work for additional information
@@ -57,8 +55,3 @@
 Describes a new and experimental system to manage packages of plugins in SolrCloud.
 It includes CLI commands, cluster-wide installation, use of plugin registries that host plugins, cryptographically signed plugins for security, and more.
 Only some plugins support this as of now (support for more types of plugins coming soon).
-
-* <<adding-custom-plugins-in-solrcloud-mode.adoc#adding-custom-plugins-in-solrcloud-mode,Blob and Runtimelib>>:
-Describes a deprecated system that predates the above package management system.
-It's functionality is a subset of the package management system.
-It will no longer be supported in Solr 9.
diff --git a/solr/solr-ref-guide/src/solr-tutorial.adoc b/solr/solr-ref-guide/src/solr-tutorial.adoc
index 0eb1a8a..37380d9 100644
--- a/solr/solr-ref-guide/src/solr-tutorial.adoc
+++ b/solr/solr-ref-guide/src/solr-tutorial.adoc
@@ -912,11 +912,6 @@
 +
 You may get errors as it works through your documents. These might be caused by the field guessing, or the file type may not be supported. Indexing content such as this demonstrates the need to plan Solr for your data, which requires understanding it and perhaps also some trial and error.
 
-DataImportHandler::
-Solr includes a tool called the <<uploading-structured-data-store-data-with-the-data-import-handler.adoc#uploading-structured-data-store-data-with-the-data-import-handler,Data Import Handler (DIH)>> which can connect to databases (if you have a jdbc driver), mail servers, or other structured data sources. There are several examples included for feeds, GMail, and a small HSQL database.
-+
-The `README.md` file in `example/example-DIH` will give you details on how to start working with this tool.
-
 SolrJ::
 SolrJ is a Java-based client for interacting with Solr. Use <<using-solrj.adoc#using-solrj,SolrJ>> for JVM-based languages or other <<client-apis.adoc#client-apis,Solr clients>> to programmatically create documents to send to Solr.
 
diff --git a/solr/solr-ref-guide/src/solr-upgrade-notes.adoc b/solr/solr-ref-guide/src/solr-upgrade-notes.adoc
index ecf4489..91b011f 100644
--- a/solr/solr-ref-guide/src/solr-upgrade-notes.adoc
+++ b/solr/solr-ref-guide/src/solr-upgrade-notes.adoc
@@ -247,8 +247,6 @@
 be enabled with a system parameter passed at start up before it can be used.
 For details, please see the section <<package-manager.adoc#package-manager,Package Management>>.
 
-With this feature Solr's <<adding-custom-plugins-in-solrcloud-mode.adoc#adding-custom-plugins-in-solrcloud-mode,Blob Store>>
-functionality is now deprecated and will likely be removed in 9.0.
 
 *Security*
 
diff --git a/solr/solr-ref-guide/src/solrcloud.adoc b/solr/solr-ref-guide/src/solrcloud.adoc
index 73f5e60..1e3353f 100644
--- a/solr/solr-ref-guide/src/solrcloud.adoc
+++ b/solr/solr-ref-guide/src/solrcloud.adoc
@@ -1,5 +1,5 @@
 = SolrCloud
-:page-children: getting-started-with-solrcloud, how-solrcloud-works, solrcloud-resilience, solrcloud-configuration-and-parameters, rule-based-replica-placement, cross-data-center-replication-cdcr
+:page-children: getting-started-with-solrcloud, how-solrcloud-works, solrcloud-resilience, solrcloud-configuration-and-parameters, rule-based-replica-placement
 // Licensed to the Apache Software Foundation (ASF) under one
 // or more contributor license agreements.  See the NOTICE file
 // distributed with this work for additional information
@@ -23,7 +23,7 @@
 * Automatic load balancing and fail-over for queries
 * ZooKeeper integration for cluster coordination and configuration.
 
-SolrCloud is flexible distributed search and indexing, without a master node to allocate nodes, shards and replicas. Instead, Solr uses ZooKeeper to manage these locations, depending on configuration files and schemas. Queries and updates can be sent to any server. Solr will use the information in the ZooKeeper database to figure out which servers need to handle the request.
+SolrCloud is flexible distributed search and indexing, without a single central node to allocate nodes, shards and replicas. Instead, Solr uses ZooKeeper to manage these locations, depending on configuration files and schemas. Queries and updates can be sent to any server. Solr will use the information in the ZooKeeper database to figure out which servers need to handle the request.
 
 In this section, we'll cover everything you need to know about using Solr in SolrCloud mode. We've split up the details into the following topics:
 
@@ -43,5 +43,4 @@
 ** <<command-line-utilities.adoc#command-line-utilities,Command Line Utilities>>
 ** <<solrcloud-with-legacy-configuration-files.adoc#solrcloud-with-legacy-configuration-files,SolrCloud with Legacy Configuration Files>>
 ** <<configsets-api.adoc#configsets-api,Configsets API>>
-* <<rule-based-replica-placement.adoc#rule-based-replica-placement,Rule-based Replica Placement>>
-* <<cross-data-center-replication-cdcr.adoc#cross-data-center-replication-cdcr,Cross Data Center Replication (CDCR)>>
\ No newline at end of file
+* <<rule-based-replica-placement.adoc#rule-based-replica-placement,Rule-based Replica Placement>>
\ No newline at end of file
diff --git a/solr/solr-ref-guide/src/updatehandlers-in-solrconfig.adoc b/solr/solr-ref-guide/src/updatehandlers-in-solrconfig.adoc
index 4c0deee..e7b651f 100644
--- a/solr/solr-ref-guide/src/updatehandlers-in-solrconfig.adoc
+++ b/solr/solr-ref-guide/src/updatehandlers-in-solrconfig.adoc
@@ -78,7 +78,7 @@
 
 === commitWithin
 
-The `commitWithin` settings allow forcing document commits to happen in a defined time period. This is used most frequently with <<near-real-time-searching.adoc#near-real-time-searching,Near Real Time Searching>>, and for that reason the default is to perform a soft commit. This does not, however, replicate new documents to slave servers in a master/slave environment. If that's a requirement for your implementation, you can force a hard commit by adding a parameter, as in this example:
+The `commitWithin` settings allow forcing document commits to happen in a defined time period. This is used most frequently with <<near-real-time-searching.adoc#near-real-time-searching,Near Real Time Searching>>, and for that reason the default is to perform a soft commit. This does not, however, replicate new documents to follower servers in a leader/follower environment. If that's a requirement for your implementation, you can force a hard commit by adding a parameter, as in this example:
 
 [source,xml]
 ----
diff --git a/solr/solr-ref-guide/src/uploading-data-with-index-handlers.adoc b/solr/solr-ref-guide/src/uploading-data-with-index-handlers.adoc
index 1c090a0..4cf0005 100644
--- a/solr/solr-ref-guide/src/uploading-data-with-index-handlers.adoc
+++ b/solr/solr-ref-guide/src/uploading-data-with-index-handlers.adoc
@@ -17,7 +17,7 @@
 // specific language governing permissions and limitations
 // under the License.
 
-Index Handlers are Request Handlers designed to add, delete and update documents to the index. In addition to having plugins for importing rich documents <<uploading-data-with-solr-cell-using-apache-tika.adoc#uploading-data-with-solr-cell-using-apache-tika,using Tika>> or from structured data sources using the <<uploading-structured-data-store-data-with-the-data-import-handler.adoc#uploading-structured-data-store-data-with-the-data-import-handler,Data Import Handler>>, Solr natively supports indexing structured documents in XML, CSV and JSON.
+Index Handlers are Request Handlers designed to add, delete, and update documents in the index. In addition to having plugins for importing rich documents <<uploading-data-with-solr-cell-using-apache-tika.adoc#uploading-data-with-solr-cell-using-apache-tika,using Tika>>, Solr natively supports indexing structured documents in XML, CSV and JSON.
 
 The recommended way to configure and use request handlers is with path based names that map to paths in the request url. However, request handlers can also be specified with the `qt` (query type) parameter if the <<requestdispatcher-in-solrconfig.adoc#requestdispatcher-in-solrconfig,`requestDispatcher`>> is appropriately configured. It is possible to access the same handler using more than one name, which can be useful if you wish to specify different sets of default options.
 
diff --git a/solr/solr-ref-guide/src/uploading-structured-data-store-data-with-the-data-import-handler.adoc b/solr/solr-ref-guide/src/uploading-structured-data-store-data-with-the-data-import-handler.adoc
deleted file mode 100644
index 98c315e..0000000
--- a/solr/solr-ref-guide/src/uploading-structured-data-store-data-with-the-data-import-handler.adoc
+++ /dev/null
@@ -1,1077 +0,0 @@
-= Uploading Structured Data Store Data with the Data Import Handler
-:toclevels: 1
-// Licensed to the Apache Software Foundation (ASF) under one
-// or more contributor license agreements.  See the NOTICE file
-// distributed with this work for additional information
-// regarding copyright ownership.  The ASF licenses this file
-// to you under the Apache License, Version 2.0 (the
-// "License"); you may not use this file except in compliance
-// with the License.  You may obtain a copy of the License at
-//
-//   http://www.apache.org/licenses/LICENSE-2.0
-//
-// Unless required by applicable law or agreed to in writing,
-// software distributed under the License is distributed on an
-// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-// KIND, either express or implied.  See the License for the
-// specific language governing permissions and limitations
-// under the License.
-
-WARNING: The Data Import Handler is deprecated is scheduled to be removed in 9.0. This functionality will likely migrate to a 3rd-party plugin in the near future.
-
-Many search applications store the content to be indexed in a structured data store, such as a relational database. The Data Import Handler (DIH) provides a mechanism for importing content from a data store and indexing it.
-
-In addition to relational databases, DIH can index content from HTTP based data sources such as RSS and ATOM feeds, e-mail repositories, and structured XML where an XPath processor is used to generate fields.
-
-== DIH Concepts and Terminology
-
-Descriptions of the Data Import Handler use several familiar terms, such as entity and processor, in specific ways, as explained in the table below.
-
-Datasource::
-As its name suggests, a datasource defines the location of the data of interest. For a database, it's a DSN. For an HTTP datasource, it's the base URL.
-
-Entity::
-Conceptually, an entity is processed to generate a set of documents, containing multiple fields, which (after optionally being transformed in various ways) are sent to Solr for indexing. For a RDBMS data source, an entity is a view or table, which would be processed by one or more SQL statements to generate a set of rows (documents) with one or more columns (fields).
-
-Processor::
-An entity processor does the work of extracting content from a data source, transforming it, and adding it to the index. Custom entity processors can be written to extend or replace the ones supplied.
-
-Transformer::
-Each set of fields fetched by the entity may optionally be transformed. This process can modify the fields, create new fields, or generate multiple rows/documents form a single row. There are several built-in transformers in the DIH, which perform functions such as modifying dates and stripping HTML. It is possible to write custom transformers using the publicly available interface.
-
-== Solr's DIH Examples
-
-The `example/example-DIH` directory contains several collections to demonstrate many of the features of the data import handler. These are available with the `dih` example from the <<solr-control-script-reference.adoc#solr-control-script-reference,Solr Control Script>>:
-
-[source,bash]
-----
-bin/solr -e dih
-----
-
-This launches a standalone Solr instance with several collections that correspond to detailed examples. The available examples are `atom`, `db`, `mail`, `solr`, and `tika`.
-
-All examples in this section assume you are running the DIH example server.
-
-== Configuring DIH
-
-=== Configuring solrconfig.xml for DIH
-
-The Data Import Handler has to be registered in `solrconfig.xml`. For example:
-
-[source,xml]
-----
-<requestHandler name="/dataimport" class="org.apache.solr.handler.dataimport.DataImportHandler">
-  <lst name="defaults">
-    <str name="config">/path/to/my/DIHconfigfile.xml</str>
-  </lst>
-</requestHandler>
-----
-
-The only required parameter is the `config` parameter, which specifies the location of the DIH configuration file that contains specifications for the data source, how to fetch data, what data to fetch, and how to process it to generate the Solr documents to be posted to the index.
-
-You can have multiple DIH configuration files. Each file would require a separate definition in the `solrconfig.xml` file, specifying a path to the file.
-
-=== Configuring the DIH Configuration File
-
-An annotated configuration file, based on the `db` collection in the `dih` example server, is shown below (this file is located in `example/example-DIH/solr/db/conf/db-data-config.xml`).
-
-This example shows how to extract fields from four tables defining a simple product database. More information about the parameters and options shown here will be described in the sections following.
-
-[source,xml]
-----
-<dataConfig>
-
-  <dataSource driver="org.hsqldb.jdbcDriver" url="jdbc:hsqldb:./example-DIH/hsqldb/ex" --<1>
-    user="sa" password="secret"/> --<2>
-  <document> --<3>
-    <entity name="item" query="select * from item"
-            deltaQuery="select id from item where last_modified > '${dataimporter.last_index_time}'"> --<4>
-      <field column="NAME" name="name" />
-
-      <entity name="feature"
-              query="select DESCRIPTION from FEATURE where ITEM_ID='${item.ID}'"
-              deltaQuery="select ITEM_ID from FEATURE where last_modified > '${dataimporter.last_index_time}'"
-              parentDeltaQuery="select ID from item where ID=${feature.ITEM_ID}"> --<5>
-        <field name="features" column="DESCRIPTION" />
-      </entity>
-
-      <entity name="item_category"
-              query="select CATEGORY_ID from item_category where ITEM_ID='${item.ID}'"
-              deltaQuery="select ITEM_ID, CATEGORY_ID from item_category where last_modified > '${dataimporter.last_index_time}'"
-              parentDeltaQuery="select ID from item where ID=${item_category.ITEM_ID}">
-        <entity name="category"
-                query="select DESCRIPTION from category where ID = '${item_category.CATEGORY_ID}'"
-                deltaQuery="select ID from category where last_modified > '${dataimporter.last_index_time}'"
-                parentDeltaQuery="select ITEM_ID, CATEGORY_ID from item_category where CATEGORY_ID=${category.ID}">
-          <field column="description" name="cat" />
-        </entity>
-      </entity>
-    </entity>
-  </document>
-</dataConfig>
-----
-<1> The first element is the `dataSource`, in this case an HSQLDB database. The path to the JDBC driver and the JDBC URL and login credentials are all specified here. Other permissible attributes include whether or not to autocommit to Solr, the batchsize used in the JDBC connection, and a `readOnly` flag.
-<2> The password attribute is optional if there is no password set for the DB. Alternately, the password can be encrypted; the section <<Encrypting a Database Password>> below describes how to do this.
-<3> A `document` element follows, containing multiple `entity` elements. Note that `entity` elements can be nested, and this allows the entity relationships in the sample database to be mirrored here, so that we can generate a denormalized Solr record which may include multiple features for one item, for instance.
-<4> The possible attributes for the `entity` element are described in later sections. Entity elements may contain one or more `field` elements, which map the data source field names to Solr fields, and optionally specify per-field transformations. This entity is the `root` entity.
-<5> This entity is nested and reflects the one-to-many relationship between an item and its multiple features. Note the use of variables; `${item.ID}` is the value of the column 'ID' for the current item (`item` referring to the entity name).
-
-Datasources can still be specified in `solrconfig.xml`. These must be specified in the defaults section of the handler in `solrconfig.xml`. However, these are not parsed until the main configuration is loaded.
-
-The entire configuration itself can be passed as a request parameter using the `dataConfig` parameter rather than using a file. When configuration errors are encountered, the error message is returned in XML format.  Due to security concerns, this only works if you start Solr with `-Denable.dih.dataConfigParam=true`.
-
-A `reload-config` command is also supported, which is useful for validating a new configuration file, or if you want to specify a file, load it, and not have it reloaded again on import. If there is an `xml` mistake in the configuration a user-friendly message is returned in `xml` format. You can then fix the problem and do a `reload-config`.
-
-TIP: You can also view the DIH configuration in the Solr Admin UI from the <<dataimport-screen.adoc#dataimport-screen,Dataimport Screen>>. It includes an interface to import content.
-
-==== DIH Request Parameters
-
-Request parameters can be substituted in configuration with placeholder `${dataimporter.request._paramname_}`, as in this example:
-
-[source,xml]
-----
-<dataSource driver="org.hsqldb.jdbcDriver"
-            url="${dataimporter.request.jdbcurl}"
-            user="${dataimporter.request.jdbcuser}"
-            password="${dataimporter.request.jdbcpassword}" />
-----
-
-These parameters can then be passed to the `full-import` command or defined in the `<defaults>` section in `solrconfig.xml`. This example shows the parameters with the full-import command:
-
-[source,bash]
-http://localhost:8983/solr/dih/dataimport?command=full-import&jdbcurl=jdbc:hsqldb:./example-DIH/hsqldb/ex&jdbcuser=sa&jdbcpassword=secret
-
-==== Encrypting a Database Password
-
-The database password can be encrypted if necessary to avoid plaintext passwords being exposed in unsecured files. To do this, we will replace the password in `data-config.xml` with an encrypted password. We will use the `openssl` tool for the encryption, and the encryption key will be stored in a file which is only readable to the `solr` process. Please follow these steps:
-
-. Create a strong encryption password and store it in a file. Then make sure it is readable only for the `solr` user. Example commands:
-+
-[source,text]
-echo -n "a-secret" > /var/solr/data/dih-encryptionkey
-chown solr:solr /var/solr/data/dih-encryptionkey
-chmod 600 /var/solr/data/dih-encryptionkey
-+
-CAUTION: Note that we use the `-n` argument to `echo` to avoid including a newline character at the end of the password. If you use another method to generate the encrypted password, make sure to avoid newlines as well.
-
-. Encrypt the JDBC database password using `openssl` as follows:
-+
-[source,text]
-echo -n "my-jdbc-password" | openssl enc -aes-128-cbc -a -salt -md md5 -pass file:/var/solr/data/dih-encryptionkey
-+
-The output of the command will be a long string such as `U2FsdGVkX18QMjY0yfCqlfBMvAB4d3XkwY96L7gfO2o=`. You will use this as `password` in your `data-config.xml` file.
-
-. In your `data-config.xml`, you'll add the `password` and `encryptKeyFile` parameters to the `<datasource>` configuration, as in this example:
-+
-[source,xml]
-<dataSource driver="org.hsqldb.jdbcDriver"
-    url="jdbc:hsqldb:./example-DIH/hsqldb/ex"
-    user="sa"
-    password="U2FsdGVkX18QMjY0yfCqlfBMvAB4d3XkwY96L7gfO2o="
-    encryptKeyFile="/var/solr/data/dih-encryptionkey" />
-
-
-
-== DataImportHandler Commands
-
-DIH commands are sent to Solr via an HTTP request. The following operations are supported.
-
-abort::
-Aborts an ongoing operation. For example: `\http://localhost:8983/solr/dih/dataimport?command=abort`.
-
-delta-import::
-For incremental imports and change detection. Only the <<The SQL Entity Processor,SqlEntityProcessor>> supports delta imports.
-+
-For example: `\http://localhost:8983/solr/dih/dataimport?command=delta-import`.
-+
-This command supports the same `clean`, `commit`, `optimize` and `debug` parameters as `full-import` command described below.
-
-full-import::
-A Full Import operation can be started with a URL such as `\http://localhost:8983/solr/dih/dataimport?command=full-import`. The command returns immediately.
-+
-The operation will be started in a new thread and the _status_ attribute in the response should be shown as _busy_. The operation may take some time depending on the size of dataset. Queries to Solr are not blocked during full-imports.
-+
-When a `full-import` command is executed, it stores the start time of the operation in a file located at `conf/dataimport.properties`. This stored timestamp is used when a `delta-import` operation is executed.
-+
-Commands available to `full-import` are:
-+
-clean:::
-Default is true. Tells whether to clean up the index before the indexing is started.
-commit:::
-Default is true. Tells whether to commit after the operation.
-debug:::
-Default is false. Runs the command in debug mode and is used by the interactive development mode.
-+
-Note that in debug mode, documents are never committed automatically. If you want to run debug mode and commit the results too, add `commit=true` as a request parameter.
-entity:::
-The name of an entity directly under the `<document>` tag in the configuration file. Use this to execute one or more entities selectively.
-+
-Multiple "entity" parameters can be passed on to run multiple entities at once. If nothing is passed, all entities are executed.
-optimize:::
-Default is true. Tells Solr whether to optimize after the operation.
-synchronous:::
-Blocks request until import is completed. Default is false.
-
-reload-config::
-If the configuration file has been changed and you wish to reload it without restarting Solr, run the command `\http://localhost:8983/solr/dih/dataimport?command=reload-config`
-
-status::
-This command returns statistics on the number of documents created, deleted, queries run, rows fetched, status, and so on. For example:  `\http://localhost:8983/solr/dih/dataimport?command=status`.
-
-show-config::
-This command responds with configuration: `\http://localhost:8983/solr/dih/dataimport?command=show-config`.
-
-
-== Property Writer
-
-The `propertyWriter` element defines the date format and locale for use with delta queries. It is an optional configuration. Add the element to the DIH configuration file, directly under the `dataConfig` element.
-
-[source,xml]
-----
-<propertyWriter dateFormat="yyyy-MM-dd HH:mm:ss" type="SimplePropertiesWriter"
-                directory="data" filename="my_dih.properties" locale="en-US" />
-----
-
-The parameters available are:
-
-dateFormat::
-A `java.text.SimpleDateFormat` to use when converting the date to text. The default is `yyyy-MM-dd HH:mm:ss`.
-
-type::
-The implementation class. Use `SimplePropertiesWriter` for non-SolrCloud installations. If using SolrCloud, use `ZKPropertiesWriter`.
-+
-If this is not specified, it will default to the appropriate class depending on if SolrCloud mode is enabled.
-
-directory::
-Used with the `SimplePropertiesWriter` only. The directory for the properties file. If not specified, the default is `conf`.
-
-filename::
-Used with the `SimplePropertiesWriter` only. The name of the properties file.
-+
-If not specified, the default is the requestHandler name (as defined in `solrconfig.xml`, appended by ".properties" (such as, `dataimport.properties`).
-
-locale::
-The locale. If not defined, the ROOT locale is used. It must be specified as language-country (https://tools.ietf.org/html/bcp47[BCP 47 language tag]). For example, `en-US`.
-
-== Data Sources
-
-A data source specifies the origin of data and its type. Somewhat confusingly, some data sources are configured within the associated entity processor. Data sources can also be specified in `solrconfig.xml`, which is useful when you have multiple environments (for example, development, QA, and production) differing only in their data sources.
-
-You can create a custom data source by writing a class that extends `org.apache.solr.handler.dataimport.DataSource`.
-
-The mandatory attributes for a data source definition are its name and type. The name identifies the data source to an Entity element.
-
-The types of data sources available are described below.
-
-=== ContentStreamDataSource
-
-This takes the POST data as the data source. This can be used with any EntityProcessor that uses a `DataSource<Reader>`.
-
-=== FieldReaderDataSource
-
-This can be used where a database field contains XML which you wish to process using the XPathEntityProcessor. You would set up a configuration with both JDBC and FieldReader data sources, and two entities, as follows:
-
-[source,xml]
-----
-<dataSource name="a1" driver="org.hsqldb.jdbcDriver" ...  />
-<dataSource name="a2" type="FieldReaderDataSource" />
-<document>
-
-  <!-- processor for database -->
-  <entity name ="e1" dataSource="a1" processor="SqlEntityProcessor" pk="docid"
-          query="select * from t1 ...">
-
-    <!-- nested XPathEntity; the field in the parent which is to be used for
-         XPath is set in the "dataField" attribute in place of the "url" attribute -->
-    <entity name="e2" dataSource="a2" processor="XPathEntityProcessor"
-            dataField="e1.fieldToUseForXPath">
-
-      <!-- XPath configuration follows -->
-      ...
-    </entity>
-  </entity>
-</document>
-----
-
-The `FieldReaderDataSource` can take an `encoding` parameter, which defaults to "UTF-8" if not specified.
-
-=== FileDataSource
-
-This can be used like a <<URLDataSource>>, but is used to fetch content from files on disk. The only difference from `URLDataSource`, when accessing disk files, is how a pathname is specified.
-
-This data source accepts these optional attributes.
-
-basePath::
-The base path relative to which the value is evaluated if it is not absolute.
-
-encoding::
-Defines the character encoding to use. If not defined, UTF-8 is used.
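-
-Putting these together, a declaration might look like this sketch (the `basePath` value is illustrative):
-
-[source,xml]
-----
-<dataSource type="FileDataSource" basePath="/var/data/xml" encoding="UTF-8"/>
-----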
-
-=== JdbcDataSource
-
-This is the default data source. It's used with the <<The SQL Entity Processor,SqlEntityProcessor>>. See the example in the <<FieldReaderDataSource>> section for details on configuration. `JdbcDataSource` supports at least the following attributes:
-
-driver, url, user, password, encryptKeyFile::
-Usual JDBC connection properties.
-
-batchSize::
-Passed to `Statement#setFetchSize`, default value 500.
-+
-The MySQL driver does not honor `fetchSize` and pulls the whole result set, which often leads to an `OutOfMemoryError`.
-+
-In this case, set `batchSize=-1`; this passes `setFetchSize(Integer.MIN_VALUE)` to the driver and switches the result set to streaming row by row.
-
-All of these attributes support property substitution via `$\{placeholders}`.
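-
-For example, a declaration might look like the following sketch; the driver echoes the HSQLDB example above, and `${dataimporter.request.jdbcurl}` illustrates a placeholder supplied as a request parameter:
-
-[source,xml]
-----
-<dataSource name="jdbc" type="JdbcDataSource"
-            driver="org.hsqldb.jdbcDriver"
-            url="${dataimporter.request.jdbcurl}"
-            user="db_username"
-            password="db_password"
-            batchSize="-1"/>
-----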
-
-=== URLDataSource
-
-This data source is often used with <<The XPathEntityProcessor,XPathEntityProcessor>> to fetch content from an underlying `file://` or `http://` location. Here's an example:
-
-[source,xml]
-----
-<dataSource name="a"
-            type="URLDataSource"
-            baseUrl="http://host:port/"
-            encoding="UTF-8"
-            connectionTimeout="5000"
-            readTimeout="10000"/>
-----
-
-The URLDataSource type accepts these optional parameters:
-
-baseUrl::
-Specifies a new base URL for pathnames. You can use this to specify host/port changes between Dev/QA/Prod environments. Using this attribute isolates the changes to be made to `solrconfig.xml`.
-
-connectionTimeout::
-Specifies the length of time in milliseconds after which the connection should time out. The default value is 5000ms.
-
-encoding::
-By default the encoding in the response header is used. You can use this property to override the default encoding.
-
-readTimeout::
-Specifies the length of time in milliseconds after which a read operation should time out. The default value is 10000ms.
-
-
-== Entity Processors
-
-Entity processors extract data, transform it, and add it to a Solr index. Examples of entities include views or tables in a data store.
-
-Each processor has its own set of attributes, described in its own section below. In addition, there are several attributes common to all entities which may be specified:
-
-dataSource::
-The name of a data source. If there are multiple data sources defined, use this attribute with the name of the data source for this entity.
-
-name::
-Required. The unique name used to identify an entity.
-
-pk::
-The primary key for the entity. It is optional, and required only when using delta-imports. It has no relation to the `uniqueKey` defined in `schema.xml`, but they can both be the same.
-+
-This attribute is mandatory if you do delta-imports and then refer to the column name in `${dataimporter.delta.<column-name>}`, which is used as the primary key.
-
-processor::
-Default is <<The SQL Entity Processor,SqlEntityProcessor>>. Required only if the data source is not an RDBMS.
-
-onError::
-Defines what to do if an error is encountered.
-+
-Permissible values are:
-
-abort::: Stops the import.
-
-skip::: Skips the current document.
-
-continue::: Ignores the error and processing continues.
-
-preImportDeleteQuery::
-Before a `full-import` command, use this query to clean up the index instead of using `\*:*`. This is honored only on an entity that is an immediate sub-child of `<document>`.
-
-postImportDeleteQuery::
-Similar to `preImportDeleteQuery`, but it executes after the import has completed.
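-+
-For example, a sketch that restricts index cleanup to one category (field names and query values are illustrative):
-+
-[source,xml]
-----
-<entity name="products"
-        preImportDeleteQuery="category:electronics"
-        postImportDeleteQuery="in_stock:false"
-        query="SELECT * FROM products"/>
-----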
-
-rootEntity::
-By default the entities immediately under `<document>` are root entities. If this attribute is set to false on such an entity, the entity directly below it is treated as the root entity instead (and so on). For every row returned by the root entity, a document is created in Solr.
-
-transformer::
-Optional. One or more transformers to be applied on this entity.
-
-cacheImpl::
-Optional. A class (which must implement `DIHCache`) to use for caching this entity when doing lookups from an entity which wraps it. The provided implementation is `SortedMapBackedCache`.
-
-cacheKey::
-The name of a property of this entity to use as a cache key if `cacheImpl` is specified.
-
-cacheLookup::
-An entity + property name that will be used to look up cached instances of this entity if `cacheImpl` is specified.
-
-where::
-An alternative way to specify `cacheKey` and `cacheLookup` concatenated with '='.
-+
-For example, `where="CODE=People.COUNTRY_CODE"` is equivalent to `cacheKey="CODE" cacheLookup="People.COUNTRY_CODE"`
-
-child="true"::
-Enables indexing document blocks aka <<uploading-data-with-index-handlers.adoc#uploading-data-with-index-handlers,Nested Child Documents>> for searching with <<other-parsers.adoc#other-parsers,Block Join Query Parsers>>. It can only be specified on an `<entity>` element under another root entity. It switches from the default behavior (merging field values) to nesting documents as child documents.
-+
-Note: the parent `<entity>` should add a field that can be used as a parent filter at query time.
-
-join="zipper"::
-Enables merge join, aka the "zipper" algorithm, for joining parent and child entities without a cache. It should be specified on the child (nested) `<entity>`. It requires that parent and child queries return results ordered by their keys; otherwise an exception is thrown. Keys should be specified either with the `where` attribute or with `cacheKey` and `cacheLookup`.
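-+
-For example, a sketch of a zipper join between two key-ordered queries (table and column names are illustrative):
-+
-[source,xml]
-----
-<entity name="parent" query="SELECT id, name FROM parent ORDER BY id">
-  <entity name="child" join="zipper"
-          query="SELECT description, parent_id FROM child ORDER BY parent_id"
-          where="parent_id=parent.id"/>
-</entity>
-----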
-
-=== Entity Caching
-Caching of entities in DIH is provided to avoid repeated lookups for the same entities. The default `SortedMapBackedCache` is a `HashMap` where the key is a field in the row and the value is a list of rows sharing that key.
-
-In the example below, each `manufacturer` entity is cached using the `id` property as a cache key. Cache lookups will be performed for each `product` entity based on the product's `manu` property. When the cache has no data for a particular key, the query is run and the cache is populated.
-
-[source,xml]
-----
-<entity name="product" query="select description,sku, manu from product" >
-  <entity name="manufacturer" query="select id, name from manufacturer"
-          cacheKey="id" cacheLookup="product.manu" cacheImpl="SortedMapBackedCache"/>
-</entity>
-----
-
-=== The SQL Entity Processor
-
-The SqlEntityProcessor is the default processor. The associated <<JdbcDataSource>> should be a JDBC URL.
-
-The entity attributes specific to this processor are shown in the table below. These are in addition to the attributes common to all entity processors described above.
-
-query::
-Required. The SQL query used to select rows.
-
-deltaQuery::
-SQL query used if the operation is `delta-import`. This query selects the primary keys of the rows which will be part of the delta-update. The primary keys will be available to the `deltaImportQuery` through the variable `${dataimporter.delta.<column-name>}`.
-
-parentDeltaQuery::
-SQL query used if the operation is `delta-import`. This query selects the primary keys of the parent entity's rows that need to be re-indexed when rows of this (child) entity change.
-
-deletedPkQuery::
-SQL query used if the operation is `delta-import`. This query selects the primary keys of rows that have been deleted from the source, so the corresponding Solr documents can be removed.
-
-deltaImportQuery::
-SQL query used if the operation is `delta-import`. If this is not present, DIH tries to construct the import query by modifying the 'query' after identifying the delta (this is error-prone).
-+
-There is a namespace `${dataimporter.delta.<column-name>}` which can be used in this query. For example, `select * from tbl where id=${dataimporter.delta.id}`.
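-
-Putting the delta attributes together, here is a minimal sketch following the usual DIH conventions (the `last_modified` column and the `${dataimporter.last_index_time}` variable; table and column names are illustrative):
-
-[source,xml]
-----
-<entity name="item" pk="id"
-        query="SELECT * FROM item"
-        deltaQuery="SELECT id FROM item WHERE last_modified > '${dataimporter.last_index_time}'"
-        deltaImportQuery="SELECT * FROM item WHERE id='${dataimporter.delta.id}'"/>
-----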
-
-=== The XPathEntityProcessor
-
-This processor is used when indexing XML formatted data. The data source is typically <<URLDataSource>> or <<FileDataSource>>. XPath can also be used with the <<The FileListEntityProcessor,FileListEntityProcessor>> described below, to generate a document from each file.
-
-The entity attributes unique to this processor are shown below. These are in addition to the attributes common to all entity processors described above.
-
-processor::
-Required. Must be set to `XPathEntityProcessor`.
-
-url::
-Required. The HTTP URL or file location.
-
-stream::
-Optional: Set to true for a large file or download.
-
-forEach::
-Required unless you define `useSolrAddSchema`. The XPath expression which demarcates each record. This will be used to set up the processing loop.
-
-xsl::
-Optional: Its value (a URL or filesystem path) is the name of a resource used as a preprocessor for applying the XSL transformation.
-
-useSolrAddSchema::
-Set this to true if the content is in the form of the standard Solr update XML schema.
-
-Each `<field>` element in the entity can have the following attributes as well as the default ones.
-
-xpath::
-Required. The XPath expression which will extract the content from the record for this field. Only a subset of XPath syntax is supported.
-
-commonField::
-Optional. If true, then when this field is encountered in a record it will be copied to future records when creating a Solr document.
-
-flatten::
-Optional. If set to true, then any children text nodes are collected to form the value of a field.
-+
-[WARNING]
-The default value is false, meaning that if there are any sub-elements of the node pointed to by the XPath expression, they will be quietly omitted.
-
-Here is an example from the `atom` collection in the `dih` example (data-config file found at `example/example-DIH/solr/atom/conf/atom-data-config.xml`):
-
-[source,xml]
-----
-<dataConfig>
-  <dataSource type="URLDataSource"/>
-  <document>
-
-    <entity name="stackoverflow"
-            url="https://stackoverflow.com/feeds/tag/solr"
-            processor="XPathEntityProcessor"
-            forEach="/feed|/feed/entry"
-            transformer="HTMLStripTransformer,RegexTransformer">
-
-      <!-- Pick this value up from the feed level and apply to all documents -->
-      <field column="lastchecked_dt" xpath="/feed/updated" commonField="true"/>
-
-      <!-- Keep only the final numeric part of the URL -->
-      <field column="id" xpath="/feed/entry/id" regex=".*/" replaceWith=""/>
-
-      <field column="title"    xpath="/feed/entry/title"/>
-      <field column="author"   xpath="/feed/entry/author/name"/>
-      <field column="category" xpath="/feed/entry/category/@term"/>
-      <field column="link"     xpath="/feed/entry/link[@rel='alternate']/@href"/>
-
-      <!-- Use transformers to convert HTML into plain text.
-        There is also an UpdateRequestProcessor to trim remaining spaces.
-      -->
-      <field column="summary" xpath="/feed/entry/summary" stripHTML="true" regex="( |\n)+" replaceWith=" "/>
-
-      <!-- Ignore namespaces when matching XPath -->
-      <field column="rank" xpath="/feed/entry/rank"/>
-
-      <field column="published_dt" xpath="/feed/entry/published"/>
-      <field column="updated_dt" xpath="/feed/entry/updated"/>
-    </entity>
-
-  </document>
-</dataConfig>
-----
-
-=== The MailEntityProcessor
-
-The MailEntityProcessor uses the Java Mail API to index email messages using the IMAP protocol.
-
-The MailEntityProcessor works by connecting to a specified mailbox using a username and password, fetching the email headers for each message, and then fetching the full email contents to construct a document (one document for each mail message).
-
-The entity attributes unique to the MailEntityProcessor are shown below. These are in addition to the attributes common to all entity processors described above.
-
-processor::
-Required. Must be set to `MailEntityProcessor`.
-
-user::
-Required. Username for authenticating to the IMAP server; this is typically the email address of the mailbox owner.
-
-password::
-Required. Password for authenticating to the IMAP server.
-
-host::
-Required. The IMAP server to connect to.
-
-protocol::
-Required. The IMAP protocol to use, valid values are: imap, imaps, gimap, and gimaps.
-
-fetchMailsSince::
-Optional. Date/time used to set a filter to import messages that occur after the specified date; expected format is: `yyyy-MM-dd HH:mm:ss`.
-
-folders::
-Required. Comma-delimited list of folder names to pull messages from, such as "inbox".
-
-recurse::
-Optional. Default is true. Flag to indicate if the processor should recurse all child folders when looking for messages to import.
-
-include::
-Optional. Comma-delimited list of folder patterns to include when processing folders (can be a literal value or regular expression).
-
-exclude::
-Optional. Comma-delimited list of folder patterns to exclude when processing folders (can be a literal value or regular expression). Excluded folder patterns take precedence over include folder patterns.
-
-processAttachement or processAttachments::
-Optional. Default is true. Use Tika to process message attachments.
-
-includeContent::
-Optional. Default is true. Include the message body when constructing Solr documents for indexing.
-
-Here is an example from the `mail` collection of the `dih` example (data-config file found at `example/example-DIH/mail/conf/mail-data-config.xml`):
-
-[source,xml]
-----
-<dataConfig>
-  <document>
-      <entity processor="MailEntityProcessor"
-              user="email@gmail.com"
-              password="password"
-              host="imap.gmail.com"
-              protocol="imaps"
-              fetchMailsSince="2014-06-30 00:00:00"
-              batchSize="20"
-              folders="inbox"
-              processAttachement="false"
-              name="mail_entity"/>
-  </document>
-</dataConfig>
-----
-
-==== Importing New Emails Only
-
-After running a full import, the MailEntityProcessor keeps track of the timestamp of the previous import so that subsequent imports can use the fetchMailsSince filter to only pull new messages from the mail server. This occurs automatically using the DataImportHandler `dataimport.properties` file (stored in `conf`).
-
-For instance, if you set `fetchMailsSince="2014-08-22 00:00:00"` in your `mail-data-config.xml`, then all mail messages that occur after this date will be imported on the first run of the importer. Subsequent imports will use the date of the previous import as the `fetchMailsSince` filter, so that only new emails since the last import are indexed each time.
-
-==== GMail Extensions
-
-When connecting to a GMail account, you can improve the efficiency of the MailEntityProcessor by setting the protocol to *gimap* or *gimaps*.
-
-This allows the processor to send the `fetchMailsSince` filter to the GMail server to have the date filter applied on the server, which means the processor only receives new messages from the server. However, GMail only supports date granularity, so the server-side filter may return previously seen messages if run more than once a day.
-
-=== The TikaEntityProcessor
-
-The TikaEntityProcessor uses Apache Tika to process incoming documents. This is similar to <<uploading-data-with-solr-cell-using-apache-tika.adoc#uploading-data-with-solr-cell-using-apache-tika,Uploading Data with Solr Cell using Apache Tika>>, but using DataImportHandler options instead.
-
-The parameters for this processor are described in the table below. These are in addition to the attributes common to all entity processors described above.
-
-dataSource::
-This parameter defines the data source and an optional name which can be referred to in later parts of the configuration if needed. This is the same `dataSource` explained in the description of general entity processor attributes above.
-+
-The available data source types for this processor are:
-+
-* BinURLDataSource: used for HTTP resources, but can also be used for files.
-* BinContentStreamDataSource: used for uploading content as a stream.
-* BinFileDataSource: used for content on the local filesystem.
-
-url::
-Required. The path to the source file(s), as a file path or a traditional internet URL.
-
-htmlMapper::
-Optional. Allows control of how Tika parses HTML. If this parameter is defined, it must be either *default* or *identity*; if it is absent, "default" is assumed.
-+
-The "default" mapper strips much of the HTML from documents while the "identity" mapper passes all HTML as-is with no modifications.
-
-format::
-The output format. The options are *text*, *xml*, *html* or *none*. The default is "text" if not defined. The format "none" can be used if metadata only should be indexed and not the body of the documents.
-
-parser::
-Optional. The default parser is `org.apache.tika.parser.AutoDetectParser`. If a custom or other parser should be used, enter its fully qualified class name.
-
-fields::
-The list of fields from the input documents and how they should be mapped to Solr fields. If the attribute `meta` is defined as "true", the field will be obtained from the metadata of the document and not parsed from the body of the main text.
-
-extractEmbedded::
-Instructs the TikaEntityProcessor to extract embedded documents or attachments when *true*. If false, embedded documents and attachments will be ignored.
-
-onError::
-By default, the TikaEntityProcessor will stop processing documents if it finds one that generates an error. If you define `onError` to "skip", the TikaEntityProcessor will instead skip documents that fail processing and log a message that the document was skipped.
-
-Here is an example from the `tika` collection of the `dih` example (data-config file found in `example/example-DIH/tika/conf/tika-data-config.xml`):
-
-[source,xml]
-----
-<dataConfig>
-  <dataSource type="BinFileDataSource"/>
-  <document>
-    <entity name="file" processor="FileListEntityProcessor" dataSource="null"
-            baseDir="${solr.install.dir}/example/exampledocs" fileName=".*pdf"
-            rootEntity="false">
-
-      <field column="file" name="id"/>
-
-      <entity name="pdf" processor="TikaEntityProcessor"
-              url="${file.fileAbsolutePath}" format="text">
-
-        <field column="Author" name="author" meta="true"/>
-        <!-- in the original PDF, the Author meta-field name is upper-cased,
-          but in Solr schema it is lower-cased
-         -->
-
-        <field column="title" name="title" meta="true"/>
-        <field column="dc:format" name="format" meta="true"/>
-
-        <field column="text" name="text"/>
-
-      </entity>
-    </entity>
-  </document>
-</dataConfig>
-----
-
-=== The FileListEntityProcessor
-
-This processor is basically a wrapper, and is designed to generate a set of files satisfying conditions specified in the attributes which can then be passed to another processor, such as the <<The XPathEntityProcessor,XPathEntityProcessor>>.
-
-The entity information for this processor would be nested within the FileListEntity entry. It generates five implicit fields: `fileAbsolutePath`, `fileDir`, `fileSize`, `fileLastModified`, and `file`, which can be used in the nested processor. This processor does not use a data source.
-
-The attributes specific to this processor are described in the table below:
-
-fileName::
-Required. A regular expression pattern to identify files to be included.
-
-baseDir::
-Required. The base directory (absolute path).
-
-recursive::
-Whether to search directories recursively. Default is 'false'.
-
-excludes::
-A regular expression pattern to identify files which will be excluded.
-
-newerThan::
-A date in the format `yyyy-MM-dd HH:mm:ss` or a date math expression (`NOW-2YEARS`).
-
-olderThan::
-A date, using the same formats as newerThan.
-
-rootEntity::
-This should be set to false. This ensures that each row (filepath) emitted by this processor is considered to be a document.
-
-dataSource::
-Must be set to null.
-
-The example below shows the combination of the FileListEntityProcessor with another processor which will generate a set of fields from each file found.
-
-[source,xml]
-----
-<dataConfig>
-  <dataSource type="FileDataSource"/>
-  <document>
-    <!-- this outer processor generates a list of files satisfying the conditions
-         specified in the attributes -->
-    <entity name="f" processor="FileListEntityProcessor"
-            fileName=".*xml"
-            newerThan="'NOW-30DAYS'"
-            recursive="true"
-            rootEntity="false"
-            dataSource="null"
-            baseDir="/my/document/directory">
-
-      <!-- this processor extracts content using XPath from each file found -->
-
-      <entity name="nested" processor="XPathEntityProcessor"
-              forEach="/rootelement" url="${f.fileAbsolutePath}" >
-        <field column="name" xpath="/rootelement/name"/>
-        <field column="number" xpath="/rootelement/number"/>
-      </entity>
-    </entity>
-  </document>
-</dataConfig>
-----
-
-=== LineEntityProcessor
-
-This EntityProcessor reads all content from the data source on a line by line basis and returns a field called `rawLine` for each line read. The content is not parsed in any way; however, you may add transformers to manipulate the data within the `rawLine` field, or to create other additional fields.
-
-The lines read can be filtered by two regular expressions specified with the `acceptLineRegex` and `omitLineRegex` attributes.
-
-The LineEntityProcessor has the following attributes:
-
-url::
-A required attribute that specifies the location of the input file in a way that is compatible with the configured data source. If this value is relative and you are using FileDataSource or URLDataSource, it is assumed to be relative to baseLoc.
-
-acceptLineRegex::
-An optional attribute that, if present, discards any line which does not match the regular expression.
-
-omitLineRegex::
-An optional attribute that is applied after any `acceptLineRegex` and that discards any line which matches this regular expression.
-
-For example:
-
-[source,xml]
-----
-<entity name="jc"
-        processor="LineEntityProcessor"
-        acceptLineRegex="^.*\.xml$"
-        omitLineRegex="/obsolete"
-        url="file:///Volumes/ts/files.lis"
-        rootEntity="false"
-        dataSource="myURIreader1"
-        transformer="RegexTransformer,DateFormatTransformer">
-</entity>
-----
-
-While there are use cases where you might need to create a Solr document for each line read from a file, it is expected that in most cases the lines read by this processor will consist of a pathname, which in turn will be consumed by another entity processor, such as the XPathEntityProcessor.
-
-=== PlainTextEntityProcessor
-
-This EntityProcessor reads all content from the data source into a single implicit field called `plainText`. The content is not parsed in any way; however, you may add <<Transformers,transformers>> to manipulate the data within the `plainText` field as needed, or to create other additional fields.
-
-For example:
-
-[source,xml]
-----
-<entity processor="PlainTextEntityProcessor" name="x" url="http://abc.com/a.txt" dataSource="data-source-name">
-  <!-- copies the text to a field called 'text' in Solr-->
-  <field column="plainText" name="text"/>
-</entity>
-----
-
-Ensure that the dataSource is of type `DataSource<Reader>` (`FileDataSource`, `URLDataSource`).
-
-=== SolrEntityProcessor
-
-This EntityProcessor imports data from different Solr instances and cores. The data is retrieved based on a specified filter query. This EntityProcessor is useful in cases where you want to copy your Solr index and modify the data in the target index.
-
-The SolrEntityProcessor can only copy fields that are stored in the source index.
-
-The SolrEntityProcessor supports the following parameters:
-
-url::
-Required. The URL of the source Solr instance and/or core.
-
-query::
-Required. The main query to execute on the source index.
-
-fq::
-Any filter queries to execute on the source index. If more than one filter query is defined, they must be separated by a comma.
-
-rows::
-The number of rows to return for each iteration. The default is 50 rows.
-
-fl::
-A comma-separated list of fields to fetch from the source index. Note, these fields must be stored in the source Solr instance.
-
-qt::
-The search handler to use, if not the default.
-
-wt::
-The response format to use, either *javabin* or *xml*.
-
-timeout::
-The query timeout in seconds. The default is 5 minutes (300 seconds).
-
-cursorMark="true"::
-Use this to enable a cursor for efficient result set scrolling.
-
-sort="id asc"::
-This should be used to specify a sort parameter referencing the uniqueKey field of the source Solr instance. See <<pagination-of-results.adoc#pagination-of-results,Pagination of Results>> for details.
-
-Here is a simple example of a SolrEntityProcessor:
-
-[source,xml]
-----
-<dataConfig>
-  <document>
-    <entity name="sep" processor="SolrEntityProcessor"
-            url="http://127.0.0.1:8983/solr/db"
-            query="*:*"
-            fl="*,orig_version_l:_version_,ignored_price_c:price_c"/>
-  </document>
-</dataConfig>
-----
-
-== Transformers
-
-Transformers manipulate the fields in a document returned by an entity. A transformer can create new fields or modify existing ones. You must tell the entity which transformers your import operation will use by adding an attribute containing a comma-separated list to the `<entity>` element.
-
-[source,xml]
-----
-<entity name="abcde" transformer="org.apache.solr....,my.own.transformer,..." />
-----
-
-Specific transformation rules are then added to the attributes of a `<field>` element, as shown in the examples below. The transformers are applied in the order in which they are specified in the transformer attribute.
-
-The DataImportHandler contains several built-in transformers.
-You can also write your own custom transformers if necessary.
-The ScriptTransformer described below offers an alternative method for writing your own transformers.
-
-=== ClobTransformer
-
-You can use the ClobTransformer to create a string out of a CLOB in a database. A http://en.wikipedia.org/wiki/Character_large_object[CLOB] is a character large object: a collection of character data typically stored in a separate location that is referenced in the database.
-
-The ClobTransformer accepts these attributes:
-
-clob::
-Boolean value to signal if ClobTransformer should process this field or not. If this attribute is omitted, then the corresponding field is not transformed.
-
-sourceColName::
-The source column to be used as input. If this is absent, the source and target are the same.
-
-Here's an example of invoking the ClobTransformer.
-
-[source,xml]
-----
-<entity name="example" transformer="ClobTransformer" ...>
-  <field column="hugeTextField" clob="true" />
-  ...
-</entity>
-----
-
-=== The DateFormatTransformer
-
-This transformer converts dates from one format to another. This would be useful, for example, in a situation where you wanted to convert a field with a fully specified date/time into a less precise date format, for use in faceting.
-
-DateFormatTransformer applies only on the fields with an attribute `dateTimeFormat`. Other fields are not modified.
-
-This transformer recognizes the following attributes:
-
-dateTimeFormat::
-The format used for parsing this field. This must comply with the syntax of the {java-javadocs}java/text/SimpleDateFormat.html[Java SimpleDateFormat] class.
-
-sourceColName::
-The column on which the dateFormat is to be applied. If this is absent, the source and target are the same.
-
-locale::
-The locale to use for date transformations. If not defined, the ROOT locale is used. It must be specified as language-country (https://tools.ietf.org/html/bcp47[BCP 47 language tag]). For example, `en-US`.
-
-Here is example code that returns the date rounded up to the month "2007-JUL":
-
-[source,xml]
-----
-<entity name="en" pk="id" transformer="DateFormatTransformer" ... >
-  ...
-  <field column="date" sourceColName="fulldate" dateTimeFormat="yyyy-MMM"/>
-</entity>
-----
-
-=== The HTMLStripTransformer
-
-You can use this transformer to strip HTML out of a field.
-
-There is one attribute for this transformer, `stripHTML`, which is a boolean value (true or false) to signal if the HTMLStripTransformer should process the field or not.
-
-For example:
-
-[source,xml]
-----
-<entity name="e" transformer="HTMLStripTransformer" ... >
-  <field column="htmlText" stripHTML="true" />
-  ...
-</entity>
-----
-
-=== The LogTransformer
-
-You can use this transformer to log data to the console or log files. For example:
-
-[source,xml]
-----
-<entity ...
-        transformer="LogTransformer"
-        logTemplate="The name is ${e.name}" logLevel="debug">
-  ....
-</entity>
-----
-
-Unlike other transformers, the LogTransformer does not apply to any field, so the attributes are applied on the entity itself.
-
-=== The NumberFormatTransformer
-
-Use this transformer to parse a number from a string, converting it into the specified format, and optionally using a different locale.
-
-NumberFormatTransformer will be applied only to fields with an attribute `formatStyle`.
-
-This transformer recognizes the following attributes:
-
-formatStyle::
-The format used for parsing this field. The value of the attribute must be one of `number`, `percent`, `integer`, or `currency`. This uses the semantics of the Java NumberFormat class.
-
-sourceColName::
-The column on which the NumberFormat is to be applied. If this attribute is absent, the source column and the target column are the same.
-
-locale::
-The locale to be used for parsing the strings. If not defined, the ROOT locale is used. It must be specified as language-country (https://tools.ietf.org/html/bcp47[BCP 47 language tag]). For example, `en-US`.
-
-For example:
-
-[source,xml]
-----
-<entity name="en" pk="id" transformer="NumberFormatTransformer" ...>
-  ...
-
-  <!-- treat this field as UK pounds -->
-
-  <field name="price_uk" column="price" formatStyle="currency" locale="en-UK"/>
-</entity>
-----
-
-=== The RegexTransformer
-
-The regex transformer helps in extracting or manipulating values from fields (from the source) using Regular Expressions. The actual class name is `org.apache.solr.handler.dataimport.RegexTransformer`, but since it belongs to the default DIH package, the package name can be omitted.
-
-The table below describes the attributes recognized by the regex transformer.
-
-regex::
-The regular expression that is used to match against the column or sourceColName's value(s). If replaceWith is absent, each regex _group_ is taken as a value and a list of values is returned.
-
-sourceColName::
-The column on which the regex is to be applied. If not present, then the source and target are identical.
-
-splitBy::
-Used to split a string. It returns a list of values. Note, this is a regular expression so it may need to be escaped (e.g., via back-slashes).
-
-groupNames::
-A comma-separated list of field column names, used where the regex contains groups and each group is to be saved to a different field. If some groups are not to be named, leave a space between the commas.
-
-replaceWith::
-Used along with regex. It is equivalent to the method `new String(<sourceColVal>).replaceAll(<regex>, <replaceWith>)`.
-
-Here is an example of configuring the regex transformer:
-
-[source,xml]
-----
-<entity name="foo" transformer="RegexTransformer"
-        query="select full_name, emailids from foo"> --<1>
-  <field column="full_name"/> --<2>
-  <field column="firstName" regex="Mr(\w*)\b.*" sourceColName="full_name"/>
-  <field column="lastName" regex="Mr.*?\b(\w*)" sourceColName="full_name"/>
-
-  <!-- another way of doing the same -->
-
-  <field column="fullName" regex="Mr(\w*)\b(.*)" groupNames="firstName,lastName"/>
-  <field column="mailId" splitBy="," sourceColName="emailids"/> --<3>
-</entity>
-----
-
-<1> In this example, `regex` and `sourceColName` are custom attributes used by the transformer.
-<2> The transformer reads the field `full_name` from the result set and transforms it to two new target fields, `firstName` and `lastName`. Even though the query returned only one column, `full_name`, in the result set, the Solr document gets two extra fields `firstName` and `lastName` which are "derived" fields. These new fields are only created if the regexp matches.
-<3> The `emailids` field in the table can be a comma-separated value. It ends up producing one or more email IDs, and we expect the `mailId` to be a multivalued field in Solr.
-
-Note that this transformer can be used either to split a string into tokens based on a `splitBy` pattern, to perform a string substitution as per `replaceWith`, or to assign groups within a pattern to a list of `groupNames`. It decides what to do based upon the attributes `splitBy`, `replaceWith`, and `groupNames`, which are checked in that order. The first one found is acted upon and the other unrelated attributes are ignored.
-
-=== The ScriptTransformer
-
-The script transformer allows arbitrary transformer functions to be written in any scripting language supported by Java, such as JavaScript, JRuby, Jython, Groovy, or BeanShell. JavaScript is integrated into Java by default; you'll need to integrate other languages yourself.
-
-Each function you write must accept a row variable (which corresponds to a Java `Map<String,Object>`, thus permitting `get`, `put`, and `remove` operations). Thus you can modify the value of an existing field or add new fields. The function must return the (possibly modified) row object.
-
-The script is inserted into the DIH configuration file at the top level and is called once for each row.
-
-Here is a simple example.
-
-[source,xml]
-----
-<dataConfig>
-
-  <!-- simple script to generate a new row, converting a temperature from Fahrenheit to Centigrade -->
-
-  <script><![CDATA[
-    function f2c(row) {
-      var tempf, tempc;
-      tempf = row.get('temp_f');
-      if (tempf != null) {
-        tempc = (tempf - 32.0)*5.0/9.0;
-        row.put('temp_c', tempc);
-      }
-      return row;
-    }
-    ]]>
-  </script>
-  <document>
-
-    <!-- the function is specified as an entity attribute -->
-
-    <entity name="e1" pk="id" transformer="script:f2c" query="select * from X">
-      ....
-    </entity>
-  </document>
-</dataConfig>
-----
-
-=== The TemplateTransformer
-
-You can use the template transformer to construct or modify a field value, perhaps using the value of other fields. You can insert extra text into the template.
-
-[source,xml]
-----
-<entity name="en" pk="id" transformer="TemplateTransformer" ...>
-  ...
-  <!-- generate a full address from fields containing the component parts -->
-  <field column="full_address" template="${en.street},${en.city},${en.zip}" />
-</entity>
-----
-
-== Special Commands for DIH
-
-You can pass special commands to the DIH by adding any of the variables listed below to any row returned by any component:
-
-$skipDoc::
-Skip the current document; that is, do not add it to Solr. The value can be the string `true` or `false`.
-
-$skipRow::
-Skip the current row. The document will be added with rows from other entities. The value can be the string `true` or `false`.
-
-$deleteDocById::
-Delete a document from Solr with this ID. The value has to be the `uniqueKey` value of the document.
-
-$deleteDocByQuery::
-Delete documents from Solr using this query. The value must be a Solr Query.
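-
-For example, a ScriptTransformer sketch that sets `$skipDoc` on rows it wants dropped (the `active` column is illustrative):
-
-[source,xml]
-----
-<script><![CDATA[
-  function skipInactive(row) {
-    // tell DIH to skip documents whose 'active' column is false
-    if (row.get('active') == 'false') {
-      row.put('$skipDoc', 'true');
-    }
-    return row;
-  }
-]]></script>
-----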
diff --git a/solr/solr-ref-guide/src/using-the-solr-administration-user-interface.adoc b/solr/solr-ref-guide/src/using-the-solr-administration-user-interface.adoc
index a74b3ed..e764ed8 100644
--- a/solr/solr-ref-guide/src/using-the-solr-administration-user-interface.adoc
+++ b/solr/solr-ref-guide/src/using-the-solr-administration-user-interface.adoc
@@ -30,7 +30,6 @@
 * *<<collection-specific-tools.adoc#collection-specific-tools,Collection-Specific Tools>>* is a section explaining additional screens available for each collection.
 // TODO: SOLR-10655 BEGIN: refactor this into a 'collection-screens-list.include.adoc' file for reuse
 ** <<analysis-screen.adoc#analysis-screen,Analysis>> - lets you analyze the data found in specific fields.
-** <<dataimport-screen.adoc#dataimport-screen,Dataimport>> - shows you information about the current status of the Data Import Handler.
 ** <<documents-screen.adoc#documents-screen,Documents>> - provides a simple form allowing you to execute various Solr indexing commands directly from the browser.
 ** <<files-screen.adoc#files-screen,Files>> - shows the current core configuration files such as `solrconfig.xml`.
 ** <<query-screen.adoc#query-screen,Query>> - lets you submit a structured query about various elements of a core.
diff --git a/solr/solr-ref-guide/tools/CustomizedAsciidoctorAntTask.java b/solr/solr-ref-guide/tools/CustomizedAsciidoctorAntTask.java
deleted file mode 100644
index 5c1d700..0000000
--- a/solr/solr-ref-guide/tools/CustomizedAsciidoctorAntTask.java
+++ /dev/null
@@ -1,34 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-import org.asciidoctor.ant.AsciidoctorAntTask;
-
-/**
- * Customized version of the default AsciidoctorAntTask
- * To deal with the fact that we want sourceDocumentName="" treated the same as unspecified (ie: null)
- * in order to be able to wrap in a macro with defaults
- */
-public class CustomizedAsciidoctorAntTask extends AsciidoctorAntTask {
-  @SuppressWarnings("UnusedDeclaration")
-  public void setSourceDocumentName(String sourceDocumentName) {
-    if ("".equals(sourceDocumentName)) {
-      sourceDocumentName = null;
-    }
-    super.setSourceDocumentName(sourceDocumentName);
-  }
-}
-
- 
diff --git a/solr/solr-ref-guide/tools/asciidoctor-antlib.xml b/solr/solr-ref-guide/tools/asciidoctor-antlib.xml
deleted file mode 100644
index d67e3e1..0000000
--- a/solr/solr-ref-guide/tools/asciidoctor-antlib.xml
+++ /dev/null
@@ -1,22 +0,0 @@
-<?xml version="1.0"?>
-<!--
-   Licensed to the Apache Software Foundation (ASF) under one
-   or more contributor license agreements.  See the NOTICE file
-   distributed with this work for additional information
-   regarding copyright ownership.  The ASF licenses this file
-   to you under the Apache License, Version 2.0 (the
-   "License"); you may not use this file except in compliance
-   with the License.  You may obtain a copy of the License at
-
-     http://www.apache.org/licenses/LICENSE-2.0
-
-   Unless required by applicable law or agreed to in writing,
-   software distributed under the License is distributed on an
-   "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-   KIND, either express or implied.  See the License for the
-   specific language governing permissions and limitations
-   under the License.
--->
-<antlib>
-   <typedef name="convert" classname="CustomizedAsciidoctorAntTask"/>
-</antlib>
diff --git a/solr/solrj/build.xml b/solr/solrj/build.xml
deleted file mode 100644
index 3393438..0000000
--- a/solr/solrj/build.xml
+++ /dev/null
@@ -1,88 +0,0 @@
-<?xml version="1.0"?>
-<!--
-    Licensed to the Apache Software Foundation (ASF) under one or more
-    contributor license agreements.  See the NOTICE file distributed with
-    this work for additional information regarding copyright ownership.
-    The ASF licenses this file to You under the Apache License, Version 2.0
-    the "License"); you may not use this file except in compliance with
-    the License.  You may obtain a copy of the License at
-
-        http://www.apache.org/licenses/LICENSE-2.0
-
-    Unless required by applicable law or agreed to in writing, software
-    distributed under the License is distributed on an "AS IS" BASIS,
-    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-    See the License for the specific language governing permissions and
-    limitations under the License.
- -->
-<project name="solr-solrj" default="default" xmlns:ivy="antlib:org.apache.ivy.ant">
-  <description>Solrj - Solr Java Client</description>
-
-  <property name="test.lib.dir" location="test-lib"/>
-
-  <import file="../common-build.xml"/>
-
-  <!-- Specialized compile classpath: to only depend on what solrj should depend on (e.g. not lucene) -->
-  <path id="classpath">
-    <fileset dir="${common-solr.dir}/solrj/lib" excludes="${common.classpath.excludes}"/>
-  </path>
-
-  <!-- Specialized common-solr.test.classpath, to remove the Solr core test output -->
-  <path id="test.classpath">
-    <fileset dir="${test.lib.dir}" includes="*.jar"/>
-    <pathelement path="${common-solr.dir}/build/solr-test-framework/classes/java"/>
-    <pathelement path="${tests.userdir}"/>
-    <path refid="test.base.classpath"/>
-    <path refid="solr.base.classpath"/>
-    <pathelement path="${example}/resources"/>
-  </path>
-
-  <target name="resolve" depends="ivy-availability-check,ivy-fail,ivy-configure">
-    <sequential>
-      <ivy:retrieve conf="compile" type="jar,bundle" sync="${ivy.sync}" log="download-only" symlink="${ivy.symlink}"/>
-      <ivy:retrieve conf="test" type="jar,bundle,test" sync="${ivy.sync}" log="download-only" symlink="${ivy.symlink}"
-                    pattern="${test.lib.dir}/[artifact]-[revision](-[classifier]).[ext]"/>
-    </sequential>
-  </target>
-
-  <!-- Specialized to depend on nothing -->
-  <target name="javadocs" depends="compile-core,define-lucene-javadoc-url,check-javadocs-uptodate"
-          unless="javadocs-uptodate-${name}">
-    <sequential>
-      <mkdir dir="${javadoc.dir}/${name}"/>
-      <solr-invoke-javadoc>
-        <solrsources>
-          <packageset dir="${src.dir}"/>
-        </solrsources>
-      </solr-invoke-javadoc>
-      <solr-jarify basedir="${javadoc.dir}/${name}" destfile="${build.dir}/${final.name}-javadoc.jar"/>
-     </sequential>
-  </target>
-
-  <!-- Specialized to use lucene's classpath too, because it refs e.g. qp syntax 
-       (even though it doesnt compile with it) 
-       TODO: would be nice to fix this up better, but it's hard because of
-       the different ways solr links to lucene javadocs -->
-  <target name="-ecj-javadoc-lint-src" depends="-ecj-resolve">
-    <ecj-macro srcdir="${src.dir}" configuration="${common.dir}/tools/javadoc/ecj.javadocs.prefs">
-      <classpath>
-        <path refid="classpath"/>
-        <path refid="solr.lucene.libs"/>
-      </classpath>
-    </ecj-macro>
-  </target>
-
-
-  <target name="dist" depends="common-solr.dist">
-    <mkdir  dir="${dist}/solrj-lib" />
-    <copy todir="${dist}/solrj-lib">
-      <fileset dir="${common-solr.dir}/solrj/lib">
-        <include name="*.jar"/>
-      </fileset>
-    </copy>
-  </target>
-
-  <target name="-dist-maven" depends="-dist-maven-src-java"/>
-
-  <target name="-install-to-maven-local-repo" depends="-install-src-java-to-maven-local-repo"/>
-</project>
diff --git a/solr/solrj/ivy.xml b/solr/solrj/ivy.xml
deleted file mode 100644
index 5ce77aa..0000000
--- a/solr/solrj/ivy.xml
+++ /dev/null
@@ -1,77 +0,0 @@
-<?xml version="1.0" encoding="utf-8"?>
-<!--
-   Licensed to the Apache Software Foundation (ASF) under one
-   or more contributor license agreements.  See the NOTICE file
-   distributed with this work for additional information
-   regarding copyright ownership.  The ASF licenses this file
-   to you under the Apache License, Version 2.0 (the
-   "License"); you may not use this file except in compliance
-   with the License.  You may obtain a copy of the License at
-
-     http://www.apache.org/licenses/LICENSE-2.0
-
-   Unless required by applicable law or agreed to in writing,
-   software distributed under the License is distributed on an
-   "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-   KIND, either express or implied.  See the License for the
-   specific language governing permissions and limitations
-   under the License.    
--->
-<ivy-module version="2.0">
-  <info organisation="org.apache.solr" module="solrj"/>
-
-  <configurations defaultconfmapping="compile->master;test->master">
-    <!-- artifacts in the "compile" configuration will go into solr/solrj/lib/ -->
-    <conf name="compile" transitive="false"/>
-    <!-- artifacts in the "test" configuration will go into solr/solrj/test-lib/ -->
-    <conf name="test" transitive="false"/>
-  </configurations>
-
-  <dependencies>
-    <dependency org="org.apache.zookeeper" name="zookeeper" rev="${/org.apache.zookeeper/zookeeper}" conf="compile"/>
-    <dependency org="org.apache.zookeeper" name="zookeeper-jute" rev="${/org.apache.zookeeper/zookeeper-jute}" conf="compile"/>
-    <!-- needed to instantiate an embedded Zookeeper server as of 3.6.1-->
-    <dependency org="org.xerial.snappy" name="snappy-java" rev="${/org.xerial.snappy/snappy-java}" conf="compile" />
-    <dependency org="commons-lang" name="commons-lang" rev="${/commons-lang/commons-lang}" conf="compile" />
-
-    <dependency org="org.apache.httpcomponents" name="httpclient" rev="${/org.apache.httpcomponents/httpclient}" conf="compile"/>
-    <dependency org="org.apache.httpcomponents" name="httpmime" rev="${/org.apache.httpcomponents/httpmime}" conf="compile"/>
-    <dependency org="org.apache.httpcomponents" name="httpcore" rev="${/org.apache.httpcomponents/httpcore}" conf="compile"/>
-    <dependency org="commons-io" name="commons-io" rev="${/commons-io/commons-io}" conf="compile"/>
-    <dependency org="org.apache.commons" name="commons-math3" rev="${/org.apache.commons/commons-math3}" conf="compile"/>
-    <dependency org="org.codehaus.woodstox" name="woodstox-core-asl" rev="${/org.codehaus.woodstox/woodstox-core-asl}" conf="compile"/>
-    <dependency org="org.codehaus.woodstox" name="stax2-api" rev="${/org.codehaus.woodstox/stax2-api}" conf="compile"/>
-    <dependency org="org.slf4j" name="slf4j-api" rev="${/org.slf4j/slf4j-api}" conf="compile"/>
-    <dependency org="org.slf4j" name="jcl-over-slf4j" rev="${/org.slf4j/jcl-over-slf4j}" conf="compile"/>
-
-    <dependency org="org.eclipse.jetty.http2" name="http2-client" rev="${/org.eclipse.jetty.http2/http2-client}" conf="compile"/>
-    <dependency org="org.eclipse.jetty.http2" name="http2-http-client-transport" rev="${/org.eclipse.jetty.http2/http2-http-client-transport}" conf="compile"/>
-    <dependency org="org.eclipse.jetty.http2" name="http2-common" rev="${/org.eclipse.jetty.http2/http2-common}" conf="compile"/>
-    <dependency org="org.eclipse.jetty.http2" name="http2-hpack" rev="${/org.eclipse.jetty.http2/http2-hpack}" conf="compile"/>
-
-    <dependency org="org.eclipse.jetty" name="jetty-client" rev="${/org.eclipse.jetty/jetty-client}" conf="compile"/>
-    <dependency org="org.eclipse.jetty" name="jetty-util" rev="${/org.eclipse.jetty/jetty-util}" conf="compile"/>
-    <dependency org="org.eclipse.jetty" name="jetty-http" rev="${/org.eclipse.jetty/jetty-http}" conf="compile"/>
-    <dependency org="org.eclipse.jetty" name="jetty-io" rev="${/org.eclipse.jetty/jetty-io}" conf="compile"/>
-    <dependency org="org.eclipse.jetty" name="jetty-alpn-java-client" rev="${/org.eclipse.jetty/jetty-alpn-java-client}" conf="compile"/>
-    <dependency org="org.eclipse.jetty" name="jetty-alpn-client" rev="${/org.eclipse.jetty/jetty-alpn-client}" conf="compile"/>
-
-    <dependency org="io.netty" name="netty-common" rev="${/io.netty/netty-common}" conf="compile"/>
-    <dependency org="io.netty" name="netty-resolver" rev="${/io.netty/netty-resolver}" conf="compile"/>
-    <dependency org="io.netty" name="netty-buffer" rev="${/io.netty/netty-buffer}" conf="compile"/>
-    <dependency org="io.netty" name="netty-codec" rev="${/io.netty/netty-codec}" conf="compile"/>
-    <dependency org="io.netty" name="netty-handler" rev="${/io.netty/netty-handler}" conf="compile"/>
-    <dependency org="io.netty" name="netty-transport" rev="${/io.netty/netty-transport}" conf="compile"/>
-    <dependency org="io.netty" name="netty-transport-native-epoll" rev="${/io.netty/netty-transport-native-epoll}" conf="compile"/>
-    <dependency org="io.netty" name="netty-transport-native-unix-common" rev="${/io.netty/netty-transport-native-unix-common}" conf="compile"/>
-
-    <dependency org="org.apache.logging.log4j" name="log4j-slf4j-impl" rev="${/org.apache.logging.log4j/log4j-slf4j-impl}" conf="test"/>
-
-    <dependency org="org.mockito" name="mockito-core" rev="${/org.mockito/mockito-core}" conf="test"/>
-    <dependency org="net.bytebuddy" name="byte-buddy" rev="${/net.bytebuddy/byte-buddy}" conf="test"/>
-    <dependency org="org.objenesis" name="objenesis" rev="${/org.objenesis/objenesis}" conf="test"/>
-
-
-    <exclude org="*" ext="*" matcher="regexp" type="${ivy.exclude.types}"/>
-  </dependencies>
-</ivy-module>
diff --git a/solr/solrj/src/java/org/apache/solr/client/solrj/SolrClient.java b/solr/solrj/src/java/org/apache/solr/client/solrj/SolrClient.java
index 01f5775..cee4e9a 100644
--- a/solr/solrj/src/java/org/apache/solr/client/solrj/SolrClient.java
+++ b/solr/solrj/src/java/org/apache/solr/client/solrj/SolrClient.java
@@ -1305,4 +1305,12 @@
     return binder;
   }
 
+  /**
+   * This method defines the context in which this Solr client
+   * is being used (e.g. for internal communication between Solr
+   * nodes or as an external client). The default value is {@code SolrClientContext#CLIENT}.
+   */
+  public SolrRequest.SolrClientContext getContext() {
+    return SolrRequest.SolrClientContext.CLIENT;
+  }
 }
diff --git a/solr/solrj/src/java/org/apache/solr/client/solrj/SolrRequest.java b/solr/solrj/src/java/org/apache/solr/client/solrj/SolrRequest.java
index 31dcda0..22e615f 100644
--- a/solr/solrj/src/java/org/apache/solr/client/solrj/SolrRequest.java
+++ b/solr/solrj/src/java/org/apache/solr/client/solrj/SolrRequest.java
@@ -54,6 +54,20 @@
     DELETE
   };
 
+  public enum SolrRequestType {
+    QUERY,
+    UPDATE,
+    SECURITY,
+    ADMIN,
+    STREAMING,
+    UNSPECIFIED
+  };
+
+  public enum SolrClientContext {
+    CLIENT,
+    SERVER
+  };
+
   public static final Set<String> SUPPORTED_METHODS = Set.of(
       METHOD.GET.toString(),
       METHOD.POST.toString(),
@@ -168,6 +182,11 @@
     this.queryParams = queryParams;
   }
 
+  /**
+   * This method defines the type of this Solr request.
+   */
+  public abstract String getRequestType();
+
   public abstract SolrParams getParams();
 
   /**
diff --git a/solr/solrj/src/java/org/apache/solr/client/solrj/impl/HttpSolrClient.java b/solr/solrj/src/java/org/apache/solr/client/solrj/impl/HttpSolrClient.java
index 7810629..e95c5d0 100644
--- a/solr/solrj/src/java/org/apache/solr/client/solrj/impl/HttpSolrClient.java
+++ b/solr/solrj/src/java/org/apache/solr/client/solrj/impl/HttpSolrClient.java
@@ -44,6 +44,7 @@
 import org.apache.commons.io.IOUtils;
 import org.apache.http.Header;
 import org.apache.http.HttpEntity;
+import org.apache.http.HttpMessage;
 import org.apache.http.HttpResponse;
 import org.apache.http.HttpStatus;
 import org.apache.http.NameValuePair;
@@ -98,6 +99,8 @@
   private static final Charset FALLBACK_CHARSET = StandardCharsets.UTF_8;
   private static final String DEFAULT_PATH = "/select";
   private static final long serialVersionUID = -946812319974801896L;
+
+  protected static final Set<Integer> UNMATCHED_ACCEPTED_ERROR_CODES = Collections.singleton(429);
   
   /**
    * User-Agent String.
@@ -358,7 +361,9 @@
     if (parser == null) {
       parser = this.parser;
     }
-    
+
+    Header[] contextHeaders = buildRequestSpecificHeaders(request);
+
     // The parser 'wt=' and 'version=' params are used instead of the original
     // params
     ModifiableSolrParams wparams = new ModifiableSolrParams(params);
@@ -387,7 +392,10 @@
         throw new SolrException(SolrException.ErrorCode.BAD_REQUEST, "GET can't send streams!");
       }
 
-      return new HttpGet(basePath + path + wparams.toQueryString());
+      HttpGet result = new HttpGet(basePath + path + wparams.toQueryString());
+
+      populateHeaders(result, contextHeaders);
+      return result;
     }
 
     if (SolrRequest.METHOD.DELETE == request.getMethod()) {
@@ -428,6 +436,9 @@
             contentWriter.write(outstream);
           }
         });
+
+        populateHeaders(postOrPut, contextHeaders);
+
         return postOrPut;
 
       } else if (streams == null || isMultipart) {
@@ -624,6 +635,12 @@
       if (procCt != null) {
         String procMimeType = ContentType.parse(procCt).getMimeType().trim().toLowerCase(Locale.ROOT);
         if (!procMimeType.equals(mimeType)) {
+          if (isUnmatchedErrorCode(mimeType, httpStatus)) {
+            throw new RemoteSolrException(baseUrl, httpStatus, "non ok status: " + httpStatus
+                  + ", message:" + response.getStatusLine().getReasonPhrase(),
+                  null);
+          }
+
           // unexpected mime type
           String msg = "Expected mime type " + procMimeType + " but got " + mimeType + ".";
           Charset exceptionCharset = charset != null? charset : FALLBACK_CHARSET;
@@ -699,6 +716,37 @@
       }
     }
   }
+
+  // When raising an error using HTTP sendError, MIME types can be mismatched. This is specifically true when
+  // SolrDispatchFilter uses the sendError mechanism, since the expected MIME type of the response is not HTML,
+  // but HTTP sendError generates HTML output, which can lead to a mismatch.
+  private boolean isUnmatchedErrorCode(String mimeType, int httpStatus) {
+    if (mimeType == null) {
+      return false;
+    }
+
+    if (mimeType.equalsIgnoreCase("text/html") && UNMATCHED_ACCEPTED_ERROR_CODES.contains(httpStatus)) {
+      return true;
+    }
+
+    return false;
+  }
+
+  private Header[] buildRequestSpecificHeaders(@SuppressWarnings({"rawtypes"}) final SolrRequest request) {
+    Header[] contextHeaders = new Header[2];
+
+    //TODO: validate request context here: https://issues.apache.org/jira/browse/SOLR-14720
+    contextHeaders[0] = new BasicHeader(CommonParams.SOLR_REQUEST_CONTEXT_PARAM, getContext().toString());
+
+    contextHeaders[1] = new BasicHeader(CommonParams.SOLR_REQUEST_TYPE_PARAM, request.getRequestType());
+
+    return contextHeaders;
+  }
+
+  private void populateHeaders(HttpMessage message, Header[] contextHeaders) {
+    message.addHeader(contextHeaders[0]);
+    message.addHeader(contextHeaders[1]);
+  }
   
   // -------------------------------------------------------------------
   // -------------------------------------------------------------------
diff --git a/solr/solrj/src/java/org/apache/solr/client/solrj/impl/LBHttp2SolrClient.java b/solr/solrj/src/java/org/apache/solr/client/solrj/impl/LBHttp2SolrClient.java
index 96f96bf..8898092 100644
--- a/solr/solrj/src/java/org/apache/solr/client/solrj/impl/LBHttp2SolrClient.java
+++ b/solr/solrj/src/java/org/apache/solr/client/solrj/impl/LBHttp2SolrClient.java
@@ -40,8 +40,8 @@
  * {@link Http2SolrClient}. This is useful when you
  * have multiple Solr servers and the requests need to be Load Balanced among them.
  *
- * Do <b>NOT</b> use this class for indexing in master/slave scenarios since documents must be sent to the
- * correct master; no inter-node routing is done.
+ * Do <b>NOT</b> use this class for indexing in leader/follower scenarios since documents must be sent to the
+ * correct leader; no inter-node routing is done.
  *
  * In SolrCloud (leader/replica) scenarios, it is usually better to use
  * {@link CloudSolrClient}, but this class may be used
diff --git a/solr/solrj/src/java/org/apache/solr/client/solrj/impl/LBHttpSolrClient.java b/solr/solrj/src/java/org/apache/solr/client/solrj/impl/LBHttpSolrClient.java
index bc4efbb..0f39fc0 100644
--- a/solr/solrj/src/java/org/apache/solr/client/solrj/impl/LBHttpSolrClient.java
+++ b/solr/solrj/src/java/org/apache/solr/client/solrj/impl/LBHttpSolrClient.java
@@ -33,8 +33,8 @@
  * {@link HttpSolrClient}. This is useful when you
  * have multiple Solr servers and the requests need to be Load Balanced among them.
  *
- * Do <b>NOT</b> use this class for indexing in master/slave scenarios since documents must be sent to the
- * correct master; no inter-node routing is done.
+ * Do <b>NOT</b> use this class for indexing in leader/follower scenarios since documents must be sent to the
+ * correct leader; no inter-node routing is done.
  *
  * In SolrCloud (leader/replica) scenarios, it is usually better to use
  * {@link CloudSolrClient}, but this class may be used
diff --git a/solr/solrj/src/java/org/apache/solr/client/solrj/impl/LBSolrClient.java b/solr/solrj/src/java/org/apache/solr/client/solrj/impl/LBSolrClient.java
index c1e6af7..5488f66 100644
--- a/solr/solrj/src/java/org/apache/solr/client/solrj/impl/LBSolrClient.java
+++ b/solr/solrj/src/java/org/apache/solr/client/solrj/impl/LBSolrClient.java
@@ -213,7 +213,8 @@
       if (previousEx == null) {
         suffix = ":" + zombieServers.keySet();
       }
-      if (isTimeExceeded(timeAllowedNano, timeOutTime)) {
+      // Skip the time-exceeded check for the first request
+      if (numServersTried > 0 && isTimeExceeded(timeAllowedNano, timeOutTime)) {
         throw new SolrServerException("Time allowed to handle this request exceeded"+suffix, previousEx);
       }
       if (serverStr == null) {
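
The new numServersTried guard guarantees that a request arriving with an already-exhausted time budget still gets exactly one attempt before the timeout fires. A standalone sketch of the loop shape follows; isTimeExceeded here is an assumed reconstruction of the private helper, and the server list is illustrative.

    import java.util.concurrent.TimeUnit;

    public class FirstAttemptSketch {
      // Assumed shape of LBSolrClient's private helper: a deadline comparison.
      static boolean isTimeExceeded(long timeAllowedNano, long timeOutTime) {
        return timeAllowedNano > 0 && System.nanoTime() > timeOutTime;
      }

      public static void main(String[] args) throws InterruptedException {
        long timeAllowedNano = TimeUnit.MILLISECONDS.toNanos(1);
        long timeOutTime = System.nanoTime() + timeAllowedNano;
        Thread.sleep(5); // the budget is spent before the first attempt
        int numServersTried = 0;
        for (String server : new String[]{"http://s1:8983/solr", "http://s2:8983/solr"}) {
          // The check is skipped until at least one server has been tried.
          if (numServersTried > 0 && isTimeExceeded(timeAllowedNano, timeOutTime)) {
            System.out.println("giving up after " + numServersTried + " attempt(s)");
            break;
          }
          numServersTried++;
          System.out.println("trying " + server); // only s1 is printed
        }
      }
    }
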
diff --git a/solr/solrj/src/java/org/apache/solr/client/solrj/io/stream/TupleStream.java b/solr/solrj/src/java/org/apache/solr/client/solrj/io/stream/TupleStream.java
index 90bfb0e..9372fd6 100644
--- a/solr/solrj/src/java/org/apache/solr/client/solrj/io/stream/TupleStream.java
+++ b/solr/solrj/src/java/org/apache/solr/client/solrj/io/stream/TupleStream.java
@@ -22,7 +22,6 @@
 import java.io.Serializable;
 import java.util.ArrayList;
 import java.util.List;
-import java.util.Optional;
 import java.util.Set;
 import java.util.UUID;
 import java.util.Map;
@@ -141,20 +140,34 @@
       shards = shardsMap.get(collection);
     } else {
       //SolrCloud Sharding
-      CloudSolrClient cloudSolrClient =
-          Optional.ofNullable(streamContext.getSolrClientCache()).orElseGet(SolrClientCache::new).getCloudSolrClient(zkHost);
+      SolrClientCache solrClientCache = (streamContext != null ? streamContext.getSolrClientCache() : null);
+      final SolrClientCache localSolrClientCache; // tracks any locally allocated cache that needs to be closed locally
+      if (solrClientCache == null) { // streamContext was null OR streamContext.getSolrClientCache() returned null
+        solrClientCache = localSolrClientCache = new SolrClientCache();
+      } else {
+        localSolrClientCache = null;
+      }
+      CloudSolrClient cloudSolrClient = solrClientCache.getCloudSolrClient(zkHost);
       ZkStateReader zkStateReader = cloudSolrClient.getZkStateReader();
       ClusterState clusterState = zkStateReader.getClusterState();
       Slice[] slices = CloudSolrStream.getSlices(collection, zkStateReader, true);
       Set<String> liveNodes = clusterState.getLiveNodes();
 
 
-      ModifiableSolrParams solrParams = new ModifiableSolrParams(streamContext.getRequestParams());
+      RequestReplicaListTransformerGenerator requestReplicaListTransformerGenerator;
+      final ModifiableSolrParams solrParams;
+      if (streamContext != null) {
+        solrParams = new ModifiableSolrParams(streamContext.getRequestParams());
+        requestReplicaListTransformerGenerator = streamContext.getRequestReplicaListTransformerGenerator();
+      } else {
+        solrParams = new ModifiableSolrParams();
+        requestReplicaListTransformerGenerator = null;
+      }
+      if (requestReplicaListTransformerGenerator == null) {
+        requestReplicaListTransformerGenerator = new RequestReplicaListTransformerGenerator();
+      }
       solrParams.add(requestParams);
 
-      RequestReplicaListTransformerGenerator requestReplicaListTransformerGenerator =
-          Optional.ofNullable(streamContext.getRequestReplicaListTransformerGenerator()).orElseGet(RequestReplicaListTransformerGenerator::new);
-
       ReplicaListTransformer replicaListTransformer = requestReplicaListTransformerGenerator.getReplicaListTransformer(solrParams);
 
       for(Slice slice : slices) {
@@ -170,10 +183,15 @@
           shards.add(sortedReplicas.get(0).getCoreUrl());
         }
       }
+      if (localSolrClientCache != null) {
+        localSolrClientCache.close();
+      }
     }
-    Object core = streamContext.get("core");
-    if (streamContext != null && streamContext.isLocal() && core != null) {
-      shards.removeIf(shardUrl -> !shardUrl.contains((CharSequence) core));
+    if (streamContext != null) {
+      Object core = streamContext.get("core");
+      if (streamContext.isLocal() && core != null) {
+        shards.removeIf(shardUrl -> !shardUrl.contains((CharSequence) core));
+      }
     }
 
     return shards;
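
The Optional.orElseGet chain this replaces silently allocated a SolrClientCache that nothing ever closed, and dereferenced streamContext without a null check. The rewrite follows a borrow-or-own pattern; here is a minimal sketch of that pattern (the try/finally is a slight tightening of the early close above, and the method name is illustrative).

    import org.apache.solr.client.solrj.impl.CloudSolrClient;
    import org.apache.solr.client.solrj.io.SolrClientCache;

    final class CacheOwnershipSketch {
      // Borrow the caller's cache when supplied; otherwise allocate and close our own.
      static void withCloudClient(SolrClientCache supplied, String zkHost) {
        final SolrClientCache local = (supplied == null) ? new SolrClientCache() : null;
        SolrClientCache cache = (supplied != null) ? supplied : local;
        try {
          CloudSolrClient cloudSolrClient = cache.getCloudSolrClient(zkHost);
          // ... resolve slices and replica URLs with the client ...
        } finally {
          if (local != null) {
            local.close(); // never close a borrowed cache; the caller still owns it
          }
        }
      }
    }
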
diff --git a/solr/solrj/src/java/org/apache/solr/client/solrj/request/AbstractUpdateRequest.java b/solr/solrj/src/java/org/apache/solr/client/solrj/request/AbstractUpdateRequest.java
index cb9e74e..2b9870d 100644
--- a/solr/solrj/src/java/org/apache/solr/client/solrj/request/AbstractUpdateRequest.java
+++ b/solr/solrj/src/java/org/apache/solr/client/solrj/request/AbstractUpdateRequest.java
@@ -116,6 +116,11 @@
     return new UpdateResponse();
   }
 
+  @Override
+  public String getRequestType() {
+    return SolrRequestType.UPDATE.toString();
+  }
+
   public boolean isWaitSearcher() {
     return params != null && params.getBool(UpdateParams.WAIT_SEARCHER, false);
   }
diff --git a/solr/solrj/src/java/org/apache/solr/client/solrj/request/CollectionAdminRequest.java b/solr/solrj/src/java/org/apache/solr/client/solrj/request/CollectionAdminRequest.java
index e0955c2..bffdc4e 100644
--- a/solr/solrj/src/java/org/apache/solr/client/solrj/request/CollectionAdminRequest.java
+++ b/solr/solrj/src/java/org/apache/solr/client/solrj/request/CollectionAdminRequest.java
@@ -146,6 +146,11 @@
     return jsonStr();
   }
 
+  @Override
+  public String getRequestType() {
+    return SolrRequestType.ADMIN.toString();
+  }
+
   /**
    * Base class for asynchronous collection admin requests
    */
diff --git a/solr/solrj/src/java/org/apache/solr/client/solrj/request/ConfigSetAdminRequest.java b/solr/solrj/src/java/org/apache/solr/client/solrj/request/ConfigSetAdminRequest.java
index ab06a9f..e171aa1 100644
--- a/solr/solrj/src/java/org/apache/solr/client/solrj/request/ConfigSetAdminRequest.java
+++ b/solr/solrj/src/java/org/apache/solr/client/solrj/request/ConfigSetAdminRequest.java
@@ -100,6 +100,11 @@
     }
   }
 
+  @Override
+  public String getRequestType() {
+    return SolrRequestType.ADMIN.toString();
+  }
+
   // CREATE request
   public static class Create extends ConfigSetSpecificAdminRequest<Create> {
     protected static String PROPERTY_PREFIX = "configSetProp";
diff --git a/solr/solrj/src/java/org/apache/solr/client/solrj/request/CoreAdminRequest.java b/solr/solrj/src/java/org/apache/solr/client/solrj/request/CoreAdminRequest.java
index 692b54d..9e571b7 100644
--- a/solr/solrj/src/java/org/apache/solr/client/solrj/request/CoreAdminRequest.java
+++ b/solr/solrj/src/java/org/apache/solr/client/solrj/request/CoreAdminRequest.java
@@ -43,6 +43,11 @@
   protected String other = null;
   protected boolean isIndexInfoNeeded = true;
   protected CoreAdminParams.CoreAdminAction action = null;
+
+  @Override
+  public String getRequestType() {
+    return SolrRequestType.ADMIN.toString();
+  }
   
   //a create core request
   public static class Create extends CoreAdminRequest {
diff --git a/solr/solrj/src/java/org/apache/solr/client/solrj/request/DelegationTokenRequest.java b/solr/solrj/src/java/org/apache/solr/client/solrj/request/DelegationTokenRequest.java
index 697a6ad..160ae3c 100644
--- a/solr/solrj/src/java/org/apache/solr/client/solrj/request/DelegationTokenRequest.java
+++ b/solr/solrj/src/java/org/apache/solr/client/solrj/request/DelegationTokenRequest.java
@@ -81,6 +81,11 @@
 
     @Override
     public DelegationTokenResponse.Get createResponse(SolrClient client) { return new DelegationTokenResponse.Get(); }
+
+    @Override
+    public String getRequestType() {
+      return SolrRequestType.ADMIN.toString();
+    }
   }
 
   public static class Renew extends DelegationTokenRequest<Renew, DelegationTokenResponse.Renew> {
@@ -108,6 +113,11 @@
 
     @Override
     public DelegationTokenResponse.Renew createResponse(SolrClient client) { return new DelegationTokenResponse.Renew(); }
+
+    @Override
+    public String getRequestType() {
+      return SolrRequestType.ADMIN.toString();
+    }
   }
 
   public static class Cancel extends DelegationTokenRequest<Cancel, DelegationTokenResponse.Cancel> {
@@ -136,5 +146,10 @@
 
     @Override
     public DelegationTokenResponse.Cancel createResponse(SolrClient client) { return new DelegationTokenResponse.Cancel(); }
+
+    @Override
+    public String getRequestType() {
+      return SolrRequestType.ADMIN.toString();
+    }
   }
 }
diff --git a/solr/solrj/src/java/org/apache/solr/client/solrj/request/DirectXmlRequest.java b/solr/solrj/src/java/org/apache/solr/client/solrj/request/DirectXmlRequest.java
index ef5e954..6d0d328 100644
--- a/solr/solrj/src/java/org/apache/solr/client/solrj/request/DirectXmlRequest.java
+++ b/solr/solrj/src/java/org/apache/solr/client/solrj/request/DirectXmlRequest.java
@@ -54,6 +54,11 @@
     return params;
   }
 
+  @Override
+  public String getRequestType() {
+    return SolrRequestType.UPDATE.toString();
+  }
+
 
   public void setParams(SolrParams params) {
     this.params = params;
diff --git a/solr/solrj/src/java/org/apache/solr/client/solrj/request/DocumentAnalysisRequest.java b/solr/solrj/src/java/org/apache/solr/client/solrj/request/DocumentAnalysisRequest.java
index be985e1..09263be 100644
--- a/solr/solrj/src/java/org/apache/solr/client/solrj/request/DocumentAnalysisRequest.java
+++ b/solr/solrj/src/java/org/apache/solr/client/solrj/request/DocumentAnalysisRequest.java
@@ -209,4 +209,9 @@
     return showMatch;
   }
 
+  @Override
+  public String getRequestType() {
+    return SolrRequestType.QUERY.toString();
+  }
+
 }
diff --git a/solr/solrj/src/java/org/apache/solr/client/solrj/request/FieldAnalysisRequest.java b/solr/solrj/src/java/org/apache/solr/client/solrj/request/FieldAnalysisRequest.java
index aeea9e0..7ca6812 100644
--- a/solr/solrj/src/java/org/apache/solr/client/solrj/request/FieldAnalysisRequest.java
+++ b/solr/solrj/src/java/org/apache/solr/client/solrj/request/FieldAnalysisRequest.java
@@ -87,6 +87,11 @@
     return params;
   }
 
+  @Override
+  public String getRequestType() {
+    return SolrRequestType.QUERY.toString();
+  }
+
   //================================================ Helper Methods ==================================================
 
   /**
diff --git a/solr/solrj/src/java/org/apache/solr/client/solrj/request/GenericSolrRequest.java b/solr/solrj/src/java/org/apache/solr/client/solrj/request/GenericSolrRequest.java
index d878925..27eb410 100644
--- a/solr/solrj/src/java/org/apache/solr/client/solrj/request/GenericSolrRequest.java
+++ b/solr/solrj/src/java/org/apache/solr/client/solrj/request/GenericSolrRequest.java
@@ -51,4 +51,9 @@
   protected SimpleSolrResponse createResponse(SolrClient client) {
     return response;
   }
+
+  @Override
+  public String getRequestType() {
+    return SolrRequestType.UNSPECIFIED.toString();
+  }
 }
diff --git a/solr/solrj/src/java/org/apache/solr/client/solrj/request/HealthCheckRequest.java b/solr/solrj/src/java/org/apache/solr/client/solrj/request/HealthCheckRequest.java
index 50d4d8c..f3aeb40 100644
--- a/solr/solrj/src/java/org/apache/solr/client/solrj/request/HealthCheckRequest.java
+++ b/solr/solrj/src/java/org/apache/solr/client/solrj/request/HealthCheckRequest.java
@@ -48,5 +48,8 @@
     return new HealthCheckResponse();
   }
 
-
+  @Override
+  public String getRequestType() {
+    return SolrRequestType.ADMIN.toString();
+  }
 }
diff --git a/solr/solrj/src/java/org/apache/solr/client/solrj/request/LukeRequest.java b/solr/solrj/src/java/org/apache/solr/client/solrj/request/LukeRequest.java
index 6e89c84..a43b4a9 100644
--- a/solr/solrj/src/java/org/apache/solr/client/solrj/request/LukeRequest.java
+++ b/solr/solrj/src/java/org/apache/solr/client/solrj/request/LukeRequest.java
@@ -129,5 +129,10 @@
     return params;
   }
 
+  @Override
+  public String getRequestType() {
+    return SolrRequestType.ADMIN.toString();
+  }
+
 }
 
diff --git a/solr/solrj/src/java/org/apache/solr/client/solrj/request/QueryRequest.java b/solr/solrj/src/java/org/apache/solr/client/solrj/request/QueryRequest.java
index 1c2fda4..d80ab64 100644
--- a/solr/solrj/src/java/org/apache/solr/client/solrj/request/QueryRequest.java
+++ b/solr/solrj/src/java/org/apache/solr/client/solrj/request/QueryRequest.java
@@ -76,5 +76,9 @@
     return query;
   }
 
+  @Override
+  public String getRequestType() {
+    return SolrRequestType.QUERY.toString();
+  }
 }
 
diff --git a/solr/solrj/src/java/org/apache/solr/client/solrj/request/SolrPing.java b/solr/solrj/src/java/org/apache/solr/client/solrj/request/SolrPing.java
index 43801c6..75cfb52 100644
--- a/solr/solrj/src/java/org/apache/solr/client/solrj/request/SolrPing.java
+++ b/solr/solrj/src/java/org/apache/solr/client/solrj/request/SolrPing.java
@@ -54,6 +54,11 @@
   public ModifiableSolrParams getParams() {
     return params;
   }
+
+  @Override
+  public String getRequestType() {
+    return SolrRequestType.ADMIN.toString();
+  }
   
   /**
    * Remove the action parameter from this request. This will result in the same
diff --git a/solr/solrj/src/java/org/apache/solr/client/solrj/request/V2Request.java b/solr/solrj/src/java/org/apache/solr/client/solrj/request/V2Request.java
index 5334edd..932eb6b 100644
--- a/solr/solrj/src/java/org/apache/solr/client/solrj/request/V2Request.java
+++ b/solr/solrj/src/java/org/apache/solr/client/solrj/request/V2Request.java
@@ -133,6 +133,11 @@
     return super.getResponseParser();
   }
 
+  @Override
+  public String getRequestType() {
+    return SolrRequestType.ADMIN.toString();
+  }
+
   public static class Builder {
     private String resource;
     private METHOD method = METHOD.GET;
diff --git a/solr/solrj/src/java/org/apache/solr/client/solrj/request/schema/AbstractSchemaRequest.java b/solr/solrj/src/java/org/apache/solr/client/solrj/request/schema/AbstractSchemaRequest.java
index b20a570..ed0f315 100644
--- a/solr/solrj/src/java/org/apache/solr/client/solrj/request/schema/AbstractSchemaRequest.java
+++ b/solr/solrj/src/java/org/apache/solr/client/solrj/request/schema/AbstractSchemaRequest.java
@@ -37,4 +37,8 @@
     return params;
   }
 
+  @Override
+  public String getRequestType() {
+    return SolrRequestType.ADMIN.toString();
+  }
 }
diff --git a/solr/solrj/src/java/org/apache/solr/cluster/api/ApiType.java b/solr/solrj/src/java/org/apache/solr/cluster/api/ApiType.java
new file mode 100644
index 0000000..b469b5f
--- /dev/null
+++ b/solr/solrj/src/java/org/apache/solr/cluster/api/ApiType.java
@@ -0,0 +1,31 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.solr.cluster.api;
+
+/**
+ * Types of API calls
+ */
+public enum ApiType {
+    V1("solr"),
+    V2("api");
+    final String prefix;
+
+    ApiType(String prefix) {
+        this.prefix = prefix;
+    }
+}
diff --git a/solr/solrj/src/java/org/apache/solr/cluster/api/CollectionConfig.java b/solr/solrj/src/java/org/apache/solr/cluster/api/CollectionConfig.java
new file mode 100644
index 0000000..4f6b82c
--- /dev/null
+++ b/solr/solrj/src/java/org/apache/solr/cluster/api/CollectionConfig.java
@@ -0,0 +1,27 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.solr.cluster.api;
+
+
+public interface CollectionConfig {
+
+  String name();
+
+  SimpleMap<Resource> resources();
+
+}
diff --git a/solr/solrj/src/java/org/apache/solr/cluster/api/HashRange.java b/solr/solrj/src/java/org/apache/solr/cluster/api/HashRange.java
new file mode 100644
index 0000000..e02bf9b
--- /dev/null
+++ b/solr/solrj/src/java/org/apache/solr/cluster/api/HashRange.java
@@ -0,0 +1,42 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+
+package org.apache.solr.cluster.api;
+
+/**
+ * A range of hash that is stored in a shard
+ */
+public interface HashRange {
+
+  /** minimum value (inclusive) */
+  int min();
+
+  /** maximum value (inclusive) */
+  int max();
+
+  /** Check if a given hash falls in this range */
+  default boolean includes(int hash) {
+    return hash >= min() && hash <= max();
+  }
+
+  /** Check if another range is a subset of this range */
+  default boolean isSubset(HashRange subset) {
+    return min() <= subset.min() && max() >= subset.max();
+  }
+
+}
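
Both bounds are inclusive and the default methods carry all the logic, so an implementation only has to supply min() and max(). A quick exercise of the defaults with an anonymous implementation and arbitrary example bounds:

    import org.apache.solr.cluster.api.HashRange;

    public class HashRangeSketch {
      static HashRange range(int min, int max) {
        return new HashRange() {
          @Override public int min() { return min; }
          @Override public int max() { return max; }
        };
      }

      public static void main(String[] args) {
        HashRange whole = range(Integer.MIN_VALUE, Integer.MAX_VALUE);
        HashRange lower = range(Integer.MIN_VALUE, -1);
        System.out.println(lower.includes(-42));   // true: -42 lies in [MIN_VALUE, -1]
        System.out.println(lower.includes(0));     // false: 0 is past the inclusive max
        System.out.println(whole.isSubset(lower)); // true: lower sits entirely inside whole
      }
    }
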
diff --git a/solr/solrj/src/java/org/apache/solr/cluster/api/Resource.java b/solr/solrj/src/java/org/apache/solr/cluster/api/Resource.java
new file mode 100644
index 0000000..470590e
--- /dev/null
+++ b/solr/solrj/src/java/org/apache/solr/cluster/api/Resource.java
@@ -0,0 +1,41 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.solr.cluster.api;
+
+import org.apache.solr.common.SolrException;
+
+import java.io.IOException;
+import java.io.InputStream;
+
+/**A binary resource. The impl is agnostic of the content type */
+public interface Resource {
+    /**
+     * The full path of the resource, e.g. schema.xml, solrconfig.xml, lang/stopwords.txt etc.
+     */
+    String name();
+    /** Read a file/resource.
+     * The caller should consume the stream completely and should not hold a reference to it.
+     * This method closes the stream soon after it returns.
+     * @param resourceConsumer consumer that reads the resource's content from the supplied stream
+     */
+    void get(Consumer resourceConsumer) throws SolrException;
+
+    interface Consumer {
+        void read(InputStream is) throws IOException;
+    }
+}
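
Since the implementation closes the stream as soon as get(...) returns, a consumer has to copy whatever it needs inside the callback. A minimal sketch that buffers the whole resource:

    import org.apache.solr.cluster.api.Resource;

    import java.io.ByteArrayOutputStream;

    final class ResourceSketch {
      // Copy the resource's bytes before the implementation closes the stream.
      static byte[] readFully(Resource resource) {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        resource.get(is -> {
          byte[] buf = new byte[8192];
          int n;
          while ((n = is.read(buf)) != -1) {
            out.write(buf, 0, n);
          }
        });
        return out.toByteArray();
      }
    }
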
diff --git a/solr/solrj/src/java/org/apache/solr/cluster/api/Router.java b/solr/solrj/src/java/org/apache/solr/cluster/api/Router.java
new file mode 100644
index 0000000..e2a147e
--- /dev/null
+++ b/solr/solrj/src/java/org/apache/solr/cluster/api/Router.java
@@ -0,0 +1,25 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.solr.cluster.api;
+
+/** Identify shards for a given routing key or document id */
+public interface Router {
+
+    /** Shard name for a given routing key */
+    String shard(String routingKey);
+}
diff --git a/solr/solrj/src/java/org/apache/solr/cluster/api/Shard.java b/solr/solrj/src/java/org/apache/solr/cluster/api/Shard.java
new file mode 100644
index 0000000..d618fcd
--- /dev/null
+++ b/solr/solrj/src/java/org/apache/solr/cluster/api/Shard.java
@@ -0,0 +1,39 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.solr.cluster.api;
+
+/**A shard of a collection */
+public interface Shard {
+
+  /**name of the shard */
+  String name();
+
+  /**collection this shard belongs to */
+  String collection();
+
+  /** Hash range of this shard; null if the collection is not using a hash-based router */
+  HashRange range();
+
+  /**  replicas of the shard */
+  SimpleMap<ShardReplica> replicas();
+
+  /**
+   * Name of the replica that is acting as the leader at the moment
+   */
+  String leader();
+}
diff --git a/solr/solrj/src/java/org/apache/solr/cluster/api/ShardReplica.java b/solr/solrj/src/java/org/apache/solr/cluster/api/ShardReplica.java
new file mode 100644
index 0000000..1a9d834
--- /dev/null
+++ b/solr/solrj/src/java/org/apache/solr/cluster/api/ShardReplica.java
@@ -0,0 +1,57 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.solr.cluster.api;
+
+import org.apache.solr.common.cloud.Replica;
+
+/** replica of a shard */
+public interface ShardReplica {
+  /** Name of this replica */
+  String name();
+
+  /** The shard which it belongs to */
+  String shard();
+
+  /** collection which it belongs to */
+  String collection();
+
+  /** Name of the node where this replica is present */
+  String node();
+
+  /** Name of the core where this is hosted */
+  String core();
+
+  /** type of the replica */
+  Replica.Type type();
+
+  /** Is the replica alive now */
+  boolean alive();
+
+  /** Size of the index in bytes. Keep in mind that this may result in a network call.
+   * Also keep in mind that the value you get is at best an approximation.
+   * The exact size may vary from replica to replica.
+   */
+  long indexSize();
+
+  /**Is this replica the leader */
+  boolean isLeader();
+
+  /** Base URL for this replica */
+  String url(ApiType type);
+}
diff --git a/solr/solrj/src/java/org/apache/solr/cluster/api/SimpleMap.java b/solr/solrj/src/java/org/apache/solr/cluster/api/SimpleMap.java
new file mode 100644
index 0000000..ca747b9
--- /dev/null
+++ b/solr/solrj/src/java/org/apache/solr/cluster/api/SimpleMap.java
@@ -0,0 +1,80 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.solr.cluster.api;
+
+import org.apache.solr.common.MapWriter;
+
+import java.io.IOException;
+import java.util.function.BiConsumer;
+import java.util.function.BiFunction;
+import java.util.function.Consumer;
+import java.util.function.Function;
+
+/**
+ * A simplified read-only key-value structure. It is designed to support large datasets without consuming a lot
+ * of memory. The objective is to provide implementations that are cheap and memory efficient to implement and consume.
+ * The keys are always {@link CharSequence} objects; the values can be of any type.
+ */
+public interface SimpleMap<T> extends MapWriter {
+
+  /** Get a value by key. If not present, null is returned */
+  T get(String key);
+
+  /**Navigate through all keys and values */
+  void forEachEntry(BiConsumer<String, ? super T> fun);
+
+  /** Iterate through all keys.
+   * The default impl is suboptimal; proper implementations must do it more efficiently.
+   */
+  default void forEachKey(Consumer<String> fun) {
+    forEachEntry((k, t) -> fun.accept(k));
+  }
+
+  int size();
+  /**
+   * Iterate through all keys, aborting midway if required.
+   * The default impl is suboptimal; proper implementations must do it more efficiently.
+   * @param fun consumes each key and returns a boolean signalling whether to proceed: if true, continue; if false, stop
+   */
+  default void abortableForEachKey(Function<String, Boolean> fun) {
+    abortableForEach((key, t) -> fun.apply(key));
+  }
+
+
+  /**
+   * Navigate through all key-value pairs, aborting midway if required.
+   * The default impl is suboptimal; proper implementations must do it more efficiently.
+   * @param fun consumes each entry and returns a boolean signalling whether to proceed: if true, continue; if false, stop
+   */
+  default void abortableForEach(BiFunction<String, ? super T, Boolean> fun) {
+    forEachEntry(new BiConsumer<String, T>() {
+      boolean end = false;
+      @Override
+      public void accept(String k, T v) {
+        if (end) return;
+        end = !fun.apply(k, v); // the callback returns true to continue, so stop once it returns false
+      }
+    });
+  }
+
+
+  @Override
+  default void writeMap(EntryWriter ew) throws IOException {
+    forEachEntry(ew::putNoEx);
+  }
+}
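
A small demonstration of the traversal contract, wrapping a plain java.util.Map in the WrappedSimpleMap adapter that this patch itself uses in LazySolrCluster, and assuming the default abortableForEach (as corrected above) where returning false aborts the scan:

    import org.apache.solr.cluster.api.SimpleMap;
    import org.apache.solr.common.util.WrappedSimpleMap;

    import java.util.LinkedHashMap;
    import java.util.Map;
    import java.util.concurrent.atomic.AtomicReference;

    public class SimpleMapSketch {
      public static void main(String[] args) {
        Map<String, Integer> backing = new LinkedHashMap<>();
        backing.put("a", 1);
        backing.put("b", 2);
        backing.put("c", 3);
        SimpleMap<Integer> map = new WrappedSimpleMap<>(backing);

        AtomicReference<String> firstEven = new AtomicReference<>();
        map.abortableForEach((k, v) -> {
          if (v % 2 == 0) {
            firstEven.set(k);
            return Boolean.FALSE; // stop here; "c" is never visited
          }
          return Boolean.TRUE;    // keep scanning
        });
        System.out.println(firstEven.get()); // prints "b"
      }
    }
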
diff --git a/solr/solrj/src/java/org/apache/solr/cluster/api/SolrCluster.java b/solr/solrj/src/java/org/apache/solr/cluster/api/SolrCluster.java
new file mode 100644
index 0000000..794309a
--- /dev/null
+++ b/solr/solrj/src/java/org/apache/solr/cluster/api/SolrCluster.java
@@ -0,0 +1,48 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.solr.cluster.api;
+
+import org.apache.solr.common.SolrException;
+
+/** Represents a Solr cluster */
+
+public interface SolrCluster {
+
+  /** collections in the cluster */
+  SimpleMap<SolrCollection> collections() throws SolrException;
+
+  /** collections in the cluster and aliases */
+  SimpleMap<SolrCollection> collections(boolean includeAlias) throws SolrException;
+
+  /** nodes in the cluster */
+  SimpleMap<SolrNode> nodes() throws SolrException;
+
+
+  /** Config sets in the cluster*/
+  SimpleMap<CollectionConfig> configs() throws SolrException;
+
+  /** Name of the node in which the overseer is running */
+  String overseerNode() throws SolrException;
+
+  /**
+   * The name of the node from which this method is invoked. Returns null if this is
+   * not invoked from a Solr node.
+   */
+  String thisNode();
+
+}
diff --git a/solr/solrj/src/java/org/apache/solr/cluster/api/SolrCollection.java b/solr/solrj/src/java/org/apache/solr/cluster/api/SolrCollection.java
new file mode 100644
index 0000000..04eeee7
--- /dev/null
+++ b/solr/solrj/src/java/org/apache/solr/cluster/api/SolrCollection.java
@@ -0,0 +1,34 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.solr.cluster.api;
+
+/** Represents a collection in Solr */
+public interface SolrCollection {
+
+  String name();
+
+  /** shards of a collection */
+  SimpleMap<Shard> shards();
+
+  /** Name of the configset used by this collection */
+  String config();
+
+  /**Router used in this collection */
+  Router router();
+
+}
diff --git a/solr/solrj/src/java/org/apache/solr/cluster/api/SolrNode.java b/solr/solrj/src/java/org/apache/solr/cluster/api/SolrNode.java
new file mode 100644
index 0000000..872e3c2
--- /dev/null
+++ b/solr/solrj/src/java/org/apache/solr/cluster/api/SolrNode.java
@@ -0,0 +1,36 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.solr.cluster.api;
+
+/** A read only view of a Solr node */
+public interface SolrNode {
+
+  /** The node name */
+  String name();
+
+  /** Base HTTP URL for this node */
+  String baseUrl(ApiType type);
+
+  /**
+   * Get all the cores in this node.
+   * This usually involves a network call, so it is likely to be expensive.
+   */
+  SimpleMap<ShardReplica> cores();
+}
diff --git a/solr/solrj/src/java/org/apache/solr/cluster/api/package-info.java b/solr/solrj/src/java/org/apache/solr/cluster/api/package-info.java
new file mode 100644
index 0000000..c3084ed
--- /dev/null
+++ b/solr/solrj/src/java/org/apache/solr/cluster/api/package-info.java
@@ -0,0 +1,21 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+ 
+/** 
+ * API interfaces for core SolrCloud classes
+ */
+package org.apache.solr.cluster.api;
diff --git a/solr/solrj/src/java/org/apache/solr/common/LazySolrCluster.java b/solr/solrj/src/java/org/apache/solr/common/LazySolrCluster.java
new file mode 100644
index 0000000..f339485
--- /dev/null
+++ b/solr/solrj/src/java/org/apache/solr/common/LazySolrCluster.java
@@ -0,0 +1,446 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.solr.common;
+
+import org.apache.solr.cluster.api.*;
+import org.apache.solr.common.cloud.*;
+import org.apache.solr.common.util.Utils;
+import org.apache.solr.common.util.WrappedSimpleMap;
+import org.apache.zookeeper.KeeperException;
+
+import java.util.*;
+import java.util.concurrent.ConcurrentHashMap;
+import java.util.function.BiConsumer;
+
+import static org.apache.solr.common.cloud.ZkStateReader.URL_SCHEME;
+import static org.apache.solr.common.cloud.ZkStateReader.getCollectionPathRoot;
+
+/**
+ * Reference implementation of SolrCluster.
+ * Values are fetched lazily wherever possible, because the value of anything
+ * can change at any moment.
+ * Creating an instance is a low-cost operation: it does not result in a
+ * network call or large object creation.
+ */
+
+public class LazySolrCluster implements SolrCluster {
+    final ZkStateReader zkStateReader;
+
+    private final Map<String, SolrCollectionImpl> cached = new ConcurrentHashMap<>();
+    private final SimpleMap<SolrCollection> collections;
+    private final SimpleMap<SolrCollection> collectionsAndAliases;
+    private final SimpleMap<SolrNode> nodes;
+    private SimpleMap<CollectionConfig> configs;
+
+    public LazySolrCluster(ZkStateReader zkStateReader) {
+        this.zkStateReader = zkStateReader;
+        collections = lazyCollectionsMap(zkStateReader);
+        collectionsAndAliases = lazyCollectionsWithAlias(zkStateReader);
+        nodes = lazyNodeMap();
+    }
+
+    private SimpleMap<CollectionConfig> lazyConfigMap() {
+        Set<String> configNames = new HashSet<>();
+        new SimpleZkMap(zkStateReader, ZkStateReader.CONFIGS_ZKNODE)
+                .abortableForEach((name, resource) -> {
+                    if (!name.contains("/")) {
+                        configNames.add(name);
+                        return Boolean.TRUE;
+                    }
+                    return Boolean.FALSE;
+                });
+
+        return new SimpleMap<CollectionConfig>() {
+            @Override
+            public CollectionConfig get(String key) {
+                if (configNames.contains(key)) {
+                    return new ConfigImpl(key);
+                } else {
+                    return null;
+                }
+            }
+
+            @Override
+            public void forEachEntry(BiConsumer<String, ? super CollectionConfig> fun) {
+                for (String name : configNames) {
+                    fun.accept(name, new ConfigImpl(name));
+                }
+            }
+
+            @Override
+            public int size() {
+                return configNames.size();
+            }
+        };
+    }
+
+    private SimpleMap<SolrNode> lazyNodeMap() {
+        return new SimpleMap<SolrNode>() {
+            @Override
+            public SolrNode get(String key) {
+                if (!zkStateReader.getClusterState().liveNodesContain(key)) {
+                    return null;
+                }
+                return new Node(key);
+            }
+
+            @Override
+            public void forEachEntry(BiConsumer<String, ? super SolrNode> fun) {
+                for (String s : zkStateReader.getClusterState().getLiveNodes()) {
+                    fun.accept(s, new Node(s));
+                }
+            }
+
+            @Override
+            public int size() {
+                return zkStateReader.getClusterState().getLiveNodes().size();
+            }
+        };
+    }
+
+    private SimpleMap<SolrCollection> lazyCollectionsWithAlias(ZkStateReader zkStateReader) {
+        return new SimpleMap<SolrCollection>() {
+            @Override
+            public SolrCollection get(String key) {
+                SolrCollection result = collections.get(key);
+                if (result != null) return result;
+                Aliases aliases = zkStateReader.getAliases();
+                List<String> aliasNames = aliases.resolveAliases(key);
+                if (aliasNames == null || aliasNames.isEmpty()) return null;
+                return _collection(aliasNames.get(0), null);
+            }
+
+            @Override
+            public void forEachEntry(BiConsumer<String, ? super SolrCollection> fun) {
+                collections.forEachEntry(fun);
+                Aliases aliases = zkStateReader.getAliases();
+                aliases.forEachAlias((s, colls) -> {
+                    if (colls == null || colls.isEmpty()) return;
+                    fun.accept(s, _collection(colls.get(0), null));
+                });
+
+            }
+
+            @Override
+            public int size() {
+                return collections.size() + zkStateReader.getAliases().size();
+            }
+        };
+    }
+
+    private SimpleMap<SolrCollection> lazyCollectionsMap(ZkStateReader zkStateReader) {
+        return new SimpleMap<SolrCollection>() {
+            @Override
+            public SolrCollection get(String key) {
+                return _collection(key, null);
+            }
+
+            @Override
+            public void forEachEntry(BiConsumer<String, ? super SolrCollection> fun) {
+                zkStateReader.getClusterState().forEachCollection(coll -> fun.accept(coll.getName(), _collection(coll.getName(), coll)));
+            }
+
+            @Override
+            public int size() {
+                return zkStateReader.getClusterState().size();
+            }
+        };
+    }
+
+    private SolrCollection _collection(String key, DocCollection c) {
+        if (c == null) c = zkStateReader.getCollection(key);
+        if (c == null) {
+            cached.remove(key);
+            return null;
+        }
+        SolrCollectionImpl existing = cached.get(key);
+        if (existing == null || existing.coll != c) {
+            cached.put(key, existing = new SolrCollectionImpl(c, zkStateReader));
+        }
+        return existing;
+    }
+
+    @Override
+    public SimpleMap<SolrCollection> collections() throws SolrException {
+        return collections;
+    }
+
+    @Override
+    public SimpleMap<SolrCollection> collections(boolean includeAlias) throws SolrException {
+        return includeAlias ? collectionsAndAliases : collections;
+    }
+
+    @Override
+    public SimpleMap<SolrNode> nodes() throws SolrException {
+        return nodes;
+    }
+
+    @Override
+    public SimpleMap<CollectionConfig> configs() throws SolrException {
+        if (configs == null) {
+            // these are lightweight objects and we don't care even if multiple objects are created b/c of a race condition
+            configs = lazyConfigMap();
+        }
+        return configs;
+    }
+
+    @Override
+    public String overseerNode() throws SolrException {
+        return null;
+    }
+
+    @Override
+    public String thisNode() {
+        return null;
+    }
+
+    private class SolrCollectionImpl implements SolrCollection {
+        final DocCollection coll;
+        final SimpleMap<Shard> shards;
+        final ZkStateReader zkStateReader;
+        final Router router;
+        String confName;
+
+        private SolrCollectionImpl(DocCollection coll, ZkStateReader zkStateReader) {
+            this.coll = coll;
+            this.zkStateReader = zkStateReader;
+            this.router = key -> coll.getRouter().getTargetSlice(key, null, null, null, null).getName();
+            LinkedHashMap<String, Shard> map = new LinkedHashMap<>();
+            for (Slice slice : coll.getSlices()) {
+                map.put(slice.getName(), new ShardImpl(this, slice));
+            }
+            shards = new WrappedSimpleMap<>(map);
+
+        }
+
+        @Override
+        public String name() {
+            return coll.getName();
+        }
+
+        @Override
+        public SimpleMap<Shard> shards() {
+            return shards;
+        }
+
+        @Override
+        @SuppressWarnings("rawtypes")
+        public String config() {
+            if (confName == null) {
+                // do this lazily; it's usually not necessary
+                try {
+                    byte[] d = zkStateReader.getZkClient().getData(getCollectionPathRoot(coll.getName()), null, null, true);
+                    if (d == null || d.length == 0) return null;
+                    Map m = (Map) Utils.fromJSON(d);
+                    confName = (String) m.get("configName");
+                } catch (KeeperException | InterruptedException e) {
+                    SimpleZkMap.throwZkExp(e);
+                    //cannot read from ZK
+                    return null;
+
+                }
+            }
+            return confName;
+        }
+
+        @Override
+        public Router router() {
+            return router;
+        }
+    }
+
+    private class ShardImpl implements Shard {
+        final SolrCollectionImpl collection;
+        final Slice slice;
+        final HashRange range;
+        final SimpleMap<ShardReplica> replicas;
+
+        private ShardImpl(SolrCollectionImpl collection, Slice slice) {
+            this.collection = collection;
+            this.slice = slice;
+            range = _range(slice);
+            replicas = _replicas();
+        }
+
+        private SimpleMap<ShardReplica> _replicas() {
+            Map<String, ShardReplica> replicas = new HashMap<>();
+            slice.forEach(replica -> replicas.put(replica.getName(), new ShardReplicaImpl(ShardImpl.this, replica)));
+            return new WrappedSimpleMap<>(replicas);
+        }
+
+        private HashRange _range(Slice slice) {
+            return slice.getRange() == null ?
+                    null :
+                    new HashRange() {
+                        @Override
+                        public int min() {
+                            return slice.getRange().min;
+                        }
+
+                        @Override
+                        public int max() {
+                            return slice.getRange().max;
+                        }
+                    };
+        }
+
+        @Override
+        public String name() {
+            return slice.getName();
+        }
+
+        @Override
+        public String collection() {
+            return collection.name();
+        }
+
+        @Override
+        public HashRange range() {
+            return range;
+        }
+
+        @Override
+        public SimpleMap<ShardReplica> replicas() {
+            return replicas;
+        }
+
+        @Override
+        public String leader() {
+            Replica leader = slice.getLeader();
+            return leader == null ? null : leader.getName();
+        }
+    }
+
+    private class ShardReplicaImpl implements ShardReplica {
+        private final ShardImpl shard;
+        private final Replica replica;
+
+        private ShardReplicaImpl(ShardImpl shard, Replica replica) {
+            this.shard = shard;
+            this.replica = replica;
+        }
+
+        @Override
+        public String name() {
+            return replica.getName();
+        }
+
+        @Override
+        public String shard() {
+            return shard.name();
+        }
+
+        @Override
+        public String collection() {
+            return shard.collection.name();
+        }
+
+        @Override
+        public String node() {
+            return replica.getNodeName();
+        }
+
+        @Override
+        public String core() {
+            return replica.getCoreName();
+        }
+
+        @Override
+        public Replica.Type type() {
+            return replica.getType();
+        }
+
+        @Override
+        public boolean alive() {
+            return zkStateReader.getClusterState().getLiveNodes().contains(node())
+                    && replica.getState() == Replica.State.ACTIVE;
+        }
+
+        @Override
+        public long indexSize() {
+            //todo implement later
+            throw new UnsupportedOperationException("Not yet implemented");
+        }
+
+        @Override
+        public boolean isLeader() {
+            return Objects.equals(shard.leader(), name());
+        }
+
+        @Override
+        public String url(ApiType type) {
+            String base = nodes.get(node()).baseUrl(type);
+            if (type == ApiType.V2) {
+                return base + "/cores/" + core();
+            } else {
+                return base + "/" + core();
+            }
+        }
+    }
+
+    private class Node implements SolrNode {
+        private final String name;
+
+        private Node(String name) {
+            this.name = name;
+        }
+
+        @Override
+        public String name() {
+            return name;
+        }
+
+        @Override
+        public String baseUrl(ApiType apiType) {
+            return Utils.getBaseUrlForNodeName(name, zkStateReader.getClusterProperty(URL_SCHEME, "http"), apiType == ApiType.V2);
+        }
+
+        @Override
+        public SimpleMap<ShardReplica> cores() {
+            //todo implement later
+            //this requires a call to the node
+            throw new UnsupportedOperationException("Not yet implemented");
+        }
+    }
+
+    private class ConfigImpl implements CollectionConfig {
+        final String name;
+        final SimpleMap<Resource> resources;
+        final String path;
+
+        private ConfigImpl(String name) {
+            this.name = name;
+            path = ZkStateReader.CONFIGS_ZKNODE + "/" + name;
+            this.resources = new SimpleZkMap(zkStateReader, path);
+        }
+
+        @Override
+        public SimpleMap<Resource> resources() {
+            return resources;
+        }
+
+        @Override
+        public String name() {
+            return name;
+        }
+
+    }
+
+}
+
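A minimal sketch of read-only use follows: constructing the cluster view costs nothing up front, and values are read only when asked for. It assumes an already-connected ZkStateReader supplied by the caller.

    import org.apache.solr.cluster.api.SolrCluster;
    import org.apache.solr.common.LazySolrCluster;
    import org.apache.solr.common.cloud.ZkStateReader;

    final class ClusterViewSketch {
      static void printLayout(ZkStateReader zkStateReader) {
        SolrCluster cluster = new LazySolrCluster(zkStateReader); // no network call here
        cluster.collections().forEachEntry((name, coll) -> {
          System.out.println(name + " (configset: " + coll.config() + ")");
          coll.shards().forEachEntry((shardName, shard) ->
              System.out.println("  " + shardName + " leader=" + shard.leader()));
        });
      }
    }
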
diff --git a/solr/solrj/src/java/org/apache/solr/common/SimpleZkMap.java b/solr/solrj/src/java/org/apache/solr/common/SimpleZkMap.java
new file mode 100644
index 0000000..7d685c2
--- /dev/null
+++ b/solr/solrj/src/java/org/apache/solr/common/SimpleZkMap.java
@@ -0,0 +1,139 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.solr.common;
+
+import org.apache.solr.cluster.api.Resource;
+import org.apache.solr.cluster.api.SimpleMap;
+import org.apache.solr.common.cloud.ZkStateReader;
+import org.apache.zookeeper.KeeperException;
+
+import java.io.ByteArrayInputStream;
+import java.io.IOException;
+import java.util.LinkedHashMap;
+import java.util.List;
+import java.util.Map;
+import java.util.function.BiConsumer;
+import java.util.function.BiFunction;
+
+/** A view of ZK as a {@link SimpleMap} impl. This gives a flat view of all paths instead of a tree view,
+ * e.g. a, b, c, a/a1, a/a2, a/a1/aa1 etc.
+ * If possible, use {@link #abortableForEach(BiFunction)} to traverse.
+ * Do not use the {@link #size()} method; it always returns 0 because the size is very expensive to compute.
+ */
+public class SimpleZkMap implements SimpleMap<Resource> {
+    private final ZkStateReader zkStateReader;
+    private final String basePath;
+
+    static final byte[] EMPTY_BYTES = new byte[0];
+
+
+    public SimpleZkMap(ZkStateReader zkStateReader, String path) {
+        this.zkStateReader = zkStateReader;
+        this.basePath = path;
+    }
+
+
+    @Override
+    public Resource get(String key) {
+        return readZkNode(key); // key is relative; readZkNode prefixes basePath when reading
+    }
+
+    @Override
+    public void abortableForEach(BiFunction<String, ? super Resource, Boolean> fun) {
+        try {
+            recursiveRead("",
+                    zkStateReader.getZkClient().getChildren(basePath, null, true),
+                    fun);
+        } catch (KeeperException | InterruptedException e) {
+            throwZkExp(e);
+        }
+    }
+
+    @Override
+    public void forEachEntry(BiConsumer<String, ? super Resource> fun) {
+        abortableForEach((path, resource) -> {
+            fun.accept(path, resource);
+            return Boolean.TRUE;
+        });
+    }
+
+    @Override
+    public int size() {
+        return 0;
+    }
+
+    private Resource readZkNode(String path) {
+        return new Resource() {
+            @Override
+            public String name() {
+                return path;
+            }
+
+            @Override
+            public void get(Consumer consumer) throws SolrException {
+                try {
+                    byte[] data = zkStateReader.getZkClient().getData(basePath + "/" + path, null, null, true);
+                    if (data != null && data.length > 0) {
+                        consumer.read(new ByteArrayInputStream(data));
+                    } else {
+                        consumer.read(new ByteArrayInputStream(EMPTY_BYTES));
+                    }
+                } catch (KeeperException | InterruptedException e) {
+                    throwZkExp(e);
+                } catch (IOException e) {
+                    throw new SolrException(SolrException.ErrorCode.SERVER_ERROR, "Can't read stream", e);
+                }
+
+            }
+        };
+    }
+
+    private boolean recursiveRead(String parent, List<String> childrenList, BiFunction<String, ? super Resource, Boolean> fun) {
+        if(childrenList == null || childrenList.isEmpty()) return true;
+        try {
+            Map<String, List<String>> withKids = new LinkedHashMap<>();
+            for (String child : childrenList) {
+                String relativePath =  parent.isEmpty() ? child: parent+"/"+child;
+                if(!fun.apply(relativePath, readZkNode(relativePath))) return false;
+                List<String> l1 =  zkStateReader.getZkClient().getChildren(basePath+ "/"+ relativePath, null, true);
+                if(l1 != null && !l1.isEmpty()) {
+                    withKids.put(relativePath, l1);
+                }
+            }
+            //now we iterate through all nodes with sub paths
+            for (Map.Entry<String, List<String>> e : withKids.entrySet()) {
+                //has children
+                if(!recursiveRead(e.getKey(), e.getValue(), fun)) {
+                    return false;
+                }
+            }
+        } catch (KeeperException | InterruptedException e) {
+            throwZkExp(e);
+        }
+        return true;
+    }
+
+    static void throwZkExp(Exception e) {
+        if (e instanceof InterruptedException) {
+            Thread.currentThread().interrupt();
+        }
+        throw new SolrException(SolrException.ErrorCode.SERVER_ERROR, "ZK error", e);
+    }
+
+}
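
The flat view makes "find a file anywhere under a base path" a single callback. Here is a sketch that stops at the first schema file of a configset, assuming the standard /configs znode layout and a connected ZkStateReader (the configset name is caller-supplied):

    import org.apache.solr.common.SimpleZkMap;
    import org.apache.solr.common.cloud.ZkStateReader;

    final class ZkWalkSketch {
      static String findSchemaPath(ZkStateReader zkStateReader, String configName) {
        final String[] found = new String[1];
        SimpleZkMap files = new SimpleZkMap(zkStateReader,
            ZkStateReader.CONFIGS_ZKNODE + "/" + configName);
        files.abortableForEach((path, resource) -> {
          if (path.endsWith("schema.xml") || path.endsWith("managed-schema")) {
            found[0] = path;      // paths are relative to the base path
            return Boolean.FALSE; // abort the recursive walk
          }
          return Boolean.TRUE;    // keep walking
        });
        return found[0];
      }
    }
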
diff --git a/solr/solrj/src/java/org/apache/solr/common/SolrException.java b/solr/solrj/src/java/org/apache/solr/common/SolrException.java
index 9909c2b..2375710 100644
--- a/solr/solrj/src/java/org/apache/solr/common/SolrException.java
+++ b/solr/solrj/src/java/org/apache/solr/common/SolrException.java
@@ -48,6 +48,7 @@
     NOT_FOUND( 404 ),
     CONFLICT( 409 ),
     UNSUPPORTED_MEDIA_TYPE( 415 ),
+    TOO_MANY_REQUESTS(429),
     SERVER_ERROR( 500 ),
     SERVICE_UNAVAILABLE( 503 ),
     INVALID_STATE( 510 ),
diff --git a/solr/solrj/src/java/org/apache/solr/common/annotation/SolrSingleThreaded.java b/solr/solrj/src/java/org/apache/solr/common/annotation/SolrSingleThreaded.java
deleted file mode 100644
index 3845468..0000000
--- a/solr/solrj/src/java/org/apache/solr/common/annotation/SolrSingleThreaded.java
+++ /dev/null
@@ -1,34 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.solr.common.annotation;
-
-import static java.lang.annotation.ElementType.TYPE;
-import static java.lang.annotation.RetentionPolicy.SOURCE;
-
-import java.lang.annotation.Documented;
-import java.lang.annotation.Retention;
-import java.lang.annotation.Target;
-
-/**
- * Annotation for classes in Solr that are not thread safe. This provides a clear indication of the thread safety of the class.
- */
-@Documented
-@Retention(SOURCE)
-@Target(TYPE)
-public @interface SolrSingleThreaded {
-
-}
diff --git a/solr/solrj/src/java/org/apache/solr/common/annotation/SolrThreadUnsafe.java b/solr/solrj/src/java/org/apache/solr/common/annotation/SolrThreadUnsafe.java
new file mode 100644
index 0000000..7316237
--- /dev/null
+++ b/solr/solrj/src/java/org/apache/solr/common/annotation/SolrThreadUnsafe.java
@@ -0,0 +1,34 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.solr.common.annotation;
+
+import static java.lang.annotation.ElementType.TYPE;
+import static java.lang.annotation.RetentionPolicy.SOURCE;
+
+import java.lang.annotation.Documented;
+import java.lang.annotation.Retention;
+import java.lang.annotation.Target;
+
+/**
+ * Annotation for classes in Solr that are not thread safe. This provides a clear indication of the thread safety of the class.
+ */
+@Documented
+@Retention(SOURCE)
+@Target(TYPE)
+public @interface SolrThreadUnsafe {
+
+}
diff --git a/solr/solrj/src/java/org/apache/solr/common/cloud/Aliases.java b/solr/solrj/src/java/org/apache/solr/common/cloud/Aliases.java
index 45fc9d8..a87735f 100644
--- a/solr/solrj/src/java/org/apache/solr/common/cloud/Aliases.java
+++ b/solr/solrj/src/java/org/apache/solr/common/cloud/Aliases.java
@@ -24,6 +24,7 @@
 import java.util.Locale;
 import java.util.Map;
 import java.util.Objects;
+import java.util.function.BiConsumer;
 import java.util.function.UnaryOperator;
 import java.util.stream.Collectors;
 
@@ -72,6 +73,13 @@
     this.zNodeVersion = zNodeVersion;
   }
 
+  public void forEachAlias(BiConsumer<String, List<String>> consumer) {
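+    // each alias's collection list is exposed as an unmodifiable view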
+    collectionAliases.forEach((s, colls) -> consumer.accept(s, Collections.unmodifiableList(colls)));
+  }
+
+  public int size() {
+    return collectionAliases.size();
+  }
+
   /**
    * Create an instance from the JSON bytes read from zookeeper. Generally this should
    * only be done by a ZkStateReader.
diff --git a/solr/solrj/src/java/org/apache/solr/common/cloud/ClusterState.java b/solr/solrj/src/java/org/apache/solr/common/cloud/ClusterState.java
index 3c518e0..ebed0ff 100644
--- a/solr/solrj/src/java/org/apache/solr/common/cloud/ClusterState.java
+++ b/solr/solrj/src/java/org/apache/solr/common/cloud/ClusterState.java
@@ -380,4 +380,8 @@
 
   }
 
+  public int size() {
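+    // counts every collection reference, including lazily-loaded ones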
+    return collectionStates.size();
+  }
+
 }
diff --git a/solr/solrj/src/java/org/apache/solr/common/cloud/DocRouter.java b/solr/solrj/src/java/org/apache/solr/common/cloud/DocRouter.java
index 0f45231..4e12749 100644
--- a/solr/solrj/src/java/org/apache/solr/common/cloud/DocRouter.java
+++ b/solr/solrj/src/java/org/apache/solr/common/cloud/DocRouter.java
@@ -16,6 +16,7 @@
  */
 package org.apache.solr.common.cloud;
 
+import org.apache.solr.cluster.api.HashRange;
 import org.apache.solr.common.SolrException;
 import org.apache.solr.common.SolrInputDocument;
 import org.apache.solr.common.params.SolrParams;
@@ -86,7 +87,7 @@
   // Hash ranges can't currently "wrap" - i.e. max must be greater or equal to min.
   // TODO: ranges may not be all contiguous in the future (either that or we will
   // need an extra class to model a collection of ranges)
-  public static class Range implements JSONWriter.Writable, Comparable<Range> {
+  public static class Range implements JSONWriter.Writable, Comparable<Range>, HashRange {
     public int min;  // inclusive
     public int max;  // inclusive
 
@@ -96,6 +97,16 @@
       this.max = max;
     }
 
+    @Override
+    public int min() {
+      return min;
+    }
+
+    @Override
+    public int max() {
+      return max;
+    }
+
     public boolean includes(int hash) {
       return hash >= min && hash <= max;
     }
diff --git a/solr/solrj/src/java/org/apache/solr/common/cloud/SolrClassLoader.java b/solr/solrj/src/java/org/apache/solr/common/cloud/SolrClassLoader.java
new file mode 100644
index 0000000..98e920d
--- /dev/null
+++ b/solr/solrj/src/java/org/apache/solr/common/cloud/SolrClassLoader.java
@@ -0,0 +1,29 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.solr.common.cloud;
+
+
+/** A generic interface to load plugin classes */
+public interface SolrClassLoader {
+
+    <T> T newInstance(String cname, Class<T> expectedType, String... subpackages);
+
+    @SuppressWarnings({"rawtypes"})
+    <T> T newInstance(String cName, Class<T> expectedType, String[] subPackages, Class[] params, Object[] args);
+
+    <T> Class<? extends T> findClass(String cname, Class<T> expectedType);
+}
\ No newline at end of file
diff --git a/solr/solrj/src/java/org/apache/solr/common/cloud/SolrZkClient.java b/solr/solrj/src/java/org/apache/solr/common/cloud/SolrZkClient.java
index 051870d..1ac21fe 100644
--- a/solr/solrj/src/java/org/apache/solr/common/cloud/SolrZkClient.java
+++ b/solr/solrj/src/java/org/apache/solr/common/cloud/SolrZkClient.java
@@ -761,6 +761,9 @@
         return "";
       }
       return new String(data, StandardCharsets.UTF_8);
+    } catch (NoNodeException nne) {
+      log.debug("Zookeeper does not have the /zookeeper/config znode, assuming old ZK version");
+      return "";
     } catch (KeeperException|InterruptedException ex) {
       throw new SolrException(SolrException.ErrorCode.SERVER_ERROR, "Failed to get config from zookeeper", ex);
     }
diff --git a/solr/solrj/src/java/org/apache/solr/common/cloud/ZkStateReader.java b/solr/solrj/src/java/org/apache/solr/common/cloud/ZkStateReader.java
index d49a39c..4273163 100644
--- a/solr/solrj/src/java/org/apache/solr/common/cloud/ZkStateReader.java
+++ b/solr/solrj/src/java/org/apache/solr/common/cloud/ZkStateReader.java
@@ -51,11 +51,7 @@
 import org.apache.solr.common.SolrException.ErrorCode;
 import org.apache.solr.common.params.CollectionAdminParams;
 import org.apache.solr.common.params.CoreAdminParams;
-import org.apache.solr.common.util.ExecutorUtil;
-import org.apache.solr.common.util.ObjectReleaseTracker;
-import org.apache.solr.common.util.Pair;
-import org.apache.solr.common.util.SolrNamedThreadFactory;
-import org.apache.solr.common.util.Utils;
+import org.apache.solr.common.util.*;
 import org.apache.zookeeper.KeeperException;
 import org.apache.zookeeper.KeeperException.NoNodeException;
 import org.apache.zookeeper.WatchedEvent;
@@ -2182,4 +2178,8 @@
       return result;
     }
   }
+
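+  /** Convenience accessor for a collection's state; returns null if the collection does not exist. */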
+  public DocCollection getCollection(String collection) {
+    return clusterState.getCollectionOrNull(collection);
+  }
 }
diff --git a/solr/solrj/src/java/org/apache/solr/common/params/CommonParams.java b/solr/solrj/src/java/org/apache/solr/common/params/CommonParams.java
index 6f1d4e8..0a18e60 100644
--- a/solr/solrj/src/java/org/apache/solr/common/params/CommonParams.java
+++ b/solr/solrj/src/java/org/apache/solr/common/params/CommonParams.java
@@ -159,7 +159,7 @@
   boolean SEGMENT_TERMINATE_EARLY_DEFAULT = false;
 
   /**
-   * Timeout value in milliseconds.  If not set, or the value is &gt;= 0, there is no timeout.
+   * Timeout value in milliseconds.  If not set, or the value is &lt;= 0, there is no timeout.
    */
   String TIME_ALLOWED = "timeAllowed";
 
@@ -294,6 +294,10 @@
   String NAME = "name";
   String VALUE_LONG = "val";
 
+  String SOLR_REQUEST_CONTEXT_PARAM = "Solr-Request-Context";
+
+  String SOLR_REQUEST_TYPE_PARAM = "Solr-Request-Type";
+
   String VERSION_FIELD="_version_";
 
   String FAIL_ON_VERSION_CONFLICTS ="failOnVersionConflicts";
diff --git a/solr/solrj/src/java/org/apache/solr/common/util/LinkedSimpleHashMap.java b/solr/solrj/src/java/org/apache/solr/common/util/LinkedSimpleHashMap.java
new file mode 100644
index 0000000..1fc6afc
--- /dev/null
+++ b/solr/solrj/src/java/org/apache/solr/common/util/LinkedSimpleHashMap.java
@@ -0,0 +1,34 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.solr.common.util;
+
+import org.apache.solr.cluster.api.SimpleMap;
+
+import java.util.LinkedHashMap;
+import java.util.function.BiConsumer;
+
+public class LinkedSimpleHashMap<T> extends LinkedHashMap<String, T> implements SimpleMap<T> {
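+    // SimpleMap#get(String) differs in signature from Map#get(Object), so an explicit override is required.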
+    @Override
+    public T get(String key) {
+        return super.get(key);
+    }
+
+    @Override
+    public void forEachEntry(BiConsumer<String, ? super T> fun) {
+        super.forEach(fun);
+    }
+}
diff --git a/solr/solrj/src/java/org/apache/solr/common/util/NamedList.java b/solr/solrj/src/java/org/apache/solr/common/util/NamedList.java
index 694eb93..9156619 100644
--- a/solr/solrj/src/java/org/apache/solr/common/util/NamedList.java
+++ b/solr/solrj/src/java/org/apache/solr/common/util/NamedList.java
@@ -30,7 +30,11 @@
 import java.util.Objects;
 import java.util.Set;
 import java.util.function.BiConsumer;
+import java.util.function.BiFunction;
+import java.util.function.Consumer;
+import java.util.function.Function;
 
+import org.apache.solr.cluster.api.SimpleMap;
 import org.apache.solr.common.MapWriter;
 import org.apache.solr.common.SolrException;
 import org.apache.solr.common.params.MultiMapSolrParams;
@@ -63,7 +67,7 @@
  *
  */
 @SuppressWarnings({"unchecked", "rawtypes"})
-public class NamedList<T> implements Cloneable, Serializable, Iterable<Map.Entry<String,T>> , MapWriter {
+public class NamedList<T> implements Cloneable, Serializable, Iterable<Map.Entry<String,T>>, MapWriter, SimpleMap<T> {
 
   private static final long serialVersionUID = 1957981902839867821L;
   protected final List<Object> nvPairs;
@@ -509,7 +513,7 @@
 
       @Override
       public void forEach(BiConsumer action) {
-        NamedList.this.forEach(action);
+        NamedList.this.forEachEntry(action);
       }
     };
   }
@@ -854,10 +858,39 @@
     return this.nvPairs.equals(nl.nvPairs);
   }
 
-  public void forEach(BiConsumer<String, T> action) {
+
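+  // Iterates the name/value pairs in order, stopping at the first entry for which fun returns false.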
+  @Override
+  public void abortableForEach(BiFunction<String, ? super T, Boolean> fun) {
+    int sz = size();
+    for (int i = 0; i < sz; i++) {
+      if(!fun.apply(getName(i), getVal(i))) break;
+    }
+  }
+
+  @Override
+  public void abortableForEachKey(Function<String, Boolean> fun) {
+    int sz = size();
+    for (int i = 0; i < sz; i++) {
+      if(!fun.apply(getName(i))) break;
+    }
+  }
+
+  @Override
+  public void forEachKey(Consumer<String> fun) {
+    int sz = size();
+    for (int i = 0; i < sz; i++) {
+      fun.accept(getName(i));
+    }
+  }
+
+  public void forEach(BiConsumer<String, ? super T> action) {
     int sz = size();
     for (int i = 0; i < sz; i++) {
       action.accept(getName(i), getVal(i));
     }
   }
+
+  @Override
+  public void forEachEntry(BiConsumer<String, ? super T> fun) {
+    forEach(fun);
+  }
 }
diff --git a/solr/solrj/src/java/org/apache/solr/common/util/Utils.java b/solr/solrj/src/java/org/apache/solr/common/util/Utils.java
index 2e68b19..6aaa4d7 100644
--- a/solr/solrj/src/java/org/apache/solr/common/util/Utils.java
+++ b/solr/solrj/src/java/org/apache/solr/common/util/Utils.java
@@ -94,7 +94,44 @@
 import static java.util.Collections.unmodifiableSet;
 import static java.util.concurrent.TimeUnit.NANOSECONDS;
 
+
 public class Utils {
+  // Why static lambdas, even though they require @SuppressWarnings? This is
+  // purely an optimization. Compare:
+  //
+  //    mapObject.computeIfAbsent(key, o -> new HashMap<>());
+  //
+  //    vs.
+  //
+  //    mapObject.computeIfAbsent(key, Utils.NEW_HASHMAP_FUN)
+  //
+  //    The first fragment is effectively executed as:
+  //
+  //    mapObject.computeIfAbsent(key, new Function() {
+  //      @Override
+  //      public Object apply(String key) {
+  //        return new HashMap<>();
+  //      }
+  //    });
+  //
+  //    This has two problems: an extra anonymous class is generated for the
+  //    lambda and becomes part of the binary, and a new instance of that
+  //    class may be created every time computeIfAbsent() is invoked,
+  //    irrespective of whether the value is absent for that key or not.
+  //    Now imagine that method being called millions of times, creating
+  //    millions of such objects for no reason.
+  //
+  //    With Utils.NEW_HASHMAP_FUN, on the other hand, only a single class is
+  //    generated for the entire codebase, and only a single instance of it
+  //    ever exists in the VM.
+  //
+  // See SOLR-14579.
+
   @SuppressWarnings({"rawtypes"})
   public static final Function NEW_HASHMAP_FUN = o -> new HashMap<>();
   @SuppressWarnings({"rawtypes"})
@@ -236,7 +273,7 @@
     }
 
     @Override
-    @SuppressWarnings({"unchecked", "rawtypes"})
+    @SuppressWarnings({"rawtypes"})
     public void handleUnknownClass(Object o) {
       if (o instanceof MapWriter) {
         Map m = ((MapWriter) o).toMap(new LinkedHashMap<>());
@@ -404,7 +441,6 @@
     return getObjectByPath(root, onlyPrimitive, parts);
   }
 
-  @SuppressWarnings({"unchecked"})
   public static boolean setObjectByPath(Object root, String hierarchy, Object value) {
     List<String> parts = StrUtils.splitSmart(hierarchy, '/', true);
     return setObjectByPath(root, parts, value);
@@ -693,7 +729,7 @@
    * @param input the json with new values
    * @return whether there was any change made to sink or not.
    */
-  @SuppressWarnings({"unchecked", "rawtypes"})
+  @SuppressWarnings({"unchecked"})
   public static boolean mergeJson(Map<String, Object> sink, Map<String, Object> input) {
     boolean isModified = false;
     for (Map.Entry<String, Object> e : input.entrySet()) {
@@ -728,13 +764,16 @@
   }
 
   public static String getBaseUrlForNodeName(final String nodeName, String urlScheme) {
+    return getBaseUrlForNodeName(nodeName, urlScheme, false);
+  }
+
+  public static String getBaseUrlForNodeName(final String nodeName, String urlScheme, boolean isV2) {
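+    // when isV2 is true, the node's context path is replaced with the v2 "api" prefix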
     final int _offset = nodeName.indexOf("_");
     if (_offset < 0) {
       throw new IllegalArgumentException("nodeName does not contain expected '_' separator: " + nodeName);
     }
     final String hostAndPort = nodeName.substring(0, _offset);
     final String path = URLDecoder.decode(nodeName.substring(1 + _offset), UTF_8);
-    return urlScheme + "://" + hostAndPort + (path.isEmpty() ? "" : ("/" + path));
+    return urlScheme + "://" + hostAndPort + (path.isEmpty() ? "" : ("/" + (isV2 ? "api" : path)));
   }
 
   public static long time(TimeSource timeSource, TimeUnit unit) {
diff --git a/solr/solrj/src/java/org/apache/solr/common/util/WrappedSimpleMap.java b/solr/solrj/src/java/org/apache/solr/common/util/WrappedSimpleMap.java
new file mode 100644
index 0000000..e8f58a5
--- /dev/null
+++ b/solr/solrj/src/java/org/apache/solr/common/util/WrappedSimpleMap.java
@@ -0,0 +1,49 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.solr.common.util;
+
+import org.apache.solr.cluster.api.SimpleMap;
+
+import java.util.Map;
+import java.util.function.BiConsumer;
+
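+/** Exposes a plain {@code Map} through the {@link SimpleMap} interface by delegating every operation to it. */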
+public class WrappedSimpleMap<T> implements SimpleMap<T> {
+    private final Map<String, T> delegate;
+
+    public WrappedSimpleMap(Map<String, T> delegate) {
+        this.delegate = delegate;
+    }
+
+    @Override
+    public T get(String key) {
+        return delegate.get(key);
+    }
+
+    @Override
+    public void forEachEntry(BiConsumer<String, ? super T> fun) {
+        delegate.forEach(fun);
+    }
+
+    @Override
+    public int size() {
+        return delegate.size();
+    }
+}
diff --git a/solr/solrj/src/resources/apispec/node.Info.json b/solr/solrj/src/resources/apispec/node.Info.json
index 7d0c2c0..9fa3e94 100644
--- a/solr/solrj/src/resources/apispec/node.Info.json
+++ b/solr/solrj/src/resources/apispec/node.Info.json
@@ -1,5 +1,5 @@
 {
-   "description": "Provides information about system properties, threads, logging settings, system details and health (available in Solrcloud mode) for a node.",
+   "description": "Provides information about system properties, threads, logging settings, system details and health (available in SolrCloud mode) for a node.",
   "methods": ["GET"],
   "url": {
     "paths": [
diff --git a/solr/solrj/src/test-files/solrj/solr/collection1/conf/solrconfig-slave1.xml b/solr/solrj/src/test-files/solrj/solr/collection1/conf/solrconfig-follower1.xml
similarity index 100%
rename from solr/solrj/src/test-files/solrj/solr/collection1/conf/solrconfig-slave1.xml
rename to solr/solrj/src/test-files/solrj/solr/collection1/conf/solrconfig-follower1.xml
diff --git a/solr/solrj/src/test/org/apache/solr/client/solrj/TestLBHttp2SolrClient.java b/solr/solrj/src/test/org/apache/solr/client/solrj/TestLBHttp2SolrClient.java
index ffe52fe..108298c 100644
--- a/solr/solrj/src/test/org/apache/solr/client/solrj/TestLBHttp2SolrClient.java
+++ b/solr/solrj/src/test/org/apache/solr/client/solrj/TestLBHttp2SolrClient.java
@@ -279,7 +279,7 @@
     }
 
     public String getSolrConfigFile() {
-      return "solrj/solr/collection1/conf/solrconfig-slave1.xml";
+      return "solrj/solr/collection1/conf/solrconfig-follower1.xml";
     }
 
     public String getSolrXmlFile() {
diff --git a/solr/solrj/src/test/org/apache/solr/client/solrj/TestLBHttpSolrClient.java b/solr/solrj/src/test/org/apache/solr/client/solrj/TestLBHttpSolrClient.java
index 79381a0..150afc2 100644
--- a/solr/solrj/src/test/org/apache/solr/client/solrj/TestLBHttpSolrClient.java
+++ b/solr/solrj/src/test/org/apache/solr/client/solrj/TestLBHttpSolrClient.java
@@ -280,7 +280,7 @@
     }
 
     public String getSolrConfigFile() {
-      return "solrj/solr/collection1/conf/solrconfig-slave1.xml";
+      return "solrj/solr/collection1/conf/solrconfig-follower1.xml";
     }
 
     public String getSolrXmlFile() {
diff --git a/solr/solrj/src/test/org/apache/solr/client/solrj/io/stream/StreamingTest.java b/solr/solrj/src/test/org/apache/solr/client/solrj/io/stream/StreamingTest.java
index 5afee28..b4e0704 100644
--- a/solr/solrj/src/test/org/apache/solr/client/solrj/io/stream/StreamingTest.java
+++ b/solr/solrj/src/test/org/apache/solr/client/solrj/io/stream/StreamingTest.java
@@ -2716,4 +2716,11 @@
     return pstream;
   }
 
+  @Test
+  public void testCloudSolrStreamWithoutStreamContext() throws Exception {
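+    // regression check: opening must succeed even when no StreamContext has been set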
+    SolrParams sParams = StreamingTest.mapParams("q", "*:*", "fl", "id", "sort", "id asc");
+    try (CloudSolrStream stream = new CloudSolrStream(zkHost, COLLECTIONORALIAS, sParams)) {
+      stream.open();
+    }
+  }
+
 }
diff --git a/solr/test-framework/build.xml b/solr/test-framework/build.xml
deleted file mode 100644
index 242c384..0000000
--- a/solr/test-framework/build.xml
+++ /dev/null
@@ -1,121 +0,0 @@
-<?xml version="1.0"?>
-<!--
-    Licensed to the Apache Software Foundation (ASF) under one or more
-    contributor license agreements.  See the NOTICE file distributed with
-    this work for additional information regarding copyright ownership.
-    The ASF licenses this file to You under the Apache License, Version 2.0
-    the "License"); you may not use this file except in compliance with
-    the License.  You may obtain a copy of the License at
-
-        http://www.apache.org/licenses/LICENSE-2.0
-
-    Unless required by applicable law or agreed to in writing, software
-    distributed under the License is distributed on an "AS IS" BASIS,
-    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-    See the License for the specific language governing permissions and
-    limitations under the License.
- -->
-<project name="solr-test-framework" default="default">
-  <description>Solr Test Framework</description>
-
-  <import file="../common-build.xml"/>
-
-  <path id="solr.test.framework.lucene.libs">
-    <pathelement location="${test-framework.jar}"/>
-  </path>
-
-  <path id="classpath">
-    <fileset dir="lib" excludes="${common.classpath.excludes}"/>
-    <path refid="solr.test.framework.lucene.libs" />
-    <path refid="solr.base.classpath"/>
-  </path>
-
-  <!-- Redefine Lucene test-framework compilation here to avoid circular dependency on compile-core -->
-  <target name="compile-test-framework">
-    <ant dir="${common.dir}" target="compile-test-framework" inheritall="false">
-      <propertyset refid="uptodate.and.compiled.properties"/>
-    </ant>
-  </target>
-
-  <target name="compile-core" depends="resolve, clover, compile-solr-core, compile-test-framework">
-    <!-- TODO: why does test-framework override compile-core to use this special classpath? -->
-    <compile srcdir="${src.dir}" destdir="${build.dir}/classes/java">
-      <classpath refid="test.base.classpath"/>
-    </compile>
-
-    <!-- Copy the resources folder (if existent) -->
-    <copy todir="${build.dir}/classes/java">
-      <fileset dir="${resources.dir}" erroronmissingdir="no"/>
-    </copy>
-  </target>
-
-  <!-- redefine the forbidden apis for tests, as we check ourselves -->
-  <target name="-check-forbidden-tests" depends="-init-forbidden-apis,compile-core">
-    <forbidden-apis suppressAnnotation="**.SuppressForbidden" signaturesFile="${common.dir}/tools/forbiddenApis/tests.txt" classpathref="forbidden-apis.allclasses.classpath">
-      <fileset dir="${build.dir}/classes/java"/>
-    </forbidden-apis>
-  </target>
-
-  <!-- Override common-solr.javadocs to include JUnit links -->
-  <!-- and to copy the built javadocs to ${dest}/docs/api/test-framework -->
-  <target name="javadocs"
-          depends="compile-core,jar-test-framework,lucene-javadocs,javadocs-test-framework,define-lucene-javadoc-url,check-javadocs-uptodate" unless="javadocs-uptodate-${name}">
-    <sequential>
-      <mkdir dir="${javadoc.dir}/${name}"/>
-      <!-- NOTE: explicitly not using solr-invoke-javadoc, or attempting to
-     link to lucene-test-framework because if we did javadoc would
-     attempt to link class refs in in org.apache.lucene, causing
-     broken links. (either broken links to things like "Directory" if
-     lucene-test-framework was first, or broken links to things like
-     LuceneTestCase if lucene-core was first)
-      -->
-      <invoke-javadoc destdir="${javadoc.dir}/${name}"
-          title="${Name} ${version} Test Framework API">
-  <sources>
-    <link offline="true" href="${javadoc.link.junit}"
-    packagelistLoc="${javadoc.packagelist.dir}/junit"/>
-    <packageset dir="${src.dir}"/>
-  </sources>
-      </invoke-javadoc>
-      <solr-jarify basedir="${javadoc.dir}/${name}" destfile="${build.dir}/${final.name}-javadoc.jar"/>
-    </sequential>
-  </target>
-  <target name="module-jars-to-solr"
-          depends="-module-jars-to-solr-not-for-package,-module-jars-to-solr-package"/>
-  <target name="-module-jars-to-solr-not-for-package" unless="called.from.create-package">
-    <antcall target="jar-test-framework" inheritall="true"/>
-    <property name="test-framework.uptodate" value="true"/>
-    <mkdir dir="${build.dir}/lucene-libs"/>
-    <copy todir="${build.dir}/lucene-libs" preservelastmodified="true" flatten="true" failonerror="true" overwrite="true">
-      <path refid="solr.test.framework.lucene.libs" />
-    </copy>
-  </target>
-  <target name="-module-jars-to-solr-package" if="called.from.create-package">
-    <antcall target="-unpack-lucene-tgz" inheritall="true"/>
-    <pathconvert property="relative.solr.test.framework.lucene.libs" pathsep=",">
-      <path refid="solr.test.framework.lucene.libs"/>
-      <globmapper from="${common.build.dir}/*" to="*" handledirsep="true"/>
-    </pathconvert>
-    <mkdir dir="${build.dir}/lucene-libs"/>
-    <copy todir="${build.dir}/lucene-libs" preservelastmodified="true" flatten="true" failonerror="true" overwrite="true">
-      <fileset dir="${lucene.tgz.unpack.dir}/lucene-${version}" includes="${relative.solr.test.framework.lucene.libs}"/>
-    </copy>
-  </target>
-
-  <target name="dist" depends="module-jars-to-solr, common-solr.dist">
-    <!-- we're not a contrib, our lucene-libs and go in a special place -->
-    <mkdir  dir="${dist}/test-framework" />
-    <copy todir="${dist}/test-framework">
-      <fileset dir="${build.dir}">
-  <include name="lucene-libs/*.jar" />
-      </fileset>
-      <fileset dir=".">
-  <include name="lib/*" />
-  <include name="README.md" />
-      </fileset>
-    </copy>
-  </target>
-
-  <target name="-check-forbidden-sysout"/>
-</project>
-
diff --git a/solr/test-framework/ivy.xml b/solr/test-framework/ivy.xml
deleted file mode 100644
index 174915b..0000000
--- a/solr/test-framework/ivy.xml
+++ /dev/null
@@ -1,42 +0,0 @@
-<!--
-   Licensed to the Apache Software Foundation (ASF) under one
-   or more contributor license agreements.  See the NOTICE file
-   distributed with this work for additional information
-   regarding copyright ownership.  The ASF licenses this file
-   to you under the Apache License, Version 2.0 (the
-   "License"); you may not use this file except in compliance
-   with the License.  You may obtain a copy of the License at
-
-     http://www.apache.org/licenses/LICENSE-2.0
-
-   Unless required by applicable law or agreed to in writing,
-   software distributed under the License is distributed on an
-   "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-   KIND, either express or implied.  See the License for the
-   specific language governing permissions and limitations
-   under the License.    
--->
-<ivy-module version="2.0">
-  <info organisation="org.apache.solr" module="test-framework"/>
-
-  <configurations defaultconfmapping="compile->master;junit4-stdalone->master">
-    <conf name="compile" transitive="false" />
-    <!-- 
-    JUnit4 ANT task only, no ANT.
-    This is used from build scripts for taskdefs.
-    -->
-    <conf name="junit4-stdalone" transitive="false" />
-  </configurations>
-
-  <dependencies>
-    <dependency org="org.apache.ant" name="ant" rev="${/org.apache.ant/ant}" conf="compile" />
- 
-    <dependency org="junit" name="junit" rev="${/junit/junit}" conf="compile;junit4-stdalone" />
-    <dependency org="org.hamcrest" name="hamcrest-core" rev="${/org.hamcrest/hamcrest-core}" conf="compile;junit4-stdalone" />
-    <dependency org="com.carrotsearch.randomizedtesting" name="junit4-ant" rev="${/com.carrotsearch.randomizedtesting/junit4-ant}" conf="compile;junit4-stdalone" />
-    <dependency org="com.carrotsearch.randomizedtesting" name="randomizedtesting-runner" rev="${/com.carrotsearch.randomizedtesting/randomizedtesting-runner}" conf="compile;junit4-stdalone" />
-    <dependency org="io.opentracing" name="opentracing-mock" rev="${/io.opentracing/opentracing-mock}" conf="compile;junit4-stdalone"/>
-
-    <exclude org="*" ext="*" matcher="regexp" type="${ivy.exclude.types}"/> 
-  </dependencies>
-</ivy-module>
diff --git a/solr/test-framework/src/java/org/apache/solr/SolrTestCaseJ4.java b/solr/test-framework/src/java/org/apache/solr/SolrTestCaseJ4.java
index a425f8c..6ba9548 100644
--- a/solr/test-framework/src/java/org/apache/solr/SolrTestCaseJ4.java
+++ b/solr/test-framework/src/java/org/apache/solr/SolrTestCaseJ4.java
@@ -284,7 +284,6 @@
     System.setProperty("enable.update.log", usually() ? "true" : "false");
     System.setProperty("tests.shardhandler.randomSeed", Long.toString(random().nextLong()));
     System.setProperty("solr.clustering.enabled", "false");
-    System.setProperty("solr.peerSync.useRangeVersions", String.valueOf(random().nextBoolean()));
     System.setProperty("solr.cloud.wait-for-updates-with-stale-state-pause", "500");
 
     System.setProperty("pkiHandlerPrivateKeyPath", SolrTestCaseJ4.class.getClassLoader().getResource("cryptokeys/priv_key512_pkcs8.pem").toExternalForm());
@@ -342,7 +341,6 @@
       System.clearProperty("enable.update.log");
       System.clearProperty("useCompoundFile");
       System.clearProperty("urlScheme");
-      System.clearProperty("solr.peerSync.useRangeVersions");
       System.clearProperty("solr.cloud.wait-for-updates-with-stale-state-pause");
       System.clearProperty("solr.zkclienttmeout");
       System.clearProperty(ZK_WHITELIST_PROPERTY);
@@ -2986,20 +2984,6 @@
   public static final String UPDATELOG_SYSPROP = "solr.tests.ulog";
 
   /**
-   * randomizes the updateLog between different update log implementations for better test coverage
-   */
-  public static void randomizeUpdateLogImpl() {
-    if (random().nextBoolean()) {
-      System.setProperty(UPDATELOG_SYSPROP, "solr.CdcrUpdateLog");
-    } else {
-      System.setProperty(UPDATELOG_SYSPROP,"solr.UpdateLog");
-    }
-    if (log.isInfoEnabled()) {
-      log.info("updateLog impl={}", System.getProperty(UPDATELOG_SYSPROP));
-    }
-  }
-
-  /**
    * Sets various sys props related to user specified or randomized choices regarding the types 
    * of numerics that should be used in tests.
    *
diff --git a/solr/test-framework/src/java/org/apache/solr/cloud/ConfigRequest.java b/solr/test-framework/src/java/org/apache/solr/cloud/ConfigRequest.java
index 6b4c617..45f7604 100644
--- a/solr/test-framework/src/java/org/apache/solr/cloud/ConfigRequest.java
+++ b/solr/test-framework/src/java/org/apache/solr/cloud/ConfigRequest.java
@@ -53,4 +53,9 @@
   public SolrResponse createResponse(SolrClient client) {
     return new SolrResponseBase();
   }
+
+  @Override
+  public String getRequestType() {
+    return SolrRequest.SolrRequestType.ADMIN.toString();
+  }
 }
diff --git a/solr/webapp/build.xml b/solr/webapp/build.xml
deleted file mode 100644
index f22c3f3..0000000
--- a/solr/webapp/build.xml
+++ /dev/null
@@ -1,69 +0,0 @@
-<?xml version="1.0"?>
-<!--
-    Licensed to the Apache Software Foundation (ASF) under one or more
-    contributor license agreements.  See the NOTICE file distributed with
-    this work for additional information regarding copyright ownership.
-    The ASF licenses this file to You under the Apache License, Version 2.0
-    the "License"); you may not use this file except in compliance with
-    the License.  You may obtain a copy of the License at
-
-        http://www.apache.org/licenses/LICENSE-2.0
-
-    Unless required by applicable law or agreed to in writing, software
-    distributed under the License is distributed on an "AS IS" BASIS,
-    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-    See the License for the specific language governing permissions and
-    limitations under the License.
- -->
-<project name="solr-webapp" default="default">
-  <description>Solr webapp</description>
-
-  <property name="rat.additional-includes" value="**"/>
-  <property name="rat.additional-excludes" value="**/*.gradle,web/img/**,**/rat-report.xml"/>
-
-  <import file="../common-build.xml"/>
-
-  <property name="exclude.from.webapp" value="*slf4j*,log4j-*,*javax.servlet*" />
- 
-  <!-- this module has no javadocs -->
-  <target name="javadocs"/>
-
-  <!-- this module has no jar either -->
-  <target name="jar-core"/>
-
-  <!-- nothing to compile -->
-  <target name="compile-core"/>
-  <target name="compile-test"/>
-  <target name="test"/>
-  <target name="test-nocompile"/>
-
-  <target name="dist"
-          description="Creates the Webapp folder for distribution."
-          depends="dist-core, dist-solrj, lucene-jars-to-solr">
-    <ant dir="${common-solr.dir}" inheritall="false" target="contribs-add-to-webapp"/>
-    <mkdir dir="${server.dir}/solr-webapp/webapp"/>
-    <copy todir="${server.dir}/solr-webapp/webapp">
-      <fileset dir="web" excludes="${exclude.from.webapp}, libs/angular-cookies.js, libs/angular-route.js, libs/angular-sanitize.js, libs/angular-utf8-base.js, libs/angular.js, libs/chosen.jquery.js"/>
-      <fileset dir="${dest}/web" excludes="${exclude.from.war}"/> <!-- contribs' additions -->
-    </copy>
-    <mkdir dir="${server.dir}/solr-webapp/webapp/WEB-INF/lib"/>
-    <copy todir="${server.dir}/solr-webapp/webapp/WEB-INF/lib">
-      <fileset dir="${common-solr.dir}/core/lib" excludes="${exclude.from.webapp},${common.classpath.excludes}"/>
-      <fileset dir="${common-solr.dir}/solrj/lib" excludes="${exclude.from.webapp},${common.classpath.excludes}"/>
-      <fileset dir="${lucene-libs}" excludes="${exclude.from.webapp},${common.classpath.excludes}" />
-      <fileset dir="${dist}" excludes="${exclude.from.webapp},${common.classpath.excludes}">
-        <include name="solr-solrj-${version}.jar" />
-        <include name="solr-core-${version}.jar" />
-      </fileset>
-    </copy>
-  </target>
-
-  <!-- nothing to do -->
-  <target name="-dist-maven"/>
-
-  <!-- nothing to do -->
-  <target name="-install-to-maven-local-repo"/>
-
-  <!-- nothing to do -->
-  <target name="-validate-maven-dependencies"/>
-</project>
diff --git a/solr/webapp/web/css/angular/collections.css b/solr/webapp/web/css/angular/collections.css
index a0c52ff..2645741 100644
--- a/solr/webapp/web/css/angular/collections.css
+++ b/solr/webapp/web/css/angular/collections.css
@@ -228,7 +228,7 @@
 #content #collections #data #alias-data h2 { background-image: url( ../../img/ico/box.png ); }
 #content #collections #data #collection-data h2 { background-image: url( ../../img/ico/box.png ); }
 #content #collections #data #shard-data h2 { background-image: url( ../../img/ico/sitemap.png ); }
-#content #collections #data #shard-data .replica h2 { background-image: url( ../../img/ico/node-slave.png ); }
+#content #collections #data #shard-data .replica h2 { background-image: url( ../../img/ico/node-follower.png ); }
 
 #content #collections #data #index-data
 {
diff --git a/solr/webapp/web/css/angular/dashboard.css b/solr/webapp/web/css/angular/dashboard.css
index 734d62a..1ffe0b6 100644
--- a/solr/webapp/web/css/angular/dashboard.css
+++ b/solr/webapp/web/css/angular/dashboard.css
@@ -144,8 +144,8 @@
 #content #dashboard #system h2 { background-image: url( ../../img/ico/server.png ); }
 #content #dashboard #statistics h2 { background-image: url( ../../img/ico/chart.png ); }
 #content #dashboard #replication h2 { background-image: url( ../../img/ico/node.png ); }
-#content #dashboard #replication.master h2 { background-image: url( ../../img/ico/node-master.png ); }
-#content #dashboard #replication.slave h2 { background-image: url( ../../img/ico/node-slave.png ); }
+#content #dashboard #replication.leader h2 { background-image: url( ../../img/ico/node-leader.png ); }
+#content #dashboard #replication.follower h2 { background-image: url( ../../img/ico/node-follower.png ); }
 #content #dashboard #instance h2 { background-image: url( ../../img/ico/server.png ); }
 #content #dashboard #collection h2 { background-image: url( ../../img/ico/book-open-text.png ); }
 #content #dashboard #shards h2 { background-image: url( ../../img/ico/documents-stack.png ); }
diff --git a/solr/webapp/web/css/angular/dataimport.css b/solr/webapp/web/css/angular/dataimport.css
deleted file mode 100644
index ad37896..0000000
--- a/solr/webapp/web/css/angular/dataimport.css
+++ /dev/null
@@ -1,371 +0,0 @@
-/*
-
-Licensed to the Apache Software Foundation (ASF) under one or more
-contributor license agreements.  See the NOTICE file distributed with
-this work for additional information regarding copyright ownership.
-The ASF licenses this file to You under the Apache License, Version 2.0
-(the "License"); you may not use this file except in compliance with
-the License.  You may obtain a copy of the License at
-
-http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing, software
-distributed under the License is distributed on an "AS IS" BASIS,
-WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-See the License for the specific language governing permissions and
-limitations under the License.
-
-*/
-
-#content #dataimport
-{
-  background-image: url( ../../img/div.gif );
-  background-position: 21% 0;
-  background-repeat: repeat-y;
-}
-
-#content #dataimport #frame
-{
-  float: right;
-  width: 78%;
-}
-
-#content #dataimport #form
-{
-  float: left;
-  width: 20%;
-}
-
-#content #dataimport #form #navigation
-{
-  border-right: 0;
-}
-
-#content #dataimport #form #navigation a
-{
-  background-image: url( ../../img/ico/status-offline.png );
-}
-
-#content #dataimport #form #navigation .current a
-{
-  background-image: url( ../../img/ico/status.png );
-}
-
-#content #dataimport #form form
-{
-  border-top: 1px solid #f0f0f0;
-  margin-top: 10px;
-  padding-top: 5px;
-}
-
-#content #dataimport #form label
-{
-  cursor: pointer;
-  display: block;
-  margin-top: 5px;
-}
-
-#content #dataimport #form input,
-#content #dataimport #form select,
-#content #dataimport #form textarea
-{
-  margin-bottom: 2px;
-  width: 100%;
-}
-
-#content #dataimport #form input
-{
-  width: 98%;
-}
-
-#content #dataimport #form button
-{
-  margin-top: 10px;
-}
-
-#content #dataimport #form .execute span
-{
-  background-image: url( ../../img/ico/document-import.png );
-}
-
-#content #dataimport #form .refresh-status span
-{
-  background-image: url( ../../img/ico/arrow-circle.png );
-}
-
-#content #dataimport #form .refresh-status span.success
-{
-  background-image: url( ../../img/ico/tick.png );
-}
-
-#content #dataimport #form #start
-{
-  float: left;
-  width: 47%;
-}
-
-#content #dataimport #form #rows
-{
-  float: right;
-  width: 47%;
-}
-
-#content #dataimport #form .checkbox input
-{
-  margin-bottom: 0;
-  width: auto;
-}
-
-#content #dataimport #form #auto-refresh-status
-{
-  margin-top: 20px;
-}
-
-#content #dataimport #form #auto-refresh-status a
-{
-  background-image: url( ../../img/ico/ui-check-box-uncheck.png );
-  background-position: 0 50%;
-  color: #4D4D4D;
-  display: block;
-  padding-left: 21px;
-}
-
-#content #dataimport #form #auto-refresh-status a.on,
-#content #dataimport #form #auto-refresh-status a:hover
-{
-  color: #333;
-}
-
-#content #dataimport #form #auto-refresh-status a.on
-{
-  background-image: url( ../../img/ico/ui-check-box.png );
-}
-
-#content #dataimport #current_state
-{
-  padding: 10px;
-  margin-bottom: 20px;
-}
-
-#content #dataimport #current_state .last_update,
-#content #dataimport #current_state .info
-{
-  display: block;
-  padding-left: 21px;
-}
-
-#content #dataimport #current_state .last_update
-{
-  color: #4D4D4D;
-  font-size: 11px;
-}
-
-#content #dataimport #current_state .info
-{
-  background-position: 0 1px;
-  position: relative;
-}
-
-#content #dataimport #current_state .info .details span
-{
-  color: #c0c0c0;
-}
-
-#content #dataimport #current_state .info .abort-import
-{
-  position: absolute;
-  right: 0px;
-  top: 0px;
-}
-
-#content #dataimport #current_state .info .abort-import span
-{
-  background-image: url( ../../img/ico/cross.png );
-}
-
-#content #dataimport #current_state .info .abort-import.success span
-{
-  background-image: url( ../../img/ico/tick.png );
-}
-
-#content #dataimport #current_state.indexing
-{
-  background-color: #f9f9f9;
-}
-
-#content #dataimport #current_state.indexing .info
-{
-  background-image: url( ../../img/ico/hourglass.png );
-}
-
-#content #dataimport #current_state.indexing .info .abort-import
-{
-  display: block;
-}
-
-#content #dataimport #current_state.success
-{
-  background-color: #e6f3e6;
-}
-
-#content #dataimport #current_state.success .info
-{
-  background-image: url( ../../img/ico/tick-circle.png );
-}
-
-#content #dataimport #current_state.success .info strong
-{
-  color: #080;
-}
-
-#content #dataimport #current_state.aborted
-{
-  background-color: #f3e6e6;
-}
-
-#content #dataimport #current_state.aborted .info
-{
-  background-image: url( ../../img/ico/slash.png );
-}
-
-#content #dataimport #current_state.aborted .info strong
-{
-  color: #800;
-}
-
-#content #dataimport #current_state.failure
-{
-  background-color: #f3e6e6;
-}
-
-#content #dataimport #current_state.failure .info
-{
-  background-image: url( ../../img/ico/cross-button.png );
-}
-
-#content #dataimport #current_state.failure .info strong
-{
-  color: #800;
-}
-
-#content #dataimport #current_state.idle
-{
-  background-color: #e6e6ff;
-}
-
-#content #dataimport #current_state.idle .info
-{
-  background-image: url( ../../img/ico/information.png );
-}
-
-#content #dataimport #error,
-#content #dataimport #deprecation_message
-{
-  background-color: #f00;
-  background-image: url( ../../img/ico/construction.png );
-  background-position: 10px 50%;
-  color: #fff;
-  font-weight: bold;
-  margin-bottom: 20px;
-  padding: 10px;
-  padding-left: 35px;
-}
-
-#content #dataimport .block h2
-{
-  border-color: #c0c0c0;
-  padding-left: 5px;
-  position: relative;
-}
-
-#content #dataimport .block.hidden h2
-{
-  border-color: #fafafa;
-}
-
-#content #dataimport .block h2 a.toggle
-{
-  background-image: url( ../../img/ico/toggle-small.png );
-  background-position: 0 50%;
-  padding-left: 21px;
-}
-
-#content #dataimport .block.hidden h2 a.toggle
-{
-  background-image: url( ../../img/ico/toggle-small-expand.png );
-}
-
-#content #dataimport #config h2 a.r
-{
-  background-position: 3px 50%;
-  display: block;
-  float: right;
-  margin-left: 10px;
-  padding-left: 24px;
-  padding-right: 3px;
-}
-
-#content #dataimport #config h2 a.reload_config
-{
-  background-image: url( ../../img/ico/arrow-circle.png );
-}
-
-#content #dataimport #config h2 a.reload_config.success
-{
-  background-image: url( ../../img/ico/tick.png );
-}
-
-#content #dataimport #config h2 a.reload_config.error
-{
-  background-image: url( ../../img/ico/slash.png );
-}
-
-#content #dataimport #config h2 a.debug_mode
-{
-  background-image: url( ../../img/ico/hammer.png );
-  color: #4D4D4D;
-}
-
-#content #dataimport #config.debug_mode h2 a.debug_mode
-{
-  background-color: #ff0;
-  background-image: url( ../../img/ico/hammer-screwdriver.png );
-  color: #333;
-}
-
-#content #dataimport #config .content
-{
-  padding: 5px 2px;
-}
-
-#content #dataimport #dataimport_config .loader
-{
-  background-position: 0 50%;
-  padding-left: 21px;
-}
-
-#content #dataimport #dataimport_config .formatted
-{
-  border: 1px solid #fff;
-  display: block;
-  padding: 2px;
-}
-
-#content #dataimport .debug_mode #dataimport_config .editable
-{
-  display: block;
-}
-
-#content #dataimport #dataimport_config .editable textarea
-{
-  font-family: monospace;
-  height: 120px;
-  min-height: 60px;
-  width: 100%;
-}
-
-#content #dataimport #debug_response em
-{
-  color: #4D4D4D;
-  font-style: normal;
-}
diff --git a/solr/webapp/web/css/angular/menu.css b/solr/webapp/web/css/angular/menu.css
index 87b5169..fce8eb3 100644
--- a/solr/webapp/web/css/angular/menu.css
+++ b/solr/webapp/web/css/angular/menu.css
@@ -268,7 +268,7 @@
 #menu #cloud.global p a { background-image: url( ../../img/ico/network-cloud.png ); }
 #menu #cloud.global .tree a { background-image: url( ../../img/ico/folder-tree.png ); }
 #menu #cloud.global .nodes a { background-image: url( ../../img/solr-ico.png ); }
-#menu #cloud.global .zkstatus a { background-image: url( ../../img/ico/node-master.png ); }
+#menu #cloud.global .zkstatus a { background-image: url( ../../img/ico/node-leader.png ); }
 #menu #cloud.global .graph a { background-image: url( ../../img/ico/molecule.png ); }
 
 .sub-menu .ping.error a
@@ -292,7 +292,6 @@
 .sub-menu .ping a { background-image: url( ../../img/ico/system-monitor.png ); }
 .sub-menu .logging a { background-image: url( ../../img/ico/inbox-document-text.png ); }
 .sub-menu .plugins a { background-image: url( ../../img/ico/block.png ); }
-.sub-menu .dataimport a { background-image: url( ../../img/ico/document-import.png ); }
 .sub-menu .segments a { background-image: url( ../../img/ico/construction.png ); }
 
 
diff --git a/solr/webapp/web/css/angular/replication.css b/solr/webapp/web/css/angular/replication.css
index 4eb6088..863f11b 100644
--- a/solr/webapp/web/css/angular/replication.css
+++ b/solr/webapp/web/css/angular/replication.css
@@ -61,17 +61,17 @@
   border-bottom: 0;
 }
 
-#content #replication .masterOnly,
-#content #replication .slaveOnly
+#content #replication .leaderOnly,
+#content #replication .followerOnly
 {
 }
 
-#content #replication.master .masterOnly
+#content #replication.leader .leaderOnly
 {
   display: block;
 }
 
-#content #replication.slave .slaveOnly
+#content #replication.follower .followerOnly
 {
   display: block;
 }
@@ -300,7 +300,7 @@
   text-align: left;
 }
 
-#content #replication.slave #details table .slaveOnly
+#content #replication.follower #details table .followerOnly
 {
   display: table-row;
 }
diff --git a/solr/webapp/web/img/ico/node-slave.png b/solr/webapp/web/img/ico/node-follower.png
similarity index 100%
rename from solr/webapp/web/img/ico/node-slave.png
rename to solr/webapp/web/img/ico/node-follower.png
Binary files differ
diff --git a/solr/webapp/web/img/ico/node-master.png b/solr/webapp/web/img/ico/node-leader.png
similarity index 100%
rename from solr/webapp/web/img/ico/node-master.png
rename to solr/webapp/web/img/ico/node-leader.png
Binary files differ
diff --git a/solr/webapp/web/index.html b/solr/webapp/web/index.html
index db50db7..2eba8b6 100644
--- a/solr/webapp/web/index.html
+++ b/solr/webapp/web/index.html
@@ -30,7 +30,6 @@
   <link rel="stylesheet" type="text/css" href="css/angular/cores.css?_=${version}">
   <link rel="stylesheet" type="text/css" href="css/angular/collections.css?_=${version}">
   <link rel="stylesheet" type="text/css" href="css/angular/dashboard.css?_=${version}">
-  <link rel="stylesheet" type="text/css" href="css/angular/dataimport.css?_=${version}">
   <link rel="stylesheet" type="text/css" href="css/angular/files.css?_=${version}">
   <link rel="stylesheet" type="text/css" href="css/angular/index.css?_=${version}">
   <link rel="stylesheet" type="text/css" href="css/angular/java-properties.css?_=${version}">
@@ -79,7 +78,6 @@
   <script src="js/angular/controllers/core-overview.js"></script>
   <script src="js/angular/controllers/collection-overview.js"></script>
   <script src="js/angular/controllers/analysis.js"></script>
-  <script src="js/angular/controllers/dataimport.js"></script>
   <script src="js/angular/controllers/documents.js"></script>
   <script src="js/angular/controllers/files.js"></script>
   <script src="js/angular/controllers/query.js"></script>
@@ -193,7 +191,6 @@
                 <li class="overview" ng-show="currentCollection.type === 'collection'" ng-class="{active:page=='collection-overview'}"><a href="#/{{currentCollection.name}}/collection-overview"><span>Overview</span></a></li>
                 <li class="overview" ng-show="currentCollection.type === 'alias'" ng-class="{active:page=='alias-overview'}"><a href="#/{{currentCollection.name}}/alias-overview"><span>Overview</span></a></li>
                 <li class="analysis" ng-show="!isMultiDestAlias(currentCollection)" ng-class="{active:page=='analysis'}"><a href="#/{{currentCollection.name}}/analysis"><span>Analysis</span></a></li>
-                <li class="dataimport" ng-show="!isMultiDestAlias(currentCollection)" ng-class="{active:page=='dataimport'}"><a href="#/{{currentCollection.name}}/dataimport"><span>Dataimport</span></a></li>
                 <li class="documents" ng-show="!isMultiDestAlias(currentCollection)" ng-class="{active:page=='documents'}"><a href="#/{{currentCollection.name}}/documents"><span>Documents</span></a></li>
                 <li class="files" ng-show="!isMultiDestAlias(currentCollection)" ng-class="{active:page=='files'}"><a href="#/{{currentCollection.name}}/files"><span>Files</span></a></li>
                 <li class="query" ng-class="{active:page=='query'}"><a href="#/{{currentCollection.name}}/query"><span>Query</span></a></li>
@@ -218,7 +215,6 @@
               <ul>
                 <li class="overview" ng-class="{active:page=='overview'}"><a href="#/{{currentCore.name}}/core-overview"><span>Overview</span></a></li>
                 <li ng-hide="isCloudEnabled" class="analysis" ng-class="{active:page=='analysis'}"><a href="#/{{currentCore.name}}/analysis"><span>Analysis</span></a></li>
-                <li ng-hide="isCloudEnabled" class="dataimport" ng-class="{active:page=='dataimport'}"><a href="#/{{currentCore.name}}/dataimport"><span>Dataimport</span></a></li>
                 <li ng-hide="isCloudEnabled" class="documents" ng-class="{active:page=='documents'}"><a href="#/{{currentCore.name}}/documents"><span>Documents</span></a></li>
                 <li ng-hide="isCloudEnabled" class="files" ng-class="{active:page=='files'}"><a href="#/{{currentCore.name}}/files"><span>Files</span></a></li>
                 <li class="ping" ng-class="{active:page=='ping'}"><a ng-click="ping()"><span>Ping</span><small class="qtime" ng-show="showPing"> (<span>{{pingMS}}ms</span>)</small></a></li>
diff --git a/solr/webapp/web/js/angular/app.js b/solr/webapp/web/js/angular/app.js
index d7d0cef..70a2a26 100644
--- a/solr/webapp/web/js/angular/app.js
+++ b/solr/webapp/web/js/angular/app.js
@@ -130,14 +130,6 @@
         templateUrl: 'partials/analysis.html',
         controller: 'AnalysisController'
       }).
-      when('/:core/dataimport', {
-        templateUrl: 'partials/dataimport.html',
-        controller: 'DataImportController'
-      }).
-      when('/:core/dataimport/:handler*', {
-        templateUrl: 'partials/dataimport.html',
-        controller: 'DataImportController'
-      }).
       when('/:core/documents', {
         templateUrl: 'partials/documents.html',
         controller: 'DocumentsController'
@@ -168,14 +160,6 @@
         templateUrl: 'partials/replication.html',
         controller: 'ReplicationController'
       }).
-      when('/:core/dataimport', {
-        templateUrl: 'partials/dataimport.html',
-        controller: 'DataImportController'
-      }).
-      when('/:core/dataimport/:handler*', {
-        templateUrl: 'partials/dataimport.html',
-        controller: 'DataImportController'
-      }).
       when('/:core/schema', {
         templateUrl: 'partials/schema.html',
         controller: 'SchemaController'
diff --git a/solr/webapp/web/js/angular/controllers/core-overview.js b/solr/webapp/web/js/angular/controllers/core-overview.js
index 0e2b3d2..3f07c24 100644
--- a/solr/webapp/web/js/angular/controllers/core-overview.js
+++ b/solr/webapp/web/js/angular/controllers/core-overview.js
@@ -33,8 +33,8 @@
   $scope.refreshReplication = function() {
     Replication.details({core: $routeParams.core},
       function(data) {
-        $scope.isSlave = data.details.isSlave == "true";
-        $scope.isMaster = data.details.isMaster == "true";
+        $scope.isFollower = data.details.isFollower == "true";
+        $scope.isLeader = data.details.isLeader == "true";
         $scope.replication = data.details;
       },
       function(error) {
diff --git a/solr/webapp/web/js/angular/controllers/dataimport.js b/solr/webapp/web/js/angular/controllers/dataimport.js
deleted file mode 100644
index c31b6f0..0000000
--- a/solr/webapp/web/js/angular/controllers/dataimport.js
+++ /dev/null
@@ -1,302 +0,0 @@
-/*
- Licensed to the Apache Software Foundation (ASF) under one or more
- contributor license agreements.  See the NOTICE file distributed with
- this work for additional information regarding copyright ownership.
- The ASF licenses this file to You under the Apache License, Version 2.0
- (the "License"); you may not use this file except in compliance with
- the License.  You may obtain a copy of the License at
-
-     http://www.apache.org/licenses/LICENSE-2.0
-
- Unless required by applicable law or agreed to in writing, software
- distributed under the License is distributed on an "AS IS" BASIS,
- WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- See the License for the specific language governing permissions and
- limitations under the License.
-*/
-
-var dataimport_timeout = 2000;
-
-solrAdminApp.controller('DataImportController',
-    function($scope, $rootScope, $routeParams, $location, $timeout, $interval, $cookies, Mbeans, DataImport, Constants) {
-        $scope.resetMenu("dataimport", Constants.IS_COLLECTION_PAGE);
-
-        $scope.refresh = function () {
-            Mbeans.info({core: $routeParams.core, cat: 'QUERY'}, function (data) {
-                var mbeans = data['solr-mbeans'][1];
-                $scope.handlers = [];
-                for (var key in mbeans) {
-                    if (mbeans[key]['class'] !== key && mbeans[key]['class'] === 'org.apache.solr.handler.dataimport.DataImportHandler') {
-                        $scope.handlers.push(key);
-                    }
-                }
-                $scope.hasHandlers = $scope.handlers.length > 0;
-
-                if (!$routeParams.handler) {
-                    $location.path("/" + $routeParams.core + "/dataimport/" + $scope.handlers[0]);
-                } else {
-                    $scope.currentHandler = $routeParams.handler;
-                }
-            });
-
-            $scope.handler = $routeParams.handler;
-            if ($scope.handler && $scope.handler[0]=="/") {
-                $scope.handler = $scope.handler.substr(1);
-            }
-            if ($scope.handler) {
-                DataImport.config({core: $routeParams.core, name: $scope.handler}, function (data) {
-                    try {
-                        $scope.config = data.config;
-                        var xml = $.parseXML(data.config);
-                        $scope.entities = [];
-                        $('document > entity', xml).each(function (i, element) {
-                            $scope.entities.push($(element).attr('name'));
-                        });
-                        $scope.refreshStatus();
-                    } catch (err) {
-                        console.log(err);
-                    }
-                });
-            }
-            $scope.lastUpdate = "unknown";
-            $scope.lastUpdateUTC = "";
-        };
-
-        $scope.toggleDebug = function () {
-            $scope.isDebugMode = !$scope.isDebugMode;
-            if ($scope.isDebugMode) {
-                // also enable Debug checkbox
-                $scope.form.showDebug = true;
-            }
-            $scope.showConfiguration = true;
-        }
-
-        $scope.toggleConfiguration = function () {
-            $scope.showConfiguration = !$scope.showConfiguration;
-        }
-
-        $scope.toggleRawStatus = function () {
-            $scope.showRawStatus = !$scope.showRawStatus;
-        }
-
-        $scope.toggleRawDebug = function () {
-            $scope.showRawDebug = !$scope.showRawDebug;
-        }
-
-        $scope.reload = function () {
-            DataImport.reload({core: $routeParams.core, name: $scope.handler}, function () {
-                $scope.reloaded = true;
-                $timeout(function () {
-                    $scope.reloaded = false;
-                }, 5000);
-                $scope.refresh();
-            });
-        }
-
-        $scope.form = {
-            command: "full-import",
-            verbose: false,
-            clean: false,
-            commit: true,
-            showDebug: false,
-            custom: "",
-            core: $routeParams.core
-        };
-
-        $scope.submit = function () {
-            var params = {};
-            for (var key in $scope.form) {
-                if (key == "showDebug") {
-                    if ($scope.form.showDebug) {
-                        params["debug"] = true;
-                    }
-                } else {
-                    params[key] = $scope.form[key];
-                }
-            }
-            if (params.custom.length) {
-                var customParams = $scope.form.custom.split("&");
-                for (var i in customParams) {
-                    var parts = customParams[i].split("=");
-                    params[parts[0]] = parts[1];
-                }
-            }
-            delete params.custom;
-
-            if ($scope.isDebugMode) {
-                params.dataConfig = $scope.config;
-            }
-
-            params.core = $routeParams.core;
-            params.name = $scope.handler;
-
-            DataImport.post(params, function (data) {
-                $scope.rawResponse = JSON.stringify(data, null, 2);
-                $scope.refreshStatus();
-            });
-        };
-
-        $scope.abort = function () {
-            $scope.isAborting = true;
-            DataImport.abort({core: $routeParams.core, name: $scope.handler}, function () {
-                $timeout(function () {
-                    $scope.isAborting = false;
-                    $scope.refreshStatus();
-                }, 4000);
-            });
-        }
-
-        $scope.refreshStatus = function () {
-
-            console.log("Refresh Status");
-
-            $scope.isStatusLoading = true;
-            DataImport.status({core: $routeParams.core, name: $scope.handler}, function (data) {
-                if (data[0] == "<") {
-                    $scope.hasHandlers = false;
-                    return;
-                }
-
-                var now = new Date();
-                $scope.lastUpdate = now.toTimeString().split(' ').shift();
-                $scope.lastUpdateUTC = now.toUTCString();
-                var messages = data.statusMessages;
-                var messagesCount = 0;
-                for( var key in messages ) { messagesCount++; }
-
-                if (data.status == 'busy') {
-                    $scope.status = "indexing";
-
-                    $scope.timeElapsed = data.statusMessages['Time Elapsed'];
-                    $scope.elapsedSeconds = parseSeconds($scope.timeElapsed);
-
-                    var info = $scope.timeElapsed ? 'Indexing since ' + $scope.timeElapsed : 'Indexing ...';
-                    $scope.info = showInfo(messages, true, info, $scope.elapsedSeconds);
-
-                } else if (messages.RolledBack) {
-                    $scope.status = "failure";
-                    $scope.info = showInfo(messages, true);
-                } else if (messages.Aborted) {
-                    $scope.status = "aborted";
-                    $scope.info = showInfo(messages, true, 'Aborting current Import ...');
-                } else if (data.status == "idle" && messagesCount != 0) {
-                    $scope.status = "success";
-                    $scope.info = showInfo(messages, true);
-                } else {
-                    $scope.status = "idle";
-                    $scope.info = showInfo(messages, false, 'No information available (idle)');
-                }
-
-                delete data.$promise;
-                delete data.$resolved;
-
-                $scope.rawStatus = JSON.stringify(data, null, 2);
-
-                $scope.isStatusLoading = false;
-                $scope.statusUpdated = true;
-                $timeout(function () {
-                    $scope.statusUpdated = false;
-                }, dataimport_timeout / 2);
-            });
-        };
-
-        $scope.updateAutoRefresh = function () {
-            $scope.autorefresh = !$scope.autorefresh;
-            $cookies.dataimport_autorefresh = $scope.autorefresh ? true : null;
-            if ($scope.autorefresh) {
-                $scope.refreshTimeout = $interval($scope.refreshStatus, dataimport_timeout);
-                var onRouteChangeOff = $scope.$on('$routeChangeStart', function() {
-                    $interval.cancel($scope.refreshTimeout);
-                    onRouteChangeOff();
-                });
-
-            } else if ($scope.refreshTimeout) {
-                $interval.cancel($scope.refreshTimeout);
-            }
-            $scope.refreshStatus();
-        };
-
-        $scope.refresh();
-
-});
-
-var showInfo = function (messages, showFull, info_text, elapsed_seconds) {
-
-    var info = {};
-    if (info_text) {
-        info.text = info_text;
-    } else {
-        info.text = messages[''] || '';
-        // format numbers included in status nicely
-        /* @todo this pretty printing is hard to work out how to do in an Angularesque way:
-        info.text = info.text.replace(/\d{4,}/g,
-            function (match, position, string) {
-                return app.format_number(parseInt(match, 10));
-            }
-        );
-        */
-
-        var time_taken_text = messages['Time taken'];
-        info.timeTaken = parseSeconds(time_taken_text);
-    }
-    info.showDetails = false;
-
-    if (showFull) {
-        if (!elapsed_seconds) {
-            var time_taken_text = messages['Time taken'];
-            elapsed_seconds = parseSeconds(time_taken_text);
-        }
-
-        info.showDetails = true;
-
-        var document_config = {
-            'Requests': 'Total Requests made to DataSource',
-            'Fetched': 'Total Rows Fetched',
-            'Skipped': 'Total Documents Skipped',
-            'Processed': 'Total Documents Processed'
-        };
-
-        info.docs = [];
-        for (var key in document_config) {
-            var value = parseInt(messages[document_config[key]], 10);
-            var doc = {desc: document_config[key], name: key, value: value};
-            if (elapsed_seconds && key != 'Skipped') {
-                doc.speed = Math.round(value / elapsed_seconds);
-            }
-            info.docs.push(doc);
-        }
-
-        var dates_config = {
-            'Started': 'Full Dump Started',
-            'Aborted': 'Aborted',
-            'Rolledback': 'Rolledback'
-        };
-
-        info.dates = [];
-        for (var key in dates_config) {
-            var value = messages[dates_config[key]];
-            if (value) {
-                value = value.replace(" ", "T")+".000Z";
-                console.log(value);
-                var date = {desc: dates_config[key], name: key, value: value};
-                info.dates.push(date);
-            }
-        }
-    }
-    return info;
-}
-
-var parseSeconds = function(time) {
-    var seconds = 0;
-    var arr = new String(time || '').split('.');
-    var parts = arr[0].split(':').reverse();
-
-    for (var i = 0; i < parts.length; i++) {
-        seconds += ( parseInt(parts[i], 10) || 0 ) * Math.pow(60, i);
-    }
-
-    if (arr[1] && 5 <= parseInt(arr[1][0], 10)) {
-        seconds++; // treat a fraction of .5 or more as an additional second
-    }
-    return seconds;
-}
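Since the whole controller is deleted above, the small `parseSeconds` helper at its tail is easy to miss; for reference, it converts DIH's `h:m:s.SSS` status strings to whole seconds, rounding fractions of `.5` and above up. A standalone copy with worked inputs:

```js
// Standalone copy of the deleted parseSeconds helper, with sample inputs.
var parseSeconds = function (time) {
  var seconds = 0;
  var arr = new String(time || '').split('.');
  var parts = arr[0].split(':').reverse(); // [seconds, minutes, hours]

  for (var i = 0; i < parts.length; i++) {
    seconds += (parseInt(parts[i], 10) || 0) * Math.pow(60, i);
  }

  if (arr[1] && 5 <= parseInt(arr[1][0], 10)) {
    seconds++; // a fraction of .5 or more counts as an extra second
  }
  return seconds;
};

console.log(parseSeconds('0:1:30.600')); // 91  (90 s, and .6 rounds up)
console.log(parseSeconds('2:00:00'));    // 7200
```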
diff --git a/solr/webapp/web/js/angular/controllers/replication.js b/solr/webapp/web/js/angular/controllers/replication.js
index 9f7ac3e..6e5fd50 100644
--- a/solr/webapp/web/js/angular/controllers/replication.js
+++ b/solr/webapp/web/js/angular/controllers/replication.js
@@ -26,12 +26,12 @@
                 var timeout;
                 var interval;
                 if ($scope.interval) $interval.cancel($scope.interval);
-                $scope.isSlave = (response.details.isSlave === 'true');
-                if ($scope.isSlave) {
-                    $scope.progress = getProgressDetails(response.details.slave);
-                    $scope.iterations = getIterations(response.details.slave);
-                    $scope.versions = getSlaveVersions(response.details);
-                    $scope.settings = getSlaveSettings(response.details);
+                $scope.isFollower = (response.details.isFollower === 'true');
+                if ($scope.isFollower) {
+                    $scope.progress = getProgressDetails(response.details.follower);
+                    $scope.iterations = getIterations(response.details.follower);
+                    $scope.versions = getFollowerVersions(response.details);
+                    $scope.settings = getFollowerSettings(response.details);
                     if ($scope.settings.isReplicating) {
                         timeout = $timeout($scope.refresh, 1000);
                     } else if(!$scope.settings.isPollingDisabled && $scope.settings.pollInterval) {
@@ -41,9 +41,9 @@
                         timeout = $timeout($scope.refresh, 1000*(1+$scope.settings.tick));
                     }
                 } else {
-                    $scope.versions = getMasterVersions(response.details);
+                    $scope.versions = getLeaderVersions(response.details);
                 }
-                $scope.master = getMasterSettings(response.details, $scope.isSlave);
+                $scope.leader = getLeaderSettings(response.details, $scope.isFollower);
 
                 var onRouteChangeOff = $scope.$on('$routeChangeStart', function() {
                     if (interval) $interval.cancel(interval);
@@ -85,7 +85,7 @@
     return progress;
 };
 
-var getIterations = function(slave) {
+var getIterations = function(follower) {
 
     var iterations = [];
 
@@ -93,17 +93,17 @@
         return list.filter(function(e) {return e.date == date});
     };
 
-    for (var i in slave.indexReplicatedAtList) {
-        var date = slave.indexReplicatedAtList[i];
+    for (var i in follower.indexReplicatedAtList) {
+        var date = follower.indexReplicatedAtList[i];
         var iteration = {date:date, status:"replicated", latest: false};
-        if (date == slave.indexReplicatedAt) {
+        if (date == follower.indexReplicatedAt) {
             iteration.latest = true;
         }
         iterations.push(iteration);
     }
 
-    for (var i in slave.replicationFailedAtList) {
-        var failedDate = slave.replicationFailedAtList[i];
+    for (var i in follower.replicationFailedAtList) {
+        var failedDate = follower.replicationFailedAtList[i];
         var matchingIterations = find(iterations, failedDate);
         if (matchingIterations[0]) {
             iteration = matchingIterations[0];
@@ -112,7 +112,7 @@
             iteration = {date: failedDate, status:"failed", latest:false};
             iterations.push(iteration);
         }
-        if (failedDate == slave.replicationFailedAt) {
+        if (failedDate == follower.replicationFailedAt) {
             iteration.latest = true;
         }
     }
@@ -120,37 +120,37 @@
     return iterations;
 };
 
-var getMasterVersions = function(data) {
-    versions = {masterSearch:{}, master:{}};
+var getLeaderVersions = function(data) {
+    versions = {leaderSearch:{}, leader:{}};
 
-    versions.masterSearch.version = data.indexVersion;
-    versions.masterSearch.generation = data.generation;
-    versions.masterSearch.size = data.indexSize;
+    versions.leaderSearch.version = data.indexVersion;
+    versions.leaderSearch.generation = data.generation;
+    versions.leaderSearch.size = data.indexSize;
 
-    versions.master.version = data.master.replicableVersion || '-';
-    versions.master.generation = data.master.replicableGeneration || '-';
-    versions.master.size = '-';
+    versions.leader.version = data.leader.replicableVersion || '-';
+    versions.leader.generation = data.leader.replicableGeneration || '-';
+    versions.leader.size = '-';
 
     return versions;
 };
 
-var getSlaveVersions = function(data) {
-    versions = {masterSearch: {}, master: {}, slave: {}};
+var getFollowerVersions = function(data) {
+    versions = {leaderSearch: {}, leader: {}, follower: {}};
 
-    versions.slave.version = data.indexVersion;
-    versions.slave.generation = data.generation;
-    versions.slave.size = data.indexSize;
+    versions.follower.version = data.indexVersion;
+    versions.follower.generation = data.generation;
+    versions.follower.size = data.indexSize;
 
-    versions.master.version = data.slave.masterDetails.replicableVersion || '-';
-    versions.master.generation = data.slave.masterDetails.replicableGeneration || '-';
-    versions.master.size = '-';
+    versions.leader.version = data.follower.leaderDetails.replicableVersion || '-';
+    versions.leader.generation = data.follower.leaderDetails.replicableGeneration || '-';
+    versions.leader.size = '-';
 
-    versions.masterSearch.version = data.slave.masterDetails.indexVersion;
-    versions.masterSearch.generation = data.slave.masterDetails.generation;
-    versions.masterSearch.size = data.slave.masterDetails.indexSize;
+    versions.leaderSearch.version = data.follower.leaderDetails.indexVersion;
+    versions.leaderSearch.generation = data.follower.leaderDetails.generation;
+    versions.leaderSearch.size = data.follower.leaderDetails.indexSize;
 
-    versions.changedVersion = data.indexVersion !== data.slave.masterDetails.indexVersion;
-    versions.changedGeneration = data.generation !== data.slave.masterDetails.generation;
+    versions.changedVersion = data.indexVersion !== data.follower.leaderDetails.indexVersion;
+    versions.changedGeneration = data.generation !== data.follower.leaderDetails.generation;
 
     return versions;
 };
@@ -181,13 +181,13 @@
     return seconds;
 }
 
-var getSlaveSettings = function(data) {
+var getFollowerSettings = function(data) {
     var settings = {};
-    settings.masterUrl = data.slave.masterUrl;
-    settings.isPollingDisabled = data.slave.isPollingDisabled == 'true';
-    settings.pollInterval = data.slave.pollInterval;
-    settings.isReplicating = data.slave.isReplicating == 'true';
-    settings.nextExecutionAt = data.slave.nextExecutionAt;
+    settings.leaderUrl = data.follower.leaderUrl;
+    settings.isPollingDisabled = data.follower.isPollingDisabled == 'true';
+    settings.pollInterval = data.follower.pollInterval;
+    settings.isReplicating = data.follower.isReplicating == 'true';
+    settings.nextExecutionAt = data.follower.nextExecutionAt;
 
     if(settings.isReplicating) {
         settings.isApprox = true;
@@ -195,7 +195,7 @@
     } else if (!settings.isPollingDisabled && settings.pollInterval) {
         if( settings.nextExecutionAt ) {
             settings.nextExecutionAtEpoch = parseDateToEpoch(settings.nextExecutionAt);
-            settings.currentTime = parseDateToEpoch(data.slave.currentDate);
+            settings.currentTime = parseDateToEpoch(data.follower.currentDate);
 
             if( settings.nextExecutionAtEpoch > settings.currentTime) {
                 settings.isApprox = false;
@@ -206,15 +206,15 @@
     return settings;
 };
 
-var getMasterSettings = function(details, isSlave) {
-    var master = {};
-    var masterData = isSlave ? details.slave.masterDetails.master : details.master;
-    master.replicationEnabled = masterData.replicationEnabled == "true";
-    master.replicateAfter = masterData.replicateAfter.join(", ");
+var getLeaderSettings = function(details, isFollower) {
+    var leader = {};
+    var leaderData = isFollower ? details.follower.leaderDetails.leader : details.leader;
+    leader.replicationEnabled = leaderData.replicationEnabled == "true";
+    leader.replicateAfter = leaderData.replicateAfter.join(", ");
 
-    if (masterData.confFiles) {
-        master.files = [];
-        var confFiles = masterData.confFiles.split(',');
+    if (leaderData.confFiles) {
+        leader.files = [];
+        var confFiles = leaderData.confFiles.split(',');
         for (var i=0; i<confFiles.length; i++) {
             var file = confFiles[i];
             var short = file;
@@ -222,14 +222,14 @@
             if (file.indexOf(":")>=0) {
                 title = file.replace(':', ' » ');
                 var parts = file.split(':');
-                if (isSlave) {
+                if (isFollower) {
                     short = parts[1];
                 } else {
                     short = parts[0];
                 }
             }
-            master.files.push({title:title, name:short});
+            leader.files.push({title:title, name:short});
         }
     }
-    return master;
+    return leader;
 }
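Beyond the rename, `getIterations` is the one non-trivial function in this file: it merges the follower's successful and failed poll timestamps into a single timeline, marking the most recent entry of each kind as `latest`. A usage sketch (the `follower` object is illustrative, not a real API response):

```js
// Illustrative input for getIterations; not a real replication response.
var follower = {
  indexReplicatedAtList:   ['2020-06-01 10:00:00', '2020-06-01 11:00:00'],
  indexReplicatedAt:        '2020-06-01 11:00:00',
  replicationFailedAtList: ['2020-06-01 10:30:00'],
  replicationFailedAt:      '2020-06-01 10:30:00'
};

var iterations = getIterations(follower);
// => [ {date: '2020-06-01 10:00:00', status: 'replicated', latest: false},
//      {date: '2020-06-01 11:00:00', status: 'replicated', latest: true},
//      {date: '2020-06-01 10:30:00', status: 'failed',     latest: true} ]
```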
diff --git a/solr/webapp/web/js/angular/services.js b/solr/webapp/web/js/angular/services.js
index 8b371b6..51dde42 100644
--- a/solr/webapp/web/js/angular/services.js
+++ b/solr/webapp/web/js/angular/services.js
@@ -173,21 +173,6 @@
       "field": {params: {"analysis.showmatch": true}}
     });
   }])
-.factory('DataImport',
-  ['$resource', function($resource) {
-    return $resource(':core/:name', {core: '@core', name: '@name', indent:'on', wt:'json', _:Date.now()}, {
-      "config": {params: {command: "show-config"}, headers: {doNotIntercept: "true"},
-                 transformResponse: function(data) {
-                    return {config: data};
-                 }
-                },
-      "status": {params: {command: "status"}, headers: {doNotIntercept: "true"}},
-      "reload": {params: {command: "reload-config"}},
-      "post": {method: "POST",
-                headers: {'Content-type': 'application/x-www-form-urlencoded'},
-                transformRequest: function(data) { return $.param(data) }}
-    });
-  }])
 .factory('Ping',
   ['$resource', function($resource) {
     return $resource(':core/admin/ping', {wt:'json', core: '@core', ts:Date.now(), _:Date.now()}, {
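The deleted factory followed the same `$resource` pattern as the services that remain. A minimal sketch of that pattern, assuming angular-resource 1.x (the service name and URL here are illustrative):

```js
// Minimal $resource factory sketch (angular-resource 1.x, illustrative names).
solrAdminApp.factory('ExampleService',
  ['$resource', function ($resource) {
    // ':core' in the URL template is filled from the bound '@core' param;
    // defaults like wt=json ride along as query parameters on every call.
    return $resource(':core/admin/example', {core: '@core', wt: 'json'}, {
      "status": {params: {command: 'status'}},
      "post":   {method: 'POST',
                 headers: {'Content-type': 'application/x-www-form-urlencoded'},
                 transformRequest: function (data) { return $.param(data); }}
    });
  }]);
```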
diff --git a/solr/webapp/web/partials/core_overview.html b/solr/webapp/web/partials/core_overview.html
index f1826f6..0c3b8e3 100644
--- a/solr/webapp/web/partials/core_overview.html
+++ b/solr/webapp/web/partials/core_overview.html
@@ -99,8 +99,8 @@
       <h2>
         <span class="is-replicating">
           Replication
-          <span ng-show="isSlave"> (Slave)</span>
-          <span ng-show="isMaster"> (Master)</span>
+          <span ng-show="isFollower"> (Follower)</span>
+          <span ng-show="isLeader"> (Leader)</span>
         </span>
       </h2>
 
@@ -126,45 +126,45 @@
           </thead>
           <tbody>
 
-            <tr class="masterSearch" ng-show="isMaster">
+            <tr class="leaderSearch" ng-show="isLeader">
 
-              <th>Master (Searching)</th>
+              <th>Leader (Searching)</th>
               <td class="version"><div>{{replication.indexVersion}}</div></td>
               <td class="generation"><div>{{replication.generation}}</div></td>
               <td class="size"><div>{{replication.indexSize || '-'}}</div></td>
 
             </tr>
 
-            <tr class="master" ng-show="isMaster">
+            <tr class="leader" ng-show="isLeader">
 
-              <th>Master (Replicable)</th>
-              <td class="version"><div>{{replication.master.replicableVersion || '-'}}</div></td>
-              <td class="generation"><div>{{replication.master.replicableGeneration || '-'}}</div></td>
+              <th>Leader (Replicable)</th>
+              <td class="version"><div>{{replication.leader.replicableVersion || '-'}}</div></td>
+              <td class="generation"><div>{{replication.leader.replicableGeneration || '-'}}</div></td>
               <td class="size"><div>-</div></td>
 
             </tr>
 
-            <tr class="master" ng-show="isSlave">
+            <tr class="leader" ng-show="isFollower">
 
-              <th>Master (Replicable)</th>
-              <td class="version"><div>{{replication.master.replicableVersion || '-'}}</div></td>
-              <td class="generation"><div>{{replication.master.replicableGeneration || '-'}}</div></td>
+              <th>Leader (Replicable)</th>
+              <td class="version"><div>{{replication.leader.replicableVersion || '-'}}</div></td>
+              <td class="generation"><div>{{replication.leader.replicableGeneration || '-'}}</div></td>
               <td class="size"><div>-</div></td>
 
             </tr>
 
-            <tr class="masterSearch" ng-show="isSlave">
+            <tr class="leaderSearch" ng-show="isFollower">
 
-              <th>Master (Searching)</th>
-              <td class="version"><div>{{replication.slave.masterDetails.indexVersion}}</div></td>
-              <td class="generation"><div>{{replication.slave.masterDetails.generation}}</div></td>
-              <td class="size"><div>{{replication.slave.masterDetails.indexSize || '-'}}</div></td>
+              <th>Leader (Searching)</th>
+              <td class="version"><div>{{replication.follower.leaderDetails.indexVersion}}</div></td>
+              <td class="generation"><div>{{replication.follower.leaderDetails.generation}}</div></td>
+              <td class="size"><div>{{replication.follower.leaderDetails.indexSize || '-'}}</div></td>
 
             </tr>
 
-            <tr class="slave slaveOnly" ng-show="isSlave">
+            <tr class="follower followerOnly" ng-show="isFollower">
 
-              <th>Slave (Searching)</th>
+              <th>Follower (Searching)</th>
               <td class="version"><div>{{replication.indexVersion}}</div></td>
               <td class="generation"><div>{{replication.generation}}</div></td>
               <td class="size"><div>{{replication.indexSize || '-'}}</div></td>
diff --git a/solr/webapp/web/partials/dataimport.html b/solr/webapp/web/partials/dataimport.html
deleted file mode 100644
index a27be07..0000000
--- a/solr/webapp/web/partials/dataimport.html
+++ /dev/null
@@ -1,210 +0,0 @@
-<!--
-Licensed to the Apache Software Foundation (ASF) under one or more
-contributor license agreements.  See the NOTICE file distributed with
-this work for additional information regarding copyright ownership.
-The ASF licenses this file to You under the Apache License, Version 2.0
-(the "License"); you may not use this file except in compliance with
-the License.  You may obtain a copy of the License at
-
-http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing, software
-distributed under the License is distributed on an "AS IS" BASIS,
-WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-See the License for the specific language governing permissions and
-limitations under the License.
--->
-<div id="dataimport" class="clearfix">
-
-  <div ng-show="!hasHandlers">The solrconfig.xml file for this index does not have an operational DataImportHandler defined!</div>
-  <div id="frame" ng-show="hasHandlers">
-    <div id="deprecation_message">The Data Import Handler is deprecated as of Solr 8.6 and may be removed in a future release. A community supported package for may be used instead (See SOLR-14066 for details).</div>
-
-    <div id="error" ng-show="error"></div>
-
-    <div id="current_state" class="{{status}}">
-
-      <p class="last_update">Last Update: <abbr title="{{lastUpdateUTC}}">{{lastUpdate}}</abbr></p>
-      <div class="info">
-
-        <strong>{{info.text}}<span ng-show="info.timeTaken"> (Duration: {{info.timeTaken | readableSeconds }})</span>
-        </strong>
-        <div class="details" ng-show="info.showDetails">
-          <div class="docs">
-              <span ng-repeat="doc in info.docs">
-                  <abbr style="display:inline" title="{{ doc.desc }}">{{ doc.name }}</abbr>: {{doc.value | number}}<!-- remove whitespace!
-                  -->&nbsp;<span style="display:inline" ng-show="doc.speed">{{ doc.speed | number}}/s</span><!-- remove whitespace!
-                  --><span style="display:inline" ng-show="!$last">, </span>
-              </span>
-          </div>
-          <div class="dates">
-              <span ng-repeat="date in info.dates">
-                  <abbr title="{{ date.desc }}">{{ date.name }}</abbr>:
-                  <abbr class="time">{{ date.value | timeago }}</abbr>
-              </span>
-          </div>
-        </div>
-
-        <button class="abort-import" ng-class="{warn:!isAborting, success: isAborting}" ng-click="abort()" ng-show="isRunning">
-            <span ng-show="isAborting">Aborting Import</span>
-            <span ng-show="!isAborting">Abort Import</span>
-        </button>
-
-      </div>
-
-    </div>
-
-    <div class="block" id="raw_output" >
-
-      <h2>
-        <a class="toggle" ng-click="toggleRawStatus()"><span>Raw Status-Output</span></a>
-      </h2>
-
-      <div class="message-container" ng-show="showRawStatus">
-          <div class="message"></div>
-      </div>
-
-      <div class="content" ng-show="showRawStatus">
-
-        <div id="raw_output_container"><pre class="syntax language-json"><code ng-bind-html="rawStatus | highlight:'json' | unsafe"></code></pre></div>
-
-      </div>
-
-    </div>
-
-    <div class="block" id="config" ng-class="{debug_mode:isDebugMode}">
-
-      <h2 class="clearfix">
-        <a class="toggle" ng-click="toggleConfiguration()"><span>Configuration</span></a>
-        <a class="r reload_config" ng-class="{success:reloaded}" ng-click="reload()" title="Reload Configuration">Reload</a>
-        <a class="r debug_mode" ng-click="toggleDebug()">Debug-Mode</a>
-      </h2>
-
-      <div class="message-container" ng-show="showConfiguration">
-          <div class="message"></div>
-      </div>
-
-      <div class="content" ng-show="showConfiguration">
-        <div id="dataimport_config">
-
-          <div class="formatted" ng-show="!isDebugMode">
-
-            <pre class="syntax language-xml"><code ng-bind-html="config | highlight:'xml'| unsafe"></code></pre>
-
-          </div>
-
-          <div class="editable" ng-show="isDebugMode">
-
-            <textarea ng-model="config"></textarea>
-
-          </div>
-
-        </div>
-
-      </div>
-
-    </div>
-
-    <div class="block" id="debug_response" ng-show="form.showDebug">
-
-      <h2>
-        <a class="toggle" ng-click="toggleRawDebug()"><span>Raw Debug-Response</span></a>
-      </h2>
-
-      <div class="message-container" ng-show="showRawDebug">
-          <div class="message"></div>
-      </div>
-
-      <div class="content" ng-show="showRawDebug">
-          <span ng-hide="rawResponse">
-              <em>No Request executed</em>
-          </span>
-          <span ng-show="rawResponse">
-            <pre class="syntax language-json"><code ng-bind-html="rawResponse | highlight:'json' | unsafe"></code></pre>
-          </span>
-      </div>
-
-    </div>
-
-  </div>
-
-  <div id="form" ng-show="hasHandlers">
-
-    <div id="navigation">
-
-      <ul>
-          <li ng-class="{current: currentHandler == handler}" ng-repeat="handler in handlers">
-              <a href="#/{{form.core}}/dataimport/{{handler}}">{{handler}}</a>
-          </li>
-      </ul>
-
-    </div>
-
-    <form action="#" method="get">
-
-      <label for="command">
-        <a rel="help">Command</a>
-      </label>
-      <select name="command" id="command" ng-model="form.command">
-        <option>full-import</option>
-        <option>delta-import</option>
-      </select>
-
-      <label for="verbose" class="checkbox">
-        <input type="checkbox" name="verbose" id="verbose" ng-model="form.verbose">
-        Verbose
-      </label>
-
-      <label for="clean" class="checkbox">
-        <input type="checkbox" name="clean" id="clean" ng-model="form.clean">
-        Clean
-      </label>
-
-      <label for="commit" class="checkbox">
-        <input type="checkbox" name="commit" id="commit" ng-model="form.commit">
-        Commit
-      </label>
-
-      <label for="optimize" class="checkbox">
-        <input type="checkbox" name="optimize" id="optimize" ng-model="form.optimize">
-        Optimize
-      </label>
-
-      <label for="debug" class="checkbox">
-        <input type="checkbox" name="debug" id="debug" ng-model="form.showDebug">
-        Debug
-      </label>
-
-      <label for="entity">
-        <a rel="help">Entity</a>
-      </label>
-      <select ng-model="form.entity" id="entity">
-        <option value=""></option>
-        <option ng-repeat="entity in entities">{{entity}}</option>
-      </select>
-
-      <label for="start">
-        <a rel="help">Start</a>,
-        <a rel="help">Rows</a>
-      </label>
-      <div class="clearfix">
-        <input type="text" id="start" placeholder="0" ng-model="form.start">
-        <input type="text" id="rows" placeholder="10" ng-model="form.rows">
-      </div>
-
-      <label for="custom_parameters">
-        <a rel="help">Custom Parameters</a>
-      </label>
-      <input type="text" id="custom_parameters" ng-model="form.custom" placeholder="key1=val1&amp;key2=val2">
-    </form>
-      <button class="execute" type="submit" ng-click="submit()">
-          <span ng-show="isDebugMode">Execute with this Configuration →</span>
-          <span ng-show="!isDebugMode">Execute</span>
-      </button>
-      <button class="refresh-status" ng-click="refreshStatus()" ng-class="{loader: isStatusLoading, success: statusUpdated}"><span>Refresh Status</span></button>
-
-    <p id="auto-refresh-status"><a ng-click="updateAutoRefresh()" ng-class="{on:autorefresh}">Auto-Refresh Status</a></p>
-
-  </div>
-
-</div>
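One behavior of this deleted form worth recording: the free-text "Custom Parameters" field (`key1=val1&key2=val2`) was split naively in the controller's `submit()`, without URL-decoding. A standalone sketch of that parsing:

```js
// Standalone sketch of the deleted submit() custom-parameter parsing.
// Mirrors the split-based logic; it does not URL-decode keys or values.
function parseCustomParams(custom) {
  var params = {};
  if (custom && custom.length) {
    var pairs = custom.split('&');
    for (var i = 0; i < pairs.length; i++) {
      var parts = pairs[i].split('=');
      params[parts[0]] = parts[1];
    }
  }
  return params;
}

console.log(parseCustomParams('key1=val1&key2=val2'));
// => { key1: 'val1', key2: 'val2' }
```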
diff --git a/solr/webapp/web/partials/replication.html b/solr/webapp/web/partials/replication.html
index b3d6684..7315f9e 100644
--- a/solr/webapp/web/partials/replication.html
+++ b/solr/webapp/web/partials/replication.html
@@ -84,7 +84,7 @@
         
     </div>
 
-    <div id="iterations" class="slaveOnly block clearfix" ng-show="isSlave">
+    <div id="iterations" class="followerOnly block clearfix" ng-show="isFollower">
 
       <div class="label"><span class="">Iterations:</span></div>
       <div class="iterations" ng-show="iterations && showIterations">
@@ -118,47 +118,47 @@
         </thead>
         <tbody>
 
-          <tr class="masterSearch">
+          <tr class="leaderSearch">
 
-            <th>Master (Searching)</th>
+            <th>Leader (Searching)</th>
             <td class="version" ng-class="{diff:versions.changedVersion}">
-                <div>{{versions.masterSearch.version}}</div>
+                <div>{{versions.leaderSearch.version}}</div>
             </td>
             <td class="generation" ng-class="{diff:versions.changedGeneration}">
-                <div>{{versions.masterSearch.generation}}</div>
+                <div>{{versions.leaderSearch.generation}}</div>
             </td>
             <td class="size">
-                <div>{{versions.masterSearch.size}}</div>
+                <div>{{versions.leaderSearch.size}}</div>
             </td>
 
           </tr>
 
-          <tr class="master">
+          <tr class="leader">
 
-            <th>Master (Replicable)</th>
+            <th>Leader (Replicable)</th>
             <td class="version" ng-class="{diff:versions.changedVersion}">
-                <div>{{versions.master.version}}</div>
+                <div>{{versions.leader.version}}</div>
             </td>
             <td class="generation" ng-class="{diff:versions.changedGeneration}">
-                <div>{{versions.master.generation}}</div>
+                <div>{{versions.leader.generation}}</div>
             </td>
             <td class="size">
-                <div>{{versions.master.size}}</div>
+                <div>{{versions.leader.size}}</div>
             </td>
 
           </tr>
 
-          <tr class="slave slaveOnly" ng-show="isSlave">
+          <tr class="follower followerOnly" ng-show="isFollower">
 
-            <th>Slave (Searching)</th>
+            <th>Follower (Searching)</th>
             <td class="version" ng-class="{diff:versions.changedVersion}">
-                <div>{{versions.slave.version}}</div>
+                <div>{{versions.follower.version}}</div>
             </td>
             <td class="generation" ng-class="{diff:versions.changedGeneration}">
-                <div>{{versions.slave.generation}}</div>
+                <div>{{versions.follower.generation}}</div>
             </td>
             <td class="size">
-                <div>{{versions.slave.size}}</div>
+                <div>{{versions.follower.size}}</div>
             </td>
 
           </tr>
@@ -169,14 +169,14 @@
 
     </div>
 
-    <div id="settings" class="settings block clearfix slaveOnly" ng-show="isSlave">
+    <div id="settings" class="settings block clearfix followerOnly" ng-show="isFollower">
 
       <div class="label"><span>Settings:</span></div>
       <ul>
-        <li class="masterUrl" ng-show="settings.masterUrl">
+        <li class="leaderUrl" ng-show="settings.leaderUrl">
             <dl class="clearfix">
-                <dt>master url:</dt>
-                <dd>{{settings.masterUrl}}</dd>
+                <dt>leader url:</dt>
+                <dd>{{settings.leaderUrl}}</dd>
             </dl>
         </li>
         <li class="isPollingDisabled"><dl class="clearfix">
@@ -189,21 +189,21 @@
         
     </div>
 
-    <div id="master-settings" class="settings block clearfix">
+    <div id="leader-settings" class="settings block clearfix">
 
-      <div class="label"><span>Settings (Master):</span></div>
+      <div class="label"><span>Settings (Leader):</span></div>
       <ul>
         <li class="replicationEnabled"><dl class="clearfix">
           <dt>replication enable:</dt>
-            <dd class="ico" ng-class="{'ico-0':!master.replicationEnabled, 'ico-1':master.replicationEnabled}">&nbsp;</dd>
+            <dd class="ico" ng-class="{'ico-0':!leader.replicationEnabled, 'ico-1':leader.replicationEnabled}">&nbsp;</dd>
         </dl></li>
         <li class="replicateAfter"><dl class="clearfix">
           <dt>replicateAfter:</dt>
-            <dd>{{master.replicateAfter}}</dd>
+            <dd>{{leader.replicateAfter}}</dd>
         </dl></li>
-        <li class="confFiles" ng-show="master.files"><dl class="clearfix">
+        <li class="confFiles" ng-show="leader.files"><dl class="clearfix">
           <dt>confFiles:</dt>
-            <dd><span ng-repeat="file in master.files"><attr title="{{file.title}}">{{file.name}}</attr>{{ $last ? '' :', '}}</span></dd>
+            <dd><span ng-repeat="file in leader.files"><abbr title="{{file.title}}">{{file.name}}</abbr>{{ $last ? '' : ', '}}</span></dd>
         </dl></li>
       </ul>
         
@@ -213,7 +213,7 @@
 
   <div id="navigation">
 
-    <div class="timer" ng-show="isSlave && !settings.isPollingDisabled &&!settings.isReplicating">
+    <div class="timer" ng-show="isFollower && !settings.isPollingDisabled &&!settings.isReplicating">
 
       <p>Next Run: <span class="approx" ng-show="settings.isApprox">~</span><span class="tick">{{settings.tick | readableSeconds}}</span></p>
       <small ng-show="settings.nextExecutionAt">{{settings.nextExecutionAt}}</small>
@@ -221,7 +221,7 @@
 
     <button class="refresh-status" ng-click="refresh()"><span>Refresh Status</span></button>
 
-    <div class="slaveOnly" ng-show="isSlave">
+    <div class="followerOnly" ng-show="isFollower">
       <button class="optional replicate-now primary" ng-click="execute('fetchindex')" ng-show="!settings.isReplicating"><span>Replicate now</span></button>
       <button class="optional abort-replication warn" ng-click="execute('abortfetch')" ng-show="settings.isReplicating"><span>Abort Replication</span></button>
 
@@ -229,9 +229,9 @@
       <button class="optional enable-polling" ng-click="execute('enablepoll')" ng-show="settings.isPollingDisabled"><span>Enable Polling</span></button>
     </div>
 
-    <div class="masterOnly" ng-show="!isSlave">
-      <button class="optional disable-replication warn" ng-click="execute('disablereplication')" ng-show="master.replicationEnabled"><span>Disable Replication</span></button>
-      <button class="optional enable-replication warn" ng-click="execute('enablereplication')" ng-show="!master.replicationEnabled"><span>Enable Replication</span></button>
+    <div class="leaderOnly" ng-show="!isFollower">
+      <button class="optional disable-replication warn" ng-click="execute('disablereplication')" ng-show="leader.replicationEnabled"><span>Disable Replication</span></button>
+      <button class="optional enable-replication warn" ng-click="execute('enablereplication')" ng-show="!leader.replicationEnabled"><span>Enable Replication</span></button>
     </div>
     
   </div>
diff --git a/versions.lock b/versions.lock
index eea9b9c..31b2ae5 100644
--- a/versions.lock
+++ b/versions.lock
@@ -43,8 +43,6 @@
 com.pff:java-libpst:0.8.1 (1 constraints: 0b050436)
 com.rometools:rome:1.12.2 (1 constraints: 3805313b)
 com.rometools:rome-utils:1.12.2 (1 constraints: 3805313b)
-com.sun.mail:gimap:1.5.1 (1 constraints: 09050036)
-com.sun.mail:javax.mail:1.5.1 (2 constraints: 830d2844)
 com.tdunning:t-digest:3.1 (1 constraints: a804212c)
 com.vaadin.external.google:android-json:0.0.20131108.vaadin1 (1 constraints: 34092a9e)
 commons-cli:commons-cli:1.4 (1 constraints: a9041e2c)
@@ -77,10 +75,9 @@
 io.prometheus:simpleclient_common:0.2.0 (2 constraints: e8159ecb)
 io.prometheus:simpleclient_httpserver:0.2.0 (1 constraints: 0405f135)
 io.sgr:s2-geometry-library-java:1.0.0 (1 constraints: 0305f035)
-javax.activation:activation:1.1.1 (3 constraints: 1017445c)
 javax.servlet:javax.servlet-api:3.1.0 (3 constraints: 75209943)
 joda-time:joda-time:2.9.9 (1 constraints: 8a0972a1)
-junit:junit:4.12 (2 constraints: 3e1e6104)
+junit:junit:4.12 (1 constraints: db04ff30)
 net.arnx:jsonic:1.2.7 (2 constraints: db10d4d1)
 net.hydromatic:eigenbase-properties:1.1.5 (1 constraints: 0905f835)
 net.jcip:jcip-annotations:1.0 (1 constraints: 560ff165)
@@ -189,7 +186,7 @@
 org.eclipse.jetty.http2:http2-server:9.4.27.v20200227 (1 constraints: 7b071d7d)
 org.gagravarr:vorbis-java-core:0.8 (1 constraints: ac041f2c)
 org.gagravarr:vorbis-java-tika:0.8 (1 constraints: ac041f2c)
-org.hamcrest:hamcrest-core:1.3 (2 constraints: 730ad9bf)
+org.hamcrest:hamcrest:2.2 (1 constraints: a8041f2c)
 org.jdom:jdom2:2.0.6 (1 constraints: 0a05fb35)
 org.jruby:dirgra:0.3 (1 constraints: 1b098b8e)
 org.jruby:jruby:9.2.6.0 (1 constraints: 490d7d28)
@@ -216,7 +213,6 @@
 [Test dependencies]
 com.sun.jersey:jersey-servlet:1.19 (1 constraints: df04fa30)
 net.bytebuddy:byte-buddy:1.9.3 (2 constraints: 2510faaf)
-org.apache.derby:derby:10.9.1.0 (1 constraints: 9b054946)
 org.apache.hadoop:hadoop-hdfs:3.2.0 (1 constraints: 07050036)
 org.apache.hadoop:hadoop-minikdc:3.2.0 (1 constraints: 07050036)
 org.apache.kerby:kerb-admin:1.0.1 (1 constraints: 0405f135)
diff --git a/versions.props b/versions.props
index 35f9bbf..8c25e30 100644
--- a/versions.props
+++ b/versions.props
@@ -21,7 +21,6 @@
 com.pff:java-libpst=0.8.1
 com.rometools:*=1.12.2
 com.sun.jersey:*=1.19
-com.sun.mail:*=1.5.1
 com.tdunning:t-digest=3.1
 com.vaadin.external.google:android-json=0.0.20131108.vaadin1
 commons-beanutils:commons-beanutils=1.9.3
@@ -37,7 +36,6 @@
 io.opentracing:*=0.33.0
 io.prometheus:*=0.2.0
 io.sgr:s2-geometry-library-java=1.0.0
-javax.activation:activation=1.1.1
 javax.servlet:javax.servlet-api=3.1.0
 junit:junit=4.12
 net.arnx:jsonic=1.2.7
@@ -58,7 +56,6 @@
 org.apache.commons:commons-math3=3.6.1
 org.apache.commons:commons-text=1.6
 org.apache.curator:*=2.13.0
-org.apache.derby:derby=10.9.1.0
 org.apache.hadoop:*=3.2.0
 org.apache.htrace:htrace-core4=4.1.0-incubating
 org.apache.httpcomponents:httpclient=4.5.10
@@ -89,7 +86,7 @@
 org.eclipse.jetty.http2:*=9.4.27.v20200227
 org.eclipse.jetty:*=9.4.27.v20200227
 org.gagravarr:*=0.8
-org.hamcrest:*=1.3
+org.hamcrest:*=2.2
 org.hsqldb:hsqldb=2.4.0
 org.jdom:jdom2=2.0.6
 org.jsoup:jsoup=1.12.1
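A note on these last two files: `versions.props` records the requested version constraints (a wildcard like `org.hamcrest:*=2.2` covers every artifact in that group), while `versions.lock` is the fully resolved, machine-generated result, which is why the hamcrest bump appears in both. Assuming the Gradle consistent-versions plugin this build uses, `versions.lock` is regenerated (e.g. with `./gradlew --write-locks`) rather than edited by hand.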